url
stringlengths 14
1.76k
| text
stringlengths 100
1.02M
| metadata
stringlengths 1.06k
1.1k
|
---|---|---|
https://artofproblemsolving.com/wiki/index.php?title=2020_AMC_10A_Problems/Problem_6&diff=126660&oldid=119734
|
# Difference between revisions of "2020 AMC 10A Problems/Problem 6"
The following problem is from both the 2020 AMC 12A #4 and 2020 AMC 10A #6, so both problems redirect to this page.
## Problem
How many -digit positive integers (that is, integers between and , inclusive) having only even digits are divisible by
## Solution 1
The ones digit, for all numbers divisible by 5, must be either or . However, from the restriction in the problem, it must be even, giving us exactly one choice () for this digit. For the middle two digits, we may choose any even integer from , meaning that we have total options. For the first digit, we follow similar intuition but realize that it cannot be , hence giving us 4 possibilities. Therefore, using the multiplication rule, we get . ~ciceronii swrebby
## Solution 2
The ones digit, for all the numbers that have to divisible be 5, must be a or a . Since the problem states that we can only use even digits, the last digit must be . From there, there are no other restrictions since the divisibility rule for 5 states that the last digit must be a or a . So there are even digit options for the first number then for the middle 2. So when we have to do . ~bobthefam
~IceMatrix
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8386790752410889, "perplexity": 543.1570628752296}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571993.68/warc/CC-MAIN-20220814022847-20220814052847-00066.warc.gz"}
|
http://link.springer.com/article/10.1007%2FBF01189692
|
Water, Air, and Soil Pollution
, Volume 80, Issue 1, pp 425–433
# Mercury cycling in the Allequash Creek watershed, northern Wisconsin
• David P. Krabbenhoft
• Janina M. Benoit
• Christopher L. Babiarz
• James P. Hurley
• Anders W. Andren
Part V Mercury Dynamics in Watersheds
DOI: 10.1007/BF01189692
Krabbenhoft, D.P., Benoit, J.M., Babiarz, C.L. et al. Water Air Soil Pollut (1995) 80: 425. doi:10.1007/BF01189692
## Abstract
Although there have been recent significant gains in our understanding of mercury (Hg) cycling in aquatic environments, few studies have addressed Hg cycling on a watershed scale. In particular, attention to Hg species transfer between watershed components (upland soils, groundwater, wetlands, streams, and lakes) has been lacking. This study describes spatial and temporal distributions of total Hg and MeHg among watershed components of the Allequash Creek watershed (northern Wisconsin, USA). Substantial increases in total Hg and MeHg were observed as groundwater discharged through peat to form springs that flow into the stream, or rivulets that drain across the surface of the wetland. This increase was concomitant with increases in DOC. During fall, when the Allequash Creek wetland released a substantial amount of DOC to the stream, a 2–3 fold increase in total Hg concentrations was observed along the entire length of the stream. Methylmercury, however, did not show a similar response. Substantial variability was observed in total Hg (0.9 to 6.3) and MeHg (<0.02 to 0.33) concentrations during synoptic surveys of the entire creek. For the Allequash Creek watershed, the contributing groundwater basin is about 50% larger than the topographic drainage basin. Total Hg concentrations in groundwater, the area of the groundwater basin, and annual stream flow data give a watershed-yield rate of 1.2 mg/km2/d, which equates to a retention rate of 96%. The calculated MeHg yield rate for the wetland area is 0.6 to 1.5 mg/km2/d, a value that is 3–6 fold greater than the atmospheric deposition rate.
## Authors and Affiliations
• David P. Krabbenhoft
• 1
• Janina M. Benoit
• 2
• Christopher L. Babiarz
• 2
• James P. Hurley
• 3
• 2
• Anders W. Andren
• 2
1. 1.Water Resources DivisionU.S. Geological SurveyMadison
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8455184698104858, "perplexity": 10607.432539952428}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718423.28/warc/CC-MAIN-20161020183838-00284-ip-10-171-6-4.ec2.internal.warc.gz"}
|
https://biblio.ugent.be/publication/8570508
|
1 file | 936.87 KB
# Operational range of a gas-solid vortex unit
(2018) 338. p.702-715
Author
Organization
Abstract
The Gas-Solid Vortex Unit is an advancing fluidization technology with the potential to overcome the limitations of conventional fluidized beds. The conditions for stable fluidization are investigated, that are the upper and lower limit, i.e. minimum and maximum capacity (Ws,min and Ws,max). Based on dimensional analysis three non-dimensional groups are identified, governing the fluidization phenomena: the superficial radial particle Reynolds number Rep, R, the swirl ratio S and the unit loading λ. Data from different authors is gathered for minimum (42 datasets, 3 geometries) and maximum (251 datasets, 8 geometries) capacity and used in regression analysis. Parameters are estimated for different proposed functional dependencies of the identified dimensionless groups. The model equations describing the minimum and maximum unit loading best, including their 95% confidence intervals, are: λmax ¼ ð Þ 4:0 0:4 10−3 Reð Þ 0:4430:011 p;R Sð Þ 0:4540:018 λmin ¼ ð Þ 1:15 0:05 10−4 Rep;R The two equations describe the limits of the operational range of a GSVU for which stable fluidization is possible. The applicability of the model equations is verified against a wide range of data taken from different publications. ©
• (...).pdf
• full text
• |
• UGent only
• |
• PDF
• |
• 936.87 KB
## Citation
Chicago
Friedle, Maximilian, Guy Marin, and Geraldine Heynderickx. 2018. “Operational Range of a Gas-solid Vortex Unit.” Powder Technology 338: 702–715.
APA
Friedle, M., Marin, G., & Heynderickx, G. (2018). Operational range of a gas-solid vortex unit. POWDER TECHNOLOGY , 338, 702–715.
Vancouver
1.
Friedle M, Marin G, Heynderickx G. Operational range of a gas-solid vortex unit. POWDER TECHNOLOGY . 2018;338:702–15.
MLA
Friedle, Maximilian, Guy Marin, and Geraldine Heynderickx. “Operational Range of a Gas-solid Vortex Unit.” POWDER TECHNOLOGY 338 (2018): 702–715. Print.
@article{8570508,
abstract = {The Gas-Solid Vortex Unit is an advancing fluidization technology with the potential to overcome the limitations
of conventional fluidized beds. The conditions for stable fluidization are investigated, that are the upper and
lower limit, i.e. minimum and maximum capacity (Ws,min and Ws,max). Based on dimensional analysis three
non-dimensional groups are identified, governing the fluidization phenomena: the superficial radial particle
Reynolds number Rep, R, the swirl ratio S and the unit loading \ensuremath{\lambda}.
Data from different authors is gathered for minimum (42 datasets, 3 geometries) and maximum (251 datasets, 8
geometries) capacity and used in regression analysis. Parameters are estimated for different proposed functional
dependencies of the identified dimensionless groups. The model equations describing the minimum and maximum
\ensuremath{\lambda}max {\textonequarter} {\dh} {\TH} 4:0 0:4 10\ensuremath{-}3 Re{\dh} {\TH} 0:4430:011
p;R S{\dh} {\TH} 0:4540:018
\ensuremath{\lambda}min {\textonequarter} {\dh} {\TH} 1:15 0:05 10\ensuremath{-}4 Rep;R
The two equations describe the limits of the operational range of a GSVU for which stable fluidization is possible.
The applicability of the model equations is verified against a wide range of data taken from different publications.
author = {Friedle, Maximilian and Marin, Guy and Heynderickx, Geraldine},
issn = {0032-5910 },
journal = {POWDER TECHNOLOGY },
language = {eng},
pages = {702--715},
title = {Operational range of a gas-solid vortex unit},
url = {http://dx.doi.org/10.1016/j.powtec.2018.07.062},
volume = {338},
year = {2018},
}
Altmetric
View in Altmetric
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5976445078849792, "perplexity": 18399.149432862923}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512421.5/warc/CC-MAIN-20181019170918-20181019192418-00020.warc.gz"}
|
https://www.science.gov/topicpages/m/multi-robot+multi-target+particle.html
|
#### Sample records for multi-robot multi-target particle
1. Multi-Robot, Multi-Target Particle Swarm Optimization Search in Noisy Wireless Environments
SciTech Connect
Kurt Derr; Milos Manic
2009-05-01
Multiple small robots (swarms) can work together using Particle Swarm Optimization (PSO) to perform tasks that are difficult or impossible for a single robot to accomplish. The problem considered in this paper is exploration of an unknown environment with the goal of finding a target(s) at an unknown location(s) using multiple small mobile robots. This work demonstrates the use of a distributed PSO algorithm with a novel adaptive RSS weighting factor to guide robots for locating target(s) in high risk environments. The approach was developed and analyzed on multiple robot single and multiple target search. The approach was further enhanced by the multi-robot-multi-target search in noisy environments. The experimental results demonstrated how the availability of radio frequency signal can significantly affect robot search time to reach a target.
2. A Particle Multi-Target Tracker for Superpositional Measurements Using Labeled Random Finite Sets
Papi, Francesco; Kim, Du Yong
2015-08-01
In this paper we present a general solution for multi-target tracking with superpositional measurements. Measurements that are functions of the sum of the contributions of the targets present in the surveillance area are called superpositional measurements. We base our modelling on Labeled Random Finite Set (RFS) in order to jointly estimate the number of targets and their trajectories. This modelling leads to a labeled version of Mahler's multi-target Bayes filter. However, a straightforward implementation of this tracker using Sequential Monte Carlo (SMC) methods is not feasible due to the difficulties of sampling in high dimensional spaces. We propose an efficient multi-target sampling strategy based on Superpositional Approximate CPHD (SA-CPHD) filter and the recently introduced Labeled Multi-Bernoulli (LMB) and Vo-Vo densities. The applicability of the proposed approach is verified through simulation in a challenging radar application with closely spaced targets and low signal-to-noise ratio.
3. Multi-robot control interface
DOEpatents
Bruemmer, David J.; Walton, Miles C.
2011-12-06
Methods and systems for controlling a plurality of robots through a single user interface include at least one robot display window for each of the plurality of robots with the at least one robot display window illustrating one or more conditions of a respective one of the plurality of robots. The user interface further includes at least one robot control window for each of the plurality of robots with the at least one robot control window configured to receive one or more commands for sending to the respective one of the plurality of robots. The user interface further includes a multi-robot common window comprised of information received from each of the plurality of robots.
4. INL Multi-Robot Control Interface
SciTech Connect
2005-03-30
The INL Multi-Robot Control Interface controls many robots through a single user interface. The interface includes a robot display window for each robot showing the robotâs condition. More than one window can be used depending on the number of robots. The user interface also includes a robot control window configured to receive commands for sending to the respective robot and a multi-robot common window showing information received from each robot.
5. A modular approach to multi-robot control
SciTech Connect
Anderson, R.J.; Lilly, K.W.
1996-03-01
The ability to rapidly command multi-robot behavior is crucial for the acceptance and effective utilization of multiple robot control. To achieve this, a modular- multiple robot control solution is being, pursued using the SMART modular control architecture. This paper investigates the development of a new dual-arm kinematics module (DUAL-KLN) which allows multiple robots, previously controlled as separate stand-alone systems, to be controlled as a coordinated multi-robot system. The DUAL-KIN module maps velocity and force information from a center point of interest on a grasped object to the tool centers of each grasping robot. Three-port network equations are used and mapped into the scattering operator domain in a computationally efficient form. Application examples of the DUAL-KLN module in multi-robot coordinated control are given.
6. An improved PSO-based approach with dynamic parameter tuning for cooperative multi-robot target searching in complex unknown environments
Cai, Yifan; Yang, Simon X.
2013-10-01
Target searching in complex unknown environments is a challenging aspect of multi-robot cooperation. In this paper, an improved particle swarm optimisation (PSO) based approach is proposed for a team of mobile robots to cooperatively search for targets in complex unknown environments. The improved cooperation rules for a multi-robot system are applied in the potential field function, which acts as the fitness function of the PSO. The main improvements are the district-difference degree and dynamic parameter tuning. In the simulation studies, various complex situations are investigated and compared to the previous research results. The results demonstrate that the proposed approach can enable the multi-robot system to accomplish the target searching tasks in complex unknown environments.
7. Cubature Information SMC-PHD for Multi-Target Tracking.
PubMed
Liu, Zhe; Wang, Zulin; Xu, Mai
2016-01-01
In multi-target tracking, the key problem lies in estimating the number and states of individual targets, in which the challenge is the time-varying multi-target numbers and states. Recently, several multi-target tracking approaches, based on the sequential Monte Carlo probability hypothesis density (SMC-PHD) filter, have been presented to solve such a problem. However, most of these approaches select the transition density as the importance sampling (IS) function, which is inefficient in a nonlinear scenario. To enhance the performance of the conventional SMC-PHD filter, we propose in this paper two approaches using the cubature information filter (CIF) for multi-target tracking. More specifically, we first apply the posterior intensity as the IS function. Then, we propose to utilize the CIF algorithm with a gating method to calculate the IS function, namely CISMC-PHD approach. Meanwhile, a fast implementation of the CISMC-PHD approach is proposed, which clusters the particles into several groups according to the Gaussian mixture components. With the constructed components, the IS function is approximated instead of particles. As a result, the computational complexity of the CISMC-PHD approach can be significantly reduced. The simulation results demonstrate the effectiveness of our approaches. PMID:27171088
8. Cubature Information SMC-PHD for Multi-Target Tracking.
PubMed
Liu, Zhe; Wang, Zulin; Xu, Mai
2016-05-09
In multi-target tracking, the key problem lies in estimating the number and states of individual targets, in which the challenge is the time-varying multi-target numbers and states. Recently, several multi-target tracking approaches, based on the sequential Monte Carlo probability hypothesis density (SMC-PHD) filter, have been presented to solve such a problem. However, most of these approaches select the transition density as the importance sampling (IS) function, which is inefficient in a nonlinear scenario. To enhance the performance of the conventional SMC-PHD filter, we propose in this paper two approaches using the cubature information filter (CIF) for multi-target tracking. More specifically, we first apply the posterior intensity as the IS function. Then, we propose to utilize the CIF algorithm with a gating method to calculate the IS function, namely CISMC-PHD approach. Meanwhile, a fast implementation of the CISMC-PHD approach is proposed, which clusters the particles into several groups according to the Gaussian mixture components. With the constructed components, the IS function is approximated instead of particles. As a result, the computational complexity of the CISMC-PHD approach can be significantly reduced. The simulation results demonstrate the effectiveness of our approaches.
9. Cubature Information SMC-PHD for Multi-Target Tracking
PubMed Central
Liu, Zhe; Wang, Zulin; Xu, Mai
2016-01-01
In multi-target tracking, the key problem lies in estimating the number and states of individual targets, in which the challenge is the time-varying multi-target numbers and states. Recently, several multi-target tracking approaches, based on the sequential Monte Carlo probability hypothesis density (SMC-PHD) filter, have been presented to solve such a problem. However, most of these approaches select the transition density as the importance sampling (IS) function, which is inefficient in a nonlinear scenario. To enhance the performance of the conventional SMC-PHD filter, we propose in this paper two approaches using the cubature information filter (CIF) for multi-target tracking. More specifically, we first apply the posterior intensity as the IS function. Then, we propose to utilize the CIF algorithm with a gating method to calculate the IS function, namely CISMC-PHD approach. Meanwhile, a fast implementation of the CISMC-PHD approach is proposed, which clusters the particles into several groups according to the Gaussian mixture components. With the constructed components, the IS function is approximated instead of particles. As a result, the computational complexity of the CISMC-PHD approach can be significantly reduced. The simulation results demonstrate the effectiveness of our approaches. PMID:27171088
10. Decentralized multi-robot simultaneous localization and mapping
Jaai, R.; Chopra, N.; Balachandran, B.; Karki, H.
2011-04-01
In the simultaneous localization and mapping (SLAM) problem, one addresses the problem of using mobile sensor platforms or robotic systems to map unknown environments while simultaneously localizing the mobile systems relative to the map. Applications include mapping in oil storage tanks, oil pipes, search and rescue operations, surveillance operations, exploration operations. In this effort, a previously proposed multi-robot localization algorithm is extended to implement SLAM. The decentralized algorithm is demonstrated to work in dynamic robot networks. Experimental and numerical studies conducted with multiple networked mobile platforms are also discussed to validate the analytical findings.
11. Multi-robot team design for real-world applications
SciTech Connect
Parker, L.E.
1996-10-01
Many of these applications are in dynamic environments requiring capabilities distributed in functionality, space, or time, and therefore often require teams of robots to work together. While much research has been done in recent years, current robotics technology is still far from achieving many of the real world applications. Two primary reasons for this technology gap are that (1) previous work has not adequately addressed the issues of fault tolerance and adaptivity in multi-robot teams, and (2) existing robotics research is often geared at specific applications and is not easily generalized to different, but related, applications. This paper addresses these issues by first describing the design issues of key importance in these real-world cooperative robotics applications: fault tolerance, reliability, adaptivity, and coherence. We then present a general architecture addressing these design issues (called ALLIANCE) that facilities multi-robot cooperation of small- to medium-sized teams in dynamic environments, performing missions composed of loosely coupled subtasks. We illustrate an implementation of ALLIANCE in a real-world application, called Bounding Overwatch, and then discuss how this architecture addresses our key design issues.
12. A Stigmergic Cooperative Multi-Robot Control Architecture
NASA Technical Reports Server (NTRS)
Howsman, Thomas G.; O'Neil, Daniel; Craft, Michael A.
2004-01-01
In nature, there are numerous examples of complex architectures constructed by relatively simple insects, such as termites and wasps, which cooperatively assemble their nests. A prototype cooperative multi-robot control architecture which may be suitable for the eventual construction of large space structures has been developed which emulates this biological model. Actions of each of the autonomous robotic construction agents are only indirectly coordinated, thus mimicking the distributed construction processes of various social insects. The robotic construction agents perform their primary duties stigmergically, i.e., without direct inter-agent communication and without a preprogrammed global blueprint of the final design. Communication and coordination between individual agents occurs indirectly through the sensed modifications that each agent makes to the structure. The global stigmergic building algorithm prototyped during the initial research assumes that the robotic builders only perceive the current state of the structure under construction. Simulation studies have established that an idealized form of the proposed architecture was indeed capable of producing representative large space structures with autonomous robots. This paper will explore the construction simulations in order to illustrate the multi-robot control architecture.
13. A Biologically Inspired Cooperative Multi-Robot Control Architecture
NASA Technical Reports Server (NTRS)
Howsman, Tom; Craft, Mike; ONeil, Daniel; Howell, Joe T. (Technical Monitor)
2002-01-01
A prototype cooperative multi-robot control architecture suitable for the eventual construction of large space structures has been developed. In nature, there are numerous examples of complex architectures constructed by relatively simple insects, such as termites and wasps, which cooperatively assemble their nests. The prototype control architecture emulates this biological model. Actions of each of the autonomous robotic construction agents are only indirectly coordinated, thus mimicking the distributed construction processes of various social insects. The robotic construction agents perform their primary duties stigmergically i.e., without direct inter-agent communication and without a preprogrammed global blueprint of the final design. Communication and coordination between individual agents occurs indirectly through the sensed modifications that each agent makes to the structure. The global stigmergic building algorithm prototyped during the initial research assumes that the robotic builders only perceive the current state of the structure under construction. Simulation studies have established that an idealized form of the proposed architecture was indeed capable of producing representative large space structures with autonomous robots. This paper will explore the construction simulations in order to illustrate the multi-robot control architecture.
14. Cooperative multi-robot observation of multiple moving targets
SciTech Connect
Parker, L.E.; Emmons, B.A.
1997-03-01
An important issue that arises in the automation of many security, surveillance, and reconnaissance tasks is that of monitoring, or observing, the movements of targets navigating in a bounded area of interest. A key research issue in these problems is that of sensor placement--determining where sensors should be located to maintain the targets in view. In complex applications of this type, the use of multiple sensors dynamically moving over time is required. In this paper, the authors investigate the sue of a cooperative team of autonomous sensor-based robots for multi-robot observation of multiple moving targets. They focus primarily on developing the distributed control strategies that allow the robot team to attempt to maximize the collective tie during which each object is being observed by at least one robot in the area of interest. The initial efforts in this problem address the aspects of distributed control in homogeneous robot teams with equivalent sensing and movement capabilities working in an uncluttered, bounded area. This paper first formalizes the problem, discusses related work, and then shows that this problem is NP-hard. They then present a distributed approximate approach to solving this problem that combines low-level multi-robot control with higher-level control.
15. Multi Robot Flocking Using Cooperative Control for Space Exploration
Chandran, Priya
2012-07-01
This paper aims at achieving flocking behavior of multi robot systems for space explorations. Cooperative control of unmanned vehicles is used in the survey of unknown environments. Distributed control of multiple vehicles achieves the objective of exploration of wide areas while avoiding obstacles on their path. Gradient based algorithm is used to obtain necessary attractive/repulsive force to maintain flock. Similar force is used to avoid obstacles, which may be present in the environment. Velocity consensus algorithm helps in maintaining the necessary geometry of the flock. A target agent specifies the group behavior for the flock. Two wheel differential robot model with second order dynamics is considered here. Robot motion is assumed to be on plane terrain.
16. Multi-robot motion control for cooperative observation
SciTech Connect
Parker, L.E.
1997-06-01
An important issue that arises in the automation of many security, surveillance, and reconnaissance tasks is that of monitoring (or observing) the movements of targets navigating in a bounded area of interest. A key research issue in these problems is that of sensor placement--determining where sensors should be located to maintain the targets in view. In complex applications involving limited-range sensors, the use of multiple sensors dynamically moving over time is required. In this paper, the authors investigate the use of a cooperative team of autonomous sensor-based robots for the observation of multiple moving targets. They focus primarily on developing the distributed control strategies that allow the robot team to attempt to minimize the total time in which targets escape observation by some robot team member in the area of interest. This paper first formalizes the problem and discusses related work. The authors then present a distributed approximate approach to solving this problem that combines low-level multi-robot control with higher-level reasoning control based on the ALLIANCE formalism. They analyze the effectiveness of the approach by comparing it to 3 other feasible algorithms for cooperative control, showing the superiority of the approach for a large class of problems.
17. Task-oriented multi-robot learning in behavior-based systems
SciTech Connect
Parker, L.E.
1996-12-31
A large application domain for multi-robot teams involves task- oriented missions, in which potentially heterogeneous robots must solve several distinct tasks. Previous research addressing this problem in multi-robot systems has largely focused on issues of efficiency, while ignoring the real-world situated robot needs of fault tolerance and adaptivity. This paper addresses this problem by developing an architecture called L-ALLIANCE that incorporates task- oriented action selection mechanisms into a behavior-based system, thus increasing the efficiency of robot team performance while maintaining the desirable characteristics of fault tolerance and adaptivity. We present our investigations of several competing control strategies and derive an approach that works well in a wide variety of multi-robot task-oriented mission scenarios. We provide a formal model of this technique to illustrate how it can be incorporated into any behavior-based system.
18. Motivation and Context-Based Multi-Robot Architecture for Dynamic Task, Role and Behavior Selections
Lee, Dong-Hyun; Kim, Jong-Hwan
This paper proposes a multi-robot coordination architecture for dynamic task, role and behavior selections. The proposed architecture employs the motivation of task, the utility of role, a probabilistic behavior selection and a team strategy for efficient multi-robot coordination. Multiple robots in a team can coordinate with each other by selecting appropriate task, role and behavior in adversarial and dynamic environment. The effectiveness of the proposed architecture is demonstrated in dynamic environment robot soccer by carrying out computer simulation and real environment.
19. Behavior-based multi-robot collaboration for autonomous construction tasks
NASA Technical Reports Server (NTRS)
Stroupe, Ashley; Huntsberger, Terry; Okon, Avi; Aghazarian, Hrand; Robinson, Matthew
2005-01-01
The Robot Construction Crew (RCC) is a heterogeneous multi-robot system for autonomous construction of a structure through assembly of Long components. The two robot team demonstrates component placement into an existing structure in a realistic environment. The task requires component acquisition, cooperative transport, and cooperative precision manipulation. A behavior-based architecture provides adaptability. The RCC approach minimizes computation, power, communication, and sensing for applicability to space-related construction efforts, but the techniques are applicable to terrestrial construction tasks.
20. Heterogeneous Multi-Robot System for Mapping Environmental Variables of Greenhouses.
PubMed
Roldán, Juan Jesús; Garcia-Aunon, Pablo; Garzón, Mario; de León, Jorge; Del Cerro, Jaime; Barrientos, Antonio
2016-01-01
The productivity of greenhouses highly depends on the environmental conditions of crops, such as temperature and humidity. The control and monitoring might need large sensor networks, and as a consequence, mobile sensory systems might be a more suitable solution. This paper describes the application of a heterogeneous robot team to monitor environmental variables of greenhouses. The multi-robot system includes both ground and aerial vehicles, looking to provide flexibility and improve performance. The multi-robot sensory system measures the temperature, humidity, luminosity and carbon dioxide concentration in the ground and at different heights. Nevertheless, these measurements can be complemented with other ones (e.g., the concentration of various gases or images of crops) without a considerable effort. Additionally, this work addresses some relevant challenges of multi-robot sensory systems, such as the mission planning and task allocation, the guidance, navigation and control of robots in greenhouses and the coordination among ground and aerial vehicles. This work has an eminently practical approach, and therefore, the system has been extensively tested both in simulations and field experiments. PMID:27376297
1. Heterogeneous Multi-Robot System for Mapping Environmental Variables of Greenhouses
PubMed Central
Roldán, Juan Jesús; Garcia-Aunon, Pablo; Garzón, Mario; de León, Jorge; del Cerro, Jaime; Barrientos, Antonio
2016-01-01
The productivity of greenhouses highly depends on the environmental conditions of crops, such as temperature and humidity. The control and monitoring might need large sensor networks, and as a consequence, mobile sensory systems might be a more suitable solution. This paper describes the application of a heterogeneous robot team to monitor environmental variables of greenhouses. The multi-robot system includes both ground and aerial vehicles, looking to provide flexibility and improve performance. The multi-robot sensory system measures the temperature, humidity, luminosity and carbon dioxide concentration in the ground and at different heights. Nevertheless, these measurements can be complemented with other ones (e.g., the concentration of various gases or images of crops) without a considerable effort. Additionally, this work addresses some relevant challenges of multi-robot sensory systems, such as the mission planning and task allocation, the guidance, navigation and control of robots in greenhouses and the coordination among ground and aerial vehicles. This work has an eminently practical approach, and therefore, the system has been extensively tested both in simulations and field experiments. PMID:27376297
2. Heterogeneous Multi-Robot System for Mapping Environmental Variables of Greenhouses.
PubMed
Roldán, Juan Jesús; Garcia-Aunon, Pablo; Garzón, Mario; de León, Jorge; Del Cerro, Jaime; Barrientos, Antonio
2016-07-01
The productivity of greenhouses highly depends on the environmental conditions of crops, such as temperature and humidity. The control and monitoring might need large sensor networks, and as a consequence, mobile sensory systems might be a more suitable solution. This paper describes the application of a heterogeneous robot team to monitor environmental variables of greenhouses. The multi-robot system includes both ground and aerial vehicles, looking to provide flexibility and improve performance. The multi-robot sensory system measures the temperature, humidity, luminosity and carbon dioxide concentration in the ground and at different heights. Nevertheless, these measurements can be complemented with other ones (e.g., the concentration of various gases or images of crops) without a considerable effort. Additionally, this work addresses some relevant challenges of multi-robot sensory systems, such as the mission planning and task allocation, the guidance, navigation and control of robots in greenhouses and the coordination among ground and aerial vehicles. This work has an eminently practical approach, and therefore, the system has been extensively tested both in simulations and field experiments.
3. Multi-Target State Extraction for the SMC-PHD Filter
PubMed Central
Si, Weijian; Wang, Liwei; Qu, Zhiyu
2016-01-01
The sequential Monte Carlo probability hypothesis density (SMC-PHD) filter has been demonstrated to be a favorable method for multi-target tracking. However, the time-varying target states need to be extracted from the particle approximation of the posterior PHD, which is difficult to implement due to the unknown relations between the large amount of particles and the PHD peaks representing potential target locations. To address this problem, a novel multi-target state extraction algorithm is proposed in this paper. By exploiting the information of measurements and particle likelihoods in the filtering stage, we propose a validation mechanism which aims at selecting effective measurements and particles corresponding to detected targets. Subsequently, the state estimates of the detected and undetected targets are performed separately: the former are obtained from the particle clusters directed by effective measurements, while the latter are obtained from the particles corresponding to undetected targets via clustering method. Simulation results demonstrate that the proposed method yields better estimation accuracy and reliability compared to existing methods. PMID:27322274
4. Multi-Target State Extraction for the SMC-PHD Filter.
PubMed
Si, Weijian; Wang, Liwei; Qu, Zhiyu
2016-01-01
The sequential Monte Carlo probability hypothesis density (SMC-PHD) filter has been demonstrated to be a favorable method for multi-target tracking. However, the time-varying target states need to be extracted from the particle approximation of the posterior PHD, which is difficult to implement due to the unknown relations between the large amount of particles and the PHD peaks representing potential target locations. To address this problem, a novel multi-target state extraction algorithm is proposed in this paper. By exploiting the information of measurements and particle likelihoods in the filtering stage, we propose a validation mechanism which aims at selecting effective measurements and particles corresponding to detected targets. Subsequently, the state estimates of the detected and undetected targets are performed separately: the former are obtained from the particle clusters directed by effective measurements, while the latter are obtained from the particles corresponding to undetected targets via clustering method. Simulation results demonstrate that the proposed method yields better estimation accuracy and reliability compared to existing methods. PMID:27322274
5. Multi-Target State Extraction for the SMC-PHD Filter.
PubMed
Si, Weijian; Wang, Liwei; Qu, Zhiyu
2016-06-17
The sequential Monte Carlo probability hypothesis density (SMC-PHD) filter has been demonstrated to be a favorable method for multi-target tracking. However, the time-varying target states need to be extracted from the particle approximation of the posterior PHD, which is difficult to implement due to the unknown relations between the large amount of particles and the PHD peaks representing potential target locations. To address this problem, a novel multi-target state extraction algorithm is proposed in this paper. By exploiting the information of measurements and particle likelihoods in the filtering stage, we propose a validation mechanism which aims at selecting effective measurements and particles corresponding to detected targets. Subsequently, the state estimates of the detected and undetected targets are performed separately: the former are obtained from the particle clusters directed by effective measurements, while the latter are obtained from the particles corresponding to undetected targets via clustering method. Simulation results demonstrate that the proposed method yields better estimation accuracy and reliability compared to existing methods.
6. A capillary based chemiluminscent multi-target immunoassay.
PubMed
Cao, Yuan-Cheng
2015-05-01
Renewed interest in capillary format immunoassays has lead to increasingly costly and complex approaches to preparation and readout. This study describes a simple multi-target method based on a capillary platform using horseradish peroxidase (HRP) labelled IgG to visualize an antibody antigen complex. When goat-anti-human IgG was employed as the probe and human IgG as target, the system allowed detection of target to less than 1 ng/mL using a standard detection approach. The capillaries were read visually or with a commercial grade CCD camera. Multi-target detection was demonstrated using a model system of rat-anti-mouse, goat-anti-human and mouse-anti-rat IgG. These probes were encoded to different locations in the capillary, providing a simple inexpensive approach to achieve multi-target assays.
7. Nonlinear robust controller design for multi-robot systems with unknown payloads
NASA Technical Reports Server (NTRS)
Song, Y. D.; Anderson, J. N.; Homaifar, A.; Lai, H. Y.
1992-01-01
This work is concerned with the control problem of a multi-robot system handling a payload with unknown mass properties. Force constraints at the grasp points are considered. Robust control schemes are proposed that cope with the model uncertainty and achieve asymptotic path tracking. To deal with the force constraints, a strategy for optimally sharing the task is suggested. This strategy basically consists of two steps. The first detects the robots that need help and the second arranges that help. It is shown that the overall system is not only robust to uncertain payload parameters, but also satisfies the force constraints.
8. Behavior-Based Multi-Robot Collaboration for Autonomous Construction Tasks
NASA Technical Reports Server (NTRS)
Stroupe, Ashley; Huntsberger, Terry; Okon, Avi; Aghazarian, Hrand; Robinson, Matthew
2005-01-01
We present a heterogeneous multi-robot system for autonomous construction of a structure through assembly of long components. Placement of a component within an existing structure in a realistic environment is demonstrated on a two-robot team. The task requires component acquisition, cooperative transport, and cooperative precision manipulation. Far adaptability, the system is designed as a behavior-based architecture. Far applicability to space-related construction efforts, computation, power, communication, and sensing are minimized, though the techniques developed are also applicable to terrestrial construction tasks.
9. A reinforcement learning trained fuzzy neural network controller for maintaining wireless communication connections in multi-robot systems
Zhong, Xu; Zhou, Yu
2014-05-01
This paper presents a decentralized multi-robot motion control strategy to facilitate a multi-robot system, comprised of collaborative mobile robots coordinated through wireless communications, to form and maintain desired wireless communication coverage in a realistic environment with unstable wireless signaling condition. A fuzzy neural network controller is proposed for each robot to maintain the wireless link quality with its neighbors. The controller is trained through reinforcement learning to establish the relationship between the wireless link quality and robot motion decision, via consecutive interactions between the controller and environment. The tuned fuzzy neural network controller is applied to a multi-robot deployment process to form and maintain desired wireless communication coverage. The effectiveness of the proposed control scheme is verified through simulations under different wireless signal propagation conditions.
10. Sequential measurement-driven multi-target Bayesian filter
Liu, Zong-xiang; Li, Li-juan; Xie, Wei-xin; Li, Liang-qun
2015-12-01
Bayesian filter is an efficient approach for multi-target tracking in the presence of clutter. Recently, considerable attention has been focused on probability hypothesis density (PHD) filter, which is an intensity approximation of the multi-target Bayesian filter. However, PHD filter is inapplicable to cases in which target detection probability is low. The use of this filter may result in a delay in data processing because it handles received measurements periodically, once every sampling period. To track multiple targets in the case of low detection probability and to handle received measurements in real time, we propose a sequential measurement-driven Bayesian filter. The proposed filter jointly propagates the marginal distributions and existence probabilities of each target in the filter recursion. We also present an implementation of the proposed filter for linear Gaussian models. Simulation results demonstrate that the proposed filter can more accurately track multiple targets than the Gaussian mixture PHD filter or cardinalized PHD filter.
11. Towards Human-Friendly Efficient Control of Multi-Robot Teams
NASA Technical Reports Server (NTRS)
Stoica, Adrian; Theodoridis, Theodoros; Barrero, David F.; Hu, Huosheng; McDonald-Maiers, Klaus
2013-01-01
This paper explores means to increase efficiency in performing tasks with multi-robot teams, in the context of natural Human-Multi-Robot Interfaces (HMRI) for command and control. The motivating scenario is an emergency evacuation by a transport convoy of unmanned ground vehicles (UGVs) that have to traverse, in shortest time, an unknown terrain. In the experiments the operator commands, in minimal time, a group of rovers through a maze. The efficiency of performing such tasks depends on both, the levels of robots' autonomy, and the ability of the operator to command and control the team. The paper extends the classic framework of levels of autonomy (LOA), to levels/hierarchy of autonomy characteristic of Groups (G-LOA), and uses it to determine new strategies for control. An UGVoriented command language (UGVL) is defined, and a mapping is performed from the human-friendly gesture-based HMRI into the UGVL. The UGVL is used to control a team of 3 robots, exploring the efficiency of different G-LOA; specifically, by (a) controlling each robot individually through the maze, (b) controlling a leader and cloning its controls to followers, and (c) controlling the entire group. Not surprisingly, commands at increased G-LOA lead to a faster traverse, yet a number of aspects are worth discussing in this context.
12. Cooperative motion control for multi-target observation
SciTech Connect
Parker, L.E.
1997-08-01
An important issue that arises in the automation of many security, surveillance, and reconnaissance tasks is that of monitoring (or observing) the movements of targets navigating in a bounded area of interest. A key research issue in these problems is that of sensor placement--determining where sensors should be located to maintain the targets in view. In complex applications involving limited-range sensors, the use of multiple sensors dynamically moving over time is required. In this paper, the author investigates the use of a cooperative team of autonomous sensor-based robots for the observation of multiple moving targets. The focus is primarily on developing the distributed control strategies that allow the robot team to attempt to minimize the total time in which targets escape observation by some robot team member in the area of interest. This paper first formalizes the problem and discusses related work. The author then presents a distributed approximate approach to solving this problem that combines low-level multi-robot control with higher-level reasoning control based on the ALLIANCE formalism. The effectiveness of the approach is analyzed by comparing it to three other feasible algorithms for cooperative control, showing the superiority of the approach for a large class of problems.
13. A Case Study of Collaboration with Multi-Robots and Its Effect on Children's Interaction
ERIC Educational Resources Information Center
Hwang, Wu-Yuin; Wu, Sheng-Yi
2014-01-01
Learning how to carry out collaborative tasks is critical to the development of a student's capacity for social interaction. In this study, a multi-robot system was designed for students. In three different scenarios, students controlled robots in order to move dice; we then examined their collaborative strategies and their behavioral…
14. Dynamical Behavior of Multi-Robot Systems Using Lattice Gas Automata
SciTech Connect
Cameron, S.M.; Robinett, R.; Stantz, K.M.; Trahan, M.W.; Wagner, J.S.
1999-03-11
Recent attention has been given to the deployment of an adaptable sensor array realized by multi-robotic systems. Our group has been studying the collective behavior of autonomous, multi-agent systems and their applications in the area of remote-sensing and emerging threats. To accomplish such tasks, an interdisciplinary research effort at Sandia National Laboratories are conducting tests in the fields of sensor technology, robotics, and multi-robotic and multi-agents architectures. Our goal is to coordinate a constellation of point sensors that optimizes spatial coverage and multivariate signal analysis using unmanned robotic vehicles (e.g., RATLERs, Robotic All-ten-sin Lunar Exploration Rover-class vehicles). Overall design methodology is to evolve complex collective behaviors realized through simple interaction (kinetic) physics and artificial intelligence to enable real-time operational responses to emerging threats. This paper focuses on our recent work understanding the dynamics of many-body systems using the physics-based hydrodynamic model of lattice gas automata. Three design features are investigated. One, for single-speed robots, a hexagonal nearest-neighbor interaction topology is necessary to preserve standard hydrodynamic flow. Two, adaptability, defined by the swarm's deformation rate, can be controlled through the hydrodynamic viscosity term, which, in turn, is defined by the local robotic interaction rules. Three, due to the inherent non-linearity of the dynamical equations describing large ensembles, development of stability criteria ensuring convergence to equilibrium states is developed by scaling information flow rates relative to a swarm's hydrodynamic flow rate. An initial test case simulates a swarm of twenty-five robots that maneuvers past an obstacle while following a moving target. A genetic algorithm optimizes applied nearest-neighbor forces in each of five spatial regions distributed over the simulation domain. Armed with knowledge, the
15. L-ALLIANCE: a mechanism for adaptive action selection in heterogeneous multi-robot teams
SciTech Connect
Parker, L.E.
1995-11-01
In practical applications of robotics, it is usually quite difficult, if not impossible, for the system designer to fully predict the environmental states in which the robots will operate. The complexity of the problem is further increased when dealing with teams of robots which themselves may be incompletely known and characterized in advance. It is thus highly desirable for robot teams to be able to adapt their performance during the mission due to changes in the environment, or to changes in other robot team members. In previous work, we introduced a behavior-based mechanism called the ALLIANCE architecture -- that facilitates the fault tolerant cooperative control of multi-robot teams. However, this previous work did not address the issue of how to dynamically update the control parameters during a mission to adapt to ongoing changes in the environment or in the robot team, and to ensure the efficiency of the collective team actions. In this paper, we address this issue by proposing the L-ALLIANCE mechanism, which defines an automated method whereby robots can use knowledge learned from previous experience to continually improve their collective action selection when working on missions composed of loosely coupled, discrete subtasks. This ability to dynamically update robotic control parameters provides a number of distinct advantages: it alleviates the need for human tuning of control parameters, it facilitates the use of custom-designed multi-robot teams for any given application, it improves the efficiency of the mission performance, and It allows robots to continually adapt their performance over time due to changes in the robot team and/or the environment. We describe the L-ALLIANCE mechanism, present the results of various alternative update strategies we investigated, present the formal model of the L-ALLIANCE mechanism, and present the results of a simple proof of concept implementation on a small team of heterogeneous mobile robots.
16. An inexpensive multi-target carousel for PLD systems
Clark, J. H.; Weston, R. G.
1996-05-01
Pulsed laser deposition (PLD) of thin films is widely regarded as the best growth technique for the development of novel multilayer structures and devices; in particular, high-temperature superconducting (HTS) devices. To achieve this, it is essential to have the capability to deposit material from different targets sequentially in situ, namely without opening the deposition chamber. Here we present details of a multi-target carousel for multilayer depositions. It has been designed to allow target rotation and selection without any latching mechanisms or in-vacuum motors and is also suitable for use in automated systems. Additionally, it is robust, relatively inexpensive, compact, scalable and simple to build. Single-layer 0957-0233/7/5/014/img1 (YBCO) thin films grown using the multi-target carousel show no reduction in quality compared to ones grown using a single-target system. Moreover, the carousel has successfully been used to deposit Ag onto YBCO in situ to realize low-resistance YBCO - Ag - YBCO resistor structures; which would have been impossible with ex situ metallization. In addition, in situ homo-epitaxial MgO buffer layers on (100) MgO substrates prior to the deposition of YBCO have been investigated as a means of improving HTS film quality.
17. Multi-Target Detection from Full-Waveform Airborne Laser Scanner Using Phd Filter
Fuse, T.; Hiramatsu, D.; Nakanishi, W.
2016-06-01
We propose a new technique to detect multiple targets from full-waveform airborne laser scanner. We introduce probability hypothesis density (PHD) filter, a type of Bayesian filtering, by which we can estimate the number of targets and their positions simultaneously. PHD filter overcomes some limitations of conventional Gaussian decomposition method; PHD filter doesn't require a priori knowledge on the number of targets, assumption of parametric form of the intensity distribution. In addition, it can take a similarity between successive irradiations into account by modelling relative positions of the same targets spatially. Firstly we explain PHD filter and particle filter implementation to it. Secondly we formulate the multi-target detection problem on PHD filter by modelling components and parameters within it. At last we conducted the experiment on real data of forest and vegetation, and confirmed its ability and accuracy.
18. Mid-course multi-target tracking using continuous representation
NASA Technical Reports Server (NTRS)
1991-01-01
The thrust of this paper is to present a new approach to multi-target tracking for the mid-course stage of the Strategic Defense Initiative (SDI). This approach is based upon a continuum representation of a cluster of flying objects. We assume that the velocities of the flying objects can be embedded into a smooth velocity field. This assumption is based upon the impossibility of encounters in a high density cluster between the flying objects. Therefore, the problem is reduced to an identification of a moving continuum based upon consecutive time frame observations. In contradistinction to the previous approaches, here each target is considered as a center of a small continuous neighborhood subjected to a local-affine transformation, and therefore, the target trajectories do not mix. Obviously, their mixture in plane of sensor view is apparent. The approach is illustrated by an example.
19. Bayesian multi-target tracking and sequential object recognition
Armbruster, Walter
2008-04-01
The paper considers the following problem: given a 3D model of a reference target and a sequence of images of a 3D scene, identify the object in the scene most likely to be the reference target and determine its current pose. Finding the best match in each frame independently of previous decisions is not optimal, since past information is ignored. Our solution concept uses a novel Bayesian framework for multi target tracking and object recognition to define and sequentially update the probability that the reference target is any one of the tracked objects. The approach is applied to problems of automatic lock-on and missile guidance using a laser radar seeker. Field trials have resulted in high target hit probabilities despite low resolution imagery and temporarily highly occluded targets.
20. Antibacterial Drug Leads: DNA and Enzyme Multi-Targeting
PubMed Central
Zhu, Wei; Wang, Yang; Li, Kai; Gao, Jian; Huang, Chun-Hsiang; Chen, Chun-Chi; Ko, Tzu-Ping; Zhang, Yonghui; Guo, Rey-Ting; Oldfield, Eric
2015-01-01
We report the results of an investigation of the activity of a series of amidine and bisamidine compounds against Staphylococcus aureus and Escherichia coli. The most active compounds bound to an AT-rich DNA dodecamer (CGCGAATTCGCG)2, and using DSC were found to increase the melting transition by up to 24 °C. Several compounds also inhibited undecaprenyl diphosphate synthase (UPPS) with IC50 values of 100–500 nM and we found good correlations (R2 = 0.89, S. aureus; R2 = 0.79, E. coli)) between experimental and predicted cell growth inhibition by using DNA ΔTm and UPPS IC50 experimental results together with 1 computed descriptor. We also solved the structures of three bisamidines binding to DNA as well as three UPPS structures. Overall, the results are of general interest in the context of the development of resistance-resistant antibiotics that involve multi-targeting. PMID:25574764
1. Deterministic optimal maneuver strategy for multi-target missions
NASA Technical Reports Server (NTRS)
Dwivedi, N. P.
1975-01-01
This paper presents an optimal strategy for making impulsive correction to a multi-target trajectory by a single maneuver. The concept of an optimal maneuver time is introduced. The choice of suitable weighting functions is explored to enable one to properly translate the subjective desire of mission success into an objective cost function whose minimization yields the optimal strategy. It is shown that a number of strategies previously formulated are derivable from one general expression. A number of other interesting properties of the optimal strategy are described. Numerical results are presented for a typical two-target mission. It is shown that the strategy formulated is optimal. For some perturbations, there exists an optimal maneuver time different from the time of initiation of the perturbation. That is, the physical properties of the trajectory can be exploited to select the optimal time of making a corrective maneuver.
2. Development and human factors analysis of an augmented reality interface for multi-robot tele-operation and control
Lee, Sam; Lucas, Nathan P.; Ellis, R. Darin; Pandya, Abhilash
2012-06-01
This paper presents a seamlessly controlled human multi-robot system comprised of ground and aerial robots of semiautonomous nature for source localization tasks. The system combines augmented reality interfaces capabilities with human supervisor's ability to control multiple robots. The role of this human multi-robot interface is to allow an operator to control groups of heterogeneous robots in real time in a collaborative manner. It used advanced path planning algorithms to ensure obstacles are avoided and that the operators are free for higher-level tasks. Each robot knows the environment and obstacles and can automatically generate a collision-free path to any user-selected target. It displayed sensor information from each individual robot directly on the robot in the video view. In addition, a sensor data fused AR view is displayed which helped the users pin point source information or help the operator with the goals of the mission. The paper studies a preliminary Human Factors evaluation of this system in which several interface conditions are tested for source detection tasks. Results show that the novel Augmented Reality multi-robot control (Point-and-Go and Path Planning) reduced mission completion times compared to the traditional joystick control for target detection missions. Usability tests and operator workload analysis are also investigated.
3. Performance impact of mutation operators of a subpopulation-based genetic algorithm for multi-robot task allocation problems.
PubMed
Liu, Chun; Kroll, Andreas
2016-01-01
Multi-robot task allocation determines the task sequence and distribution for a group of robots in multi-robot systems, which is one of constrained combinatorial optimization problems and more complex in case of cooperative tasks because they introduce additional spatial and temporal constraints. To solve multi-robot task allocation problems with cooperative tasks efficiently, a subpopulation-based genetic algorithm, a crossover-free genetic algorithm employing mutation operators and elitism selection in each subpopulation, is developed in this paper. Moreover, the impact of mutation operators (swap, insertion, inversion, displacement, and their various combinations) is analyzed when solving several industrial plant inspection problems. The experimental results show that: (1) the proposed genetic algorithm can obtain better solutions than the tested binary tournament genetic algorithm with partially mapped crossover; (2) inversion mutation performs better than other tested mutation operators when solving problems without cooperative tasks, and the swap-inversion combination performs better than other tested mutation operators/combinations when solving problems with cooperative tasks. As it is difficult to produce all desired effects with a single mutation operator, using multiple mutation operators (including both inversion and swap) is suggested when solving similar combinatorial optimization problems.
4. Performance impact of mutation operators of a subpopulation-based genetic algorithm for multi-robot task allocation problems.
PubMed
Liu, Chun; Kroll, Andreas
2016-01-01
Multi-robot task allocation determines the task sequence and distribution for a group of robots in multi-robot systems, which is one of constrained combinatorial optimization problems and more complex in case of cooperative tasks because they introduce additional spatial and temporal constraints. To solve multi-robot task allocation problems with cooperative tasks efficiently, a subpopulation-based genetic algorithm, a crossover-free genetic algorithm employing mutation operators and elitism selection in each subpopulation, is developed in this paper. Moreover, the impact of mutation operators (swap, insertion, inversion, displacement, and their various combinations) is analyzed when solving several industrial plant inspection problems. The experimental results show that: (1) the proposed genetic algorithm can obtain better solutions than the tested binary tournament genetic algorithm with partially mapped crossover; (2) inversion mutation performs better than other tested mutation operators when solving problems without cooperative tasks, and the swap-inversion combination performs better than other tested mutation operators/combinations when solving problems with cooperative tasks. As it is difficult to produce all desired effects with a single mutation operator, using multiple mutation operators (including both inversion and swap) is suggested when solving similar combinatorial optimization problems. PMID:27588254
5. Adapting an Ant Colony Metaphor for Multi-Robot Chemical Plume Tracing
PubMed Central
Meng, Qing-Hao; Yang, Wei-Xing; Wang, Yang; Li, Fei; Zeng, Ming
2012-01-01
We consider chemical plume tracing (CPT) in time-varying airflow environments using multiple mobile robots. The purpose of CPT is to approach a gas source with a previously unknown location in a given area. Therefore, the CPT could be considered as a dynamic optimization problem in continuous domains. The traditional ant colony optimization (ACO) algorithm has been successfully used for combinatorial optimization problems in discrete domains. To adapt the ant colony metaphor to the multi-robot CPT problem, the two-dimension continuous search area is discretized into grids and the virtual pheromone is updated according to both the gas concentration and wind information. To prevent the adapted ACO algorithm from being prematurely trapped in a local optimum, the upwind surge behavior is adopted by the robots with relatively higher gas concentration in order to explore more areas. The spiral surge (SS) algorithm is also examined for comparison. Experimental results using multiple real robots in two indoor natural ventilated airflow environments show that the proposed CPT method performs better than the SS algorithm. The simulation results for large-scale advection-diffusion plume environments show that the proposed method could also work in outdoor meandering plume environments. PMID:22666056
6. Multi-target pursuit formation of multi-agent systems
Yan, Jing; Guan, Xin-Ping; Luo, Xiao-Yuan
2011-01-01
The main goal of this paper is to design a team of agents that can accomplish multi-target pursuit formation using a developed leader—follower strategy. It is supposed that every target can accept a certain number of agents. First, each agent can automatically choose its target based on the distance from the agent to the target and the number of agents accepted by the target. In view of the fact that all agents are randomly dispersed in the workplace at the initial time, we present a numbering strategy for them. During the movement of agents, not every agent can always obtain pertinent state information about the targets. So, a developed leader—follower strategy and a pursuit formation algorithm are proposed. Under the proposed method, agents with the same target can maintain a circle formation. Furthermore, it turns out that the pursuit formation algorithm for agents to the desired formation is convergent. Simulation studies are provided to illustrate the effectiveness of the proposed method.
7. Multi-Target Tracking by Discrete-Continuous Energy Minimization.
PubMed
Milan, Anton; Schindler, Konrad; Roth, Stefan
2016-10-01
The task of tracking multiple targets is often addressed with the so-called tracking-by-detection paradigm, where the first step is to obtain a set of target hypotheses for each frame independently. Tracking can then be regarded as solving two separate, but tightly coupled problems. The first is to carry out data association, i.e., to determine the origin of each of the available observations. The second problem is to reconstruct the actual trajectories that describe the spatio-temporal motion pattern of each individual target. The former is inherently a discrete problem, while the latter should intuitively be modeled in continuous space. Having to deal with an unknown number of targets, complex dependencies, and physical constraints, both are challenging tasks on their own and thus most previous work focuses on one of these subproblems. Here, we present a multi-target tracking approach that explicitly models both tasks as minimization of a unified discrete-continuous energy function. Trajectory properties are captured through global label costs, a recent concept from multi-model fitting, which we introduce to tracking. Specifically, label costs describe physical properties of individual tracks, e.g., linear and angular dynamics, or entry and exit points. We further introduce pairwise label costs to describe mutual interactions between targets in order to avoid collisions. By choosing appropriate forms for the individual energy components, powerful discrete optimization techniques can be leveraged to address data association, while the shapes of individual trajectories are updated by gradient-based continuous energy minimization. The proposed method achieves state-of-the-art results on diverse benchmark sequences.
8. Phenolic thio- and selenosemicarbazones as multi-target drugs.
PubMed
Calcatierra, Verónica; López, Óscar; Fernández-Bolaños, José G; Plata, Gabriela B; Padrón, José M
2015-04-13
A series of isosteric phenolic thio- and selenosemicarbazones have been obtained by condensation of naturally-occurring phenolic aldehydes and thio(seleno)semicarbazides. Title compounds were designed as potential multi-target drugs, and a series of structure-activity relationships could be established upon their in vitro assays: antioxidant activity, α-glucosidase inhibition and antiproliferative activity against six human tumor cell lines: A549 (non-small cell lung), HBL-100 (breast), HeLa (cervix), SW1573 (non-small cell lung), T-47D (breast) and WiDr (colon). For the antiradical activity, selenium atom and 2 or 3 phenolic hydroxyl groups proved to be essential motifs; remarkably, the compound with the most potent activity, with a trihydroxyphenyl scaffold (EC50 = 4.87 ± 1.57 μM) was found to be stronger than natural hydroxytyrosol, a potent antioxidant present in olive oil (EC50 = 13.80 ± 1.41 μM). Furthermore, one of the thiosemicarbazones was found to be a strong non-competitive inhibitor of α-glucosidase (Ki = 9.6 ± 1.6 μM), with an 8-fold increase in activity compared to acarbose (Ki = 77.9 ± 11.4 μM), marketed for the treatment of type-2 diabetes. Most of the synthesized compounds also exhibited relevant antiproliferative activities; in particular, seleno derivatives showed GI50 values lower than 6.0 μM for all the tested cell lines; N-naphthyl mono- and dihydroxylated derivatives behaved as more potent antiproliferative agents than 5-fluorouracil or cisplatin. PMID:25752525
9. A Multi-Robot Sense-Act Approach to Lead to a Proper Acting in Environmental Incidents.
PubMed
Conesa-Muñoz, Jesús; Valente, João; Del Cerro, Jaime; Barrientos, Antonio; Ribeiro, Angela
2016-01-01
Many environmental incidents affect large areas, often in rough terrain constrained by natural obstacles, which makes intervention difficult. New technologies, such as unmanned aerial vehicles, may help address this issue due to their suitability to reach and easily cover large areas. Thus, unmanned aerial vehicles may be used to inspect the terrain and make a first assessment of the affected areas; however, nowadays they do not have the capability to act. On the other hand, ground vehicles rely on enough power to perform the intervention but exhibit more mobility constraints. This paper proposes a multi-robot sense-act system, composed of aerial and ground vehicles. This combination allows performing autonomous tasks in large outdoor areas by integrating both types of platforms in a fully automated manner. Aerial units are used to easily obtain relevant data from the environment and ground units use this information to carry out interventions more efficiently. This paper describes the platforms and sensors required by this multi-robot sense-act system as well as proposes a software system to automatically handle the workflow for any generic environmental task. The proposed system has proved to be suitable to reduce the amount of herbicide applied in agricultural treatments. Although herbicides are very polluting, they are massively deployed on complete agricultural fields to remove weeds. Nevertheless, the amount of herbicide required for treatment is radically reduced when it is accurately applied on patches by the proposed multi-robot system. Thus, the aerial units were employed to scout the crop and build an accurate weed distribution map which was subsequently used to plan the task of the ground units. The whole workflow was executed in a fully autonomous way, without human intervention except when required by Spanish law due to safety reasons. PMID:27517934
10. A Multi-Robot Sense-Act Approach to Lead to a Proper Acting in Environmental Incidents
PubMed Central
Conesa-Muñoz, Jesús; Valente, João; del Cerro, Jaime; Barrientos, Antonio; Ribeiro, Angela
2016-01-01
Many environmental incidents affect large areas, often in rough terrain constrained by natural obstacles, which makes intervention difficult. New technologies, such as unmanned aerial vehicles, may help address this issue due to their suitability to reach and easily cover large areas. Thus, unmanned aerial vehicles may be used to inspect the terrain and make a first assessment of the affected areas; however, nowadays they do not have the capability to act. On the other hand, ground vehicles rely on enough power to perform the intervention but exhibit more mobility constraints. This paper proposes a multi-robot sense-act system, composed of aerial and ground vehicles. This combination allows performing autonomous tasks in large outdoor areas by integrating both types of platforms in a fully automated manner. Aerial units are used to easily obtain relevant data from the environment and ground units use this information to carry out interventions more efficiently. This paper describes the platforms and sensors required by this multi-robot sense-act system as well as proposes a software system to automatically handle the workflow for any generic environmental task. The proposed system has proved to be suitable to reduce the amount of herbicide applied in agricultural treatments. Although herbicides are very polluting, they are massively deployed on complete agricultural fields to remove weeds. Nevertheless, the amount of herbicide required for treatment is radically reduced when it is accurately applied on patches by the proposed multi-robot system. Thus, the aerial units were employed to scout the crop and build an accurate weed distribution map which was subsequently used to plan the task of the ground units. The whole workflow was executed in a fully autonomous way, without human intervention except when required by Spanish law due to safety reasons. PMID:27517934
11. A Multi-Robot Sense-Act Approach to Lead to a Proper Acting in Environmental Incidents.
PubMed
Conesa-Muñoz, Jesús; Valente, João; Del Cerro, Jaime; Barrientos, Antonio; Ribeiro, Angela
2016-08-10
Many environmental incidents affect large areas, often in rough terrain constrained by natural obstacles, which makes intervention difficult. New technologies, such as unmanned aerial vehicles, may help address this issue due to their suitability to reach and easily cover large areas. Thus, unmanned aerial vehicles may be used to inspect the terrain and make a first assessment of the affected areas; however, nowadays they do not have the capability to act. On the other hand, ground vehicles rely on enough power to perform the intervention but exhibit more mobility constraints. This paper proposes a multi-robot sense-act system, composed of aerial and ground vehicles. This combination allows performing autonomous tasks in large outdoor areas by integrating both types of platforms in a fully automated manner. Aerial units are used to easily obtain relevant data from the environment and ground units use this information to carry out interventions more efficiently. This paper describes the platforms and sensors required by this multi-robot sense-act system as well as proposes a software system to automatically handle the workflow for any generic environmental task. The proposed system has proved to be suitable to reduce the amount of herbicide applied in agricultural treatments. Although herbicides are very polluting, they are massively deployed on complete agricultural fields to remove weeds. Nevertheless, the amount of herbicide required for treatment is radically reduced when it is accurately applied on patches by the proposed multi-robot system. Thus, the aerial units were employed to scout the crop and build an accurate weed distribution map which was subsequently used to plan the task of the ground units. The whole workflow was executed in a fully autonomous way, without human intervention except when required by Spanish law due to safety reasons.
12. Powerful inner/outer controlled multi-target magnetic nanoparticle drug carrier prepared by liquid photo-immobilization
Guan, Yan-Qing; Zheng, Zhe; Huang, Zheng; Li, Zhibin; Niu, Shuiqin; Liu, Jun-Ming
2014-05-01
Nanomagnetic materials offer exciting avenues for advancing cancer therapies. Most researches have focused on efficient delivery of drugs in the body by incorporating various drug molecules onto the surface of nanomagnetic particles. The challenge is how to synthesize low toxic nanocarriers with multi-target drug loading. The cancer cell death mechanisms associated with those nanocarriers remain unclear either. Following the cell biology mechanisms, we develop a liquid photo-immobilization approach to attach doxorubicin, folic acid, tumor necrosis factor-α, and interferon-γ onto the oleic acid molecules coated Fe3O4 magnetic nanoparticles to prepare a kind of novel inner/outer controlled multi-target magnetic nanoparticle drug carrier. In this work, this approach is demonstrated by a variety of structural and biomedical characterizations, addressing the anti-cancer effects in vivo and in vitro on the HeLa, and it is highly efficient and powerful in treating cancer cells in a valuable programmed cell death mechanism for overcoming drug resistance.
13. Powerful inner/outer controlled multi-target magnetic nanoparticle drug carrier prepared by liquid photo-immobilization
PubMed Central
Guan, Yan-Qing; Zheng, Zhe; Huang, Zheng; Li, Zhibin; Niu, Shuiqin; Liu, Jun-Ming
2014-01-01
Nanomagnetic materials offer exciting avenues for advancing cancer therapies. Most researches have focused on efficient delivery of drugs in the body by incorporating various drug molecules onto the surface of nanomagnetic particles. The challenge is how to synthesize low toxic nanocarriers with multi-target drug loading. The cancer cell death mechanisms associated with those nanocarriers remain unclear either. Following the cell biology mechanisms, we develop a liquid photo-immobilization approach to attach doxorubicin, folic acid, tumor necrosis factor-α, and interferon-γ onto the oleic acid molecules coated Fe3O4 magnetic nanoparticles to prepare a kind of novel inner/outer controlled multi-target magnetic nanoparticle drug carrier. In this work, this approach is demonstrated by a variety of structural and biomedical characterizations, addressing the anti-cancer effects in vivo and in vitro on the HeLa, and it is highly efficient and powerful in treating cancer cells in a valuable programmed cell death mechanism for overcoming drug resistance. PMID:24845203
14. Novel multi-targeted polymerase chain reaction for diagnosis of presumed tubercular uveitis
PubMed Central
2013-01-01
Background The objective of this study was to report the use of multi-targeted polymerase chain reaction (PCR) in the diagnosis of presumed tubercular uveitis. Multi-targeted PCR using three targets specific for Mycobacterium tuberculosis, i.e., IS6110, MPB64, and protein b, was performed on intraocular fluid samples of 25 subjects. Nine had presumed tubercular uveitis, six had intraocular inflammation secondary to a nontubercular etiology (disease controls), and ten had no evidence of intraocular inflammation (normal controls). As described previously, response to antitubercular therapy was considered as the gold standard. Results Multi-targeted PCR was positive in seven out of nine patients with presumed tubercular uveitis and negative in all normal and disease controls. The sensitivity and specificity were 77.77% and 100%, respectively. For the diagnosis of presumed tubercular uveitis, multi-targeted PCR had a positive predictive value of 100% and a negative predictive value of 88.88%. Conclusion Multi-targeted PCR can be a valuable tool for diagnosing presumed tubercular uveitis. PMID:23514226
15. Enhanced bandwidth of a microstrip antenna using a parasitic mushroom-like metamaterial structure for multi-robot cooperative navigation
Lee, Cherl-Hee; Lee, Jonghun; Kim, Yoon-Gu; An, Jinung
2015-01-01
The broadband design of a microstrip patch antenna is presented and experimentally studied for multi-robot cooperation. A parasitic mushroom-like metamaterial (MTM) patch close to a microstrip top patch is excited through gap-coupling, thereby producing a resonance frequency. Because of the design, the resonance frequency of the parasitic MTM patch is adjacent to that of the main patch, and the presented antenna can achieve an enhanced bandwidth of 450 MHz, which is about two times the bandwidth of a conventional patch antenna without the MTM parasitic patch. The error rate of packet transmissions for measuring the distance between a leader robot and a follower robot was also improved by almost two-fold.
16. Piezo-microfluidic transport system for multi-targets biochip detections
Li, Chia-Chin; Wang, Pei-Wen; Lee, Chih-Kung
2016-03-01
Detecting minute trace of interferon-gamma and various bio-markers by using a single biochip was adopted as a platform to examine the technology advancements presented. As bio-detection faces the restriction that only very small quantity of specimen is available, ways to make the best use of the sample available are a must. Since samples concentration will affect the binding rate of an immunoassay, the testing order will become an influencing factor if multiple biomarkers testing situation are needed by using only a single trace of sample. More specifically, if we test disease A first and then detect disease B using the sample just been measured by testing disease A, we most likely will get different results if we reverse the testing order. With an attempt to examine and maybe resolve the issues mentioned above, a micro-fluid control system was developed. The design requirements not only ask for microfluidic control but also demand the system developed has the potential to be integrated within the biochip once its performance is verified. A piezo-vibrating system that can generate traveling waves for microfluidic control was chosen due to its versatility and large force to volume ratio. A simulation software COMSOL was adopted first to predict the microfluidic behavior of the two-mode excited piezo-microfluidic transport system. Secondly, fluorescent particles was used to analyze the microfluidic behavior of system fabricated based on the simulation. Finally, Electrochemistry Impedance Spectroscopy (EIS) was implemented to verify the performance and extendibility of this newly developed system for multi-target detections.
17. Multi-Target Tracking Based on Multi-Bernoulli Filter with Amplitude for Unknown Clutter Rate.
PubMed
Yuan, Changshun; Wang, Jun; Lei, Peng; Bi, Yanxian; Sun, Zhongsheng
2015-01-01
Knowledge of the clutter rate is of critical importance in multi-target Bayesian tracking. However, estimating the clutter rate is a difficult problem in practice. In this paper, an improved multi-Bernoulli filter based on random finite sets for multi-target Bayesian tracking accommodating non-linear dynamic and measurement models, as well as unknown clutter rate, is proposed for radar sensors. The proposed filter incorporates the amplitude information into the state and measurement spaces to improve discrimination between actual targets and clutters, while adaptively generating the new-born object random finite sets using the measurements to eliminate reliance on prior random finite sets. A sequential Monte-Carlo implementation of the proposed filter is presented, and simulations are used to demonstrate the proposed filter's improvements in estimation accuracy of the target number and corresponding multi-target states, as well as the clutter rate. PMID:26690148
18. Random finite set multi-target trackers: stochastic geometry for space situational awareness
Vo, Ba-Ngu; Vo, Ba-Tuong
2015-05-01
This paper describes the recent development in the random finite set RFS paradigm in multi-target tracking. Over the last decade the Probability Hypothesis Density filter has become synonymous with the RFS approach. As result the PHD filter is often wrongly used as a performance benchmark for the RFS approach. Since there is a suite of RFS-based multi-target tracking algorithms, benchmarking tracking performance of the RFS approach by using the PHD filter, the cheapest of these, is misleading. Such benchmarking should be performed with more sophisticated RFS algorithms. In this paper we outline the high-performance RFS-based multi-target trackers such that the Generalized Labled Multi-Bernoulli filter, and a number of efficient approximations and discuss extensions and applications of these filters. Applications to space situational awareness are discussed.
19. Multi-Target Tracking Based on Multi-Bernoulli Filter with Amplitude for Unknown Clutter Rate.
PubMed
Yuan, Changshun; Wang, Jun; Lei, Peng; Bi, Yanxian; Sun, Zhongsheng
2015-12-04
Knowledge of the clutter rate is of critical importance in multi-target Bayesian tracking. However, estimating the clutter rate is a difficult problem in practice. In this paper, an improved multi-Bernoulli filter based on random finite sets for multi-target Bayesian tracking accommodating non-linear dynamic and measurement models, as well as unknown clutter rate, is proposed for radar sensors. The proposed filter incorporates the amplitude information into the state and measurement spaces to improve discrimination between actual targets and clutters, while adaptively generating the new-born object random finite sets using the measurements to eliminate reliance on prior random finite sets. A sequential Monte-Carlo implementation of the proposed filter is presented, and simulations are used to demonstrate the proposed filter's improvements in estimation accuracy of the target number and corresponding multi-target states, as well as the clutter rate.
20. Multi-Target Tracking Based on Multi-Bernoulli Filter with Amplitude for Unknown Clutter Rate
PubMed Central
Yuan, Changshun; Wang, Jun; Lei, Peng; Bi, Yanxian; Sun, Zhongsheng
2015-01-01
Knowledge of the clutter rate is of critical importance in multi-target Bayesian tracking. However, estimating the clutter rate is a difficult problem in practice. In this paper, an improved multi-Bernoulli filter based on random finite sets for multi-target Bayesian tracking accommodating non-linear dynamic and measurement models, as well as unknown clutter rate, is proposed for radar sensors. The proposed filter incorporates the amplitude information into the state and measurement spaces to improve discrimination between actual targets and clutters, while adaptively generating the new-born object random finite sets using the measurements to eliminate reliance on prior random finite sets. A sequential Monte-Carlo implementation of the proposed filter is presented, and simulations are used to demonstrate the proposed filter’s improvements in estimation accuracy of the target number and corresponding multi-target states, as well as the clutter rate. PMID:26690148
1. A modified high-intensity Cs sputter negative-ion source with multi-target mechanism
Houzhi, Si; Weizhong, Zhang; Jinhau, Zhu; Guangtian, Du; Tiaorong, Zhang; Xiang, Gao
1993-04-01
The source is based on Middleton's high-intensity mode, but modified to a multi-target version. It is equipped with a spherical molybdenum ionizer, a 20-position target wheel and a vacuum lock for loading and unloading sample batches. A metal-ceramic bonded section protected by a specially designed labyrinth shielding system results in reliable insulation of the cathode and convenient control of cesium vapor. The latter is particularly important when an oversupply of cesium occurs. The source was developed for accelerator mass spectrometry (AMS) applications. Recently, three versions based on the prototype of the source have been successfully tested to meet different requirements: (a) single target version, (b) multi-target version with manual sample change, and (c) multi-target version with remote control sample change. Some details of the technical and operational characteristics are presented.
2. Distributed Multi-Target Tracking and Data Association in Vision Networks.
PubMed
Kamal, Ahmed T; Bappy, Jawadul H; Farrell, Jay A; Roy-Chowdhury, Amit K
2016-07-01
Distributed algorithms have recently gained immense popularity. With regards to computer vision applications, distributed multi-target tracking in a camera network is a fundamental problem. The goal is for all cameras to have accurate state estimates for all targets. Distributed estimation algorithms work by exchanging information between sensors that are communication neighbors. Vision-based distributed multi-target state estimation has at least two characteristics that distinguishes it from other applications. First, cameras are directional sensors and often neighboring sensors may not be sensing the same targets, i.e., they are naive with respect to that target. Second, in the presence of clutter and multiple targets, each camera must solve a data association problem. This paper presents an information-weighted, consensus-based, distributed multi-target tracking algorithm referred to as the Multi-target Information Consensus (MTIC) algorithm that is designed to address both the naivety and the data association problems. It converges to the centralized minimum mean square error estimate. The proposed MTIC algorithm and its extensions to non-linear camera models, termed as the Extended MTIC (EMTIC), are robust to false measurements and limited resources like power, bandwidth and the real-time operational requirements. Simulation and experimental analysis are provided to support the theoretical results.
3. System-level multi-target drug discovery from natural products with applications to cardiovascular diseases.
PubMed
Zheng, Chunli; Wang, Jinan; Liu, Jianling; Pei, Mengjie; Huang, Chao; Wang, Yonghua
2014-08-01
The term systems pharmacology describes a field of study that uses computational and experimental approaches to broaden the view of drug actions rooted in molecular interactions and advance the process of drug discovery. The aim of this work is to stick out the role that the systems pharmacology plays across the multi-target drug discovery from natural products for cardiovascular diseases (CVDs). Firstly, based on network pharmacology methods, we reconstructed the drug-target and target-target networks to determine the putative protein target set of multi-target drugs for CVDs treatment. Secondly, we reintegrated a compound dataset of natural products and then obtained a multi-target compounds subset by virtual-screening process. Thirdly, a drug-likeness evaluation was applied to find the ADME-favorable compounds in this subset. Finally, we conducted in vitro experiments to evaluate the reliability of the selected chemicals and targets. We found that four of the five randomly selected natural molecules can effectively act on the target set for CVDs, indicating the reasonability of our systems-based method. This strategy may serve as a new model for multi-target drug discovery of complex diseases.
4. Distributed Multi-Target Tracking and Data Association in Vision Networks.
PubMed
Kamal, Ahmed T; Bappy, Jawadul H; Farrell, Jay A; Roy-Chowdhury, Amit K
2016-07-01
Distributed algorithms have recently gained immense popularity. With regards to computer vision applications, distributed multi-target tracking in a camera network is a fundamental problem. The goal is for all cameras to have accurate state estimates for all targets. Distributed estimation algorithms work by exchanging information between sensors that are communication neighbors. Vision-based distributed multi-target state estimation has at least two characteristics that distinguishes it from other applications. First, cameras are directional sensors and often neighboring sensors may not be sensing the same targets, i.e., they are naive with respect to that target. Second, in the presence of clutter and multiple targets, each camera must solve a data association problem. This paper presents an information-weighted, consensus-based, distributed multi-target tracking algorithm referred to as the Multi-target Information Consensus (MTIC) algorithm that is designed to address both the naivety and the data association problems. It converges to the centralized minimum mean square error estimate. The proposed MTIC algorithm and its extensions to non-linear camera models, termed as the Extended MTIC (EMTIC), are robust to false measurements and limited resources like power, bandwidth and the real-time operational requirements. Simulation and experimental analysis are provided to support the theoretical results. PMID:26441444
5. Systematic mining of analog series with related core structures in multi-target activity space.
PubMed
Gupta-Ostermann, Disha; Hu, Ye; Bajorath, Jürgen
2013-08-01
We have aimed to systematically extract analog series with related core structures from multi-target activity space to explore target promiscuity of closely related analogous. Therefore, a previously introduced SAR matrix structure was adapted and further extended for large-scale data mining. These matrices organize analog series with related yet distinct core structures in a consistent manner. High-confidence compound activity data yielded more than 2,300 non-redundant matrices capturing 5,821 analog series that included 4,288 series with multi-target and 735 series with multi-family activities. Many matrices captured more than three analog series with activity against more than five targets. The matrices revealed a variety of promiscuity patterns. Compound series matrices also contain virtual compounds, which provide suggestions for compound design focusing on desired activity profiles.
6. FISST based method for multi-target tracking in the image plane of optical sensors.
PubMed
Xu, Yang; Xu, Hui; An, Wei; Xu, Dan
2012-01-01
A finite set statistics (FISST)-based method is proposed for multi-target tracking in the image plane of optical sensors. The method involves using signal amplitude information in probability hypothesis density (PHD) filter which is derived from FISST to improve multi-target tracking performance. The amplitude of signals generated by the optical sensor is modeled first, from which the amplitude likelihood ratio between target and clutter is derived. An alternative approach is adopted for the situations where the signal noise ratio (SNR) of target is unknown. Then the PHD recursion equations incorporated with signal information are derived and the Gaussian mixture (GM) implementation of this filter is given. Simulation results demonstrate that the proposed method achieves significantly better performance than the generic PHD filter. Moreover, our method has much lower computational complexity in the scenario with high SNR and dense clutter. PMID:22736984
7. FISST Based Method for Multi-Target Tracking in the Image Plane of Optical Sensors
PubMed Central
Xu, Yang; Xu, Hui; An, Wei; Xu, Dan
2012-01-01
A finite set statistics (FISST)-based method is proposed for multi-target tracking in the image plane of optical sensors. The method involves using signal amplitude information in probability hypothesis density (PHD) filter which is derived from FISST to improve multi-target tracking performance. The amplitude of signals generated by the optical sensor is modeled first, from which the amplitude likelihood ratio between target and clutter is derived. An alternative approach is adopted for the situations where the signal noise ratio (SNR) of target is unknown. Then the PHD recursion equations incorporated with signal information are derived and the Gaussian mixture (GM) implementation of this filter is given. Simulation results demonstrate that the proposed method achieves significantly better performance than the generic PHD filter. Moreover, our method has much lower computational complexity in the scenario with high SNR and dense clutter. PMID:22736984
8. Cardinality Balanced Multi-Target Multi-Bernoulli Filter with Error Compensation
PubMed Central
He, Xiangyu; Liu, Guixi
2016-01-01
The cardinality balanced multi-target multi-Bernoulli (CBMeMBer) filter developed recently has been proved an effective multi-target tracking (MTT) algorithm based on the random finite set (RFS) theory, and it can jointly estimate the number of targets and their states from a sequence of sensor measurement sets. However, because of the existence of systematic errors in sensor measurements, the CBMeMBer filter can easily produce different levels of performance degradation. In this paper, an extended CBMeMBer filter, in which the joint probability density function of target state and systematic error is recursively estimated, is proposed to address the MTT problem based on the sensor measurements with systematic errors. In addition, an analytic implementation of the extended CBMeMBer filter is also presented for linear Gaussian models. Simulation results confirm that the proposed algorithm can track multiple targets with better performance. PMID:27589764
9. A detection method for infrared multi-target in aerospace backgound
Wang, Ningming; Zhang, Yazhou
2015-11-01
Main task of the infrared search and track system is analyzing and identifying targets of airspace. But first this is needed to detect all targets in infrared image. Therefore, the multi-target detection algorithms are studied and we propose an effective multi-target detection method. Firstly, an improved morphological operator is designed based on airspace background and target traits of infrared image. Background is weakened but targets are enhanced when infrared image is processed by the gray morphological filter. Then, potential targets are found by the maximum local sum algorithm. Finally, true targets are affirmed based on data association of sequence images. The infrared images got from long-wavelength infrared camera are processed with the method of the paper. Experiment results show that the method can detect targets in infrared image quickly and accurately.
10. Cardinality Balanced Multi-Target Multi-Bernoulli Filter with Error Compensation.
PubMed
He, Xiangyu; Liu, Guixi
2016-08-31
The cardinality balanced multi-target multi-Bernoulli (CBMeMBer) filter developed recently has been proved an effective multi-target tracking (MTT) algorithm based on the random finite set (RFS) theory, and it can jointly estimate the number of targets and their states from a sequence of sensor measurement sets. However, because of the existence of systematic errors in sensor measurements, the CBMeMBer filter can easily produce different levels of performance degradation. In this paper, an extended CBMeMBer filter, in which the joint probability density function of target state and systematic error is recursively estimated, is proposed to address the MTT problem based on the sensor measurements with systematic errors. In addition, an analytic implementation of the extended CBMeMBer filter is also presented for linear Gaussian models. Simulation results confirm that the proposed algorithm can track multiple targets with better performance.
11. FISST based method for multi-target tracking in the image plane of optical sensors.
PubMed
Xu, Yang; Xu, Hui; An, Wei; Xu, Dan
2012-01-01
A finite set statistics (FISST)-based method is proposed for multi-target tracking in the image plane of optical sensors. The method involves using signal amplitude information in probability hypothesis density (PHD) filter which is derived from FISST to improve multi-target tracking performance. The amplitude of signals generated by the optical sensor is modeled first, from which the amplitude likelihood ratio between target and clutter is derived. An alternative approach is adopted for the situations where the signal noise ratio (SNR) of target is unknown. Then the PHD recursion equations incorporated with signal information are derived and the Gaussian mixture (GM) implementation of this filter is given. Simulation results demonstrate that the proposed method achieves significantly better performance than the generic PHD filter. Moreover, our method has much lower computational complexity in the scenario with high SNR and dense clutter.
12. Cardinality Balanced Multi-Target Multi-Bernoulli Filter with Error Compensation.
PubMed
He, Xiangyu; Liu, Guixi
2016-01-01
The cardinality balanced multi-target multi-Bernoulli (CBMeMBer) filter developed recently has been proved an effective multi-target tracking (MTT) algorithm based on the random finite set (RFS) theory, and it can jointly estimate the number of targets and their states from a sequence of sensor measurement sets. However, because of the existence of systematic errors in sensor measurements, the CBMeMBer filter can easily produce different levels of performance degradation. In this paper, an extended CBMeMBer filter, in which the joint probability density function of target state and systematic error is recursively estimated, is proposed to address the MTT problem based on the sensor measurements with systematic errors. In addition, an analytic implementation of the extended CBMeMBer filter is also presented for linear Gaussian models. Simulation results confirm that the proposed algorithm can track multiple targets with better performance. PMID:27589764
13. Microfluidic immunomagnetic multi-target sorting--a model for controlling deflection of paramagnetic beads.
PubMed
Tsai, Scott S H; Griffiths, Ian M; Stone, Howard A
2011-08-01
We describe a microfluidic system that uses a magnetic field to sort paramagnetic beads by deflecting them in the direction normal to the flow. In the experiments we systematically study the dependence of the beads' deflection on bead size and susceptibility, magnet strength, fluid speed and viscosity, and device geometry. We also develop a design parameter that can aid in the design of microfluidic devices for immunomagnetic multi-target sorting. PMID:21677937
14. Dielectrophoresis-based classification of cells using multi-target multiple-hypothesis tracking.
PubMed
Dickerson, Samuel J; Chiarulli, Donald M; Levitan, Steven P; Carthel, Craig; Coraluppi, Stefano
2014-01-01
In this paper we present a novel methodology for classifying cells by using a combination of dielectrophoresis, image tracking and classification algorithms. We use dielectrophoresis to induce unique motion patterns in cells of interest. Motion is extracted via multi-target multiple-hypothesis tracking. Trajectories are then used to classify cells based on a generalized likelihood ratio test. We present results of a simulation study and of our prototype tracking the dielectrophoretic velocities of cells. PMID:25570230
15. Dielectrophoresis-based classification of cells using multi-target multiple-hypothesis tracking.
PubMed
Dickerson, Samuel J; Chiarulli, Donald M; Levitan, Steven P; Carthel, Craig; Coraluppi, Stefano
2014-01-01
In this paper we present a novel methodology for classifying cells by using a combination of dielectrophoresis, image tracking and classification algorithms. We use dielectrophoresis to induce unique motion patterns in cells of interest. Motion is extracted via multi-target multiple-hypothesis tracking. Trajectories are then used to classify cells based on a generalized likelihood ratio test. We present results of a simulation study and of our prototype tracking the dielectrophoretic velocities of cells.
16. Identification and characterization of carprofen as a multi-target FAAH/COX inhibitor
PubMed Central
Favia, Angelo D.; Habrant, Damien; Scarpelli, Rita; Migliore, Marco; Albani, Clara; Bertozzi, Sine Mandrup; Dionisi, Mauro; Tarozzo, Glauco; Piomelli, Daniele; Cavalli, Andrea; De Vivo, Marco
2013-01-01
Pain and inflammation are major therapeutic areas for drug discovery. Current drugs for these pathologies have limited efficacy, however, and often cause a number of unwanted side effects. In the present study, we identify the non-steroid anti-inflammatory drug, carprofen, as a multi-target-directed ligand that simultaneously inhibits cyclooxygenase-1 (COX-1), COX-2 and fatty acid amide hydrolase (FAAH). Additionally, we synthesized and tested several racemic derivatives of carprofen, sharing this multi-target activity. This may result in improved analgesic efficacy and reduced side effects (Naidu, et al (2009) J Pharmacol Exp Ther 329, 48-56; Fowler, C.J. et al. (2012) J Enzym Inhib Med Chem Jan 6; Sasso, et al (2012) Pharmacol Res 65, 553). The new compounds are among the most potent multi-target FAAH/COXs inhibitors reported so far in the literature, and thus may represent promising starting points for the discovery of new analgesic and anti-inflammatory drugs. PMID:23043222
17. Multi-target pharmacology: possibilities and limitations of the “skeleton key approach” from a medicinal chemist perspective
PubMed Central
Talevi, Alan
2015-01-01
Multi-target drugs have raised considerable interest in the last decade owing to their advantages in the treatment of complex diseases and health conditions linked to drug resistance issues. Prospective drug repositioning to treat comorbid conditions is an additional, overlooked application of multi-target ligands. While medicinal chemists usually rely on some version of the lock and key paradigm to design novel therapeutics, modern pharmacology recognizes that the mid- and long-term effects of a given drug on a biological system may depend not only on the specific ligand-target recognition events but also on the influence of the repeated administration of a drug on the cell gene signature. The design of multi-target agents usually imposes challenging restrictions on the topology or flexibility of the candidate drugs, which are briefly discussed in the present article. Finally, computational strategies to approach the identification of novel multi-target agents are overviewed. PMID:26441661
18. Inferring multi-target QSAR models with taxonomy-based multi-task learning
PubMed Central
2013-01-01
19. Multi-target drugs: the trend of drug research and development.
PubMed
Lu, Jin-Jian; Pan, Wei; Hu, Yuan-Jia; Wang, Yi-Tao
2012-01-01
Summarizing the status of drugs in the market and examining the trend of drug research and development is important in drug discovery. In this study, we compared the drug targets and the market sales of the new molecular entities approved by the U.S. Food and Drug Administration from January 2000 to December 2009. Two networks, namely, the target-target and drug-drug networks, have been set up using the network analysis tools. The multi-target drugs have much more potential, as shown by the network visualization and the market trends. We discussed the possible reasons and proposed the rational strategies for drug research and development in the future.
20. Pharmacological Characterization of Memoquin, a Multi-Target Compound for the Treatment of Alzheimer's Disease
PubMed Central
Capurro, Valeria; Busquet, Perrine; Lopes, Joao Pedro; Bertorelli, Rosalia; Tarozzo, Glauco; Bolognesi, Maria Laura; Piomelli, Daniele; Reggiani, Angelo; Cavalli, Andrea
2013-01-01
Alzheimer's disease (AD) is characterized by progressive loss of cognitive function, dementia and altered behavior. Over 30 million people worldwide suffer from AD and available therapies are still palliative rather than curative. Recently, Memoquin (MQ), a quinone-bearing polyamine compound, has emerged as a promising anti-AD lead candidate, mainly thanks to its multi-target profile. MQ acts as an acetylcholinesterase and β-secretase-1 inhibitor, and also possesses anti-amyloid and anti-oxidant properties. Despite this potential interest, in vivo behavioral studies with MQ have been limited. Here, we report on in vivo studies with MQ (acute and sub-chronic treatments; 7–15 mg/kg per os) carried out using two different mouse models: i) scopolamine- and ii) beta-amyloid peptide- (Aβ-) induced amnesia. Several aspects related to memory were examined using the T-maze, the Morris water maze, the novel object recognition, and the passive avoidance tasks. At the dose of 15 mg/kg, MQ was able to rescue all tested aspects of cognitive impairment including spatial, episodic, aversive, short and long-term memory in both scopolamine- and Aβ-induced amnesia models. Furthermore, when tested in primary cortical neurons, MQ was able to fully prevent the Aβ-induced neurotoxicity mediated by oxidative stress. The results support the effectiveness of MQ as a cognitive enhancer, and highlight the value of a multi-target strategy to address the complex nature of cognitive dysfunction in AD. PMID:23441223
1. PMHT Approach for Multi-Target Multi-Sensor Sonar Tracking in Clutter.
PubMed
Li, Xiaohua; Li, Yaan; Yu, Jing; Chen, Xiao; Dai, Miao
2015-01-01
Multi-sensor sonar tracking has many advantages, such as the potential to reduce the overall measurement uncertainty and the possibility to hide the receiver. However, the use of multi-target multi-sensor sonar tracking is challenging because of the complexity of the underwater environment, especially the low target detection probability and extremely large number of false alarms caused by reverberation. In this work, to solve the problem of multi-target multi-sensor sonar tracking in the presence of clutter, a novel probabilistic multi-hypothesis tracker (PMHT) approach based on the extended Kalman filter (EKF) and unscented Kalman filter (UKF) is proposed. The PMHT can efficiently handle the unknown measurements-to-targets and measurements-to-transmitters data association ambiguity. The EKF and UKF are used to deal with the high degree of nonlinearity in the measurement model. The simulation results show that the proposed algorithm can improve the target tracking performance in a cluttered environment greatly, and its computational load is low. PMID:26561817
2. PMHT Approach for Multi-Target Multi-Sensor Sonar Tracking in Clutter.
PubMed
Li, Xiaohua; Li, Yaan; Yu, Jing; Chen, Xiao; Dai, Miao
2015-11-06
Multi-sensor sonar tracking has many advantages, such as the potential to reduce the overall measurement uncertainty and the possibility to hide the receiver. However, the use of multi-target multi-sensor sonar tracking is challenging because of the complexity of the underwater environment, especially the low target detection probability and extremely large number of false alarms caused by reverberation. In this work, to solve the problem of multi-target multi-sensor sonar tracking in the presence of clutter, a novel probabilistic multi-hypothesis tracker (PMHT) approach based on the extended Kalman filter (EKF) and unscented Kalman filter (UKF) is proposed. The PMHT can efficiently handle the unknown measurements-to-targets and measurements-to-transmitters data association ambiguity. The EKF and UKF are used to deal with the high degree of nonlinearity in the measurement model. The simulation results show that the proposed algorithm can improve the target tracking performance in a cluttered environment greatly, and its computational load is low.
3. Social Grouping for Multi-Target Tracking and Head Pose Estimation in Video.
PubMed
Qin, Zhen; Shelton, Christian R
2016-10-01
Many computer vision tasks are more difficult when tackled without contextual information. For example, in multi-camera tracking, pedestrians may look very different in different cameras with varying pose and lighting conditions. Similarly, head direction estimation in high-angle surveillance video in which human head images are low resolution is challenging. Even humans can have trouble without contextual information. In this work, we couple novel contextual information, social grouping, with two important computer vision tasks: multi-target tracking and head pose/direction estimation in surveillance video. These three components are modeled in a probabilistic formulation and we provide effective solvers.We show that social grouping effectively helps to mitigate visual ambiguities in multi-camera tracking and head pose estimation. We further notice that in single-camera multi-target tracking, social grouping provides a natural high-order association cue that avoids existing complex algorithms for high-order track association. In experiments, we demonstrate improvements with our model over models without social grouping context and several state-of-art approaches on a number of publicly available datasets on tracking, head pose estimation, and group discovery.
4. Multi-target-qubit unconventional geometric phase gate in a multi-cavity system.
PubMed
Liu, Tong; Cao, Xiao-Zhi; Su, Qi-Ping; Xiong, Shao-Jie; Yang, Chui-Ping
2016-01-01
Cavity-based large scale quantum information processing (QIP) may involve multiple cavities and require performing various quantum logic operations on qubits distributed in different cavities. Geometric-phase-based quantum computing has drawn much attention recently, which offers advantages against inaccuracies and local fluctuations. In addition, multiqubit gates are particularly appealing and play important roles in QIP. We here present a simple and efficient scheme for realizing a multi-target-qubit unconventional geometric phase gate in a multi-cavity system. This multiqubit phase gate has a common control qubit but different target qubits distributed in different cavities, which can be achieved using a single-step operation. The gate operation time is independent of the number of qubits and only two levels for each qubit are needed. This multiqubit gate is generic, e.g., by performing single-qubit operations, it can be converted into two types of significant multi-target-qubit phase gates useful in QIP. The proposal is quite general, which can be used to accomplish the same task for a general type of qubits such as atoms, NV centers, quantum dots, and superconducting qubits. PMID:26898176
5. PMHT Approach for Multi-Target Multi-Sensor Sonar Tracking in Clutter
PubMed Central
Li, Xiaohua; Li, Yaan; Yu, Jing; Chen, Xiao; Dai, Miao
2015-01-01
Multi-sensor sonar tracking has many advantages, such as the potential to reduce the overall measurement uncertainty and the possibility to hide the receiver. However, the use of multi-target multi-sensor sonar tracking is challenging because of the complexity of the underwater environment, especially the low target detection probability and extremely large number of false alarms caused by reverberation. In this work, to solve the problem of multi-target multi-sensor sonar tracking in the presence of clutter, a novel probabilistic multi-hypothesis tracker (PMHT) approach based on the extended Kalman filter (EKF) and unscented Kalman filter (UKF) is proposed. The PMHT can efficiently handle the unknown measurements-to-targets and measurements-to-transmitters data association ambiguity. The EKF and UKF are used to deal with the high degree of nonlinearity in the measurement model. The simulation results show that the proposed algorithm can improve the target tracking performance in a cluttered environment greatly, and its computational load is low. PMID:26561817
6. Multi-Targeted Antithrombotic Therapy for Total Artificial Heart Device Patients
PubMed Central
Ramirez, Angeleah; Riley, Jeffrey B.; Joyce, Lyle D.
2016-01-01
Abstract: To prevent thrombotic or bleeding events in patients receiving a total artificial heart (TAH), agents have been used to avoid adverse events. The purpose of this article is to outline the adoption and results of a multi-targeted antithrombotic clinical procedure guideline (CPG) for TAH patients. Based on literature review of TAH anticoagulation and multiple case series, a CPG was designed to prescribe the use of multiple pharmacological agents. Total blood loss, Thromboelastograph® (TEG), and platelet light-transmission aggregometry (LTA) measurements were conducted on 13 TAH patients during the first 2 weeks of support in our institution. Target values and actual medians for postimplant days 1, 3, 7, and 14 were calculated for kaolinheparinase TEG, kaolin TEG, LTA, and estimated blood loss. Protocol guidelines were followed and anticoagulation management reduced bleeding and prevented thrombus formation as well as thromboembolic events in TAH patients postimplantation. The patients in this study were susceptible to a variety of possible complications such as mechanical device issues, thrombotic events, infection, and bleeding. Among them all it was clear that patients were at most risk for bleeding, particularly on postoperative days 1 through 3. However, bleeding was reduced into postoperative days 3 and 7, indicating that acceptable hemostasis was achieved with the anticoagulation protocol. The multidisciplinary, multi-targeted anticoagulation clinical procedure guideline was successful to maintain adequate antithrombotic therapy for TAH patients. PMID:27134306
7. Multi-Targeted Antithrombotic Therapy for Total Artificial Heart Device Patients.
PubMed
Ramirez, Angeleah; Riley, Jeffrey B; Joyce, Lyle D
2016-03-01
To prevent thrombotic or bleeding events in patients receiving a total artificial heart (TAH), agents have been used to avoid adverse events. The purpose of this article is to outline the adoption and results of a multi-targeted antithrombotic clinical procedure guideline (CPG) for TAH patients. Based on literature review of TAH anticoagulation and multiple case series, a CPG was designed to prescribe the use of multiple pharmacological agents. Total blood loss, Thromboelastograph(®) (TEG), and platelet light-transmission aggregometry (LTA) measurements were conducted on 13 TAH patients during the first 2 weeks of support in our institution. Target values and actual medians for postimplant days 1, 3, 7, and 14 were calculated for kaolinheparinase TEG, kaolin TEG, LTA, and estimated blood loss. Protocol guidelines were followed and anticoagulation management reduced bleeding and prevented thrombus formation as well as thromboembolic events in TAH patients postimplantation. The patients in this study were susceptible to a variety of possible complications such as mechanical device issues, thrombotic events, infection, and bleeding. Among them all it was clear that patients were at most risk for bleeding, particularly on postoperative days 1 through 3. However, bleeding was reduced into postoperative days 3 and 7, indicating that acceptable hemostasis was achieved with the anticoagulation protocol. The multidisciplinary, multi-targeted anticoagulation clinical procedure guideline was successful to maintain adequate antithrombotic therapy for TAH patients.
8. ASS234, As a New Multi-Target Directed Propargylamine for Alzheimer's Disease Therapy
PubMed Central
Marco-Contelles, José; Unzeta, Mercedes; Bolea, Irene; Esteban, Gerard; Ramsay, Rona R.; Romero, Alejandro; Martínez-Murillo, Ricard; Carreiras, M. Carmo; Ismaili, Lhassane
2016-01-01
Highlights: ASS2324 is a hybrid compound resulting from the juxtaposition of donepezil and the propargylamine PF9601NASS2324 is a multi-target directed propargylamine able to bind to all the AChE/BuChE and MAO A/B enzymesASS2324 shows antioxidant, neuroprotective and suitable permeability propertiesASS2324 restores the scopolamine-induced cognitive impairment to the same extent as donepezil, and is less toxicASS2324 prevents β-amyloid induced aggregation in the cortex of double transgenic miceASS2324 is the most advanced anti-Alzheimer agent for pre-clinical studies that we have identified in our laboratories The complex nature of Alzheimer's disease (AD) has prompted the design of Multi-Target-Directed Ligands (MTDL) able to bind to diverse biochemical targets involved in the progress and development of the disease. In this context, we have designed a number of MTD propargylamines (MTDP) showing antioxidant, anti-beta-amyloid, anti-inflammatory, as well as cholinesterase and monoamine oxidase (MAO) inhibition capacities. Here, we describe these properties in the MTDL ASS234, our lead-compound ready to enter in pre-clinical studies for AD, as a new multipotent, permeable cholinesterase/monoamine oxidase inhibitor, able to inhibit Aβ-aggregation, and possessing antioxidant and neuroprotective properties. PMID:27445665
9. Network Pharmacology Strategies Toward Multi-Target Anticancer Therapies: From Computational Models to Experimental Design Principles
PubMed Central
Tang, Jing; Aittokallio, Tero
2014-01-01
Polypharmacology has emerged as novel means in drug discovery for improving treatment response in clinical use. However, to really capitalize on the polypharmacological effects of drugs, there is a critical need to better model and understand how the complex interactions between drugs and their cellular targets contribute to drug efficacy and possible side effects. Network graphs provide a convenient modeling framework for dealing with the fact that most drugs act on cellular systems through targeting multiple proteins both through on-target and off-target binding. Network pharmacology models aim at addressing questions such as how and where in the disease network should one target to inhibit disease phenotypes, such as cancer growth, ideally leading to therapies that are less vulnerable to drug resistance and side effects by means of attacking the disease network at the systems level through synergistic and synthetic lethal interactions. Since the exponentially increasing number of potential drug target combinations makes pure experimental approach quickly unfeasible, this review depicts a number of computational models and algorithms that can effectively reduce the search space for determining the most promising combinations for experimental evaluation. Such computational-experimental strategies are geared toward realizing the full potential of multi-target treatments in different disease phenotypes. Our specific focus is on system-level network approaches to polypharmacology designs in anticancer drug discovery, where we give representative examples of how network-centric modeling may offer systematic strategies toward better understanding and even predicting the phenotypic responses to multi-target therapies.
10. Behavior-based cooperative robotics applied to multi-target observation
SciTech Connect
Parker, L.E.
1996-12-31
An important issue that arises in the automation of many security, surveillance, and reconnaissance tasks is that of monitoring (or observing) the movements of targets navigating in a bounded area of interest. A key research issue in these problems is that of sensor placement - determining where sensors should be located to maintain the targets in view. In complex applications involving limited-range sensors, the use of multiple sensors dynamically moving over time is required. In this paper, the author investigates the use of a cooperative team of autonomous sensor-based robots for the observation of multiple moving targets. The author focuses primarily on developing the distributed control strategies that allow the robot team to attempt to minimize the total time in which targets escape observation by some robot team member in the area of interest. The initial efforts on this problem address the aspects of distributed control in homogeneous robot teams with equivalent sensing and movement capabilities working in an uncluttered, bounded area. This paper first formalizes the problem, discusses related work, and then shows that this problem is NP-hard. The author then presents a distributed approximate approach to solving this problem that combines low-level multi-robot control with higher-level control. The low-level control is described in terms of force fields emanating from the targets and the robots. The higher level control is presented in the ALLIANCE formalism, which provides mechanisms for fault tolerant cooperative control, and allows robot team members to adjust their low-level actions based upon the actions of their teammates. The author then presents the results of the ongoing implementation of this approach, both in simulation and on physical robots. To the authors knowledge, this is the first paper addressing this research problem that has been implemented on physical robot teams.
11. Representation of multi-target activity landscapes through target pair-based compound encoding in self-organizing maps.
PubMed
Iyer, Preeti; Bajorath, Jürgen
2011-11-01
Activity landscape representations provide access to structure-activity relationships information in compound data sets. In general, activity landscape models integrate molecular similarity relationships with biological activity data. Typically, activity against a single target is monitored. However, for steadily increasing numbers of compounds, activity against multiple targets is reported, resulting in an opportunity, and often a need, to explore multi-target structure-activity relationships. It would be attractive to utilize activity landscape representations to aid in this process, but the design of activity landscapes for multiple targets is a complicated task. Only recently has a first multi-target landscape model been introduced, consisting of an annotated compound network focused on the systematic detection of activity cliffs. Herein, we report a conceptually different multi-target activity landscape design that is based on a 2D projection of chemical reference space using self-organizing maps and encodes compounds as arrays of pair-wise target activity relationships. In this context, we introduce the concept of discontinuity in multi-target activity space. The well-ordered activity landscape model highlights centers of discontinuity in activity space and is straightforward to interpret. It has been applied to analyze compound data sets with three, four, and five target annotations and identify multi-target structure-activity relationships determinants in analog series.
12. Improved Bearings-Only Multi-Target Tracking with GM-PHD Filtering.
PubMed
Zhang, Qian; Song, Taek Lyul
2016-01-01
In this paper, an improved nonlinear Gaussian mixture probability hypothesis density (GM-PHD) filter is proposed to address bearings-only measurements in multi-target tracking. The proposed method, called the Gaussian mixture measurements-probability hypothesis density (GMM-PHD) filter, not only approximates the posterior intensity using a Gaussian mixture, but also models the likelihood function with a Gaussian mixture instead of a single Gaussian distribution. Besides, the target birth model of the GMM-PHD filter is assumed to be partially uniform instead of a Gaussian mixture. Simulation results show that the proposed filter outperforms the GM-PHD filter embedded with the extended Kalman filter (EKF) and the unscented Kalman filter (UKF). PMID:27626423
13. Improved Bearings-Only Multi-Target Tracking with GM-PHD Filtering.
PubMed
Zhang, Qian; Song, Taek Lyul
2016-09-10
In this paper, an improved nonlinear Gaussian mixture probability hypothesis density (GM-PHD) filter is proposed to address bearings-only measurements in multi-target tracking. The proposed method, called the Gaussian mixture measurements-probability hypothesis density (GMM-PHD) filter, not only approximates the posterior intensity using a Gaussian mixture, but also models the likelihood function with a Gaussian mixture instead of a single Gaussian distribution. Besides, the target birth model of the GMM-PHD filter is assumed to be partially uniform instead of a Gaussian mixture. Simulation results show that the proposed filter outperforms the GM-PHD filter embedded with the extended Kalman filter (EKF) and the unscented Kalman filter (UKF).
14. [Possibilities for inhibiting tumor-induced angiogenesis: results with multi-target tyrosine kinase inhibitors].
PubMed
Török, Szilvia; Döme, Balázs
2012-03-01
Functional blood vasculature is essential for tumor progression. The main signalization pathways that play a key role in the survival and growth of tumor vessels originate from the VEGF-, PDGF- and FGF tyrosine kinase receptors. In the past decade, significant results have been published on receptor tyrosine kinase inhibitors (RTKIs). In this paper, the mechanisms of action and the results so far available of experimental and clinical studies on multi-target antiangiogenic TKIs are discussed. On the one hand, notable achievements have been made recently and these drugs are already used in clinical practice in some patient populations. On the other hand, the optimal combination and dosage of these drugs, selection of the apropriate biomarker and better understanding of the conflicting role of PDGFR and FGFR signaling in angiogenesis remain future challenges. PMID:22403757
15. Improved Bearings-Only Multi-Target Tracking with GM-PHD Filtering
PubMed Central
Zhang, Qian; Song, Taek Lyul
2016-01-01
In this paper, an improved nonlinear Gaussian mixture probability hypothesis density (GM-PHD) filter is proposed to address bearings-only measurements in multi-target tracking. The proposed method, called the Gaussian mixture measurements-probability hypothesis density (GMM-PHD) filter, not only approximates the posterior intensity using a Gaussian mixture, but also models the likelihood function with a Gaussian mixture instead of a single Gaussian distribution. Besides, the target birth model of the GMM-PHD filter is assumed to be partially uniform instead of a Gaussian mixture. Simulation results show that the proposed filter outperforms the GM-PHD filter embedded with the extended Kalman filter (EKF) and the unscented Kalman filter (UKF). PMID:27626423
16. Synthesis and pharmacological evaluation of piperidine (piperazine)-substituted benzoxazole derivatives as multi-target antipsychotics.
PubMed
Huang, Ling; Zhang, Wenjun; Zhang, Xiaohua; Yin, Lei; Chen, Bangyin; Song, Jinchun
2015-11-15
The present study describes the optimization of a series of novel benzoxazole-piperidine (piperazine) derivatives combining high dopamine D2 and serotonin 5-HT1A, 5-HT2A receptor affinities. Of these derivatives, the pharmacological features of compound 29 exhibited high affinities for the DA D2, 5-HT1A and 5-HT2A receptors, but low affinities for the 5-HT2C and histamine H1 receptors and human ether-a-go-go-related gene (hERG) channels. Furthermore, compound 29 reduced apomorphine-induced climbing and 1-(2,5-dimethoxy-4-iodophenyl)-2-aminopropane (DOI)-induced head twitching without observable catalepsy, even at the highest dose tested. Thus, compound 29 is a promising candidate as a multi-target antipsychotic treatment.
17. AVN-101: A Multi-Target Drug Candidate for the Treatment of CNS Disorders
PubMed Central
Ivachtchenko, Alexandre V.; Lavrovsky, Yan; Okun, Ilya
2016-01-01
Lack of efficacy of many new highly selective and specific drug candidates in treating diseases with poorly understood or complex etiology, as are many of central nervous system (CNS) diseases, encouraged an idea of developing multi-modal (multi-targeted) drugs. In this manuscript, we describe molecular pharmacology, in vitro ADME, pharmacokinetics in animals and humans (part of the Phase I clinical studies), bio-distribution, bioavailability, in vivo efficacy, and safety profile of the multimodal drug candidate, AVN-101. We have carried out development of a next generation drug candidate with a multi-targeted mechanism of action, to treat CNS disorders. AVN-101 is a very potent 5-HT7 receptor antagonist (Ki = 153 pM), with slightly lesser potency toward 5-HT6, 5-HT2A, and 5HT-2C receptors (Ki = 1.2–2.0 nM). AVN-101 also exhibits a rather high affinity toward histamine H1 (Ki = 0.58 nM) and adrenergic α2A, α2B, and α2C (Ki = 0.41–3.6 nM) receptors. AVN-101 shows a good oral bioavailability and facilitated brain-blood barrier permeability, low toxicity, and reasonable efficacy in animal models of CNS diseases. The Phase I clinical study indicates the AVN-101 to be well tolerated when taken orally at doses of up to 20 mg daily. It does not dramatically influence plasma and urine biochemistry, nor does it prolong QT ECG interval, thus indicating low safety concerns. The primary therapeutic area for AVN-101 to be tested in clinical trials would be Alzheimer’s disease. However, due to its anxiolytic and anti-depressive activities, there is a strong rational for it to also be studied in such diseases as general anxiety disorders, depression, schizophrenia, and multiple sclerosis. PMID:27232215
18. Promises of novel multi-target neuroprotective and neurorestorative drugs for Parkinson's disease.
PubMed
Youdim, Moussa B H; Kupershmidt, Lana; Amit, Tamar; Weinreb, Orly
2014-01-01
The cascade of neurotoxic events involved in neuronal degeneration suggests that it is naive to think mono-target drugs can induce disease modification by slowing the process of neurodegeneration in Parkinson's disease (PD). Employing the pharmacophore of rasagiline (N-propargyl-1-R-aminoindan), we have developed a series of novel multi-target neuroprotective drugs, including: (A) drugs [ladostigil, TV-3326 (N-propargyl-3R-aminoindan-5yl)-ethyl methylcarbamate)] with both cholinesterase-butyrylesterase (Ch-BuE) and brain-selective monamine oxidase-AB (MAO-AB) inhibitory activities and (B) iron chelator-radical scavenging drugs (M30) possessing brain-selective MAO-AB inhibitor activity and the neuroprotective-neurorescue propargylamine moiety of rasagiline. This was considered to be valid since brain MAO and iron increase in PD and aging, which could lead to oxidative stress-dependent neurodegeneration. The multi-target iron chelator, M30, has all the properties of ladostigil, but is not an acetylcholinesterase (CHE) inhibitor. However, M30 has both neuroprotective and neurorestorative activities for nigrostriatal dopamine neurons in post-lesion MPTP, lactacystin and 6-hydroxydopamine animal models of PD. The neurorestorative activity has been identified as being related to the ability of the drug to activate hypoxia-inducible factor (HIF) by inhibiting prolyl-4-hydroxylase. M30 regulates cell cycle arrest and induces the neurotrophins brain-derived neurotrophic factor (BDNF), vascular endothelial growth factor (VEGF), erythropoietin (EPO), as well as glia-derived neurotrophic factor (GDNF). These unique multiple actions of M30 make it potentially useful as a disease modifying drug for the treatment of PD. PMID:24262165
19. AVN-101: A Multi-Target Drug Candidate for the Treatment of CNS Disorders.
PubMed
Ivachtchenko, Alexandre V; Lavrovsky, Yan; Okun, Ilya
2016-05-25
Lack of efficacy of many new highly selective and specific drug candidates in treating diseases with poorly understood or complex etiology, as are many of central nervous system (CNS) diseases, encouraged an idea of developing multi-modal (multi-targeted) drugs. In this manuscript, we describe molecular pharmacology, in vitro ADME, pharmacokinetics in animals and humans (part of the Phase I clinical studies), bio-distribution, bioavailability, in vivo efficacy, and safety profile of the multimodal drug candidate, AVN-101. We have carried out development of a next generation drug candidate with a multi-targeted mechanism of action, to treat CNS disorders. AVN-101 is a very potent 5-HT7 receptor antagonist (Ki = 153 pM), with slightly lesser potency toward 5-HT6, 5-HT2A, and 5HT-2C receptors (Ki = 1.2-2.0 nM). AVN-101 also exhibits a rather high affinity toward histamine H1 (Ki = 0.58 nM) and adrenergic α2A, α2B, and α2C (Ki = 0.41-3.6 nM) receptors. AVN-101 shows a good oral bioavailability and facilitated brain-blood barrier permeability, low toxicity, and reasonable efficacy in animal models of CNS diseases. The Phase I clinical study indicates the AVN-101 to be well tolerated when taken orally at doses of up to 20 mg daily. It does not dramatically influence plasma and urine biochemistry, nor does it prolong QT ECG interval, thus indicating low safety concerns. The primary therapeutic area for AVN-101 to be tested in clinical trials would be Alzheimer's disease. However, due to its anxiolytic and anti-depressive activities, there is a strong rational for it to also be studied in such diseases as general anxiety disorders, depression, schizophrenia, and multiple sclerosis. PMID:27232215
20. A Network-Based Multi-Target Computational Estimation Scheme for Anticoagulant Activities of Compounds
PubMed Central
Li, Canghai; Chen, Lirong; Song, Jun; Tang, Yalin; Xu, Xiaojie
2011-01-01
Background Traditional virtual screening method pays more attention on predicted binding affinity between drug molecule and target related to a certain disease instead of phenotypic data of drug molecule against disease system, as is often less effective on discovery of the drug which is used to treat many types of complex diseases. Virtual screening against a complex disease by general network estimation has become feasible with the development of network biology and system biology. More effective methods of computational estimation for the whole efficacy of a compound in a complex disease system are needed, given the distinct weightiness of the different target in a biological process and the standpoint that partial inhibition of several targets can be more efficient than the complete inhibition of a single target. Methodology We developed a novel approach by integrating the affinity predictions from multi-target docking studies with biological network efficiency analysis to estimate the anticoagulant activities of compounds. From results of network efficiency calculation for human clotting cascade, factor Xa and thrombin were identified as the two most fragile enzymes, while the catalytic reaction mediated by complex IXa:VIIIa and the formation of the complex VIIIa:IXa were recognized as the two most fragile biological matter in the human clotting cascade system. Furthermore, the method which combined network efficiency with molecular docking scores was applied to estimate the anticoagulant activities of a serial of argatroban intermediates and eight natural products respectively. The better correlation (r = 0.671) between the experimental data and the decrease of the network deficiency suggests that the approach could be a promising computational systems biology tool to aid identification of anticoagulant activities of compounds in drug discovery. Conclusions This article proposes a network-based multi-target computational estimation method for
1. [Development of multi-target multi-spectral high-speed pyrometer].
PubMed
Xiao, Peng; Dai, Jing-Min; Wang, Qing-Wei
2008-11-01
The plume temperature of a solid propellant rocket engine (SPRE) is a fundamental parameter in denoting combustion status. It is necessary to measure the temperature along both the axis and the radius of the engine. In order to measure the plume temperature distribution of a solid propellant rocket engine, the multi-spectral thermometry has been approved. Previously the pyrometer was developed in the Harbin Institute of Technology of China in 1999, which completed the measurement of SPRE plume temperature and its distribution with multi-spectral technique in aerospace model development for the first time. Following this experience, a new type of multi-target multi-spectral high-speed pyrometer used in the ground experiments of SPRE plume temperature measurement was developed. The main features of the instrument include the use of a dispersing prism and a photo-diode array to cover the entire spectral band of 0.4 to 1.1 microm. The optic fibers are used in order to collect and transmit the thermal radiation fluxes. The instrument can measure simultaneously the temperature and emissivity of eight spectra for six uniformly distributed points on the target surface, which are well defined by the hole on the field stop lens. A specially designed S/H (Sample/Hold) circuit, with 48 sample and hold units that were triggered with a signal, measures the multi-spectral and multi-target outputs. It can sample 48 signals with a less than 10ns time difference which is most important for the temperature calculation. PMID:19271529
2. A novel square-root cubature information weighted consensus filter algorithm for multi-target tracking in distributed camera networks.
PubMed
Chen, Yanming; Zhao, Qingjie
2015-01-01
This paper deals with the problem of multi-target tracking in a distributed camera network using the square-root cubature information filter (SCIF). SCIF is an efficient and robust nonlinear filter for multi-sensor data fusion. In camera networks, multiple cameras are arranged in a dispersed manner to cover a large area, and the target may appear in the blind area due to the limited field of view (FOV). Besides, each camera might receive noisy measurements. To overcome these problems, this paper proposes a novel multi-target square-root cubature information weighted consensus filter (MTSCF), which reduces the effect of clutter or spurious measurements using joint probabilistic data association (JPDA) and proper weights on the information matrix and information vector. The simulation results show that the proposed algorithm can efficiently track multiple targets in camera networks and is obviously better in terms of accuracy and stability than conventional multi-target tracking algorithms. PMID:25951338
3. A novel square-root cubature information weighted consensus filter algorithm for multi-target tracking in distributed camera networks.
PubMed
Chen, Yanming; Zhao, Qingjie
2015-05-05
This paper deals with the problem of multi-target tracking in a distributed camera network using the square-root cubature information filter (SCIF). SCIF is an efficient and robust nonlinear filter for multi-sensor data fusion. In camera networks, multiple cameras are arranged in a dispersed manner to cover a large area, and the target may appear in the blind area due to the limited field of view (FOV). Besides, each camera might receive noisy measurements. To overcome these problems, this paper proposes a novel multi-target square-root cubature information weighted consensus filter (MTSCF), which reduces the effect of clutter or spurious measurements using joint probabilistic data association (JPDA) and proper weights on the information matrix and information vector. The simulation results show that the proposed algorithm can efficiently track multiple targets in camera networks and is obviously better in terms of accuracy and stability than conventional multi-target tracking algorithms.
4. A Novel Square-Root Cubature Information Weighted Consensus Filter Algorithm for Multi-Target Tracking in Distributed Camera Networks
PubMed Central
Chen, Yanming; Zhao, Qingjie
2015-01-01
This paper deals with the problem of multi-target tracking in a distributed camera network using the square-root cubature information filter (SCIF). SCIF is an efficient and robust nonlinear filter for multi-sensor data fusion. In camera networks, multiple cameras are arranged in a dispersed manner to cover a large area, and the target may appear in the blind area due to the limited field of view (FOV). Besides, each camera might receive noisy measurements. To overcome these problems, this paper proposes a novel multi-target square-root cubature information weighted consensus filter (MTSCF), which reduces the effect of clutter or spurious measurements using joint probabilistic data association (JPDA) and proper weights on the information matrix and information vector. The simulation results show that the proposed algorithm can efficiently track multiple targets in camera networks and is obviously better in terms of accuracy and stability than conventional multi-target tracking algorithms. PMID:25951338
5. Design, synthesis and evaluation of seleno-dihydropyrimidinones as potential multi-targeted therapeutics for Alzheimer's disease.
PubMed
Canto, Rômulo F S; Barbosa, Flavio A R; Nascimento, Vanessa; de Oliveira, Aldo S; Brighente, Inês M C; Braga, Antonio Luiz
2014-06-01
In this paper we report the design, synthesis and evaluation of a series of seleno-dihydropyrimidinones as potential multi-targeted therapeutics for Alzheimer's disease. The compounds show excellent results as acetylcholinesterase inhibitors, being as active as the standard drug. All these compounds also show very good antioxidant activity through different mechanisms of action.
6. Topology of classical molecular optimal control landscapes for multi-target objectives
Joe-Wong, Carlee; Ho, Tak-San; Rabitz, Herschel; Wu, Rebing
2015-04-01
This paper considers laser-driven optimal control of an ensemble of non-interacting molecules whose dynamics lie in classical phase space. The molecules evolve independently under control to distinct final states. We consider a control landscape defined in terms of multi-target (MT) molecular states and analyze the landscape as a functional of the control field. The topology of the MT control landscape is assessed through its gradient and Hessian with respect to the control. Under particular assumptions, the MT control landscape is found to be free of traps that could hinder reaching the objective. The Hessian associated with an optimal control field is shown to have finite rank, indicating an inherent degree of robustness to control noise. Both the absence of traps and rank of the Hessian are shown to be analogous to the situation of specifying multiple targets for an ensemble of quantum states. Numerical simulations are presented to illustrate the classical landscape principles and further characterize the system behavior as the control field is optimized.
7. Antenna Allocation in MIMO Radar with Widely Separated Antennas for Multi-Target Detection
PubMed Central
Gao, Hao; Wang, Jian; Jiang, Chunxiao; Zhang, Xudong
2014-01-01
In this paper, we explore a new resource called multi-target diversity to optimize the performance of multiple input multiple output (MIMO) radar with widely separated antennas for detecting multiple targets. In particular, we allocate antennas of the MIMO radar to probe different targets simultaneously in a flexible manner based on the performance metric of relative entropy. Two antenna allocation schemes are proposed. In the first scheme, each antenna is allocated to illuminate a proper target over the entire illumination time, so that the detection performance of each target is guaranteed. The problem is formulated as a minimum makespan scheduling problem in the combinatorial optimization framework. Antenna allocation is implemented through a branch-and-bound algorithm and an enhanced factor 2 algorithm. In the second scheme, called antenna-time allocation, each antenna is allocated to illuminate different targets with different illumination time. Both antenna allocation and time allocation are optimized based on illumination probabilities. Over a large range of transmitted power, target fluctuations and target numbers, both of the proposed antenna allocation schemes outperform the scheme without antenna allocation. Moreover, the antenna-time allocation scheme achieves a more robust detection performance than branch-and-bound algorithm and the enhanced factor 2 algorithm when the target number changes. PMID:25350505
8. Antenna allocation in MIMO radar with widely separated antennas for multi-target detection.
PubMed
Gao, Hao; Wang, Jian; Jiang, Chunxiao; Zhang, Xudong
2014-10-27
In this paper, we explore a new resource called multi-target diversity to optimize the performance of multiple input multiple output (MIMO) radar with widely separated antennas for detecting multiple targets. In particular, we allocate antennas of the MIMO radar to probe different targets simultaneously in a flexible manner based on the performance metric of relative entropy. Two antenna allocation schemes are proposed. In the first scheme, each antenna is allocated to illuminate a proper target over the entire illumination time, so that the detection performance of each target is guaranteed. The problem is formulated as a minimum makespan scheduling problem in the combinatorial optimization framework. Antenna allocation is implemented through a branch-and-bound algorithm and an enhanced factor 2 algorithm. In the second scheme, called antenna-time allocation, each antenna is allocated to illuminate different targets with different illumination time. Both antenna allocation and time allocation are optimized based on illumination probabilities. Over a large range of transmitted power, target fluctuations and target numbers, both of the proposed antenna allocation schemes outperform the scheme without antenna allocation. Moreover, the antenna-time allocation scheme achieves a more robust detection performance than branch-and-bound algorithm and the enhanced factor 2 algorithm when the target number changes.
9. Gait measurement system for the multi-target stepping task using a laser range sensor.
PubMed
Yorozu, Ayanori; Nishiguchi, Shu; Yamada, Minoru; Aoyama, Tomoki; Moriguchi, Toshiki; Takahashi, Masaki
2015-01-01
For the prevention of falling in the elderly, gait training has been proposed using tasks such as the multi-target stepping task (MTST), in which participants step on assigned colored targets. This study presents a gait measurement system using a laser range sensor for the MTST to evaluate the risk of falling. The system tracks both legs and measures general walking parameters such as stride length and walking speed. Additionally, it judges whether the participant steps on the assigned colored targets and detects cross steps to evaluate cognitive function. However, situations in which one leg is hidden from the sensor or the legs are close occur and are likely to lead to losing track of the legs or false tracking. To solve these problems, we propose a novel leg detection method with five observed leg patterns and global nearest neighbor-based data association with a variable validation region based on the state of each leg. In addition, methods to judge target steps and detect cross steps based on leg trajectory are proposed. From the experimental results with the elderly, it is confirmed that the proposed system can improve leg-tracking performance, judge target steps and detect cross steps with high accuracy. PMID:25985161
10. Multi-target camera tracking, hand-off and display LDRD 158819 final report
SciTech Connect
Anderson, Robert J.
2014-10-01
Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. For the course of the LDRD we have been developing algorithms that takes live video imagery from multiple video cameras, identifies individual moving targets from the background imagery, and then displays the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of currently capability.
11. Multi-Target Camera Tracking, Hand-off and Display LDRD 158819 Final Report
SciTech Connect
Anderson, Robert J.
2014-10-01
Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn’t lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. For the course of the LDRD we have been developing algorithms that takes live video imagery from multiple video cameras, identifies individual moving targets from the background imagery, and then displays the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of currently capability.
12. Gait measurement system for the multi-target stepping task using a laser range sensor.
PubMed
Yorozu, Ayanori; Nishiguchi, Shu; Yamada, Minoru; Aoyama, Tomoki; Moriguchi, Toshiki; Takahashi, Masaki
2015-05-13
For the prevention of falling in the elderly, gait training has been proposed using tasks such as the multi-target stepping task (MTST), in which participants step on assigned colored targets. This study presents a gait measurement system using a laser range sensor for the MTST to evaluate the risk of falling. The system tracks both legs and measures general walking parameters such as stride length and walking speed. Additionally, it judges whether the participant steps on the assigned colored targets and detects cross steps to evaluate cognitive function. However, situations in which one leg is hidden from the sensor or the legs are close occur and are likely to lead to losing track of the legs or false tracking. To solve these problems, we propose a novel leg detection method with five observed leg patterns and global nearest neighbor-based data association with a variable validation region based on the state of each leg. In addition, methods to judge target steps and detect cross steps based on leg trajectory are proposed. From the experimental results with the elderly, it is confirmed that the proposed system can improve leg-tracking performance, judge target steps and detect cross steps with high accuracy.
13. Joint decision and Naive Bayes learning for detection of space multi-target
Huang, Tao; Li, Zhulian; Zhou, Yu; Xiong, Yaoheng; Zhang, Haitao
2014-07-01
In the photoelectric tracking system, the detection of space multi-target is crucial for target localization and tracking. The difficulties include the interferences from CCD smear and strong noise, the few characteristics of spot-like targets and the challenge of multiple targets. In this paper, we propose a hybrid algorithm of joint decision and Naive Bayes (JD-NB) learning, and present the duty ratio feature to discriminate the target and smear blocks. Firstly, we extract the proper features and train the parameters of the Naive Bayes classifier. Secondly, target blocks are preliminarily estimated with the Naive Bayes. Lastly, the 4-adjacent blocks of the candidate target blocks are jointed to analyze the distribution pattern and the true target blocks are secondarily extracted by the method of pattern matching. Experimental results indicate that the proposed JD-NB algorithm not only possesses a high recognition rate of better than 90% for the target block, but also effectively overcomes the disturbance of the smear block. Moreover, it performs well in the detection of small and faint targets when the SNR of the block is higher than about 0.014.
14. Gait Measurement System for the Multi-Target Stepping Task Using a Laser Range Sensor
PubMed Central
Yorozu, Ayanori; Nishiguchi, Shu; Yamada, Minoru; Aoyama, Tomoki; Moriguchi, Toshiki; Takahashi, Masaki
2015-01-01
For the prevention of falling in the elderly, gait training has been proposed using tasks such as the multi-target stepping task (MTST), in which participants step on assigned colored targets. This study presents a gait measurement system using a laser range sensor for the MTST to evaluate the risk of falling. The system tracks both legs and measures general walking parameters such as stride length and walking speed. Additionally, it judges whether the participant steps on the assigned colored targets and detects cross steps to evaluate cognitive function. However, situations in which one leg is hidden from the sensor or the legs are close occur and are likely to lead to losing track of the legs or false tracking. To solve these problems, we propose a novel leg detection method with five observed leg patterns and global nearest neighbor-based data association with a variable validation region based on the state of each leg. In addition, methods to judge target steps and detect cross steps based on leg trajectory are proposed. From the experimental results with the elderly, it is confirmed that the proposed system can improve leg-tracking performance, judge target steps and detect cross steps with high accuracy. PMID:25985161
15. Topology of classical molecular optimal control landscapes for multi-target objectives
SciTech Connect
Joe-Wong, Carlee; Ho, Tak-San; Rabitz, Herschel; Wu, Rebing
2015-04-21
This paper considers laser-driven optimal control of an ensemble of non-interacting molecules whose dynamics lie in classical phase space. The molecules evolve independently under control to distinct final states. We consider a control landscape defined in terms of multi-target (MT) molecular states and analyze the landscape as a functional of the control field. The topology of the MT control landscape is assessed through its gradient and Hessian with respect to the control. Under particular assumptions, the MT control landscape is found to be free of traps that could hinder reaching the objective. The Hessian associated with an optimal control field is shown to have finite rank, indicating an inherent degree of robustness to control noise. Both the absence of traps and rank of the Hessian are shown to be analogous to the situation of specifying multiple targets for an ensemble of quantum states. Numerical simulations are presented to illustrate the classical landscape principles and further characterize the system behavior as the control field is optimized.
16. In Vitro and In Vivo Activity of Multi-Target Inhibitors Against Trypanosoma brucei
PubMed Central
Yang, Gyongseon; Zhu, Wei; Wang, Yang; Huang, Guozhong; Byun, Sooyoung; Choi, Gahee; Li, Kai; Huang, Zhuoli; Docampo, Roberto; Oldfield, Eric; No, Joo Hwan
2015-01-01
We tested a series of amidine and related compounds against Trypanosoma brucei. The most active compound was a biphenyldiamidine which had an EC50 of 7.7 nM against bloodstream form parasites. There was little toxicity against two human cell lines with CC50 > 100 μM. There was also good in vivo activity in a mouse model of infection with 100% survival at 3 mg/kg i.p. The most potent lead blocked replication of kinetoplast DNA (k-DNA), but not nuclear DNA, in the parasite. Some compounds also inhibited the enzyme farnesyl diphosphate synthase (FPPS) and some were uncouplers of oxidative phosphorylation. We developed a computational model for T. brucei cell growth inhibition (R2 = 0.76) using DNA ΔTm values for inhibitor binding, combined with T. brucei FPPS IC50 values. Overall, the results suggest that it may be possible to develop multi-target drug leads against T. brucei that act by inhibiting both k-DNA replication and isoprenoid biosynthesis. PMID:26295062
17. Curcumin: A multi-target disease-modifying agent for late-stage transthyretin amyloidosis
PubMed Central
Ferreira, Nelson; Gonçalves, Nádia P.; Saraiva, Maria J.; Almeida, Maria R.
2016-01-01
Transthyretin amyloidoses encompass a variety of acquired and hereditary diseases triggered by systemic extracellular accumulation of toxic transthyretin aggregates and fibrils, particularly in the peripheral nervous system. Since transthyretin amyloidoses are typically complex progressive disorders, therapeutic approaches aiming multiple molecular targets simultaneously, might improve therapy efficacy and treatment outcome. In this study, we evaluate the protective effect of physiologically achievable doses of curcumin on the cytotoxicity induced by transthyretin oligomers in vitro by showing reduction of caspase-3 activity and the levels of endoplasmic reticulum-resident chaperone binding immunoglobulin protein. When given to an aged Familial Amyloidotic Polyneuropathy mouse model, curcumin not only reduced transthyretin aggregates deposition and toxicity in both gastrointestinal tract and dorsal root ganglia but also remodeled congophilic amyloid material in tissues. In addition, curcumin enhanced internalization, intracellular transport and degradation of transthyretin oligomers by primary macrophages from aged Familial Amyloidotic Polyneuropathy transgenic mice, suggesting an impaired activation of naïve phagocytic cells exposed to transthyretin toxic intermediate species. Overall, our results clearly support curcumin or optimized derivatives as promising multi-target disease-modifying agent for late-stage transthyretin amyloidosis. PMID:27197872
18. Topology of classical molecular optimal control landscapes for multi-target objectives.
PubMed
Joe-Wong, Carlee; Ho, Tak-San; Rabitz, Herschel; Wu, Rebing
2015-04-21
This paper considers laser-driven optimal control of an ensemble of non-interacting molecules whose dynamics lie in classical phase space. The molecules evolve independently under control to distinct final states. We consider a control landscape defined in terms of multi-target (MT) molecular states and analyze the landscape as a functional of the control field. The topology of the MT control landscape is assessed through its gradient and Hessian with respect to the control. Under particular assumptions, the MT control landscape is found to be free of traps that could hinder reaching the objective. The Hessian associated with an optimal control field is shown to have finite rank, indicating an inherent degree of robustness to control noise. Both the absence of traps and rank of the Hessian are shown to be analogous to the situation of specifying multiple targets for an ensemble of quantum states. Numerical simulations are presented to illustrate the classical landscape principles and further characterize the system behavior as the control field is optimized.
19. Multi-Target Tracking With Time-Varying Clutter Rate and Detection Profile: Application to Time-Lapse Cell Microscopy Sequences.
PubMed
Rezatofighi, Seyed Hamid; Gould, Stephen; Vo, Ba Tuong; Vo, Ba-Ngu; Mele, Katarina; Hartley, Richard
2015-06-01
Quantitative analysis of the dynamics of tiny cellular and sub-cellular structures, known as particles, in time-lapse cell microscopy sequences requires the development of a reliable multi-target tracking method capable of tracking numerous similar targets in the presence of high levels of noise, high target density, complex motion patterns and intricate interactions. In this paper, we propose a framework for tracking these structures based on the random finite set Bayesian filtering framework. We focus on challenging biological applications where image characteristics such as noise and background intensity change during the acquisition process. Under these conditions, detection methods usually fail to detect all particles and are often followed by missed detections and many spurious measurements with unknown and time-varying rates. To deal with this, we propose a bootstrap filter composed of an estimator and a tracker. The estimator adaptively estimates the required meta parameters for the tracker such as clutter rate and the detection probability of the targets, while the tracker estimates the state of the targets. Our results show that the proposed approach can outperform state-of-the-art particle trackers on both synthetic and real data in this regime. PMID:25594963
20. Multi-Target Tracking With Time-Varying Clutter Rate and Detection Profile: Application to Time-Lapse Cell Microscopy Sequences.
PubMed
Rezatofighi, Seyed Hamid; Gould, Stephen; Vo, Ba Tuong; Vo, Ba-Ngu; Mele, Katarina; Hartley, Richard
2015-06-01
Quantitative analysis of the dynamics of tiny cellular and sub-cellular structures, known as particles, in time-lapse cell microscopy sequences requires the development of a reliable multi-target tracking method capable of tracking numerous similar targets in the presence of high levels of noise, high target density, complex motion patterns and intricate interactions. In this paper, we propose a framework for tracking these structures based on the random finite set Bayesian filtering framework. We focus on challenging biological applications where image characteristics such as noise and background intensity change during the acquisition process. Under these conditions, detection methods usually fail to detect all particles and are often followed by missed detections and many spurious measurements with unknown and time-varying rates. To deal with this, we propose a bootstrap filter composed of an estimator and a tracker. The estimator adaptively estimates the required meta parameters for the tracker such as clutter rate and the detection probability of the targets, while the tracker estimates the state of the targets. Our results show that the proposed approach can outperform state-of-the-art particle trackers on both synthetic and real data in this regime.
1. Key Targets for Multi-Target Ligands Designed to Combat Neurodegeneration
PubMed Central
Ramsay, Rona R.; Majekova, Magdalena; Medina, Milagros; Valoti, Massimo
2016-01-01
HIGHLIGHTS Compounds that interact with multiple targets but minimally with the cytochrome P450 system (CYP) address the many factors leading to neurodegeneration.Acetyl- and Butyryl-cholineEsterases (AChE, BChE) and Monoamine Oxidases A/B (MAO A, MAO B) are targets for Multi-Target Designed Ligands (MTDL).ASS234 is an irreversible inhibitor of MAO A >MAO B and has micromolar potency against the cholinesterases.ASS234 is a poor CYP substrate in human liver, yielding the depropargylated metabolite.SMe1EC2, a stobadine derivative, showed high radical scavenging property, in vitro and in vivo giving protection in head trauma and diabetic damage of endothelium.Control of mitochondrial function and morphology by manipulating fission and fusion is emerging as a target area for therapeutic strategies to decrease the pathological outcome of neurodegenerative diseases. Growing evidence supports the view that neurodegenerative diseases have multiple and common mechanisms in their aetiologies. These multifactorial aspects have changed the broadly common assumption that selective drugs are superior to “dirty drugs” for use in therapy. This drives the research in studies of novel compounds that might have multiple action mechanisms. In neurodegeneration, loss of neuronal signaling is a major cause of the symptoms, so preservation of neurotransmitters by inhibiting the breakdown enzymes is a first approach. Acetylcholinesterase (AChE) inhibitors are the drugs preferentially used in AD and that one of these, rivastigmine, is licensed also for PD. Several studies have shown that monoamine oxidase (MAO) B, located mainly in glial cells, increases with age and is elevated in Alzheimer (AD) and Parkinson's Disease's (PD). Deprenyl, a MAO B inhibitor, significantly delays the initiation of levodopa treatment in PD patients. These indications underline that AChE and MAO are considered a necessary part of multi-target designed ligands (MTDL). However, both of these targets are
2. Key Targets for Multi-Target Ligands Designed to Combat Neurodegeneration
PubMed Central
Ramsay, Rona R.; Majekova, Magdalena; Medina, Milagros; Valoti, Massimo
2016-01-01
HIGHLIGHTS Compounds that interact with multiple targets but minimally with the cytochrome P450 system (CYP) address the many factors leading to neurodegeneration.Acetyl- and Butyryl-cholineEsterases (AChE, BChE) and Monoamine Oxidases A/B (MAO A, MAO B) are targets for Multi-Target Designed Ligands (MTDL).ASS234 is an irreversible inhibitor of MAO A >MAO B and has micromolar potency against the cholinesterases.ASS234 is a poor CYP substrate in human liver, yielding the depropargylated metabolite.SMe1EC2, a stobadine derivative, showed high radical scavenging property, in vitro and in vivo giving protection in head trauma and diabetic damage of endothelium.Control of mitochondrial function and morphology by manipulating fission and fusion is emerging as a target area for therapeutic strategies to decrease the pathological outcome of neurodegenerative diseases. Growing evidence supports the view that neurodegenerative diseases have multiple and common mechanisms in their aetiologies. These multifactorial aspects have changed the broadly common assumption that selective drugs are superior to “dirty drugs” for use in therapy. This drives the research in studies of novel compounds that might have multiple action mechanisms. In neurodegeneration, loss of neuronal signaling is a major cause of the symptoms, so preservation of neurotransmitters by inhibiting the breakdown enzymes is a first approach. Acetylcholinesterase (AChE) inhibitors are the drugs preferentially used in AD and that one of these, rivastigmine, is licensed also for PD. Several studies have shown that monoamine oxidase (MAO) B, located mainly in glial cells, increases with age and is elevated in Alzheimer (AD) and Parkinson's Disease's (PD). Deprenyl, a MAO B inhibitor, significantly delays the initiation of levodopa treatment in PD patients. These indications underline that AChE and MAO are considered a necessary part of multi-target designed ligands (MTDL). However, both of these targets are
3. Key Targets for Multi-Target Ligands Designed to Combat Neurodegeneration.
PubMed
Ramsay, Rona R; Majekova, Magdalena; Medina, Milagros; Valoti, Massimo
2016-01-01
HIGHLIGHTS Compounds that interact with multiple targets but minimally with the cytochrome P450 system (CYP) address the many factors leading to neurodegeneration.Acetyl- and Butyryl-cholineEsterases (AChE, BChE) and Monoamine Oxidases A/B (MAO A, MAO B) are targets for Multi-Target Designed Ligands (MTDL).ASS234 is an irreversible inhibitor of MAO A >MAO B and has micromolar potency against the cholinesterases.ASS234 is a poor CYP substrate in human liver, yielding the depropargylated metabolite.SMe1EC2, a stobadine derivative, showed high radical scavenging property, in vitro and in vivo giving protection in head trauma and diabetic damage of endothelium.Control of mitochondrial function and morphology by manipulating fission and fusion is emerging as a target area for therapeutic strategies to decrease the pathological outcome of neurodegenerative diseases. Growing evidence supports the view that neurodegenerative diseases have multiple and common mechanisms in their aetiologies. These multifactorial aspects have changed the broadly common assumption that selective drugs are superior to "dirty drugs" for use in therapy. This drives the research in studies of novel compounds that might have multiple action mechanisms. In neurodegeneration, loss of neuronal signaling is a major cause of the symptoms, so preservation of neurotransmitters by inhibiting the breakdown enzymes is a first approach. Acetylcholinesterase (AChE) inhibitors are the drugs preferentially used in AD and that one of these, rivastigmine, is licensed also for PD. Several studies have shown that monoamine oxidase (MAO) B, located mainly in glial cells, increases with age and is elevated in Alzheimer (AD) and Parkinson's Disease's (PD). Deprenyl, a MAO B inhibitor, significantly delays the initiation of levodopa treatment in PD patients. These indications underline that AChE and MAO are considered a necessary part of multi-target designed ligands (MTDL). However, both of these targets are simply
4. Olive Oil Phenols as Promising Multi-targeting Agents Against Alzheimer's Disease.
PubMed
Rigacci, Stefania
2015-01-01
Amyloid diseases are characterized by the deposition of typically aggregated proteins/peptides in tissues, associated with degeneration and progressive functional impairment. Alzheimer's disease is one of the most studied neurodegenerative amyloid diseases and, in Western countries, a significant cause of dementia in the elderly. The so-called "Mediterranean diet" has been considered for long as the healthier dietary regimen, characterised by a great abundance in vegetables and fruits, extra virgin olive oil as the main source of fat, a moderate consumption of red wine and a reduced intake of proteins from red meat. Recent epidemiological studies support the efficacy of the Mediterranean diet not only against cardiovascular and cancer diseases (as previously demonstrated) but also against the cognitive decline associated with ageing, and several data are highlighting the role played by natural phenols, of which red wine and extra virgin olive oil are rich, in such context. In the meantime, studies conducted both in vivo and in vitro have started to reveal the great potential of the phenolic component of extra virgin olive oil (mainly oleuropein aglycone and oleocanthal) in counteracting amyloid aggregation and toxicity, with a particular emphasis on the pathways involved in the onset and progression of Alzheimer's disease: amyloid precursor protein processing, amyloid-beta (Aβ) peptide and tau aggregation, autophagy impairment, neuroinflammation. The aim of this review is to summarize the results of such research efforts, showing how the action of these phenols goes far beyond their renowned antioxidant activity and revealing their potential as multi-targeting agents against Alzheimer's disease. PMID:26092624
5. Olive Oil Phenols as Promising Multi-targeting Agents Against Alzheimer's Disease.
PubMed
Rigacci, Stefania
2015-01-01
Amyloid diseases are characterized by the deposition of typically aggregated proteins/peptides in tissues, associated with degeneration and progressive functional impairment. Alzheimer's disease is one of the most studied neurodegenerative amyloid diseases and, in Western countries, a significant cause of dementia in the elderly. The so-called "Mediterranean diet" has been considered for long as the healthier dietary regimen, characterised by a great abundance in vegetables and fruits, extra virgin olive oil as the main source of fat, a moderate consumption of red wine and a reduced intake of proteins from red meat. Recent epidemiological studies support the efficacy of the Mediterranean diet not only against cardiovascular and cancer diseases (as previously demonstrated) but also against the cognitive decline associated with ageing, and several data are highlighting the role played by natural phenols, of which red wine and extra virgin olive oil are rich, in such context. In the meantime, studies conducted both in vivo and in vitro have started to reveal the great potential of the phenolic component of extra virgin olive oil (mainly oleuropein aglycone and oleocanthal) in counteracting amyloid aggregation and toxicity, with a particular emphasis on the pathways involved in the onset and progression of Alzheimer's disease: amyloid precursor protein processing, amyloid-beta (Aβ) peptide and tau aggregation, autophagy impairment, neuroinflammation. The aim of this review is to summarize the results of such research efforts, showing how the action of these phenols goes far beyond their renowned antioxidant activity and revealing their potential as multi-targeting agents against Alzheimer's disease.
6. Imbricaric Acid and Perlatolic Acid: Multi-Targeting Anti-Inflammatory Depsides from Cetrelia monachorum
PubMed Central
Oettl, Sarah K.; Gerstmeier, Jana; Khan, Shafaat Y.; Wiechmann, Katja; Bauer, Julia; Atanasov, Atanas G.; Malainer, Clemens; Awad, Ezzat M.; Uhrin, Pavel; Heiss, Elke H.; Waltenberger, Birgit; Remias, Daniel; Breuss, Johannes M.; Boustie, Joel; Dirsch, Verena M.; Stuppner, Hermann; Werz, Oliver; Rollinger, Judith M.
2013-01-01
In vitro screening of 17 Alpine lichen species for their inhibitory activity against 5-lipoxygenase, microsomal prostaglandin E2 synthase-1 and nuclear factor kappa B revealed Cetrelia monachorum (Zahlbr.) W.L. Culb. & C.F. Culb. As conceivable source for novel anti-inflammatory compounds. Phytochemical investigation of the ethanolic crude extract resulted in the isolation and identification of 11 constituents, belonging to depsides and derivatives of orsellinic acid, olivetolic acid and olivetol. The two depsides imbricaric acid (4) and perlatolic acid (5) approved dual inhibitory activities on microsomal prostaglandin E2 synthase-1 (IC50 = 1.9 and 0.4 µM, resp.) and on 5-lipoxygenase tested in a cell-based assay (IC50 = 5.3 and 1.8 µM, resp.) and on purified enzyme (IC50 = 3.5 and 0.4 µM, resp.). Additionally, these two main constituents quantified in the extract with 15.22% (4) and 9.10% (5) showed significant inhibition of tumor necrosis factor alpha-induced nuclear factor kappa B activation in luciferase reporter cells with IC50 values of 2.0 and 7.0 µM, respectively. In a murine in vivo model of inflammation, 5 impaired the inflammatory, thioglycollate-induced recruitment of leukocytes to the peritoneum. The potent inhibitory effects on the three identified targets attest 4 and 5 a pronounced multi-target anti-inflammatory profile which warrants further investigation on their pharmacokinetics and in vivo efficacy. PMID:24130812
7. Experimental demonstration of a multi-target detection technique using an X-band optically steered phased array radar.
PubMed
Shi, Nuannuan; Li, Ming; Deng, Ye; Zhang, Lihong; Sun, Shuqian; Tang, Jian; Li, Wei; Zhu, Ninghua
2016-06-27
An X-band optically-steered phased array radar is developed to demonstrate high resolution multi-target detection. The beam forming is implemented based on wavelength-swept true time delay (TTD) technique. The beam forming system has a wide direction tuning range of ± 54 degree, low magnitude ripple of ± 0.5 dB and small delay error of 0.13 ps/nm. To further verify performance of the proposed optically-steered phased array radar, three experiments are then carried out to implement the single and multiple target detection. A linearly chirped X-band microwave signal is used as radar signal which is finally compressed at the receiver to improve the detection accuracy. The ranging resolution for multi-target detection is up to 2 cm within the measuring distance over 4 m and the azimuth angle error is less than 4 degree.
8. Experimental demonstration of a multi-target detection technique using an X-band optically steered phased array radar.
PubMed
Shi, Nuannuan; Li, Ming; Deng, Ye; Zhang, Lihong; Sun, Shuqian; Tang, Jian; Li, Wei; Zhu, Ninghua
2016-06-27
An X-band optically-steered phased array radar is developed to demonstrate high resolution multi-target detection. The beam forming is implemented based on wavelength-swept true time delay (TTD) technique. The beam forming system has a wide direction tuning range of ± 54 degree, low magnitude ripple of ± 0.5 dB and small delay error of 0.13 ps/nm. To further verify performance of the proposed optically-steered phased array radar, three experiments are then carried out to implement the single and multiple target detection. A linearly chirped X-band microwave signal is used as radar signal which is finally compressed at the receiver to improve the detection accuracy. The ranging resolution for multi-target detection is up to 2 cm within the measuring distance over 4 m and the azimuth angle error is less than 4 degree. PMID:27410597
9. Systems biology approaches and tools for analysis of interactomes and multi-target drugs.
PubMed
Schrattenholz, André; Groebe, Karlfried; Soskic, Vukic
2010-01-01
diseases" remains a most pressing medical need. Currently, a change of paradigm can be observed with regard to a new interest in agents that modulate multiple targets simultaneously, essentially "dirty drugs." Targeting cellular function as a system rather than on the level of the single target, significantly increases the size of the drugable proteome and is expected to introduce novel classes of multi-target drugs with fewer adverse effects and toxicity. Multiple target approaches have recently been used to design medications against atherosclerosis, cancer, depression, psychosis and neurodegenerative diseases. A focussed approach towards "systemic" drugs will certainly require the development of novel computational and mathematical concepts for appropriate modelling of complex data. But the key is the extraction of relevant molecular information from biological systems by implementing rigid statistical procedures to differential proteomic analytics.
10. Proposed Methodology for Application of Human-like gradual Multi-Agent Q-Learning (HuMAQ) for Multi-robot Exploration
Narayan Ray, Dip; Majumder, Somajyoti
2014-07-01
Several attempts have been made by the researchers around the world to develop a number of autonomous exploration techniques for robots. But it has been always an important issue for developing the algorithm for unstructured and unknown environments. Human-like gradual Multi-agent Q-leaming (HuMAQ) is a technique developed for autonomous robotic exploration in unknown (and even unimaginable) environments. It has been successfully implemented in multi-agent single robotic system. HuMAQ uses the concept of Subsumption architecture, a well-known Behaviour-based architecture for prioritizing the agents of the multi-agent system and executes only the most common action out of all the different actions recommended by different agents. Instead of using new state-action table (Q-table) each time, HuMAQ uses the immediate past table for efficient and faster exploration. The proof of learning has also been established both theoretically and practically. HuMAQ has the potential to be used in different and difficult situations as well as applications. The same architecture has been modified to use for multi-robot exploration in an environment. Apart from all other existing agents used in the single robotic system, agents for inter-robot communication and coordination/ co-operation with the other similar robots have been introduced in the present research. Current work uses a series of indigenously developed identical autonomous robotic systems, communicating with each other through ZigBee protocol.
11. Multi-Target Directed Donepezil-Like Ligands for Alzheimer's Disease.
PubMed
Unzeta, Mercedes; Esteban, Gerard; Bolea, Irene; Fogel, Wieslawa A; Ramsay, Rona R; Youdim, Moussa B H; Tipton, Keith F; Marco-Contelles, José
2016-01-01
HIGHLIGHTS ASS234 is a MTDL compound containing a moiety from Donepezil and the propargyl group from the PF 9601N, a potent and selective MAO B inhibitor. This compound is the most advanced anti-Alzheimer agent for preclinical studies identified in our laboratory.Derived from ASS234 both multipotent donepezil-indolyl (MTDL-1) and donepezil-pyridyl hybrids (MTDL-2) were designed and evaluated as inhibitors of AChE/BuChE and both MAO isoforms. MTDL-2 showed more high affinity toward the four enzymes than MTDL-1.MTDL-3 and MTDL-4, were designed containing the N-benzylpiperidinium moiety from Donepezil, a metal- chelating 8-hydroxyquinoline group and linked to a N-propargyl core and they were pharmacologically evaluated.The presence of the cyano group in MTDL-3, enhanced binding to AChE, BuChE and MAO A. It showed antioxidant behavior and it was able to strongly complex Cu(II), Zn(II) and Fe(III).MTDL-4 showed higher affinity toward AChE, BuChE.MTDL-3 exhibited good brain penetration capacity (ADMET) and less toxicity than Donepezil. Memory deficits in scopolamine-lesioned animals were restored by MTDL-3.MTDL-3 particularly emerged as a ligand showing remarkable potential benefits for its use in AD therapy. Alzheimer's disease (AD), the most common form of adult onset dementia, is an age-related neurodegenerative disorder characterized by progressive memory loss, decline in language skills, and other cognitive impairments. Although its etiology is not completely known, several factors including deficits of acetylcholine, β-amyloid deposits, τ-protein phosphorylation, oxidative stress, and neuroinflammation are considered to play significant roles in the pathophysiology of this disease. For a long time, AD patients have been treated with acetylcholinesterase inhibitors such as donepezil (Aricept®) but with limited therapeutic success. This might be due to the complex multifactorial nature of AD, a fact that has prompted the design of new Multi-Target-Directed Ligands
12. Multi-Target Directed Donepezil-Like Ligands for Alzheimer's Disease
PubMed Central
Unzeta, Mercedes; Esteban, Gerard; Bolea, Irene; Fogel, Wieslawa A.; Ramsay, Rona R.; Youdim, Moussa B. H.; Tipton, Keith F.; Marco-Contelles, José
2016-01-01
HIGHLIGHTS ASS234 is a MTDL compound containing a moiety from Donepezil and the propargyl group from the PF 9601N, a potent and selective MAO B inhibitor. This compound is the most advanced anti-Alzheimer agent for preclinical studies identified in our laboratory.Derived from ASS234 both multipotent donepezil-indolyl (MTDL-1) and donepezil-pyridyl hybrids (MTDL-2) were designed and evaluated as inhibitors of AChE/BuChE and both MAO isoforms. MTDL-2 showed more high affinity toward the four enzymes than MTDL-1.MTDL-3 and MTDL-4, were designed containing the N-benzylpiperidinium moiety from Donepezil, a metal- chelating 8-hydroxyquinoline group and linked to a N-propargyl core and they were pharmacologically evaluated.The presence of the cyano group in MTDL-3, enhanced binding to AChE, BuChE and MAO A. It showed antioxidant behavior and it was able to strongly complex Cu(II), Zn(II) and Fe(III).MTDL-4 showed higher affinity toward AChE, BuChE.MTDL-3 exhibited good brain penetration capacity (ADMET) and less toxicity than Donepezil. Memory deficits in scopolamine-lesioned animals were restored by MTDL-3.MTDL-3 particularly emerged as a ligand showing remarkable potential benefits for its use in AD therapy. Alzheimer's disease (AD), the most common form of adult onset dementia, is an age-related neurodegenerative disorder characterized by progressive memory loss, decline in language skills, and other cognitive impairments. Although its etiology is not completely known, several factors including deficits of acetylcholine, β-amyloid deposits, τ-protein phosphorylation, oxidative stress, and neuroinflammation are considered to play significant roles in the pathophysiology of this disease. For a long time, AD patients have been treated with acetylcholinesterase inhibitors such as donepezil (Aricept®) but with limited therapeutic success. This might be due to the complex multifactorial nature of AD, a fact that has prompted the design of new Multi-Target-Directed Ligands
13. Multi-target tacrine-coumarin hybrids: cholinesterase and monoamine oxidase B inhibition properties against Alzheimer's disease.
PubMed
Xie, Sai-Sai; Wang, Xiaobing; Jiang, Neng; Yu, Wenying; Wang, Kelvin D G; Lan, Jin-Shuai; Li, Zhong-Rui; Kong, Ling-Yi
2015-05-01
A series of novel tacrine-coumarin hybrids were designed, synthesized and evaluated as multi-target agents against Alzheimer's disease. The biological assays indicated that most of compounds displayed potent inhibitory activity toward AChE and BuChE, and clearly selective inhibition for MAO-B. Among these compounds, 14c exhibited strong inhibitory activity for AChE (IC50 values of 33.63 nM for eeAChE and 16.11 nM for hAChE) and BuChE (IC50 values of 80.72 nM for eqBuChE and 112.72 nM for hBuChE), and the highest inhibitory activity against hMAO-B (IC50 value of 0.24 μM). Kinetic and molecular modeling studies revealed that 14c was a mixed-type inhibitor, binding simultaneously to catalytic, peripheral and mid-gorge sites of AChE. It was also a competitive inhibitor, which covered the substrate and entrance cavities of MAO-B. Moreover, 14c could penetrate the CNS and show low cell toxicity. Overall, these results suggested that 14c might be an excellent multi-target agent for AD treatment. PMID:25812965
14. Development and application of a multi-targeting reference plasmid as calibrator for analysis of five genetically modified soybean events.
PubMed
Pi, Liqun; Li, Xiang; Cao, Yiwei; Wang, Canhua; Pan, Liangwen; Yang, Litao
2015-04-01
Reference materials are important in accurate analysis of genetically modified organism (GMO) contents in food/feeds, and development of novel reference plasmid is a new trend in the research of GMO reference materials. Herein, we constructed a novel multi-targeting plasmid, pSOY, which contained seven event-specific sequences of five GM soybeans (MON89788-5', A2704-12-3', A5547-127-3', DP356043-5', DP305423-3', A2704-12-5', and A5547-127-5') and sequence of soybean endogenous reference gene Lectin. We evaluated the specificity, limit of detection and quantification, and applicability of pSOY in both qualitative and quantitative PCR analyses. The limit of detection (LOD) was as low as 20 copies in qualitative PCR, and the limit of quantification (LOQ) in quantitative PCR was 10 copies. In quantitative real-time PCR analysis, the PCR efficiencies of all event-specific and Lectin assays were higher than 90%, and the squared regression coefficients (R(2)) were more than 0.999. The quantification bias varied from 0.21% to 19.29%, and the relative standard deviations were from 1.08% to 9.84% in simulated samples analysis. All the results demonstrated that the developed multi-targeting plasmid, pSOY, was a credible substitute of matrix reference materials, and could be used as a reliable reference calibrator in the identification and quantification of multiple GM soybean events. PMID:25673245
15. Development and application of a multi-targeting reference plasmid as calibrator for analysis of five genetically modified soybean events.
PubMed
Pi, Liqun; Li, Xiang; Cao, Yiwei; Wang, Canhua; Pan, Liangwen; Yang, Litao
2015-04-01
Reference materials are important in accurate analysis of genetically modified organism (GMO) contents in food/feeds, and development of novel reference plasmid is a new trend in the research of GMO reference materials. Herein, we constructed a novel multi-targeting plasmid, pSOY, which contained seven event-specific sequences of five GM soybeans (MON89788-5', A2704-12-3', A5547-127-3', DP356043-5', DP305423-3', A2704-12-5', and A5547-127-5') and sequence of soybean endogenous reference gene Lectin. We evaluated the specificity, limit of detection and quantification, and applicability of pSOY in both qualitative and quantitative PCR analyses. The limit of detection (LOD) was as low as 20 copies in qualitative PCR, and the limit of quantification (LOQ) in quantitative PCR was 10 copies. In quantitative real-time PCR analysis, the PCR efficiencies of all event-specific and Lectin assays were higher than 90%, and the squared regression coefficients (R(2)) were more than 0.999. The quantification bias varied from 0.21% to 19.29%, and the relative standard deviations were from 1.08% to 9.84% in simulated samples analysis. All the results demonstrated that the developed multi-targeting plasmid, pSOY, was a credible substitute of matrix reference materials, and could be used as a reliable reference calibrator in the identification and quantification of multiple GM soybean events.
16. Evaluation of multi-target immunogenic reagents for the detection of latent and body fluid-contaminated fingermarks.
PubMed
Lam, Rolanda; Hofstetter, Oliver; Lennard, Chris; Roux, Claude; Spindler, Xanthe
2016-07-01
Fingermark enhancement reagents capable of molecular recognition offer a highly selective and sensitive method of detection. Antibodies and aptamers provide a high degree of adaptability for visualisation, allowing for the selection of the most appropriate visualisation wavelength for a particular substrate without the need for specialist equipment or image processing. However, the major hurdle to overcome is the balance between sensitivity and selectivity. Single-target molecular recognition is highly specific, purported to have better detection limits than chemical reactions or stains, and can provide information about the donor or activity, but often results in incomplete ridge pattern development. Consequently, the development and evaluation of multi-target biomolecular reagents for fingermark enhancement was investigated, with the focus on endogenous eccrine secretions. To assess the suitability of the immunogenic reagents for potential operational use, a variety of parameters (i.e., processing time, fixing and working solution conditions) were optimised on a wide range of non-porous and semi-porous substrates. The relative performance of immunogenic reagents was compared to that of routine techniques applied to latent marks and marks in blood, semen and saliva. The incorporation of these novel reagents into routine technique sequences was also investigated. The experimental results indicated that the multi-target immunogenic reagents were not a suitable alternative to routine detection methods or sequences, but may have promise as a "last resort" method for difficult substrates or cases.
17. In Vivo Characterization of ARN14140, a Memantine/Galantamine-Based Multi-Target Compound for Alzheimer’s Disease
PubMed Central
Reggiani, Angelo M.; Simoni, Elena; Caporaso, Roberta; Meunier, Johann; Keller, Emeline; Maurice, Tangui; Minarini, Anna; Rosini, Michela; Cavalli, Andrea
2016-01-01
Alzheimer’s disease (AD) is a chronic pathological condition that leads to neurodegeneration, loss of intellectual abilities, including cognition and memory, and ultimately to death. It is widely recognized that AD is a multifactorial disease, where different pathological cascades (mainly amyloid and tau) contribute to neural death and to the clinical outcome related to the disease. The currently available drugs for AD were developed according to the one-target, one-drug paradigm. In recent times, multi-target strategies have begun to play an increasingly central role in the discovery of more efficacious candidates for complex neurological conditions, including AD. In this study, we report on the in vivo pharmacological characterization of ARN14140, a new chemical entity, which was obtained through a multi-target structure-activity relationship campaign, and which showed a balanced inhibiting profile against the acetylcholinesterase enzyme and the NMDA receptor. Based on the initial promising biochemical data, ARN14140 is here studied in mice treated with the amyloidogenic fragment 25–35 of the amyloid-β peptide, a consolidated non-transgenic AD model. Sub-chronically treating animals with ARN14140 leads to a prevention of the cognitive impairment and of biomarker levels connected to neurodegeneration, demonstrating its neuroprotective potential as new AD agent. PMID:27609215
18. Multi-target tacrine-coumarin hybrids: cholinesterase and monoamine oxidase B inhibition properties against Alzheimer's disease.
PubMed
Xie, Sai-Sai; Wang, Xiaobing; Jiang, Neng; Yu, Wenying; Wang, Kelvin D G; Lan, Jin-Shuai; Li, Zhong-Rui; Kong, Ling-Yi
2015-05-01
A series of novel tacrine-coumarin hybrids were designed, synthesized and evaluated as multi-target agents against Alzheimer's disease. The biological assays indicated that most of compounds displayed potent inhibitory activity toward AChE and BuChE, and clearly selective inhibition for MAO-B. Among these compounds, 14c exhibited strong inhibitory activity for AChE (IC50 values of 33.63 nM for eeAChE and 16.11 nM for hAChE) and BuChE (IC50 values of 80.72 nM for eqBuChE and 112.72 nM for hBuChE), and the highest inhibitory activity against hMAO-B (IC50 value of 0.24 μM). Kinetic and molecular modeling studies revealed that 14c was a mixed-type inhibitor, binding simultaneously to catalytic, peripheral and mid-gorge sites of AChE. It was also a competitive inhibitor, which covered the substrate and entrance cavities of MAO-B. Moreover, 14c could penetrate the CNS and show low cell toxicity. Overall, these results suggested that 14c might be an excellent multi-target agent for AD treatment.
19. In Vivo Characterization of ARN14140, a Memantine/Galantamine-Based Multi-Target Compound for Alzheimer's Disease.
PubMed
Reggiani, Angelo M; Simoni, Elena; Caporaso, Roberta; Meunier, Johann; Keller, Emeline; Maurice, Tangui; Minarini, Anna; Rosini, Michela; Cavalli, Andrea
2016-01-01
Alzheimer's disease (AD) is a chronic pathological condition that leads to neurodegeneration, loss of intellectual abilities, including cognition and memory, and ultimately to death. It is widely recognized that AD is a multifactorial disease, where different pathological cascades (mainly amyloid and tau) contribute to neural death and to the clinical outcome related to the disease. The currently available drugs for AD were developed according to the one-target, one-drug paradigm. In recent times, multi-target strategies have begun to play an increasingly central role in the discovery of more efficacious candidates for complex neurological conditions, including AD. In this study, we report on the in vivo pharmacological characterization of ARN14140, a new chemical entity, which was obtained through a multi-target structure-activity relationship campaign, and which showed a balanced inhibiting profile against the acetylcholinesterase enzyme and the NMDA receptor. Based on the initial promising biochemical data, ARN14140 is here studied in mice treated with the amyloidogenic fragment 25-35 of the amyloid-β peptide, a consolidated non-transgenic AD model. Sub-chronically treating animals with ARN14140 leads to a prevention of the cognitive impairment and of biomarker levels connected to neurodegeneration, demonstrating its neuroprotective potential as new AD agent. PMID:27609215
20. Extending multi-tenant architectures: a database model for a multi-target support in SaaS applications
Rico, Antonio; Noguera, Manuel; Garrido, José Luis; Benghazi, Kawtar; Barjis, Joseph
2016-05-01
Multi-tenant architectures (MTAs) are considered a cornerstone in the success of Software as a Service as a new application distribution formula. Multi-tenancy allows multiple customers (i.e. tenants) to be consolidated into the same operational system. This way, tenants run and share the same application instance as well as costs, which are significantly reduced. Functional needs vary from one tenant to another; either companies from different sectors run different types of applications or, although deploying the same functionality, they do differ in the extent of their complexity. In any case, MTA leaves one major concern regarding the companies' data, their privacy and security, which requires special attention to the data layer. In this article, we propose an extended data model that enhances traditional MTAs in respect of this concern. This extension - called multi-target - allows MT applications to host, manage and serve multiple functionalities within the same multi-tenant (MT) environment. The practical deployment of this approach will allow SaaS vendors to target multiple markets or address different levels of functional complexity and yet commercialise just one single MT application. The applicability of the approach is demonstrated via a case study of a real multi-tenancy multi-target (MT2) implementation, called Globalgest.
1. Multi target neuroprotective and neurorestorative anti-Parkinson and anti-Alzheimer drugs ladostigil and m30 derived from rasagiline.
PubMed
Youdim, Moussa B H
2013-03-01
Present anti-PD and -AD drugs have limited symptomatic activity and devoid of neuroprotective and neurorestorative property that is needed for disease modifying action. The complex pathology of PD and AD led us to develop several multi-target neuroprotective and neurorestorative drugs with several CNS targets with the ability for possible disease modifying activity. Employing the pharmacophore of our anti-parkinson drug rasagiline (Azilect, N-propagrgyl-1-R-aminoindan), we have developed a series of novel multi-functional neuroprotective drugs (A) [TV-3326 (N-propargyl-3R-aminoindan-5yl)-ethyl methylcarbamate)], with both cholinesterase-butyrylesterase and brain selective monoamine-oxidase (MAO) A/B inhibitory activities and (B) the iron chelator-radical scavenging-brain selective monoamine oxidase (MAO) A/B inhibitor and M30 possessing the neuroprotective and neurorescuing propargyl moiety of rasagiline, as potential treatment of AD, DLB and PD with dementia. Another series of multi-target drugs (M30, HLA-20 series) which are brain permeable iron chelators and potent selective brain MAO inhibitors were also developed. These series of drugs have the ability of regulating and processing amyloid precursor protein (APP) since APP and alpha-synuclein are metaloproteins (iron-regulated proteins), with an iron responsive element 5"UTR mRNA similar to transferring and ferritin. Ladostigil inhibits brain acetyl and butyrylcholinesterase in rats after oral doses. After chronic but not acute treatment, it inhibits MAO-A and -B in the brain. Ladostigil acts like an anti-depressant in the forced swim test in rats, indicating a potential for anti-depressant activity. Ladostigil prevents the destruction of nigrostriatal neurons induced by infusion of neurotoxin MPTP in mice. The propargylamine moiety of ladostigil confers neuroprotective activity against cytotoxicity induced by ischemia and peroxynitrite in cultured neuronal cells. The multi-target iron chelator M30 has all the
2. An Intelligent Man-Machine Interface-Multi-Robot Control Adapted for Task Engagement Based on Single-Trial Detectability of P300.
PubMed
Kirchner, Elsa A; Kim, Su K; Tabie, Marc; Wöhrle, Hendrik; Maurus, Michael; Kirchner, Frank
2016-01-01
3. An Intelligent Man-Machine Interface-Multi-Robot Control Adapted for Task Engagement Based on Single-Trial Detectability of P300.
PubMed
Kirchner, Elsa A; Kim, Su K; Tabie, Marc; Wöhrle, Hendrik; Maurus, Michael; Kirchner, Frank
2016-01-01
4. An Intelligent Man-Machine Interface—Multi-Robot Control Adapted for Task Engagement Based on Single-Trial Detectability of P300
PubMed Central
Kirchner, Elsa A.; Kim, Su K.; Tabie, Marc; Wöhrle, Hendrik; Maurus, Michael; Kirchner, Frank
2016-01-01
Duan, Xiaopin; Xiao, Jisheng; Yin, Qi; Zhang, Zhiwen; Yu, Haijun; Mao, Shirui; Li, Yaping
2014-03-01
Metastasis, the main cause of cancer related deaths, remains the greatest challenge in cancer treatment. Disulfiram (DSF), which has multi-targeted anti-tumor activity, was encapsulated into redox-sensitive shell crosslinked micelles to achieve intracellular targeted delivery and finally inhibit tumor growth and metastasis. The crosslinked micelles demonstrated good stability in circulation and specifically released DSF under a reductive environment that mimicked the intracellular conditions of tumor cells. As a result, the DSF-loaded redox-sensitive shell crosslinked micelles (DCMs) dramatically inhibited cell proliferation, induced cell apoptosis and suppressed cell invasion, as well as impairing tube formation of HMEC-1 cells. In addition, the DCMs could accumulate in tumor tissue and stay there for a long time, thereby causing significant inhibition of 4T1 tumor growth and marked prevention in lung metastasis of 4T1 tumors. These results suggested that DCMs could be a promising delivery system in inhibiting the growth and metastasis of breast cancer.
6. Multi-target QSAR and docking study of steroids binding to corticosteroid-binding globulin and sex hormone-binding globulin.
PubMed
Nikolic, Katarina; Filipic, Slavica; Agbaba, Danica
2012-12-01
The QSAR and docking studies were performed on fifty seven steroids with binding affinities for corticosteroid-binding globulin (CBG) and eighty four steroids with binding affinities for sex hormone-binding globulin (SHBG). Since the steroidal compounds have binding affinity for both CBG and SHBG, multi-target QSAR approach was employed to establish a unique QSAR method for simultaneous evaluation of the CBG and SHBG binding affinities. The constitutional, geometrical, physico-chemical and electronic descriptors were computed for the examined structures by use of the Chem3D Ultra 7.0.0, the Dragon 6.0, the MOPAC2009, and the Chemical Descriptors Library (CDL) program. Partial least squares regression (PLSR) has been applied for selection of the most relevant molecular descriptors and QSAR models building. The QSAR (SHGB) model, QSAR model (CBG), and multi-target QSAR model (CBG, SHBG) were created. The multi-target QSAR model (CBG and SHBG) was found to be more effective in describing the CBG and SHBG affinity of steroids in comparison to the one target models (QSAR (SHGB) model, QSAR model (CBG)). The multi-target QSAR study indicated the importance of the electronic descriptor (Mor16v), steric/symmetry descriptors (Eig06_EA(ed)), 2D autocorrelation descriptor (GATS4m), distance distribution descriptor (RDF045m), and atom type fingerprint descriptor (CDL-ATFP 253) in describing the CBG and SHBG affinity of steroidal compounds. Results of the created multi-target QSAR model were in accordance with the performed docking studies. The theoretical study defined physicochemical, electronic and structural requirements for selective and effective binding of steroids to the CBG and SHBG active sites.
7. Multi-Target-Directed Ligands and other Therapeutic Strategies in the Search of a Real Solution for Alzheimer's Disease
PubMed Central
Agis-Torres, Angel; Sölhuber, Monica; Fernandez, Maria; Sanchez-Montero, J.M.
2014-01-01
8. Multi-target parallel processing approach for gene-to-structure determination of the influenza polymerase PB2 subunit.
PubMed
Armour, Brianna L; Barnes, Steve R; Moen, Spencer O; Smith, Eric; Raymond, Amy C; Fairman, James W; Stewart, Lance J; Staker, Bart L; Begley, Darren W; Edwards, Thomas E; Lorimer, Donald D
2013-01-01
Pandemic outbreaks of highly virulent influenza strains can cause widespread morbidity and mortality in human populations worldwide. In the United States alone, an average of 41,400 deaths and 1.86 million hospitalizations are caused by influenza virus infection each year (1). Point mutations in the polymerase basic protein 2 subunit (PB2) have been linked to the adaptation of the viral infection in humans (2). Findings from such studies have revealed the biological significance of PB2 as a virulence factor, thus highlighting its potential as an antiviral drug target. The structural genomics program put forth by the National Institute of Allergy and Infectious Disease (NIAID) provides funding to Emerald Bio and three other Pacific Northwest institutions that together make up the Seattle Structural Genomics Center for Infectious Disease (SSGCID). The SSGCID is dedicated to providing the scientific community with three-dimensional protein structures of NIAID category A-C pathogens. Making such structural information available to the scientific community serves to accelerate structure-based drug design. Structure-based drug design plays an important role in drug development. Pursuing multiple targets in parallel greatly increases the chance of success for new lead discovery by targeting a pathway or an entire protein family. Emerald Bio has developed a high-throughput, multi-target parallel processing pipeline (MTPP) for gene-to-structure determination to support the consortium. Here we describe the protocols used to determine the structure of the PB2 subunit from four different influenza A strains. PMID:23851357
9. Multi-Target Joint Detection and Estimation Error Bound for the Sensor with Clutter and Missed Detection.
PubMed
Lian, Feng; Zhang, Guang-Hua; Duan, Zhan-Sheng; Han, Chong-Zhao
2016-01-01
The error bound is a typical measure of the limiting performance of all filters for the given sensor measurement setting. This is of practical importance in guiding the design and management of sensors to improve target tracking performance. Within the random finite set (RFS) framework, an error bound for joint detection and estimation (JDE) of multiple targets using a single sensor with clutter and missed detection is developed by using multi-Bernoulli or Poisson approximation to multi-target Bayes recursion. Here, JDE refers to jointly estimating the number and states of targets from a sequence of sensor measurements. In order to obtain the results of this paper, all detectors and estimators are restricted to maximum a posteriori (MAP) detectors and unbiased estimators, and the second-order optimal sub-pattern assignment (OSPA) distance is used to measure the error metric between the true and estimated state sets. The simulation results show that clutter density and detection probability have significant impact on the error bound, and the effectiveness of the proposed bound is verified by indicating the performance limitations of the single-sensor probability hypothesis density (PHD) and cardinalized PHD (CPHD) filters for various clutter densities and detection probabilities. PMID:26828499
10. Multi-target Parallel Processing Approach for Gene-to-structure Determination of the Influenza Polymerase PB2 Subunit
PubMed Central
Moen, Spencer O.; Smith, Eric; Raymond, Amy C.; Fairman, James W.; Stewart, Lance J.; Staker, Bart L.; Begley, Darren W.; Edwards, Thomas E.; Lorimer, Donald D.
2013-01-01
Pandemic outbreaks of highly virulent influenza strains can cause widespread morbidity and mortality in human populations worldwide. In the United States alone, an average of 41,400 deaths and 1.86 million hospitalizations are caused by influenza virus infection each year 1. Point mutations in the polymerase basic protein 2 subunit (PB2) have been linked to the adaptation of the viral infection in humans 2. Findings from such studies have revealed the biological significance of PB2 as a virulence factor, thus highlighting its potential as an antiviral drug target. The structural genomics program put forth by the National Institute of Allergy and Infectious Disease (NIAID) provides funding to Emerald Bio and three other Pacific Northwest institutions that together make up the Seattle Structural Genomics Center for Infectious Disease (SSGCID). The SSGCID is dedicated to providing the scientific community with three-dimensional protein structures of NIAID category A-C pathogens. Making such structural information available to the scientific community serves to accelerate structure-based drug design. Structure-based drug design plays an important role in drug development. Pursuing multiple targets in parallel greatly increases the chance of success for new lead discovery by targeting a pathway or an entire protein family. Emerald Bio has developed a high-throughput, multi-target parallel processing pipeline (MTPP) for gene-to-structure determination to support the consortium. Here we describe the protocols used to determine the structure of the PB2 subunit from four different influenza A strains. PMID:23851357
11. A multi-target real-time PCR assay for rapid identification of meningitis-associated microorganisms.
PubMed
Favaro, Marco; Savini, Vincenzo; Favalli, Cartesio; Fontana, Carla
2013-01-01
A central nervous system (CNS) infection, such as meningitis, is a serious and life-threatening condition. Bacterial meningitis can be severe and may result in brain damage, disability or even death. Rapid diagnosis of CNS infections and identification of the pathogenic microorganisms are needed to improve the patient outcome. Bacterial culture of a patient's cerebrospinal fluid (CSF) is currently considered the "gold standard" for diagnosing bacterial meningitis. From the CSF cultures researchers can assess the in vitro susceptibility of the causative microorganism to determine the best antibiotic treatment. However, many of the culture assays, such as microscopy and the latex agglutination test are not sensitive. To enhance pathogen detection in CSF samples we developed a multi-target real-time PCR assay that can rapidly identify six different microorganisms: Streptococcus pneumoniae, Neisseria meningitidis, Haemophilus influenzae, Streptococcus agalactiae, Listeria monocytogenes and Cryptococcus neoformans. In this study we applied this PCR analysis to 296 CSF samples from patients who were suspected of having meningitis. Of the 296 samples that were examined, 59 samples were positive according to the CSF culture and/or molecular assays. Forty-six CSF samples were positive for both the CSF culture and our real-time PCR assay, while 13 samples were positive for the real-time PCR but negative for the traditional assays. This discrepancy may have been caused by the fact that these samples were collected from 23 patients who were treated with antimicrobials before CSF sampling.
12. Multi-Target Joint Detection and Estimation Error Bound for the Sensor with Clutter and Missed Detection.
PubMed
Lian, Feng; Zhang, Guang-Hua; Duan, Zhan-Sheng; Han, Chong-Zhao
2016-01-28
The error bound is a typical measure of the limiting performance of all filters for the given sensor measurement setting. This is of practical importance in guiding the design and management of sensors to improve target tracking performance. Within the random finite set (RFS) framework, an error bound for joint detection and estimation (JDE) of multiple targets using a single sensor with clutter and missed detection is developed by using multi-Bernoulli or Poisson approximation to multi-target Bayes recursion. Here, JDE refers to jointly estimating the number and states of targets from a sequence of sensor measurements. In order to obtain the results of this paper, all detectors and estimators are restricted to maximum a posteriori (MAP) detectors and unbiased estimators, and the second-order optimal sub-pattern assignment (OSPA) distance is used to measure the error metric between the true and estimated state sets. The simulation results show that clutter density and detection probability have significant impact on the error bound, and the effectiveness of the proposed bound is verified by indicating the performance limitations of the single-sensor probability hypothesis density (PHD) and cardinalized PHD (CPHD) filters for various clutter densities and detection probabilities.
13. TimeLapseAnalyzer: multi-target analysis for live-cell imaging and time-lapse microscopy.
PubMed
Huth, Johannes; Buchholz, Malte; Kraus, Johann M; Mølhave, Kristian; Gradinaru, Cristian; v Wichert, Götz; Gress, Thomas M; Neumann, Heiko; Kestler, Hans A
2011-11-01
The direct observation of cells over time using time-lapse microscopy can provide deep insights into many important biological processes. Reliable analyses of motility, proliferation, invasive potential or mortality of cells are essential to many studies involving live cell imaging and can aid in biomarker discovery and diagnostic decisions. Given the vast amount of image- and time-series data produced by modern microscopes, automated analysis is a key feature to capitalize the potential of time-lapse imaging devices. To provide fast and reproducible analyses of multiple aspects of cell behaviour, we developed TimeLapseAnalyzer. Apart from general purpose image enhancements and segmentation procedures, this extensible, self-contained, modular cross-platform package provides dedicated modalities for fast and reliable analysis of multi-target cell tracking, scratch wound healing analysis, cell counting and tube formation analysis in high throughput screening of live-cell experiments. TimeLapseAnalyzer is freely available (MATLAB, Open Source) at http://www.informatik.uni-ulm.de/ni/mitarbeiter/HKestler/tla.
14. Multi-Target Joint Detection and Estimation Error Bound for the Sensor with Clutter and Missed Detection
PubMed Central
Lian, Feng; Zhang, Guang-Hua; Duan, Zhan-Sheng; Han, Chong-Zhao
2016-01-01
The error bound is a typical measure of the limiting performance of all filters for the given sensor measurement setting. This is of practical importance in guiding the design and management of sensors to improve target tracking performance. Within the random finite set (RFS) framework, an error bound for joint detection and estimation (JDE) of multiple targets using a single sensor with clutter and missed detection is developed by using multi-Bernoulli or Poisson approximation to multi-target Bayes recursion. Here, JDE refers to jointly estimating the number and states of targets from a sequence of sensor measurements. In order to obtain the results of this paper, all detectors and estimators are restricted to maximum a posteriori (MAP) detectors and unbiased estimators, and the second-order optimal sub-pattern assignment (OSPA) distance is used to measure the error metric between the true and estimated state sets. The simulation results show that clutter density and detection probability have significant impact on the error bound, and the effectiveness of the proposed bound is verified by indicating the performance limitations of the single-sensor probability hypothesis density (PHD) and cardinalized PHD (CPHD) filters for various clutter densities and detection probabilities. PMID:26828499
15. Network pharmacology of cancer: From understanding of complex interactomes to the design of multi-target specific therapeutics from nature.
PubMed
Poornima, Paramasivan; Kumar, Jothi Dinesh; Zhao, Qiaoli; Blunder, Martina; Efferth, Thomas
2016-09-01
Despite massive investments in drug research and development, the significant decline in the number of new drugs approved or translated to clinical use raises the question, whether single targeted drug discovery is the right approach. To combat complex systemic diseases that harbour robust biological networks such as cancer, single target intervention is proved to be ineffective. In such cases, network pharmacology approaches are highly useful, because they differ from conventional drug discovery by addressing the ability of drugs to target numerous proteins or networks involved in a disease. Pleiotropic natural products are one of the promising strategies due to their multi-targeting and due to lower side effects. In this review, we discuss the application of network pharmacology for cancer drug discovery. We provide an overview of the current state of knowledge on network pharmacology, focus on different technical approaches and implications for cancer therapy (e.g. polypharmacology and synthetic lethality), and illustrate the therapeutic potential with selected examples green tea polyphenolics, Eleutherococcus senticosus, Rhodiola rosea, and Schisandra chinensis). Finally, we present future perspectives on their plausible applications for diagnosis and therapy of cancer. PMID:27329331
16. Multi-target Chromogenic Whole-mount In Situ Hybridization for Comparing Gene Expression Domains in Drosophila Embryos
PubMed Central
Hauptmann, Giselbert; Söll, Iris; Krautz, Robert; Theopold, Ulrich
2016-01-01
To analyze gene regulatory networks active during embryonic development and organogenesis it is essential to precisely define how the different genes are expressed in spatial relation to each other in situ. Multi-target chromogenic whole-mount in situ hybridization (MC-WISH) greatly facilitates the instant comparison of gene expression patterns, as it allows distinctive visualization of different mRNA species in contrasting colors in the same sample specimen. This provides the possibility to relate gene expression domains topographically to each other with high accuracy and to define unique and overlapping expression sites. In the presented protocol, we describe a MC-WISH procedure for comparing mRNA expression patterns of different genes in Drosophila embryos. Up to three RNA probes, each specific for another gene and labeled by a different hapten, are simultaneously hybridized to the embryo samples and subsequently detected by alkaline phosphatase-based colorimetric immunohistochemistry. The described procedure is detailed here for Drosophila, but works equally well with zebrafish embryos. PMID:26862978
17. TargetNet: a web service for predicting potential drug-target interaction profiling via multi-target SAR models.
PubMed
Yao, Zhi-Jiang; Dong, Jie; Che, Yu-Jing; Zhu, Min-Feng; Wen, Ming; Wang, Ning-Ning; Wang, Shan; Lu, Ai-Ping; Cao, Dong-Sheng
2016-05-01
Drug-target interactions (DTIs) are central to current drug discovery processes and public health fields. Analyzing the DTI profiling of the drugs helps to infer drug indications, adverse drug reactions, drug-drug interactions, and drug mode of actions. Therefore, it is of high importance to reliably and fast predict DTI profiling of the drugs on a genome-scale level. Here, we develop the TargetNet server, which can make real-time DTI predictions based only on molecular structures, following the spirit of multi-target SAR methodology. Naïve Bayes models together with various molecular fingerprints were employed to construct prediction models. Ensemble learning from these fingerprints was also provided to improve the prediction ability. When the user submits a molecule, the server will predict the activity of the user's molecule across 623 human proteins by the established high quality SAR model, thus generating a DTI profiling that can be used as a feature vector of chemicals for wide applications. The 623 SAR models related to 623 human proteins were strictly evaluated and validated by several model validation strategies, resulting in the AUC scores of 75-100 %. We applied the generated DTI profiling to successfully predict potential targets, toxicity classification, drug-drug interactions, and drug mode of action, which sufficiently demonstrated the wide application value of the potential DTI profiling. The TargetNet webserver is designed based on the Django framework in Python, and is freely accessible at http://targetnet.scbdd.com . PMID:27167132
18. Finding structures with specific properties in complex configurational spaces using multi-target inverse band structure approach
Piquini, Paulo; Zunger, Alex
2009-03-01
The conventional strategy to look for materials with desired properties is to use physical intuition to select some candidates among an enormous number of possibilities.Apart the very special cases, the solutions to these search problems are far from obvious. The inverse band structure (IBS) approach, on the other hand, search for the desired electronic structures (instead of atomic configurations) from the beginning. Here we illustrate the power of this inverse approach by applying it to the simultaneous engineering of multi-target problems, which encompass huge configurational spaces: (i) the search of a specific band gap in the quaternary (In,Ga)(As,Sb) semiconductors(a) lattice-matched to InP and, (ii) the stacking sequence of (In,Ga)As/InP superlattices leading to band gaps and strains within the range suitable for thermophotovoltaic applications(b). [3pt] (a) P. Piquini, P.A. Graf, and A. Zunger, Phys. Rev. Lett. 100, 186403 (2008); [0pt] (b) P. Piquini and A. Zunger, Phys. Rev. B 78, 161302 (2008)
19. A Network-Based Data Integration Approach to Support Drug Repurposing and Multi-Target Therapies in Triple Negative Breast Cancer.
PubMed
Vitali, Francesca; Cohen, Laurie D; Demartini, Andrea; Amato, Angela; Eterno, Vincenzo; Zambelli, Alberto; Bellazzi, Riccardo
2016-01-01
The integration of data and knowledge from heterogeneous sources can be a key success factor in drug design, drug repurposing and multi-target therapies. In this context, biological networks provide a useful instrument to highlight the relationships and to model the phenomena underlying therapeutic action in cancer. In our work, we applied network-based modeling within a novel bioinformatics pipeline to identify promising multi-target drugs. Given a certain tumor type/subtype, we derive a disease-specific Protein-Protein Interaction (PPI) network by combining different data-bases and knowledge repositories. Next, the application of suitable graph-based algorithms allows selecting a set of potentially interesting combinations of drug targets. A list of drug candidates is then extracted by applying a recent data fusion approach based on matrix tri-factorization. Available knowledge about selected drugs mechanisms of action is finally exploited to identify the most promising candidates for planning in vitro studies. We applied this approach to the case of Triple Negative Breast Cancer (TNBC), a subtype of breast cancer whose biology is poorly understood and that lacks of specific molecular targets. Our "in-silico" findings have been confirmed by a number of in vitro experiments, whose results demonstrated the ability of the method to select candidates for drug repurposing. PMID:27632168
20. Prediction of Multi-Target Networks of Neuroprotective Compounds with Entropy Indices and Synthesis, Assay, and Theoretical Study of New Asymmetric 1,2-Rasagiline Carbamates
PubMed Central
Romero Durán, Francisco J.; Alonso, Nerea; Caamaño, Olga; García-Mera, Xerardo; Yañez, Matilde; Prado-Prado, Francisco J.; González-Díaz, Humberto
2014-01-01
In a multi-target complex network, the links (Lij) represent the interactions between the drug (di) and the target (tj), characterized by different experimental measures (Ki, Km, IC50, etc.) obtained in pharmacological assays under diverse boundary conditions (cj). In this work, we handle Shannon entropy measures for developing a model encompassing a multi-target network of neuroprotective/neurotoxic compounds reported in the CHEMBL database. The model predicts correctly >8300 experimental outcomes with Accuracy, Specificity, and Sensitivity above 80%–90% on training and external validation series. Indeed, the model can calculate different outcomes for >30 experimental measures in >400 different experimental protocolsin relation with >150 molecular and cellular targets on 11 different organisms (including human). Hereafter, we reported by the first time the synthesis, characterization, and experimental assays of a new series of chiral 1,2-rasagiline carbamate derivatives not reported in previous works. The experimental tests included: (1) assay in absence of neurotoxic agents; (2) in the presence of glutamate; and (3) in the presence of H2O2. Lastly, we used the new Assessing Links with Moving Averages (ALMA)-entropy model to predict possible outcomes for the new compounds in a high number of pharmacological tests not carried out experimentally. PMID:25255029
1. A Network-Based Data Integration Approach to Support Drug Repurposing and Multi-Target Therapies in Triple Negative Breast Cancer
PubMed Central
Cohen, Laurie D.; Demartini, Andrea; Amato, Angela; Eterno, Vincenzo; Zambelli, Alberto; Bellazzi, Riccardo
2016-01-01
The integration of data and knowledge from heterogeneous sources can be a key success factor in drug design, drug repurposing and multi-target therapies. In this context, biological networks provide a useful instrument to highlight the relationships and to model the phenomena underlying therapeutic action in cancer. In our work, we applied network-based modeling within a novel bioinformatics pipeline to identify promising multi-target drugs. Given a certain tumor type/subtype, we derive a disease-specific Protein-Protein Interaction (PPI) network by combining different data-bases and knowledge repositories. Next, the application of suitable graph-based algorithms allows selecting a set of potentially interesting combinations of drug targets. A list of drug candidates is then extracted by applying a recent data fusion approach based on matrix tri-factorization. Available knowledge about selected drugs mechanisms of action is finally exploited to identify the most promising candidates for planning in vitro studies. We applied this approach to the case of Triple Negative Breast Cancer (TNBC), a subtype of breast cancer whose biology is poorly understood and that lacks of specific molecular targets. Our “in-silico” findings have been confirmed by a number of in vitro experiments, whose results demonstrated the ability of the method to select candidates for drug repurposing. PMID:27632168
2. A Multi-Target Approach toward the Development of Novel Candidates for Antidermatophytic Activity: Ultrastructural Evidence on α-Bisabolol-Treated Microsporum gypseum.
PubMed
Romagnoli, Carlo; Baldisserotto, Anna; Malisardi, Gemma; Vicentini, Chiara B; Mares, Donatella; Andreotti, Elisa; Vertuani, Silvia; Manfredini, Stefano
2015-01-01
Multi-target strategies are directed toward targets that are unrelated (or distantly related) and can create opportunities to address different pathologies. The antidermatophytic activities of nine natural skin lighteners: α-bisabolol, kojic acid, β-arbutin, azelaic acid, hydroquinone, nicotinamide, glycine, glutathione and ascorbyl tetraisopalmitate, were evaluated, in comparison with the known antifungal drug fluconazole, on nine dermatophytes responsible for the most common dermatomycoses: Microsporum gypseum, Microsporum canis, Trichophyton violaceum, Nannizzia cajetani, Trichophyton mentagrophytes, Epidermophyton floccosum, Arthroderma gypseum, Trichophyton rubrum and Trichophyton tonsurans. α-Bisabolol showed the best antifungal activity against all fungi and in particular; against M. gypseum. Further investigations were conducted on this fungus to evaluate the inhibition of spore germination and morphological changes induced by α-bisabolol by TEM. PMID:26132903
3. Synthesis of Thiazolo[5,4-f]quinazolin-9(8H)-ones as Multi-Target Directed Ligands of Ser/Thr Kinases.
PubMed
Hédou, Damien; Godeau, Julien; Loaëc, Nadège; Meijer, Laurent; Fruit, Corinne; Besson, Thierry
2016-01-01
A library of thirty novel thiazolo[5,4-f]quinazolin-9(8H)-one derivatives belonging to four series designated as 12, 13, 14 and 15 was efficiently prepared, helped by microwave-assisted technology when required. The efficient multistep synthesis of methyl 6-amino-2-cyano- benzo[d]thiazole-7-carboxylate (1) has been reinvestigated and performed on a multigram scale. The inhibitory potency of the final products against five kinases involved in Alzheimer's disease was evaluated. This study demonstrates that some molecules of the 12 and 13 series described in this paper are particularly promising for the development of new multi-target inhibitors of kinases. PMID:27144552
4. Multi-targeting Peptide-Functionalized Nanoparticles Recognized Vasculogenic Mimicry, Tumor Neovasculature, and Glioma Cells for Enhanced Anti-glioma Therapy.
PubMed
Feng, Xingye; Yao, Jianhui; Gao, Xiaoling; Jing, Yixian; Kang, Ting; Jiang, Di; Jiang, Tianze; Feng, Jingxian; Zhu, Qianqian; Jiang, Xinguo; Chen, Jun
2015-12-23
Chemotherapy failure of glioma, the most aggressive and devastating cancer, might be ascribed to the physiologic barriers of the tumor mainly including heterogeneous tumor perfusion and vascular permeability, which result in a limited penetration of chemotherapeutics. Besides, the vasculogenic mimicry (VM) channels, which are highly resistant to anti-angiogenic therapy and serve as a complement of angiogenesis, were abound in glioma and always associated with tumor recurrence. In order to enhance the therapy effect of anti-glioma, we developed a PEG-PLA-based nanodrug delivery system (nanoparticles, NP) in this study and modified its surface with CK peptide, which was composed of a human sonic hedgehog (SHH) targeting peptide (CVNHPAFAC) and a KDR targeting peptide (K237) through a GYG linker, for facilitating efficient VM channels, tumor neovasculature, and glioma cells multi-targeting delivery of paclitaxel. In vitro cellular assay showed that CK-NP-PTX not only exhibited the strongest antiproliferation effect on U87MG cells and HUVEC cells but also resulted in the most efficient destruction of VM channels when compared with CVNHPAFAC-NP, K237-NP, and the unmodified ones. Besides, CK-NP accumulated more selectively at the glioma site as demonstrated by in vivo and ex vivo imaging. As expected, the glioma-bearing mice treated with CK-NP-PTX achieved the longest median survival time compared to those treated with CVNHPAFAC-NP-PTX and K237-NP-PTX. These findings indicated that the multi-targeting therapy mediated by CK peptide might provide a promising way for glioblastoma therapy.
5. The Multi-Target Drug M30 Shows Pro-Cognitive and Anti-Inflammatory Effects in a Rat Model of Alzheimer's Disease.
PubMed
Pimentel, Luisa S; Allard, Simon; Do Carmo, Sonia; Weinreb, Orly; Danik, Marc; Hanzel, Cecilia E; Youdim, Moussa B; Cuello, A Claudio
2015-01-01
Current therapies for Alzheimer's disease (AD) offer partial symptomatic relief and do not modify disease progression. There is substantial evidence indicating a disease onset years before clinical diagnosis, at which point no effective therapy has been found. In this study, we investigated the efficacy of a new multi-target drug, M30, at relatively early stages of the AD-like amyloid pathology in a robust rat transgenic model. McGill-R-Thy1-APP transgenic rats develop the full AD-like amyloid pathology in a progressive fashion, and have a minimal genetic burden. McGill rats were given 5 mg/kg M30 or vehicle per os, every 2 days for 4 months, starting at a stage where the transgenic animals suffer detectable cognitive impairments. At the completion of the treatment, cognitive functions were assessed with Novel Object Location and Novel Object Recognition tests. The brains were then analyzed to assess amyloid-β (Aβ) burden and the levels of key inflammatory markers. Long-term treatment with M30 was associated with both the prevention and the reversal of transgene-related cognitive decline. The effects on cognition were accompanied by a shift of the Aβ-immunoreactive material toward an amyloid plaque aggregated molecular form, diminished molecular signs of CNS inflammation and a change in microglia morphology toward a surveying phenotype. This study is the first to demonstrate the therapeutic potential of M30 in a rat model of the AD amyloid pathology. It provides a rationale for further investigations with M30 and with potential multi-target approaches to delay, prevent or reverse the progression the AD pathology at early disease-stages. PMID:26401560
SciTech Connect
Parker, L.E.
1998-11-01
This research addresses the problem of achieving fault tolerant cooperation within small- to medium-sized teams of heterogeneous mobile robots. The author describes a novel behavior-based, fully distributed architecture, called ALLIANCE, that utilizes adaptive action selection to achieve fault tolerant cooperative control in robot missions involving loosely coupled, largely independent tasks. The robots in this architecture possess a variety of high-level functions that they can perform during a mission, and must at all times select an appropriate action based on the requirements of the mission, the activities of other robots, the current environmental conditions, and their own internal states. Since such cooperative teams often work in dynamic and unpredictable environments, the software architecture allows the team members to respond robustly and reliably to unexpected environmental changes and modifications in the robot team that may occur due to mechanical failure, the learning of new skills, or the addition or removal of robots from the team by human intervention. After presenting ALLIANCE, the author describes in detail the experimental results of an implementation of this architecture on a team of physical mobile robots performing a cooperative box pushing demonstration. These experiments illustrate the ability of ALLIANCE to achieve adaptive, fault-tolerant cooperative control amidst dynamic changes in the capabilities of the robot team.
7. Analytic performance prediction of track-to-track association with biased data in multi-sensor multi-target tracking scenarios.
PubMed
Tian, Wei; Wang, Yue; Shan, Xiuming; Yang, Jian
2013-01-01
An analytic method for predicting the performance of track-to-track association (TTTA) with biased data in multi-sensor multi-target tracking scenarios is proposed in this paper. The proposed method extends the existing results of the bias-free situation by accounting for the impact of sensor biases. Since little insight of the intrinsic relationship between scenario parameters and the performance of TTTA can be obtained by numerical simulations, the proposed analytic approach is a potential substitute for the costly Monte Carlo simulation method. Analytic expressions are developed for the global nearest neighbor (GNN) association algorithm in terms of correct association probability. The translational biases of sensors are incorporated in the expressions, which provide good insight into how the TTTA performance is affected by sensor biases, as well as other scenario parameters, including the target spatial density, the extraneous track density and the average association uncertainty error. To show the validity of the analytic predictions, we compare them with the simulation results, and the analytic predictions agree reasonably well with the simulations in a large range of normally anticipated scenario parameters. PMID:24036583
8. Analytic performance prediction of track-to-track association with biased data in multi-sensor multi-target tracking scenarios.
PubMed
Tian, Wei; Wang, Yue; Shan, Xiuming; Yang, Jian
2013-09-12
An analytic method for predicting the performance of track-to-track association (TTTA) with biased data in multi-sensor multi-target tracking scenarios is proposed in this paper. The proposed method extends the existing results of the bias-free situation by accounting for the impact of sensor biases. Since little insight of the intrinsic relationship between scenario parameters and the performance of TTTA can be obtained by numerical simulations, the proposed analytic approach is a potential substitute for the costly Monte Carlo simulation method. Analytic expressions are developed for the global nearest neighbor (GNN) association algorithm in terms of correct association probability. The translational biases of sensors are incorporated in the expressions, which provide good insight into how the TTTA performance is affected by sensor biases, as well as other scenario parameters, including the target spatial density, the extraneous track density and the average association uncertainty error. To show the validity of the analytic predictions, we compare them with the simulation results, and the analytic predictions agree reasonably well with the simulations in a large range of normally anticipated scenario parameters.
9. Self-assembled phenylalanine-α,β-dehydrophenylalanine nanotubes for sustained intravitreal delivery of a multi-targeted tyrosine kinase inhibitor.
PubMed
Panda, Jiban J; Yandrapu, Sarath; Kadam, Rajendra S; Chauhan, Virander S; Kompella, Uday B
2013-12-28
Current standard of care for sustained back of the eye drug delivery is surgical placement or injection of large, slow release implants using a relatively large 22 gauge needle. We designed novel dipeptide (phenylalanine-α,β-dehydrophenylalanine; Phe-∆Phe) based nanotubes with a diameter of ~15-30 nm and a length of ~1500 nm that could be injected with a 33 gauge needle for sustained intravitreal delivery of pazopanib, a multi-targeted tyrosine kinase inhibitor. The drug could be loaded during nanotube assembly or post-loaded after nanotube formation, with the former being more efficient at 25% w/w pazopanib loading and ~55% loading efficiency. Plain and peptide loaded nanotube were non-cytotoxic to retinal pigment epithelial cells even at a concentration of 200 μg/ml. Following intravitreal injection of fluorescently labeled nanotubes using a 33 gauge needle in a rat model, the nanotube persistence and drug delivery were monitored using noninvasive fluorophotometry, electron microscopy and mass spectrometry analysis. Nanotubes persisted in the vitreous humor during the 15 days study and pazopanib levels in the vitreous humor, retina, and choroid-RPE at the end of the study were 4.5, 5, and 2.5-folds higher, respectively, compared to the plain drug. Thus, Phe-∆Phe nanotubes allow intravitreal injections with a small gauge needle and sustain drug delivery.
10. Dovitinib (TKI258), a multi-target angiokinase inhibitor, is effective regardless of KRAS or BRAF mutation status in colorectal cancer
PubMed Central
Lee, Choong-Kun; Lee, Myung Eun; Lee, Won Suk; Kim, Jeong Min; Park, Kyu Hyun; Kim, Tae Soo; Lee, Kang Young; Ahn, Joong Bae; Chung, Hyun Cheol; Rha, Sun Young
2015-01-01
Introduction: We aimed to determine whether KRAS and BRAF mutant colorectal cancer (CRC) cells exhibit distinct sensitivities to the multi-target angiokinase inhibitor, TKI258 (dovitinib). Materials and methods: We screened 10 CRC cell lines by using receptor tyrosine kinase (RTK) array to identify activated RTKs. MTT assays, anchorage-independent colony-formation assays, and immunoblotting assays were performed to evaluate the in vitro anti-tumor effects of TKI258. In vivo efficacy study followed by pharmacodynamic evaluation was done. Results: Fibroblast Growth Factor Receptor 1 (FGFR1) and FGFR3 were among the most highly activated RTKs in CRC cell lines. In in vitro assays, the BRAF mutant HT-29 cells were more resistant to the TKI258 than the KRAS mutant LoVo cells. However, in xenograft assays, TKI258 equally delayed the growth of tumors induced by both cell lines. TUNEL assays showed that the apoptotic index was unchanged following TKI258 treatment, but staining for Ki-67 and CD31 was substantially reduced in both xenografts, implying an anti-angiogenic effect of the drug. TKI258 treatment was effective in delaying CRC tumor growth in vivo regardless of the KRAS and BRAF mutation status. Conclusions: Our results identify FGFRs as potential targets in CRC treatment and suggest that combined targeting of multiple RTKs with TKI258 might serve as a novel approach to improve outcome of patients with CRC. PMID:25628921
11. Improved knockdown from artificial microRNAs in an enhanced miR-155 backbone: a designer's guide to potent multi-target RNAi
PubMed Central
Fowler, Daniel K.; Williams, Carly; Gerritsen, Alida T.; Washbourne, Philip
2016-01-01
Artificial microRNA (amiRNA) sequences embedded in natural microRNA (miRNA) backbones have proven to be useful tools for RNA interference (RNAi). amiRNAs have reduced off-target and toxic effects compared to other RNAi-based methods such as short-hairpin RNAs (shRNA). amiRNAs are often less effective for knockdown, however, compared to their shRNA counterparts. We screened a large empirically-designed amiRNA set in the synthetic inhibitory BIC/miR-155 RNA (SIBR) scaffold and show common structural and sequence-specific features associated with effective amiRNAs. We then introduced exogenous motifs into the basal stem region which increase amiRNA biogenesis and knockdown potency. We call this modified backbone the enhanced SIBR (eSIBR) scaffold. Using chained amiRNAs for multi-gene knockdown, we show that concatenation of miRNAs targeting different genes is itself sufficient for increased knockdown efficacy. Further, we show that eSIBR outperforms wild-type SIBR (wtSIBR) when amiRNAs are chained. Finally, we use a lentiviral expression system in cultured neurons, where we again find that eSIBR amiRNAs are more potent for multi-target knockdown of endogenous genes. eSIBR will be a valuable tool for RNAi approaches, especially for studies where knockdown of multiple targets is desired. PMID:26582923
12. Prolonged-acting, multi-targeting gallium nanoparticles potently inhibit growth of both HIV and mycobacteria in co-infected human macrophages.
PubMed
Narayanasamy, Prabagaran; Switzer, Barbara L; Britigan, Bradley E
2015-03-06
Human immunodeficiency virus (HIV) infection and Mycobacterium tuberculosis (TB) are responsible for two of the major global human infectious diseases that result in significant morbidity, mortality and socioeconomic impact. Furthermore, severity and disease prevention of both infections is enhanced by co-infection. Parallel limitations also exist in access to effective drug therapy and the emergence of resistance. Furthermore, drug-drug interactions have proven problematic during treatment of co-incident HIV and TB infections. Thus, improvements in drug access and simplified treatment regimens are needed immediately. One of the key host cells infected by both HIV and TB is the mononuclear phagocyte (MP; monocyte, macrophage and dendritic cell). Therefore, we hypothesized that one way this can be achieved is through drug-targeting by a nanoformulated drug that ideally would be active against both HIV and TB. Accordingly, we validated macrophage targeted long acting (sustained drug release) gallium (Ga) nanoformulation against HIV-mycobacterium co-infection. The multi-targeted Ga nanoparticle agent inhibited growth of both HIV and TB in the macrophage. The Ga nanoparticles reduced the growth of mycobacterium and HIV for up to 15 days following single drug loading. These results provide a potential new approach to treat HIV-TB co-infection that could eventually lead to improved clinical outcomes.
13. Prolonged-acting, Multi-targeting Gallium Nanoparticles Potently Inhibit Growth of Both HIV and Mycobacteria in Co-Infected Human Macrophages
PubMed Central
Narayanasamy, Prabagaran; Switzer, Barbara L.; Britigan, Bradley E.
2015-01-01
Human immunodeficiency virus (HIV) infection and Mycobacterium tuberculosis (TB) are responsible for two of the major global human infectious diseases that result in significant morbidity, mortality and socioeconomic impact. Furthermore, severity and disease prevention of both infections is enhanced by co-infection. Parallel limitations also exist in access to effective drug therapy and the emergence of resistance. Furthermore, drug-drug interactions have proven problematic during treatment of co-incident HIV and TB infections. Thus, improvements in drug access and simplified treatment regimens are needed immediately. One of the key host cells infected by both HIV and TB is the mononuclear phagocyte (MP; monocyte, macrophage and dendritic cell). Therefore, we hypothesized that one way this can be achieved is through drug-targeting by a nanoformulated drug that ideally would be active against both HIV and TB. Accordingly, we validated macrophage targeted long acting (sustained drug release) gallium (Ga) nanoformulation against HIV-mycobacterium co-infection. The multi-targeted Ga nanoparticle agent inhibited growth of both HIV and TB in the macrophage. The Ga nanoparticles reduced the growth of mycobacterium and HIV for up to 15 days following single drug loading. These results provide a potential new approach to treat HIV-TB co-infection that could eventually lead to improved clinical outcomes. PMID:25744727
14. A modeling study for structure features of β-N-acetyl-D-hexosaminidase from Ostrinia furnacalis and its novel inhibitor allosamidin: species selectivity and multi-target characteristics.
PubMed
Wang, Yanli; Liu, Tian; Yang, Qing; Li, Zhong; Qian, Xuhong
2012-04-01
Insect β-N-acetyl-D-hexosaminidase, a chitin degrading enzyme, is physiologically important during the unique life cycle of the insect. OfHex1, a β-N-acetyl-D-hexosaminidase from the insect, Ostrinia furna, which was obtained by our laboratory (Gen Bank No.: ABI81756.1), was studied by molecular modeling as well as by molecular docking with its inhibitor, allosamidin. 3D model of OfHex1 was built through the ligand-supported homology modeling approach. The binding modes of its substrate and inhibitor were proposed through docking and cluster analysis. The pocket's size and shape of OfHex1 differ from that of human β-N-acetyl-D-hexosaminidase, which determined that allosamidin can selectively inhibit OfHex1 instead of human β-N-acetyl-D-hexosaminidase. Moreover, the multi-target characteristics of allosamidin that inhibit enzymes from different families, OfHex1 (EC 3.2.1.52; GH20) and chitinase (EC 3.2.1.14; GH18), were compared. The common -1/+1 sugar-binding site of chitinase and OfHex1, and the -2/-3 sugar-binding site in chitinase contribute to the binding of allosamidin. This work, at molecular level, proved that OfHex1 could be a potential species-specific target for novel green pesticide design and also provide the possibility to develop allosamidin or its derivatives as a new type of insecticide to 'hit two birds with one stone', which maybe become a novel strategy in pest control. PMID:22177554
15. Synthesis and evaluation of multi-target-directed ligands for the treatment of Alzheimer's disease based on the fusion of donepezil and melatonin.
PubMed
Wang, Jin; Wang, Zhi-Min; Li, Xue-Mei; Li, Fan; Wu, Jia-Jia; Kong, Ling-Yi; Wang, Xiao-Bing
2016-09-15
A novel series of compounds obtained by fusing the acetylcholinesterase (AChE) inhibitor donepezil and the antioxidant melatonin were designed as multi-target-directed ligands for the treatment of Alzheimer's disease (AD). In vitro assay indicated that most of the target compounds exhibited a significant ability to inhibit acetylcholinesterase (eeAChE and hAChE), butyrylcholinesterase (eqBuChE and hBuChE), and β-amyloid (Aβ) aggregation, and to act as potential antioxidants and biometal chelators. Especially, 4u displayed a good inhibition of AChE (IC50 value of 193nM for eeAChE and 273nM for hAChE), strong inhibition of BuChE (IC50 value of 73nM for eqBuChE and 56nM for hBuChE), moderate inhibition of Aβ aggregation (56.3% at 20μM) and good antioxidant activity (3.28trolox equivalent by ORAC assay). Molecular modeling studies in combination with kinetic analysis revealed that 4u was a mixed-type inhibitor, binding simultaneously to catalytic anionic site (CAS) and the peripheral anionic site (PAS) of AChE. In addition, 4u could chelate metal ions, reduce PC12 cells death induced by oxidative stress and penetrate the blood-brain barrier (BBB). Taken together, these results strongly indicated the hybridization approach is an efficient strategy to identify novel scaffolds with desired bioactivities, and further optimization of 4u may be helpful to develop more potent lead compound for AD treatment. PMID:27460699
16. [Multi-Target Recognition of Internal and External Defects of Potato by Semi-Transmission Hyperspectral Imaging and Manifold Learning Algorithm].
PubMed
Huang, Tao; Li, Xiao-yu; Jin, Rui; Ku, Jing; Xu, Sen-miao; Xu, Meng-ling; Wu, Zhen-zhong; Kong, De-guo
2015-04-01
The present paper put forward a non-destructive detection method which combines semi-transmission hyperspectral imaging technology with manifold learning dimension reduction algorithm and least squares support vector machine (LSSVM) to recognize internal and external defects in potatoes simultaneously. Three hundred fifteen potatoes were bought in farmers market as research object, and semi-transmission hyperspectral image acquisition system was constructed to acquire the hyperspectral images of normal external defects (bud and green rind) and internal defect (hollow heart) potatoes. In order to conform to the actual production, defect part is randomly put right, side and back to the acquisition probe when the hyperspectral images of external defects potatoes are acquired. The average spectrums (390-1,040 nm) were extracted from the region of interests for spectral preprocessing. Then three kinds of manifold learning algorithm were respectively utilized to reduce the dimension of spectrum data, including supervised locally linear embedding (SLLE), locally linear embedding (LLE) and isometric mapping (ISOMAP), the low-dimensional data gotten by manifold learning algorithms is used as model input, Error Correcting Output Code (ECOC) and LSSVM were combined to develop the multi-target classification model. By comparing and analyzing results of the three models, we concluded that SLLE is the optimal manifold learning dimension reduction algorithm, and the SLLE-LSSVM model is determined to get the best recognition rate for recognizing internal and external defects potatoes. For test set data, the single recognition rate of normal, bud, green rind and hollow heart potato reached 96.83%, 86.96%, 86.96% and 95% respectively, and he hybrid recognition rate was 93.02%. The results indicate that combining the semi-transmission hyperspectral imaging technology with SLLE-LSSVM is a feasible qualitative analytical method which can simultaneously recognize the internal and
17. Phase II trial of pazopanib (GW786034), an oral multi-targeted angiogenesis inhibitor, for adults with recurrent glioblastoma (North American Brain Tumor Consortium Study 06-02)
PubMed Central
Iwamoto, Fabio M.; Lamborn, Kathleen R.; Robins, H. Ian; Mehta, Minesh P.; Chang, Susan M.; Butowski, Nicholas A.; DeAngelis, Lisa M.; Abrey, Lauren E.; Zhang, Wei-Ting; Prados, Michael D.; Fine, Howard A.
2010-01-01
The objective of this phase II single-arm study was to evaluate the efficacy and safety of pazopanib, a multi-targeted tyrosine kinase inhibitor, against vascular endothelial growth factor receptor (VEGFR)-1, -2, and -3, platelet-derived growth factor receptor-α and -β, and c-Kit, in recurrent glioblastoma. Patients with ≤2 relapses and no prior anti-VEGF/VEGFR therapy were treated with pazopanib 800 mg daily on 4-week cycles without planned interruptions. Brain magnetic resonance imaging and clinical reassessment were made every 8 weeks. The primary endpoint was efficacy as measured by 6-month progression-free survival (PFS6). Thirty-five GBM patients with a median age of 53 years and median Karnofsky performance scale of 90 were accrued. Grade 3/4 toxicities included leukopenia (n = 1), lymphopenia (n = 2), thrombocytopenia (n = 1), ALT elevation (n = 3), AST elevation (n = 1), CNS hemorrhage (n = 1), fatigue (n = 1), and thrombotic/embolic events (n = 3); 8 patients required dose reduction. Two patients had a partial radiographic response by standard bidimensional measurements, whereas 9 patients (6 at the 8-week point and 3 only within the first month of treatment) had decreased contrast enhancement, vasogenic edema, and mass effect but <50% reduction in tumor. The median PFS was 12 weeks (95% confidence interval [CI]: 8–14 weeks) and only 1 patient had a PFS time ≥6 months (PFS6 = 3%). Thirty patients (86%) had died and median survival was 35 weeks (95% CI: 24–47 weeks). Pazopanib was reasonably well tolerated with a spectrum of toxicities similar to other anti-VEGF/VEGFR agents. Single-agent pazopanib did not prolong PFS in this patient population but showed in situ biological activity as demonstrated by radiographic responses. ClinicalTrials.gov identifier: NCT00459381. PMID:20200024
18. A Network Pharmacology Study of Chinese Medicine QiShenYiQi to Reveal Its Underlying Multi-Compound, Multi-Target, Multi-Pathway Mode of Action
PubMed Central
Li, Xiang; Wu, Leihong; Liu, Wei; Jin, Yecheng; Chen, Qian; Wang, Linli; Fan, Xiaohui; Li, Zheng; Cheng, Yiyu
2014-01-01
Chinese medicine is a complex system guided by traditional Chinese medicine (TCM) theories, which has proven to be especially effective in treating chronic and complex diseases. However, the underlying modes of action (MOA) are not always systematically investigated. Herein, a systematic study was designed to elucidate the multi-compound, multi-target and multi-pathway MOA of a Chinese medicine, QiShenYiQi (QSYQ), on myocardial infarction. QSYQ is composed of Astragalus membranaceus (Huangqi), Salvia miltiorrhiza (Danshen), Panax notoginseng (Sanqi), and Dalbergia odorifera (Jiangxiang). Male Sprague Dawley rat model of myocardial infarction were administered QSYQ intragastrically for 7 days while the control group was not treated. The differentially expressed genes (DEGs) were identified from myocardial infarction rat model treated with QSYQ, followed by constructing a cardiovascular disease (CVD)-related multilevel compound-target-pathway network connecting main compounds to those DEGs supported by literature evidences and the pathways that are functionally enriched in ArrayTrack. 55 potential targets of QSYQ were identified, of which 14 were confirmed in CVD-related literatures with experimental supporting evidences. Furthermore, three sesquiterpene components of QSYQ, Trans-nerolidol, (3S,6S,7R)-3,7,11-trimethyl-3,6-epoxy-1,10-dodecadien-7-ol and (3S,6R,7R)-3,7,11-trimethyl-3,6-epoxy-1,10-dodecadien-7-ol from Dalbergia odorifera T. Chen, were validated experimentally in this study. Their anti-inflammatory effects and potential targets including extracellular signal-regulated kinase-1/2, peroxisome proliferator-activated receptor-gamma and heme oxygenase-1 were identified. Finally, through a three-level compound-target-pathway network with experimental analysis, our study depicts a complex MOA of QSYQ on myocardial infarction. PMID:24817581
19. Anti-myeloma activity of a multi targeted kinase inhibitor, AT9283, via potent Aurora Kinase and STAT3 inhibition either alone or in combination with lenalidomide
PubMed Central
Santo, Loredana; Hideshima, Teru; Cirstea, Diana; Bandi, Madhavi; Nelson, Erik A.; Gorgun, Gullu; Rodig, Scott; Vallet, Sonia; Pozzi, Samantha; Patel, Kishan; Unitt, Christine; Squires, Matt; Hu, Yiguo; Chauhan, Dharminder; Mahindra, Anuj; Munshi, Nikhil C.; Anderson, Kenneth C.; Raje, Noopur
2014-01-01
Purpose Aurora Kinases, whose expression is linked to genetic instability and cellular proliferation, are under investigation as novel therapeutic targets in multiple myeloma (MM). Here, we investigated the preclinical activity of a small molecule–multi-targeted kinase inhibitor, AT9283, with potent activity against Aurora kinase A (AURKA), Aurora kinase B (AURKB) and Janus Kinase 2/3. Experimental design We evaluated the in vitro anti myeloma activity of AT9283 alone and in combination with lenalidomide and the in vivo efficacy by using a Xenograft mouse model of human MM. Results Our data demonstrated AT9283 induced cell growth inhibition and apoptosis in MM. Studying the apoptosis mechanism of AT9283 in MM, we observed features consistent with both AURKA and AURKB inhibition, e.g increase of cells with polyploid DNA content, decrease in phospho-Histone H3, and decrease of phospho-Aurora A. Importantly, AT9283 also inhibited STAT3 tyrosine phosphorylation in MM cells. Genetic depletion of STAT3, AURKA or AURKB showed growth inhibition of MM cells, suggesting a role of AT9283-induced inhibition of these molecules in the underlying mechanism of MM cell death. In vivo studies demonstrated decreased MM cell growth and prolonged survival in AT9283-treated mice compared to controls. Importantly, combination studies of AT9283 with lenalidomide showed significant synergistic cytotoxicity in MM cells, even in the presence of bone marrow stromal cells (BMSCs). Enhanced cytotoxicity was associated with increased inhibition of pSTAT3 and pERK. Conclusions Demonstration of in vitro and in vivo anti-MM activity of AT9283 provides the rationale for the clinical evaluation of AT9283 as monotherapy and in combination in patients with MM. PMID:21430070
20. New insights into pharmacological profile of LASSBio-579, a multi-target N-phenylpiperazine derivative active on animal models of schizophrenia.
PubMed
Neves, Gilda; Antonio, Camila B; Betti, Andresa H; Pranke, Mariana A; Fraga, Carlos A M; Barreiro, Eliezer J; Noël, François; Rates, Stela M K
2013-01-15
Previous behavioral and receptor binding studies on N-phenylpiperazine derivatives by our group indicated that LASSBio-579, LASSBio-580 and LASSBio-581 could be potential antipsychotic lead compounds. The present study identified LASSBio-579 as the most promising among the three compounds, since it was the only one that inhibited apomorphine-induced climbing (5 mg/kg p.o.) and apomorphine-induced hypothermia (15 mg/kg p.o.). Furthermore, LASSBio-579 (0.5 mg/kg p.o.) was effective in the ketamine-induced hyperlocomotion test and prevented the prepulse inhibition deficits induced by apomorphine, DOI and ketamine with different potencies (1 mg/kg, 0.5 mg/kg and 5 mg/kg p.o., respectively). LASSBio-579 also induced a motor impairment, catalepsy and a mild sedative effect but only at doses 3-120 times higher than those with antipsychotic-like effects. In addition, LASSBio-579 (0.5 and 1 mg/kg p.o.) reversed the catalepsy induced by WAY 100,635, corroborating its action on both dopaminergic and serotonergic neurotransmission and pointing to the contribution of 5-HT(1A) receptor activation to its pharmacological profile. Moreover, co-administration of sub-effective doses of LASSBio-579 with sub-effective doses of clozapine or haloperidol prevented the apomorphine-induced climbing without induction of catalepsy. In summary, our results characterize LASSBio-579 as a multi-target ligand active in pharmacological animal models of schizophrenia, confirming that this compound could be included in development programs aiming at a new drug for treating schizophrenia.
1. Multi-Target QSAR Approaches for Modeling Protein Inhibitors. Simultaneous Prediction of Activities Against Biomacromolecules Present in Gram-Negative Bacteria.
PubMed
Speck-Planche, Alejandro; Cordeiro, M N D S
2015-01-01
Drug discovery is aimed at finding therapeutic agents for the treatment of many diverse diseases and infections. However, this is a very slow an expensive process, and for this reason, in silico approaches are needed to rationalize the search for new molecular entities with desired biological profiles. Models focused on quantitative structure-activity relationships (QSAR) have constituted useful complementary tools in medicinal chemistry, allowing the virtual predictions of dissimilar pharmacological activities of compounds. In the last 10 years, multi-target (mt) QSAR models have been reported, representing great advances with respect to those models generated from classical approaches. Thus, mt- QSAR models can simultaneously predict activities against different biological targets (proteins, microorganisms, cell lines, etc.) by using large and heterogeneous datasets of chemicals. The present review is devoted to discuss the most promising mt-QSAR models, particularly those developed for the prediction of protein inhibitors. We also report the first multi-tasking QSAR (mtk-QSAR) model for simultaneous prediction of inhibitors against biomacromolecules (specifically proteins) present in Gram-negative bacteria. This model allowed us to consider both different proteins and multiple experimental conditions under which the inhibitory activities of the chemicals were determined. The mtk-QSAR model exhibited accuracies higher than 98% in both training and prediction sets, also displaying a very good performance in the classification of active and inactive cases that depended on the specific elements of the experimental conditions. The physicochemical interpretations of the molecular descriptors were also analyzed, providing important insights regarding the molecular patterns associated with the appearance/enhancement of the inhibitory potency. PMID:25961517
2. In silico search for multi-target therapies for osteoarthritis based on 10 common Huoxue Huayu herbs and potential applications to other diseases.
PubMed
Zheng, Chun-Song; Zhuang, Zhi-Qiang; Xu, Xiao-Jie; Ye, Jin-Xia; Ye, Hong-Zhi; Li, Xi-Hai; Wu, Guang-Wen; Xu, Hui-Feng; Liu, Xian-Xiang
2014-03-01
Huoxue Huayu (HXHY) has been widely used in traditional Chinese medicine (TCM) as a key therapeutic principle for osteoarthritis (OA), and related herbs have been widely prescribed to treat OA in the clinic. The aims of the present study were to explore a multi-target therapy for OA using 10 common HXHY herbs and to investigate their potential applications for treatment of other diseases. A novel computational simulation approach that integrates chemical structure, ligand clusters, chemical space and drug‑likeness evaluations, as well as docking and network analysis, was used to investigate the properties and effects of the herbs. The compounds contained in the studied HXHY herbs were divided into 10 clusters. Comparison of the chemical properties of these compounds to those of other compounds described in the DrugBank database indicated that the properties of the former are more diverse than those of the latter and that most of the HXHY-derived compounds do not violate the 'Lipinski's rule of five'. Docking analysis allowed for the identification of 39 potential bioactive compounds from HXHY herbs and 11 potential targets for these compounds. The identified targets were closely associated with 49 diseases, including neoplasms, musculoskeletal, nervous system and cardiovascular diseases. Ligand‑target (L‑T) and ligand‑target‑disease (L‑T‑D) networks were constructed in order to further elucidate the pharmacological effects of the herbs. Our findings suggest that a number of compounds from HXHY herbs are promising candidates for mult‑target therapeutic application in OA and may exert diverse pharmacological effects, affecting additional diseases besides OA.
3. CRISPR MultiTargeter: A Web Tool to Find Common and Unique CRISPR Single Guide RNA Targets in a Set of Similar Sequences
PubMed Central
Prykhozhij, Sergey V.; Rajan, Vinothkumar; Gaston, Daniel; Berman, Jason N.
2015-01-01
Genome engineering has been revolutionized by the discovery of clustered regularly interspaced palindromic repeats (CRISPR) and CRISPR-associated system genes (Cas) in bacteria. The type IIB Streptococcus pyogenes CRISPR/Cas9 system functions in many species and additional types of CRISPR/Cas systems are under development. In the type II system, expression of CRISPR single guide RNA (sgRNA) targeting a defined sequence and Cas9 generates a sequence-specific nuclease inducing small deletions or insertions. Moreover, knock-in of large DNA inserts has been shown at the sites targeted by sgRNAs and Cas9. Several tools are available for designing sgRNAs that target unique locations in the genome. However, the ability to find sgRNA targets common to several similar sequences or, by contrast, unique to each of these sequences, would also be advantageous. To provide such a tool for several types of CRISPR/Cas system and many species, we developed the CRISPR MultiTargeter software. Similar DNA sequences in question are duplicated genes and sets of exons of different transcripts of a gene. Thus, we implemented a basic sgRNA target search of input sequences for single-sgRNA and two-sgRNA/Cas9 nickase targeting, as well as common and unique sgRNA target searches in 1) a set of input sequences; 2) a set of similar genes or transcripts; or 3) transcripts a single gene. We demonstrate potential uses of the program by identifying unique isoform-specific sgRNA sites in 71% of zebrafish alternative transcripts and common sgRNA target sites in approximately 40% of zebrafish duplicated gene pairs. The design of unique targets in alternative exons is helpful because it will facilitate functional genomic studies of transcript isoforms. Similarly, its application to duplicated genes may simplify multi-gene mutational targeting experiments. Overall, this program provides a unique interface that will enhance use of CRISPR/Cas technology. PMID:25742428
4. Rare particles
SciTech Connect
Kutschera, W.
1984-01-01
The use of Accelerator Mass Spectrometry (AMS) to search for hypothetical particles and known particles of rare processes is discussed. The hypothetical particles considered include fractionally charged particles, anomalously heavy isotopes, and superheavy elements. The known particles produced in rare processes discussed include doubly-charged negative ions, counting neutrino-produced atoms in detectors for solar neutrino detection, and the spontaneous emission of /sup 14/C from /sup 223/Ra. 35 references. (WHK)
5. Particle separation
NASA Technical Reports Server (NTRS)
Moosmuller, Hans (Inventor); Chakrabarty, Rajan K. (Inventor); Arnott, W. Patrick (Inventor)
2011-01-01
Embodiments of a method for selecting particles, such as based on their morphology, is disclosed. In a particular example, the particles are charged and acquire different amounts of charge, or have different charge distributions, based on their morphology. The particles are then sorted based on their flow properties. In a specific example, the particles are sorted using a differential mobility analyzer, which sorts particles, at least in part, based on their electrical mobility. Given a population of particles with similar electrical mobilities, the disclosed process can be used to sort particles based on the net charge carried by the particle, and thus, given the relationship between charge and morphology, separate the particles based on their morphology.
6. Particle separation
DOEpatents
Moosmuller, Hans; Chakrabarty, Rajan K.; Arnott, W. Patrick
2011-04-26
Embodiments of a method for selecting particles, such as based on their morphology, is disclosed. In a particular example, the particles are charged and acquire different amounts of charge, or have different charge distributions, based on their morphology. The particles are then sorted based on their flow properties. In a specific example, the particles are sorted using a differential mobility analyzer, which sorts particles, at least in part, based on their electrical mobility. Given a population of particles with similar electrical mobilities, the disclosed process can be used to sort particles based on the net charge carried by the particle, and thus, given the relationship between charge and morphology, separate the particles based on their morphology.
7. Searching for Multi-Targeting Neurotherapeutics against Alzheimer's: Discovery of Potent AChE-MAO B Inhibitors through the Decoration of the 2H-Chromen-2-one Structural Motif.
PubMed
Pisani, Leonardo; Farina, Roberta; Soto-Otero, Ramon; Denora, Nunzio; Mangiatordi, Giuseppe Felice; Nicolotti, Orazio; Mendez-Alvarez, Estefania; Altomare, Cosimo Damiano; Catto, Marco; Carotti, Angelo
2016-03-17
The need for developing real disease-modifying drugs against neurodegenerative syndromes, particularly Alzheimer's disease (AD), shifted research towards reliable drug discovery strategies to unveil clinical candidates with higher therapeutic efficacy than single-targeting drugs. By following the multi-target approach, we designed and synthesized a novel class of dual acetylcholinesterase (AChE)-monoamine oxidase B (MAO-B) inhibitors through the decoration of the 2H-chromen-2-one skeleton. Compounds bearing a propargylamine moiety at position 3 displayed the highest in vitro inhibitory activities against MAO-B. Within this series, derivative 3h emerged as the most interesting hit compound, being a moderate AChE inhibitor (IC50 = 8.99 µM) and a potent and selective MAO-B inhibitor (IC50 = 2.8 nM). Preliminary studies in human neuroblastoma SH-SY5Y cell lines demonstrated its low cytotoxicity and disclosed a promising neuroprotective effect at low doses (0.1 µM) under oxidative stress conditions promoted by two mitochondrial toxins (oligomycin-A and rotenone). In a Madin-Darby canine kidney (MDCK)II-MDR1 cell-based transport study, Compound 3h was able to permeate the BBB-mimicking monolayer and did not result in a glycoprotein-p (P-gp) substrate, showing an efflux ratio = 0.96, close to that of diazepam.
8. Searching for the Multi-Target-Directed Ligands against Alzheimer's disease: discovery of quinoxaline-based hybrid compounds with AChE, H₃R and BACE 1 inhibitory activities.
PubMed
Huang, Wenhai; Tang, Li; Shi, Ying; Huang, Shufang; Xu, Lei; Sheng, Rong; Wu, Peng; Li, Jia; Zhou, Naiming; Hu, Yongzhou
2011-12-01
A novel series of quinoxaline derivatives, as Multi-Target-Directed Ligands (MTDLs) for AD treatment, were designed by lending the core structural elements required for H(3)R antagonists and hybridizing BACE 1 inhibitor 1 with AChE inhibitor BYYT-25. A virtual database consisting of quinoxaline derivatives was first screened on a pharmacophore model of BACE 1 inhibitors, and then filtered by a molecular docking model of AChE. Seventeen quinoxaline derivatives with high score values were picked out, synthesized and evaluated for their biological activities. Compound 11a, the most effective MTDL, showed the potent activity to H(3)R/AChE/BACE 1 (H(3)R antagonism, IC(50)=280.0 ± 98.0 nM; H(3)R inverse agonism, IC(50)=189.3 ± 95.7 nM; AChE, IC(50)=483 ± 5 nM; BACE 1, 46.64±2.55% inhibitory rate at 20 μM) and high selectivity over H(1)R/H(2)R/H(4)R. Furthermore, the protein binding patterns between 11a and AChE/BACE 1 showed that it makes several essential interactions with the enzymes.
9. Particle generator
DOEpatents
Hess, Wayne P.; Joly, Alan G.; Gerrity, Daniel P.; Beck, Kenneth M.; Sushko, Peter V.; Shlyuger, Alexander L.
2005-06-28
Energy tunable solid state sources of neutral particles are described. In a disclosed embodiment, a halogen particle source includes a solid halide sample, a photon source positioned to deliver photons to a surface of the halide, and a collimating means positioned to accept a spatially defined plume of hyperthermal halogen particles emitted from the sample surface.
10. Multi-Target Single Cycle Instrument Placement
NASA Technical Reports Server (NTRS)
Pedersen, Liam; Smith, David E.; Deans, Matthew; Sargent, Randy; Kunz, Clay; Lees, David; Rajagopalan, Srikanth; Bualat, Maria
2005-01-01
This presentation is about the robotic exploration of Mars using multiple targets command cycle, safe instrument placements, safe operation, and K9 Rover which has a 6 wheel steer rocket-bogey chassis (FIDO, MER), 70% MER size, 1.2 GHz Pentium M laptop running Linux OS, Odometry and compass/inclinometer, CLARAty architecture, 5 DOF manipulator w/CHAMP microscopic camera, SciCams, NavCams and HazCams.
11. Antitumor activity of pimasertib, a selective MEK 1/2 inhibitor, in combination with PI3K/mTOR inhibitors or with multi-targeted kinase inhibitors in pimasertib-resistant human lung and colorectal cancer cells.
PubMed
Martinelli, Erika; Troiani, Teresa; D'Aiuto, Elena; Morgillo, Floriana; Vitagliano, Donata; Capasso, Anna; Costantino, Sarah; Ciuffreda, Loreta Pia; Merolla, Francesco; Vecchione, Loredana; De Vriendt, Veerle; Tejpar, Sabine; Nappi, Anna; Sforza, Vincenzo; Martini, Giulia; Berrino, Liberato; De Palma, Raffaele; Ciardiello, Fortunato
2013-11-01
The RAS/RAF/MEK/MAPK and the PTEN/PI3K/AKT/mTOR pathways are key regulators of proliferation and survival in human cancer cells. Selective inhibitors of different transducer molecules in these pathways have been developed as molecular targeted anti-cancer therapies. The in vitro and in vivo anti-tumor activity of pimasertib, a selective MEK 1/2 inhibitor, alone or in combination with a PI3K inhibitor (PI3Ki), a mTOR inhibitor (everolimus), or with multi-targeted kinase inhibitors (sorafenib and regorafenib), that block also BRAF and CRAF, were tested in a panel of eight human lung and colon cancer cell lines. Following pimasertib treatment, cancer cell lines were classified as pimasertib-sensitive (IC50 for cell growth inhibition of 0.001 µM) or pimasertib-resistant. Evaluation of basal gene expression profiles by microarrays identified several genes that were up-regulated in pimasertib-resistant cancer cells and that were involved in both RAS/RAF/MEK/MAPK and PTEN/PI3K/AKT/mTOR pathways. Therefore, a series of combination experiments with pimasertib and either PI3Ki, everolimus, sorafenib or regorafenib were conducted, demonstrating a synergistic effect in cell growth inhibition and induction of apoptosis with sustained blockade in MAPK- and AKT-dependent signaling pathways in pimasertib-resistant human colon carcinoma (HCT15) and lung adenocarcinoma (H1975) cells. Finally, in nude mice bearing established HCT15 and H1975 subcutaneous tumor xenografts, the combined treatment with pimasertib and BEZ235 (a dual PI3K/mTOR inhibitor) or with sorafenib caused significant tumor growth delays and increase in mice survival as compared to single agent treatment. These results suggest that dual blockade of MAPK and PI3K pathways could overcome intrinsic resistance to MEK inhibition.
12. Particle therapy
SciTech Connect
Raju, M.R.
1993-09-01
Particle therapy has a long history. The experimentation with particles for their therapeutic application got started soon after they were produced in the laboratory. Physicists played a major role in proposing the potential applications in radiotherapy as well as in the development of particle therapy. A brief review of the current status of particle radiotherapy with some historical perspective is presented and specific contributions made by physicists will be pointed out wherever appropriate. The rationale of using particles in cancer treatment is to reduce the treatment volume to the target volume by using precise dose distributions in three dimensions by using particles such as protons and to improve the differential effects on tumors compared to normal tissues by using high-LET radiations such as neutrons. Pions and heavy ions combine the above two characteristics.
13. Particle astrophysics
NASA Technical Reports Server (NTRS)
Sadoulet, Bernard; Cronin, James; Aprile, Elena; Barish, Barry C.; Beier, Eugene W.; Brandenberger, Robert; Cabrera, Blas; Caldwell, David; Cassiday, George; Cline, David B.
1991-01-01
The following scientific areas are reviewed: (1) cosmology and particle physics (particle physics and the early universe, dark matter, and other relics); (2) stellar physics and particles (solar neutrinos, supernovae, and unconventional particle physics); (3) high energy gamma ray and neutrino astronomy; (4) cosmic rays (space and ground observations). Highest scientific priorities for the next decade include implementation of the current program, new initiatives, and longer-term programs. Essential technological developments, such as cryogenic detectors of particles, new solar neutrino techniques, and new extensive air shower detectors, are discussed. Also a certain number of institutional issues (the funding of particle astrophysics, recommended funding mechanisms, recommended facilities, international collaborations, and education and technology) which will become critical in the coming decade are presented.
14. Magnetic particles
NASA Technical Reports Server (NTRS)
Chang, Manchium (Inventor); Colvin, Michael S. (Inventor)
1989-01-01
Magnetic polymer particles are formed by swelling porous, polymer particles and impregnating the particles with an aqueous solution of precursor magnetic metal salt such as an equimolar mixture of ferrous chloride and ferric chloride. On addition of a basic reagent such as dilute sodium hydroxide, the metal salts are converted to crystals of magnetite which are uniformly contained througout the pores of the polymer particle. The magnetite content can be increased and neutral buoyancy achieved by repetition of the impregnaton and neutralization steps to adjust the magnetite content to a desired level.
15. Locally oriented potential field for controlling multi-robots
Romero, Roseli A. F.; Prestes, Edson; Idiart, Marco A. P.; Faria, Gedson
2012-12-01
In this paper, we present an extension of the boundary value problem path planner (BVP PP) to control multiple robots in a robot soccer scenario. This extension is called Locally Oriented Potential Field (LOPF) and computes a potential field from the numerical solution of a BVP using local relaxations in different patches of the solution space. This permits that a single solution of the BVP endows distinct robots with different behaviors in a team. We present the steps to implement LOPF as well as several results obtained in simulation.
16. Cooperative Environment Scans Based on a Multi-Robot System
PubMed Central
Kwon, Ji-Wook
2015-01-01
This paper proposes a cooperative environment scan system (CESS) using multiple robots, where each robot has low-cost range finders and low processing power. To organize and maintain the CESS, a base robot monitors the positions of the child robots, controls them, and builds a map of the unknown environment, while the child robots with low performance range finders provide obstacle information. Even though each child robot provides approximated and limited information of the obstacles, CESS replaces the single LRF, which has a high cost, because much of the information is acquired and accumulated by a number of the child robots. Moreover, the proposed CESS extends the measurement boundaries and detects obstacles hidden behind others. To show the performance of the proposed system and compare this with the numerical models of the commercialized 2D and 3D laser scanners, simulation results are included. PMID:25789491
17. ROBODEXS: multi-robot deployment and extraction system
Gray, Jeremy P.; Mason, James R.; Patterson, Michael S.; Skalny, Matthew W.
2012-06-01
The importance of Unmanned Ground Vehicles (UGV's) in the Military's operations is continually increasing. All Military branches now rely on advanced robotic technologies to aid in their missions' operations. The integration of these technologies has not only enhanced capabilities, but has increased personnel safety by generating larger standoff distances. Currently most UGV's are deployed by an exposed dismounted Warfighter because the Military possess a limited capability to do so remotely and can only deploy a single UGV. This paper explains the conceptual development of a novel approach to remotely deploy and extract multiple robots from a single host platform. The Robotic Deployment & Extraction System (ROBODEXS) is a result of our development research to improve marsupial robotic deployment at safe standoff distances. The presented solution is modular and scalable, having the ability to deploy anywhere from two to twenty robots from a single deployment mechanism. For larger carrier platforms, multiple sets of ROBODEXS modules may be integrated for deployment and extraction of even greater numbers of robots. Such a system allows mass deployment and extraction from a single manned/unmanned vehicle, which is not currently possible with other deployment systems.
18. Control fusion for safe multi-robot coordination
Bostelman, Roger; Marvel, Jeremy
2014-05-01
Future smart manufacturing systems will include more complex coordination of mobile manipulators (i.e., robot arms mounted on mobile bases). The National Institute of Standards and Technology (NIST) conducts research on the safety and performance of multiple collaborating robots using a mobile platform, an automatic guided vehicle (AGV) with an onboard manipulator. Safety standards for robots and industrial vehicles each mandate their failsafe control, but there is little overlap between the standards that can be relied on when the two systems are combined and their independent controllers make collaborative decisions for safe movement. This paper briefly discusses previously uncovered gaps between AGV and manipulator standards and details decision sharing for when manipulators and AGVs are combined into a collaborative, mobile manipulator system. Tests using the NIST mobile manipulator with various control methods were performed and are described along with test results and plans for further, more complex tests of implicit and explicit coordination control of the mobile manipulator.
19. Cooperative environment scans based on a multi-robot system.
PubMed
Kwon, Ji-Wook
2015-01-01
This paper proposes a cooperative environment scan system (CESS) using multiple robots, where each robot has low-cost range finders and low processing power. To organize and maintain the CESS, a base robot monitors the positions of the child robots, controls them, and builds a map of the unknown environment, while the child robots with low performance range finders provide obstacle information. Even though each child robot provides approximated and limited information of the obstacles, CESS replaces the single LRF, which has a high cost, because much of the information is acquired and accumulated by a number of the child robots. Moreover, the proposed CESS extends the measurement boundaries and detects obstacles hidden behind others. To show the performance of the proposed system and compare this with the numerical models of the commercialized 2D and 3D laser scanners, simulation results are included. PMID:25789491
20. ALLIANCE: An architecture for fault tolerant multi-robot cooperation
SciTech Connect
Parker, L.E.
1995-02-01
ALLIANCE is a software architecture that facilitates the fault tolerant cooperative control of teams of heterogeneous mobile robots performing missions composed of loosely coupled, largely independent subtasks. ALLIANCE allows teams of robots, each of which possesses a variety of high-level functions that it can perform during a mission, to individually select appropriate actions throughout the mission based on the requirements of the mission, the activities of other robots, the current environmental conditions, and the robots own internal states. ALLIANCE is a fully distributed, behavior-based architecture that incorporates the use of mathematically modeled motivations (such as impatience and acquiescence) within each robot to achieve adaptive action selection. Since cooperative robotic teams usually work in dynamic and unpredictable environments, this software architecture allows the robot team members to respond robustly, reliably, flexibly, and coherently to unexpected environmental changes and modifications in the robot team that may occur due to mechanical failure, the learning of new skills, or the addition or removal of robots from the team by human intervention. The feasibility of this architecture is demonstrated in an implementation on a team of mobile robots performing a laboratory version of hazardous waste cleanup.
1. Particle preconcentrator
SciTech Connect
2000-07-11
An apparatus and method are disclosed for preconcentrating particles and vapors. The preconcentrator apparatus permits detection of highly diluted amounts of particles in a main gas stream, such as a stream of ambient air. A main gas stream having airborne particles entrained therein is passed through a previous screen. The particles accumulate upon the screen, as the screen acts as a sort of selective particle filter. The flow of the main gas stream is then interrupted by diaphragm shutter valves, whereupon a cross-flow of carrier gas stream is blown parallel past the faces of the screen to dislodge the accumulated particles and carry them to a particle or vapor detector, such as an ion mobility spectrometer. The screen may be heated, such as by passing an electrical current there through, to promote desorption of particles therefrom during the flow of the carrier gas. Various types of screens are disclosed. The apparatus and method of the invention may find particular utility in the fields of narcotics, explosives detection and chemical agents.
2. Particle preconcentrator
DOEpatents
1998-01-01
An apparatus and method for preconcentrating particles and vapors. The preconcentrator apparatus permits detection of highly diluted amounts of particles in a main gas stream, such as a stream of ambient air. A main gas stream having airborne particles entrained therein is passed through a pervious screen. The particles accumulate upon the screen, as the screen acts as a sort of selective particle filter. The flow of the main gas stream is then interrupted by diaphragm shutter valves, whereupon a cross-flow of carrier gas stream is blown parallel past the faces of the screen to dislodge the accumulated particles and carry them to a particle or vapor detector, such as an ion mobility spectrometer. The screen may be heated, such as by passing an electrical current there through, to promote desorption of particles therefrom during the flow of the carrier gas. Various types of screens are disclosed. The apparatus and method of the invention may find particular utility in the fields of narcotics, explosives detection and chemical agents.
3. Particle preconcentrator
DOEpatents
2000-01-01
An apparatus and method for preconcentrating particles and vapors. The preconcentrator apparatus permits detection of highly diluted amounts of particles in a main gas stream, such as a stream of ambient air. A main gas stream having airborne particles entrained therein is passed through a pervious screen. The particles accumulate upon the screen, as the screen acts as a sort of selective particle filter. The flow of the main gas stream is then interrupted by diaphragm shutter valves, whereupon a cross-flow of carrier gas stream is blown parallel past the faces of the screen to dislodge the accumulated particles and carry them to a particle or vapor detector, such as an ion mobility spectrometer. The screen may be heated, such as by passing an electrical current there through, to promote desorption of particles therefrom during the flow of the carrier gas. Various types of screens are disclosed. The apparatus and method of the invention may find particular utility in the fields of narcotics, explosives detection and chemical agents.
4. Particle preconcentrator
DOEpatents
2005-09-20
An apparatus and method for preconcentrating particles and vapors. The preconcentrator apparatus permits detection of highly diluted amounts of particles in a main gas stream, such as a stream of ambient air. A main gas stream having airborne particles entrained therein is passed through a pervious screen. The particles accumulate upon the screen, as the screen acts as a sort of selective particle filter. The flow of the main gas stream is then interrupted by diaphragm shutter valves, whereupon a cross-flow of carrier gas stream is blown parallel past the faces of the screen to dislodge the accumulated particles and carry them to a particle or vapor detector, such as an ion mobility spectrometer. The screen may be heated, such as by passing an electrical current there through, to promote desorption of particles therefrom during the flow of the carrier gas. Various types of screens are disclosed. The apparatus and method of the invention may find particular utility in the fields of narcotics, explosives detection and chemical agents.
5. Particle preconcentrator
DOEpatents
1998-12-29
An apparatus and method are disclosed for preconcentrating particles and vapors. The preconcentrator apparatus permits detection of highly diluted amounts of particles in a main gas stream, such as a stream of ambient air. A main gas stream having airborne particles entrained therein is passed through a pervious screen. The particles accumulate upon the screen, as the screen acts as a sort of selective particle filter. The flow of the main gas stream is then interrupted by diaphragm shutter valves, whereupon a cross-flow of carrier gas stream is blown parallel past the faces of the screen to dislodge the accumulated particles and carry them to a particle or vapor detector, such as an ion mobility spectrometer. The screen may be heated, such as by passing an electrical current there through, to promote desorption of particles therefrom during the flow of the carrier gas. Various types of screens are disclosed. The apparatus and method of the invention may find particular utility in the fields of narcotics, explosives detection and chemical agents. 3 figs.
6. Magnetic particles
NASA Technical Reports Server (NTRS)
Chang, Manchium (Inventor); Colvin, Michael S. (Inventor); Rembaum, Alan (Inventor); Richards, Gil F. (Inventor)
1987-01-01
Metal oxide containing polymers and particularly styrene, acrylic or protein polymers containing fine, magnetic iron oxide particles are formed by combining a NO.sub.2 -substituted polymer with an acid such as hydrochloric acid in the presence of metal, particularly iron particles. The iron is oxidized to fine, black Fe.sub.3 O.sub.4 particles which deposit selectively on the polymer particles. Nitrated polymers are formed by reacting functionally substituted, nitrated organic compounds such as trinitrobenzene sulfonate or dinitrofluoro benzene with a functionally coreactive polymer such as an amine modified acrylic polymer or a protein. Other transition metals such as cobalt can also be incorporated into polymers using this method.
7. Auroral particles
NASA Technical Reports Server (NTRS)
Evans, David S.
1987-01-01
The problems concerning the aurora posed prior to the war are now either solved in principle or were restated in a more fundamental form. The pre-war hypothesis concerning the nature of the auroral particles and their energies was fully confirmed, with the exception that helium and oxygen ions were identified as participating in the auroral particle precipitation in addition to the protons. The nature of the near-Earth energization processes affecting auroral particles was clarified. Charged particle trajectories in various electric field geometries were modeled. The physical problems have now moved from determining the nature and geometry of the electric fields, which accelerate charged particles near the Earth, to accounting for the existence of these electric fields as a natural consequence of the solar wind's interaction with Earth. Ultimately the reward in continuing the work in auroral and magnetospheric particle dynamics will be a deeper understanding of the subtleties of classical electricity and magnetism as applied to situations not blessed with well-defined and invariant geometries.
8. Particle Sizer
NASA Technical Reports Server (NTRS)
1987-01-01
Microspheres are tiny plastic beads that represent the first commercial products manufactured in orbit. An example of how they are used is a new aerodynamic particle sizer designated APS 33B produced by TSI Incorporated. TSI purchased the microspheres from the National Bureau of Standards which certified their exact size and the company uses them in calibration of the APS 33B* instrument, latest in a line of TSI systems for generating counting and weighing minute particles of submicron size. Instruments are used for evaluating air pollution control devices, quantifying environments, meteorological research, testing filters, inhalation, toxicology and other areas where generation or analysis of small airborne particles is required. * The APS 33B is no longer being manufactured. An improved version, APS 3320, is now being manufactured. 2/28/97
9. Carbon particles
DOEpatents
Hunt, Arlon J.
1984-01-01
A method and apparatus whereby small carbon particles are made by pyrolysis of a mixture of acetylene carried in argon. The mixture is injected through a nozzle into a heated tube. A small amount of air is added to the mixture. In order to prevent carbon build-up at the nozzle, the nozzle tip is externally cooled. The tube is also elongated sufficiently to assure efficient pyrolysis at the desired flow rates. A key feature of the method is that the acetylene and argon, for example, are premixed in a dilute ratio, and such mixture is injected while cool to minimize the agglomeration of the particles, which produces carbon particles with desired optical properties for use as a solar radiant heat absorber.
10. Particle blender
DOEpatents
Willey, Melvin G.
1981-01-01
An infinite blender that achieves a homogeneous mixture of fuel microspheres is provided. Blending is accomplished by directing respective groups of desired particles onto the apex of a stationary coaxial cone. The particles progress downward over the cone surface and deposit in a space at the base of the cone that is described by a flexible band provided with a wide portion traversing and in continuous contact with the circumference of the cone base and extending upwardly therefrom. The band, being attached to the cone at a narrow inner end thereof, causes the cone to rotate on its arbor when the band is subsequently pulled onto a take-up spool. As a point at the end of the wide portion of the band passes the point where it is tangent to the cone, the blended particles are released into a delivery tube leading directly into a mold, and a plate mounted on the lower portion of the cone and positioned between the end of the wide portion of the band and the cone assures release of the particles only at the tangent point.
11. Particle filter-based track before detect algorithms
Boers, Yvo; Driessen, Hans
2003-12-01
In this paper we will give a general system setup, that allows the formulation of a wide range of Track Before Detect (TBD) problems. A general basic particle filter algorithm for this system is also provided. TBD is a technique, where tracks are produced directly on the basis of raw (radar) measurements, e.g. power or IQ data, without intermediate processing and decision making. The advantage over classical tracking is that the full information is integrated over time, this leads to a better detection and tracking performance, especially for weak targets. In this paper we look at the filtering and the detection aspect of TBD. We will formulate a detection result, that allows the user to implement any optimal detector in terms of the weights of a running particle filter. We will give a theoretical as well as a numerical (experimental) justification for this. Furthermore, we show that the TBD setup, that is chosen in this paper, allows a straightforward extension to the multi-target case. This easy extension is also due to the fact that the implementation of the solution is by means of a particle filter.
12. Particle filter-based track before detect algorithms
Boers, Yvo; Driessen, Hans
2004-01-01
In this paper we will give a general system setup, that allows the formulation of a wide range of Track Before Detect (TBD) problems. A general basic particle filter algorithm for this system is also provided. TBD is a technique, where tracks are produced directly on the basis of raw (radar) measurements, e.g. power or IQ data, without intermediate processing and decision making. The advantage over classical tracking is that the full information is integrated over time, this leads to a better detection and tracking performance, especially for weak targets. In this paper we look at the filtering and the detection aspect of TBD. We will formulate a detection result, that allows the user to implement any optimal detector in terms of the weights of a running particle filter. We will give a theoretical as well as a numerical (experimental) justification for this. Furthermore, we show that the TBD setup, that is chosen in this paper, allows a straightforward extension to the multi-target case. This easy extension is also due to the fact that the implementation of the solution is by means of a particle filter.
13. Particle acceleration
NASA Technical Reports Server (NTRS)
Vlahos, L.; Machado, M. E.; Ramaty, R.; Murphy, R. J.; Alissandrakis, C.; Bai, T.; Batchelor, D.; Benz, A. O.; Chupp, E.; Ellison, D.
1986-01-01
Data is compiled from Solar Maximum Mission and Hinothori satellites, particle detectors in several satellites, ground based instruments, and balloon flights in order to answer fundamental questions relating to: (1) the requirements for the coronal magnetic field structure in the vicinity of the energization source; (2) the height (above the photosphere) of the energization source; (3) the time of energization; (4) transistion between coronal heating and flares; (5) evidence for purely thermal, purely nonthermal and hybrid type flares; (6) the time characteristics of the energization source; (7) whether every flare accelerates protons; (8) the location of the interaction site of the ions and relativistic electrons; (9) the energy spectra for ions and relativistic electrons; (10) the relationship between particles at the Sun and interplanetary space; (11) evidence for more than one acceleration mechanism; (12) whether there is single mechanism that will accelerate particles to all energies and also heat the plasma; and (13) how fast the existing mechanisms accelerate electrons up to several MeV and ions to 1 GeV.
14. Microfabricated particle focusing device
DOEpatents
Ravula, Surendra K.; Arrington, Christian L.; Sigman, Jennifer K.; Branch, Darren W.; Brener, Igal; Clem, Paul G.; James, Conrad D.; Hill, Martyn; Boltryk, Rosemary June
2013-04-23
A microfabricated particle focusing device comprises an acoustic portion to preconcentrate particles over large spatial dimensions into particle streams and a dielectrophoretic portion for finer particle focusing into single-file columns. The device can be used for high throughput assays for which it is necessary to isolate and investigate small bundles of particles and single particles.
15. Particle Tracks in Aerogel
NASA Technical Reports Server (NTRS)
2005-01-01
In an experiment using a special air gun, particles are shot into aerogel at high velocities. Closeup of particles that have been captured in aerogel are shown here. The particles leave a carrot-shaped trail in the aerogel. Aerogel was used on the Stardust spacecraft to capture comet particles from Comet Wild 2.
16. Particle capture device
DOEpatents
Jayne, John T.; Worsnop, Douglas R.
2016-02-23
In example embodiments, particle collection efficiency in aerosol analyzers and other particle measuring instruments is improved by a particle capture device that employs multiple collisions to decrease momentum of particles until the particles are collected (e.g., vaporized or come to rest). The particle collection device includes an aperture through which a focused particle beam enters. A collection enclosure is coupled to the aperture and has one or more internal surfaces against which particles of the focused beam collide. One or more features are employed in the collection enclosure to promote particles to collide multiple times within the enclosure, and thereby be vaporized or come to rest, rather than escape through the aperture.
17. Laser particle sorter
DOEpatents
Martin, J.C.; Buican, T.N.
1987-11-30
Method and apparatus are provided for sorting particles, such as biological particles. A first laser is used to define an optical path having an intensity gradient which is effective to propel the particles along the path but which is sufficiently weak that the particles are not trapped in an axial direction. A probe laser beam is provided for interrogating the particles to identify predetermined phenotypical characteristics of the particles. A second laser beam is provided to intersect the driving first laser beam, wherein the second laser beam is activated by an output signal indicative of a predetermined characteristic. The second laser beam is switchable between a first intensity and a second intensity, where the first intensity is effective to displace selected particles from the driving laser beam and the second intensity is effective to propel selected particles along the deflection laser beam. The selected particles may then be propelled by the deflection beam to a location effective for further analysis. 2 figs.
18. Laser particle sorter
DOEpatents
Martin, John C.; Buican, Tudor N.
1989-01-01
Method and apparatus for sorting particles, such as biological particles. A first laser defines an optical path having an intensity gradient which is effective to propel the particles along the path but which is sufficiently weak that the particles are not trapped in an axial direction. A probe laser beam interrogates the particles to identify predetermined phenotypical characteristics of the particles. A second laser beam intersects the driving first laser beam, wherein the second laser beam is activated by an output signal indicative of a predetermined characteristic. The second laser beam is switchable between a first intensity and a second intensity, where the first intensity is effective to displace selected particles from the driving laser beam and the second intensity is effective to propel selected particles along the deflection laser beam. The selected particles may then be propelled by the deflection beam to a location effective for further analysis.
19. Composite powder particles
NASA Technical Reports Server (NTRS)
Parker, Donald S. (Inventor); MacDowell, Louis G. (Inventor)
2009-01-01
A liquid coating composition including a coating vehicle and composite powder particles disposed within the coating vehicle. Each composite powder particle may include a magnesium component, a zinc component, and an indium component.
20. Solar Neutral Particles
NASA Video Gallery
This animation shows a neutral solar particle's path leaving the sun, following the magnetic field lines out to the heliosheath. The solar particle hits a hydrogen atom, stealing its electron, and ...
1. Acoustic particle separation
NASA Technical Reports Server (NTRS)
Barmatz, M. B.; Stoneburner, J. D.; Jacobi, N.; Wang, T. (Inventor)
1985-01-01
A method is described which uses acoustic energy to separate particles of different sizes, densities, or the like. The method includes applying acoustic energy resonant to a chamber containing a liquid of gaseous medium to set up a standing wave pattern that includes a force potential well wherein particles within the well are urged towards the center, or position of minimum force potential. A group of particles to be separated is placed in the chamber, while a non-acoustic force such as gravity is applied, so that the particles separate with the larger or denser particles moving away from the center of the well to a position near its edge and progressively smaller lighter particles moving progressively closer to the center of the well. Particles are removed from different positions within the well, so that particles are separated according to the positions they occupy in the well.
2. Particle exposures and infections
EPA Science Inventory
Particle exposures increase the risk for human infections. Particles can deposit in the nose, pharynx, larynx, trachea, bronchi, and distal lung and, accordingly, the respiratory tract is the system most frequently infected after such exposure; however, meningitis also occurs. Ci...
3. Classical confined particles
NASA Technical Reports Server (NTRS)
Horzela, Andrzej; Kapuscik, Edward
1993-01-01
An alternative picture of classical many body mechanics is proposed. In this picture particles possess individual kinematics but are deprived from individual dynamics. Dynamics exists only for the many particle system as a whole. The theory is complete and allows to determine the trajectories of each particle. It is proposed to use our picture as a classical prototype for a realistic theory of confined particles.
4. When is a Particle?
ERIC Educational Resources Information Center
Drell, Sidney D.
1978-01-01
Gives a new definition for the concept of the elementary particle in nuclear physics. Explains why the existance of the quark as an elementary particle could be an accepted fact even though it lacks what traditionally identifies a particle. Compares this with the development which took place during the discovery of the neutrino in the early…
5. Particle charge spectrometer
NASA Technical Reports Server (NTRS)
Fuerstenau, Stephen D. (Inventor)
2004-01-01
An airflow through a tube is used to guide a charged particle through the tube. A detector may be used to detect charge passing through the tube on the particle. The movement of the particle through the tube may be used to both detect its charge and size.
6. Review of particle properties
SciTech Connect
Wohl; Cahn, R.N.; Rittenberg, A.; Trippe, T.G.; Yost, G.P.; Porter, F.; Hernandez, J.J.; Montanet, L.; Hendrick, R.E.; Crawford, R.L.
1984-04-01
This review of the properties of leptons, mesons, and baryons is an updating of the Review of Particle Properties, Particle Data Group (Phys. Lett. 111B (1982)). Data are evaluated, listed, averaged, and summarized in tables. Numerous tables, figures, and formulae of interest to particle physicists are also included. A data booklet is available.
7. High energy particle astronomy.
NASA Technical Reports Server (NTRS)
Buffington, A.; Muller, R. A.; Smith, L. H.; Smoot, G. F.
1972-01-01
Discussion of techniques currently used in high energy particle astronomy for measuring charged and neutral cosmic rays and their isotope and momentum distribution. Derived from methods developed for accelerator experiments in particle physics, these techniques help perform important particle astronomy experiments pertaining to nuclear cosmic ray and gamma ray research, electron and position probes, and antimatter searches.
8. Anatomy of Particle Diffusion
ERIC Educational Resources Information Center
Bringuier, E.
2009-01-01
The paper analyses particle diffusion from a thermodynamic standpoint. The main goal of the paper is to highlight the conceptual connection between particle diffusion, which belongs to non-equilibrium statistical physics, and mechanics, which deals with particle motion, at the level of third-year university courses. We start out from the fact…
9. Multi-Target Analysis and Design of Mitochondrial Metabolism.
PubMed
Angione, Claudio; Costanza, Jole; Carapezza, Giovanni; Lió, Pietro; Nicosia, Giuseppe
2015-01-01
Analyzing and optimizing biological models is often identified as a research priority in biomedical engineering. An important feature of a model should be the ability to find the best condition in which an organism has to be grown in order to reach specific optimal output values chosen by the researcher. In this work, we take into account a mitochondrial model analyzed with flux-balance analysis. The optimal design and assessment of these models is achieved through single- and/or multi-objective optimization techniques driven by epsilon-dominance and identifiability analysis. Our optimization algorithm searches for the values of the flux rates that optimize multiple cellular functions simultaneously. The optimization of the fluxes of the metabolic network includes not only input fluxes, but also internal fluxes. A faster convergence process with robust candidate solutions is permitted by a relaxed Pareto dominance, regulating the granularity of the approximation of the desired Pareto front. We find that the maximum ATP production is linked to a total consumption of NADH, and reaching the maximum amount of NADH leads to an increasing request of NADH from the external environment. Furthermore, the identifiability analysis characterizes the type and the stage of three monogenic diseases. Finally, we propose a new methodology to extend any constraint-based model using protein abundances. PMID:26376088
10. Designing Multi-target Compound Libraries with Gaussian Process Models.
PubMed
Bieler, Michael; Reutlinger, Michael; Rodrigues, Tiago; Schneider, Petra; Kriegl, Jan M; Schneider, Gisbert
2016-05-01
We present the application of machine learning models to selecting G protein-coupled receptor (GPCR)-focused compound libraries. The library design process was realized by ant colony optimization. A proprietary Boehringer-Ingelheim reference set consisting of 3519 compounds tested in dose-response assays at 11 GPCR targets served as training data for machine learning and activity prediction. We compared the usability of the proprietary data with a public data set from ChEMBL. Gaussian process models were trained to prioritize compounds from a virtual combinatorial library. We obtained meaningful models for three of the targets (5-HT2c , MCH, A1), which were experimentally confirmed for 12 of 15 selected and synthesized or purchased compounds. Overall, the models trained on the public data predicted the observed assay results more accurately. The results of this study motivate the use of Gaussian process regression on public data for virtual screening and target-focused compound library design.
11. Multi-Target Operation at the HERA-B Experiment
SciTech Connect
Vassiliev, Yu.; Aushev, V.; Ehret, K.; Funcke, M.; Sever, S.I.; Pavlenko, Yu.; Pugatch, V.; Spratte, S.; Symalla, M.; Tkatch, N.; Wegener, D.
2000-12-31
The HERA-B internal target consists of eight target ribbons arranged around the beam. Each target can be moved in the radial direction independently in sub-micron steps, allowing to compensate relative beam shifts and to steer for the desired interaction rate. The experimental constraints require a stable interaction rate equally distributed over all inserted targets. The actual equalization is based on a measurement of charge originated from the beam-target interaction. The system shows a good linearity with the interaction rate and allows a reasonable distribution of the interaction rate among several wires. To cross check the performance of the multi-wire steering, the reconstructed tracks and primary vertices in the silicon vertex detector were used.
12. Multi-target tracking using a hybrid joint transform correlator
NASA Technical Reports Server (NTRS)
Yu, Francis T. S.; Tam, Eddy C.; Tanone, Aris; Gregory, Don A.; Juday, Richard D.
1990-01-01
A technique using data association target tracking in a motion sequence via an adaptive joint transform correlator is presented. The massive data in the field of view can be reduced to a few correlation peaks. The average velocity of a target during the tracking cycle is then determined from the location of the correlation peak. A data-association algorithm is used for the analysis of these correlation signals, for which multiple targets can be tracked. A phase-mostly liquid-crystal TV is used in the hybrid joint transform correlation system, and simultaneous tracking of three targets is demonstrated.
13. Chemical Modification of the Multi-Target Neuroprotective Compound Fisetin
PubMed Central
Chiruta, Chandramouli; Schubert, David; Dargusch, Richard; Maher, Pamela
2012-01-01
Many factors are implicated in age-related CNS disorders making it unlikely that modulating only a single factor will provide effective treatment. Perhaps a better approach is to identify small molecules that have multiple biological activities relevant to the maintenance of brain function. Recently, we identified an orally active, neuroprotective and cognition-enhancing molecule, the flavonoid fisetin, that is effective in several animal models of CNS disorders. Fisetin has direct antioxidant activity and can also increase the intracellular levels of glutathione (GSH), the major endogenous antioxidant. In addition, fisetin has both neurotrophic and anti-inflammatory activity. However, its relatively high EC50 in cell based assays, low lipophilicity, high tPSA and poor bioavailability suggest that there is room for medicinal chemical improvement. Here we describe a multi-tiered approach to screening that has allowed us to identify fisetin derivatives with significantly enhanced activity in an in vitro neuroprotection model while at the same time maintaining other key activities. PMID:22192055
14. Designing Multi-target Compound Libraries with Gaussian Process Models.
PubMed
Bieler, Michael; Reutlinger, Michael; Rodrigues, Tiago; Schneider, Petra; Kriegl, Jan M; Schneider, Gisbert
2016-05-01
We present the application of machine learning models to selecting G protein-coupled receptor (GPCR)-focused compound libraries. The library design process was realized by ant colony optimization. A proprietary Boehringer-Ingelheim reference set consisting of 3519 compounds tested in dose-response assays at 11 GPCR targets served as training data for machine learning and activity prediction. We compared the usability of the proprietary data with a public data set from ChEMBL. Gaussian process models were trained to prioritize compounds from a virtual combinatorial library. We obtained meaningful models for three of the targets (5-HT2c , MCH, A1), which were experimentally confirmed for 12 of 15 selected and synthesized or purchased compounds. Overall, the models trained on the public data predicted the observed assay results more accurately. The results of this study motivate the use of Gaussian process regression on public data for virtual screening and target-focused compound library design. PMID:27492085
15. Multi-Target Approach to Metastatic Adrenal Cell Carcinoma.
PubMed
Wahab, Norasyikin A; Zainudin, Suehazlyn; AbAziz, Aini; Mustafa, Norlaila; Sukor, Norlela; Kamaruddin, Nor Azmi
2016-09-01
Adrenal cell carcinoma is a rare tumor and more than 70% of patients present with advanced stages. Adrenal cell carcinoma is an aggressive tumor with a poor prognosis. Surgical intervention is the gold standard treatment and mitotane is the only drug approved for the treatment of adrenal cell carcinoma. Until recently in 2012, the etoposide, doxorubicin, cisplatin plus mitotane are approved as first-line therapy based on response rate and progression-free survival. This case illustrates a case of advanced adrenal cell carcinoma in a young girl who presented with huge adrenal mass with inferior vena cava thrombosis and pulmonary embolism. Multi-approach of therapy was used to control the tumor size and metastasis. Therefore, it may prolong her survival rate for up to 5 years and 4 months. PMID:27631184
16. Primordial Particles; Collisions of Inelastic Particles
Sagi, George
2011-03-01
Three-dimensional matter is not defined by Euclidian or Cartesian geometries. Newton's and Einstein's laws are related to the motions of elastic masses. The study of collisions of inelastic particles opens up new vistas in physics. The present article reveals how such particles create clusters composed of various numbers of particles. The Probability of each formation, duplets, triplets, etc. can be calculated. The particles are held together by a binding force, and depending upon the angles of collisions they may also rotate around their center of geometry. Because of these unique properties such inelastic particles are referred to as primordial particles, Pp. When a given density of Pp per cubic space is given, then random collisions create a field. The calculation of the properties of such primordial field is very complex and beyond the present study. However, the angles of collisions are infinite in principle, but the probabilities of various cluster sizes are quantum dependent. Consequently, field calculations will require new complex mathematical methods to be discovered yet.
DOEpatents
Klebanoff, Leonard Elliott; Rader, Daniel John; Walton, Christopher; Folta, James
2009-01-06
An efficient device for capturing fast moving particles has an adhesive particle shield that includes (i) a mounting panel and (ii) a film that is attached to the mounting panel wherein the outer surface of the film has an adhesive coating disposed thereon to capture particles contacting the outer surface. The shield can be employed to maintain a substantially particle free environment such as in photolithographic systems having critical surfaces, such as wafers, masks, and optics and in the tools used to make these components, that are sensitive to particle contamination. The shield can be portable to be positioned in hard-to-reach areas of a photolithography machine. The adhesive particle shield can incorporate cooling means to attract particles via the thermophoresis effect.
18. Fuzzy Logic Particle Tracking
NASA Technical Reports Server (NTRS)
2005-01-01
A new all-electronic Particle Image Velocimetry technique that can efficiently map high speed gas flows has been developed in-house at the NASA Lewis Research Center. Particle Image Velocimetry is an optical technique for measuring the instantaneous two component velocity field across a planar region of a seeded flow field. A pulsed laser light sheet is used to illuminate the seed particles entrained in the flow field at two instances in time. One or more charged coupled device (CCD) cameras can be used to record the instantaneous positions of particles. Using the time between light sheet pulses and determining either the individual particle displacements or the average displacement of particles over a small subregion of the recorded image enables the calculation of the fluid velocity. Fuzzy logic minimizes the required operator intervention in identifying particles and computing velocity. Using two cameras that have the same view of the illumination plane yields two single exposure image frames. Two competing techniques that yield unambiguous velocity vector direction information have been widely used for reducing the single-exposure, multiple image frame data: (1) cross-correlation and (2) particle tracking. Correlation techniques yield averaged velocity estimates over subregions of the flow, whereas particle tracking techniques give individual particle velocity estimates. For the correlation technique, the correlation peak corresponding to the average displacement of particles across the subregion must be identified. Noise on the images and particle dropout result in misidentification of the true correlation peak. The subsequent velocity vector maps contain spurious vectors where the displacement peaks have been improperly identified. Typically these spurious vectors are replaced by a weighted average of the neighboring vectors, thereby decreasing the independence of the measurements. In this work, fuzzy logic techniques are used to determine the true
19. Precision gap particle separator
DOEpatents
Benett, William J.; Miles, Robin; Jones, II., Leslie M.; Stockton, Cheryl
2004-06-08
A system for separating particles entrained in a fluid includes a base with a first channel and a second channel. A precision gap connects the first channel and the second channel. The precision gap is of a size that allows small particles to pass from the first channel into the second channel and prevents large particles from the first channel into the second channel. A cover is positioned over the base unit, the first channel, the precision gap, and the second channel. An port directs the fluid containing the entrained particles into the first channel. An output port directs the large particles out of the first channel. A port connected to the second channel directs the small particles out of the second channel.
20. CLASHING BEAM PARTICLE ACCELERATOR
DOEpatents
Burleigh, R.J.
1961-04-11
A charged-particle accelerator of the proton synchrotron class having means for simultaneously accelerating two separate contra-rotating particle beams within a single annular magnet structure is reported. The magnet provides two concentric circular field regions of opposite magnetic polarity with one field region being of slightly less diameter than the other. The accelerator includes a deflector means straddling the two particle orbits and acting to collide the two particle beams after each has been accelerated to a desired energy. The deflector has the further property of returning particles which do not undergo collision to the regular orbits whereby the particles recirculate with the possibility of colliding upon subsequent passages through the deflector.
1. Methods for forming particles
DOEpatents
Fox, Robert V.; Zhang, Fengyan; Rodriguez, Rene G.; Pak, Joshua J.; Sun, Chivin
2016-06-21
Single source precursors or pre-copolymers of single source precursors are subjected to microwave radiation to form particles of a I-III-VI.sub.2 material. Such particles may be formed in a wurtzite phase and may be converted to a chalcopyrite phase by, for example, exposure to heat. The particles in the wurtzite phase may have a substantially hexagonal shape that enables stacking into ordered layers. The particles in the wurtzite phase may be mixed with particles in the chalcopyrite phase (i.e., chalcopyrite nanoparticles) that may fill voids within the ordered layers of the particles in the wurtzite phase thus produce films with good coverage. In some embodiments, the methods are used to form layers of semiconductor materials comprising a I-III-VI.sub.2 material. Devices such as, for example, thin-film solar cells may be fabricated using such methods.
2. Particle Swarm Optimization
NASA Technical Reports Server (NTRS)
Venter, Gerhard; Sobieszczanski-Sobieski Jaroslaw
2002-01-01
The purpose of this paper is to show how the search algorithm known as particle swarm optimization performs. Here, particle swarm optimization is applied to structural design problems, but the method has a much wider range of possible applications. The paper's new contributions are improvements to the particle swarm optimization algorithm and conclusions and recommendations as to the utility of the algorithm, Results of numerical experiments for both continuous and discrete applications are presented in the paper. The results indicate that the particle swarm optimization algorithm does locate the constrained minimum design in continuous applications with very good precision, albeit at a much higher computational cost than that of a typical gradient based optimizer. However, the true potential of particle swarm optimization is primarily in applications with discrete and/or discontinuous functions and variables. Additionally, particle swarm optimization has the potential of efficient computation with very large numbers of concurrently operating processors.
3. The Sisyphus particle detector
NASA Technical Reports Server (NTRS)
Soberman, R. K.
1974-01-01
The particle measurement subsystem planned for the MJS 77 mission is described. Scientific objectives with respect to Saturn's rings are as follows: (1) measure particles outside the visible rings, including particulates orbiting in more distant rings and particles scattered out of visible rings, (2) measure meteoroid environment in vicinity of Saturn, and (3) develop an understanding of the dynamics of the rings with respect to their collisional interaction with the environment.
4. Clickable Janus Particles.
PubMed
Bradley, Laura C; Stebe, Kathleen J; Lee, Daeyeon
2016-09-14
Janus particles are colloidal analogues of molecular amphiphiles that can self-assemble to form diverse suprastructures, exhibit motility under appropriate catalytic reactions, and strongly adsorb to fluid-fluid interfaces to stabilize multiphasic fluid mixtures. The chemistry of Janus particles is the fundamental parameter that controls their behavior and utility as colloid surfactants in bulk solution and at fluid interfaces. To enable their widespread utilization, scalable methods that allow for the synthesis of Janus particles with diverse chemical compositions and shapes are highly desirable. Here, we develop clickable Janus particles that can be modified through thiol-yne click reactions with commercially available thiols. Janus particles are modified to be amphiphilic by introducing either carboxyl, hydroxyl, or amine moieties. We also demonstrate that regulating the extent of the modification can be used to control the particle morphology, and thus the type of emulsion stabilized, as well as to fabricate composite Janus particles through sequential click reactions. Modifying Janus particles through thiol-yne click chemistry provides a fast-reacting, scalable synthesis method for the fabrication of diverse Janus particles. PMID:27548642
5. Review of Particle Physics
Beringer, J.; Arguin, J.-F.; Barnett, R. M.; Copic, K.; Dahl, O.; Groom, D. E.; Lin, C.-J.; Lys, J.; Murayama, H.; Wohl, C. G.; Yao, W.-M.; Zyla, P. A.; Amsler, C.; Antonelli, M.; Asner, D. M.; Baer, H.; Band, H. R.; Basaglia, T.; Bauer, C. W.; Beatty, J. J.; Belousov, V. I.; Bergren, E.; Bernardi, G.; Bertl, W.; Bethke, S.; Bichsel, H.; Biebel, O.; Blucher, E.; Blusk, S.; Brooijmans, G.; Buchmueller, O.; Cahn, R. N.; Carena, M.; Ceccucci, A.; Chakraborty, D.; Chen, M.-C.; Chivukula, R. S.; Cowan, G.; D'Ambrosio, G.; Damour, T.; de Florian, D.; de Gouvêa, A.; DeGrand, T.; de Jong, P.; Dissertori, G.; Dobrescu, B.; Doser, M.; Drees, M.; Edwards, D. A.; Eidelman, S.; Erler, J.; Ezhela, V. V.; Fetscher, W.; Fields, B. D.; Foster, B.; Gaisser, T. K.; Garren, L.; Gerber, H.-J.; Gerbier, G.; Gherghetta, T.; Golwala, S.; Goodman, M.; Grab, C.; Gritsan, A. V.; Grivaz, J.-F.; Grünewald, M.; Gurtu, A.; Gutsche, T.; Haber, H. E.; Hagiwara, K.; Hagmann, C.; Hanhart, C.; Hashimoto, S.; Hayes, K. G.; Heffner, M.; Heltsley, B.; Hernández-Rey, J. J.; Hikasa, K.; Höcker, A.; Holder, J.; Holtkamp, A.; Huston, J.; Jackson, J. D.; Johnson, K. F.; Junk, T.; Karlen, D.; Kirkby, D.; Klein, S. R.; Klempt, E.; Kowalewski, R. V.; Krauss, F.; Kreps, M.; Krusche, B.; Kuyanov, Yu. V.; Kwon, Y.; Lahav, O.; Laiho, J.; Langacker, P.; Liddle, A.; Ligeti, Z.; Liss, T. M.; Littenberg, L.; Lugovsky, K. S.; Lugovsky, S. B.; Mannel, T.; Manohar, A. V.; Marciano, W. J.; Martin, A. D.; Masoni, A.; Matthews, J.; Milstead, D.; Miquel, R.; Mönig, K.; Moortgat, F.; Nakamura, K.; Narain, M.; Nason, P.; Navas, S.; Neubert, M.; Nevski, P.; Nir, Y.; Olive, K. A.; Pape, L.; Parsons, J.; Patrignani, C.; Peacock, J. A.; Petcov, S. T.; Piepke, A.; Pomarol, A.; Punzi, G.; Quadt, A.; Raby, S.; Raffelt, G.; Ratcliff, B. N.; Richardson, P.; Roesler, S.; Rolli, S.; Romaniouk, A.; Rosenberg, L. J.; Rosner, J. L.; Sachrajda, C. T.; Sakai, Y.; Salam, G. P.; Sarkar, S.; Sauli, F.; Schneider, O.; Scholberg, K.; Scott, D.; Seligman, W. G.; Shaevitz, M. H.; Sharpe, S. R.; Silari, M.; Sjöstrand, T.; Skands, P.; Smith, J. G.; Smoot, G. F.; Spanier, S.; Spieler, H.; Stahl, A.; Stanev, T.; Stone, S. L.; Sumiyoshi, T.; Syphers, M. J.; Takahashi, F.; Tanabashi, M.; Terning, J.; Titov, M.; Tkachenko, N. P.; Törnqvist, N. A.; Tovey, D.; Valencia, G.; van Bibber, K.; Venanzoni, G.; Vincter, M. G.; Vogel, P.; Vogt, A.; Walkowiak, W.; Walter, C. W.; Ward, D. R.; Watari, T.; Weiglein, G.; Weinberg, E. J.; Wiencke, L. R.; Wolfenstein, L.; Womersley, J.; Woody, C. L.; Workman, R. L.; Yamamoto, A.; Zeller, G. P.; Zenin, O. V.; Zhang, J.; Zhu, R.-Y.; Harper, G.; Lugovsky, V. S.; Schaffner, P.
2012-07-01
This biennial Review summarizes much of particle physics. Using data from previous editions, plus 2658 new measurements from 644 papers, we list, evaluate, and average measured properties of gauge bosons, leptons, quarks, mesons, and baryons. We summarize searches for hypothetical particles such as Higgs bosons, heavy neutrinos, and supersymmetric particles. All the particle properties and search limits are listed in Summary Tables. We also give numerous tables, figures, formulae, and reviews of topics such as the Standard Model, particle detectors, probability, and statistics. Among the 112 reviews are many that are new or heavily revised including those on Heavy-Quark and Soft-Collinear Effective Theory, Neutrino Cross Section Measurements, Monte Carlo Event Generators, Lattice QCD, Heavy Quarkonium Spectroscopy, Top Quark, Dark Matter, Vcb & Vub, Quantum Chromodynamics, High-Energy Collider Parameters, Astrophysical Constants, Cosmological Parameters, and Dark Matter.A booklet is available containing the Summary Tables and abbreviated versions of some of the other sections of this full Review. All tables, listings, and reviews (and errata) are also available on the Particle Data Group website: http://pdg.lbl.gov/.The 2012 edition of Review of Particle Physics is published for the Particle Data Group as article 010001 in volume 86 of Physical Review D.This edition should be cited as: J. Beringer et al. (Particle Data Group), Phys. Rev. D 86, 010001 (2012).
6. Bioactivation of particles
DOEpatents
Pinaud, Fabien; King, David; Weiss, Shimon
2011-08-16
Particles are bioactivated by attaching bioactivation peptides to the particle surface. The bioactivation peptides are peptide-based compounds that impart one or more biologically important functions to the particles. Each bioactivation peptide includes a molecular or surface recognition part that binds with the surface of the particle and one or more functional parts. The surface recognition part includes an amino-end and a carboxy-end and is composed of one or more hydrophobic spacers and one or more binding clusters. The functional part(s) is attached to the surface recognition part at the amino-end and/or said carboxy-end.
7. Dielectrophoretic particle-particle interaction under AC electrohydrodynamic flow conditions.
PubMed
Lee, Doh-Hyoung; Yu, Chengjie; Papazoglou, Elisabeth; Farouk, Bakhtier; Noh, Hongseok M
2011-09-01
We used the Maxwell stress tensor method to understand dielectrophoretic particle-particle interactions and applied the results to the interpretation of particle behaviors under alternating current (AC) electrohydrodynamic conditions such as AC electroosmosis (ACEO) and electrothermal flow (ETF). Distinct particle behaviors were observed under ACEO and ETF. Diverse particle-particle interactions observed in experiments such as particle clustering, particles keeping a certain distance from each other, chain and disc formation and their rotation, are explained based on the numerical simulation data. The improved understanding of particle behaviors in AC electrohydrodynamic flows presented here will enable researchers to design better particle manipulation strategies for lab-on-a-chip applications. PMID:21823132
8. Pileup per particle identification
SciTech Connect
Bertolini, Daniele; Harris, Philip; Low, Matthew; Tran, Nhan
2014-10-09
We propose a new method for pileup mitigation by implementing “pileup per particle identification” (PUPPI). For each particle we first define a local shape α which probes the collinear versus soft diffuse structure in the neighborhood of the particle. The former is indicative of particles originating from the hard scatter and the latter of particles originating from pileup interactions. The distribution of α for charged pileup, assumed as a proxy for all pileup, is used on an event-by-event basis to calculate a weight for each particle. The weights describe the degree to which particles are pileup-like and are used to rescale their four-momenta, superseding the need for jet-based corrections. Furthermore, the algorithm flexibly allows combination with other, possibly experimental, probabilistic information associated with particles such as vertexing and timing performance. We demonstrate the algorithm improves over existing methods by looking at jet pT and jet mass. As a result, we also find an improvement on non-jet quantities like missing transverse energy.
9. RESEARCH IN PARTICLE PHYSICS
SciTech Connect
Kearns, Edward
2013-07-12
This is the final report for the Department of Energy Grant to Principal Investigators in Experimental and Theoretical Particle Physics at Boston University. The research performed was in the Energy Frontier at the LHC, the Intensity Frontier at Super-Kamiokande and T2K, the Cosmic Frontier and detector R&D in dark matter detector development, and in particle theory.
10. Pileup per particle identification
DOE PAGESBeta
Bertolini, Daniele; Harris, Philip; Low, Matthew; Tran, Nhan
2014-10-09
We propose a new method for pileup mitigation by implementing “pileup per particle identification” (PUPPI). For each particle we first define a local shape α which probes the collinear versus soft diffuse structure in the neighborhood of the particle. The former is indicative of particles originating from the hard scatter and the latter of particles originating from pileup interactions. The distribution of α for charged pileup, assumed as a proxy for all pileup, is used on an event-by-event basis to calculate a weight for each particle. The weights describe the degree to which particles are pileup-like and are used tomore » rescale their four-momenta, superseding the need for jet-based corrections. Furthermore, the algorithm flexibly allows combination with other, possibly experimental, probabilistic information associated with particles such as vertexing and timing performance. We demonstrate the algorithm improves over existing methods by looking at jet pT and jet mass. As a result, we also find an improvement on non-jet quantities like missing transverse energy.« less
11. Particle impact location detector
NASA Technical Reports Server (NTRS)
Auer, S. O.
1974-01-01
Detector includes delay lines connected to each detector surface strip. When several particles strike different strips simultaneously, pulses generated by each strip are time delayed by certain intervals. Delay time for each strip is known. By observing time delay in pulse, it is possible to locate strip that is struck by particle.
PubMed
Morris, C L; King, N S P; Kwiatkowski, K; Mariam, F G; Merrill, F E; Saunders, A
2013-04-01
New applications of charged particle radiography have been developed over the past two decades that extend the range of radiographic techniques providing high-speed sequences of radiographs of thicker objects with higher effective dose than can be obtained with conventional radiographic techniques. In this paper, we review the motivation and the development of flash radiography and in particular, charged particle radiography. PMID:23481477
Morris, C. L.; King, N. S. P.; Kwiatkowski, K.; Mariam, F. G.; Merrill, F. E.; Saunders, A.
2013-04-01
New applications of charged particle radiography have been developed over the past two decades that extend the range of radiographic techniques providing high-speed sequences of radiographs of thicker objects with higher effective dose than can be obtained with conventional radiographic techniques. In this paper, we review the motivation and the development of flash radiography and in particular, charged particle radiography.
14. Fine particle separation apparatus
SciTech Connect
Berriman, L.P.; Paul, D.G.
1981-07-21
An apparatus is claimed for separating almost all fine particles, including particles less than 10 microns in diameter, from a gas stream, which requires the input of only a small amount of water and which discharges a correspondingly small amount of particle-water slurry. The apparatus includes a vertical cylindrical chamber having a relatively wide upstream portion that gradually narrows in a transition portion into an elongated throat portion. A central core member extends axially along the throat portion and forms an elongated annular passage. A high velocity gas stream containing fine particles is generally tangentially introduced into the wide upstream portion of the conduit to provide a circulatory flow. Water is introduced through a plurality of parts in the transition portion downstream therefrom, to provide a thin layer of water along the outer walls of the throat. The high velocity circulatory flow of the particle-laden gas along the annular throat region causes fine particles to migrate radially outwardly under high centrifugal forces into the water layer. The water-particle slurry is discharged through a slot in the outer wall of the lower portion of the throat region. The substantially particle-free gas passes through a radial diffuser section therebelow.
15. Ambient Tropospheric Particles
EPA Science Inventory
Atmospheric particulate matter (PM) is a complex mixture of solid and liquid particles suspended in ambient air (also known as the atmospheric aerosol). Ambient PM arises from a wide-range of sources and/or processes, and consists of particles of different shapes, sizes, and com...
16. Interactive Terascale Particle Visualization
NASA Technical Reports Server (NTRS)
Ellsworth, David; Green, Bryan; Moran, Patrick
2004-01-01
This paper describes the methods used to produce an interactive visualization of a 2 TB computational fluid dynamics (CFD) data set using particle tracing (streaklines). We use the method introduced by Bruckschen et al. [2001] that pre-computes a large number of particles, stores them on disk using a space-filling curve ordering that minimizes seeks, and then retrieves and displays the particles according to the user's command. We describe how the particle computation can be performed using a PC cluster, how the algorithm can be adapted to work with a multi-block curvilinear mesh, and how the out-of-core visualization can be scaled to 296 billion particles while still achieving interactive performance on PG hardware. Compared to the earlier work, our data set size and total number of particles are an order of magnitude larger. We also describe a new compression technique that allows the lossless compression of the particles by 41% and speeds the particle retrieval by about 30%.
17. HIGH ENERGY PARTICLE ACCELERATOR
DOEpatents
Courant, E.D.; Livingston, M.S.; Snyder, H.S.
1959-04-14
An improved apparatus is presented for focusing charged particles in an accelerator. In essence, the invention includes means for establishing a magnetic field in discrete sectors along the path of moving charged particles, the magnetic field varying in each sector in accordance with the relation. B = B/ sub 0/ STAln (r-r/sub 0/)/r/sub 0/!, where B/sub 0/ is the value of the magnetic field at the equilibrium orbit of radius r/sub 0/ of the path of the particles, B equals the magnetic field at the radius r of the chamber and n equals the magnetic field gradient index, the polarity of n being abruptly reversed a plurality of times as the particles travel along their arcuate path. With this arrangement, the particles are alternately converged towards the axis of their equillbrium orbit and diverged therefrom in successive sectors with a resultant focusing effect.
18. Particle Analysis Pitfalls
NASA Technical Reports Server (NTRS)
Hughes, David; Dazzo, Tony
2007-01-01
This viewgraph presentation reviews the use of particle analysis to assist in preparing for the 4th Hubble Space Telescope (HST) Servicing mission. During this mission the Space Telescope Imaging Spectrograph (STIS) will be repaired. The particle analysis consisted of Finite element mesh creation, Black-body viewfactors generated using I-DEAS TMG Thermal Analysis, Grey-body viewfactors calculated using Markov method, Particle distribution modeled using an iterative Monte Carlo process, (time-consuming); in house software called MASTRAM, Differential analysis performed in Excel, and Visualization provided by Tecplot and I-DEAS. Several tests were performed and are reviewed: Conformal Coat Particle Study, Card Extraction Study, Cover Fastener Removal Particle Generation Study, and E-Graf Vibration Particulate Study. The lessons learned during this analysis are also reviewed.
19. DEM Particle Fracture Model
SciTech Connect
Zhang, Boning; Herbold, Eric B.; Homel, Michael A.; Regueiro, Richard A.
2015-12-01
An adaptive particle fracture model in poly-ellipsoidal Discrete Element Method is developed. The poly-ellipsoidal particle will break into several sub-poly-ellipsoids by Hoek-Brown fracture criterion based on continuum stress and the maximum tensile stress in contacts. Also Weibull theory is introduced to consider the statistics and size effects on particle strength. Finally, high strain-rate split Hopkinson pressure bar experiment of silica sand is simulated using this newly developed model. Comparisons with experiments show that our particle fracture model can capture the mechanical behavior of this experiment very well, both in stress-strain response and particle size redistribution. The effects of density and packings o the samples are also studied in numerical examples.
20. Imaging alpha particle detector
DOEpatents
Anderson, D.F.
1980-10-29
A method and apparatus for detecting and imaging alpha particles sources is described. A dielectric coated high voltage electrode and a tungsten wire grid constitute a diode configuration discharge generator for electrons dislodged from atoms or molecules located in between these electrodes when struck by alpha particles from a source to be quantitatively or qualitatively analyzed. A thin polyester film window allows the alpha particles to pass into the gas enclosure and the combination of the glass electrode, grid and window is light transparent such that the details of the source which is imaged with high resolution and sensitivity by the sparks produced can be observed visually as well. The source can be viewed directly, electronically counted or integrated over time using photographic methods. A significant increase in sensitivity over other alpha particle detectors is observed, and the device has very low sensitivity to gamma or beta emissions which might otherwise appear as noise on the alpha particle signal.
1. Imaging alpha particle detector
DOEpatents
Anderson, David F.
1985-01-01
A method and apparatus for detecting and imaging alpha particles sources is described. A conducting coated high voltage electrode (1) and a tungsten wire grid (2) constitute a diode configuration discharge generator for electrons dislodged from atoms or molecules located in between these electrodes when struck by alpha particles from a source (3) to be quantitatively or qualitatively analyzed. A thin polyester film window (4) allows the alpha particles to pass into the gas enclosure and the combination of the glass electrode, grid and window is light transparent such that the details of the source which is imaged with high resolution and sensitivity by the sparks produced can be observed visually as well. The source can be viewed directly, electronically counted or integrated over time using photographic methods. A significant increase in sensitivity over other alpha particle detectors is observed, and the device has very low sensitivity to gamma or beta emissions which might otherwise appear as noise on the alpha particle signal.
2. General defocusing particle tracking.
PubMed
Barnkob, Rune; Kähler, Christian J; Rossi, Massimiliano
2015-09-01
A General Defocusing Particle Tracking (GDPT) method is proposed for tracking the three-dimensional motion of particles in Lab-on-a-chip systems based on a set of calibration images and the normalized cross-correlation function. In comparison with other single-camera defocusing particle-tracking techniques, GDPT possesses a series of key advantages: it is applicable to particle images of arbitrary shapes, it is intuitive and easy to use, it can be used without advanced knowledge of optics and velocimetry theory, it is robust against outliers and overlapping particle images, and it requires only equipment which is standard in microfluidic laboratories. We demonstrate the method by tracking the three-dimensional motion of 2 μm spherical particles in a microfluidic channel using three different optical arrangements. The position of the particles was measured with an estimated uncertainty of 0.1 μm in the in-plane direction and 2 μm in the depth direction for a measurement volume of 1510 × 1270 × 160 μm(3). A ready-to-use GUI implementation of the method can be acquired on . PMID:26201498
3. Review of Particle Physics
Amsler, C.; Doser, M.; Antonelli, M.; Asner, D. M.; Babu, K. S.; Baer, H.; Band, H. R.; Barnett, R. M.; Bergren, E.; Beringer, J.; Bernardi, G.; Bertl, W.; Bichsel, H.; Biebel, O.; Bloch, P.; Blucher, E.; Blusk, S.; Cahn, R. N.; Carena, M.; Caso, C.; Ceccucci, A.; Chakraborty, D.; Chen, M.-C.; Chivukula, R. S.; Cowan, G.; Dahl, O.; D'Ambrosio, G.; Damour, T.; de Gouvêa, A.; DeGrand, T.; Dobrescu, B.; Drees, M.; Edwards, D. A.; Eidelman, S.; Elvira, V. D.; Erler, J.; Ezhela, V. V.; Feng, J. L.; Fetscher, W.; Fields, B. D.; Foster, B.; Gaisser, T. K.; Garren, L.; Gerber, H.-J.; Gerbier, G.; Gherghetta, T.; Giudice, G. F.; Goodman, M.; Grab, C.; Gritsan, A. V.; Grivaz, J.-F.; Groom, D. E.; Grünewald, M.; Gurtu, A.; Gutsche, T.; Haber, H. E.; Hagiwara, K.; Hagmann, C.; Hayes, K. G.; Hernández-Rey, J. J.; Hikasa, K.; Hinchliffe, I.; Höcker, A.; Huston, J.; Igo-Kemenes, P.; Jackson, J. D.; Johnson, K. F.; Junk, T.; Karlen, D.; Kayser, B.; Kirkby, D.; Klein, S. R.; Knowles, I. G.; Kolda, C.; Kowalewski, R. V.; Kreitz, P.; Krusche, B.; Kuyanov, Yu. V.; Kwon, Y.; Lahav, O.; Langacker, P.; Liddle, A.; Ligeti, Z.; Lin, C.-J.; Liss, T. M.; Littenberg, L.; Liu, J. C.; Lugovsky, K. S.; Lugovsky, S. B.; Mahlke, H.; Mangano, M. L.; Mannel, T.; Manohar, A. V.; Marciano, W. J.; Martin, A. D.; Masoni, A.; Milstead, D.; Miquel, R.; Mönig, K.; Murayama, H.; Nakamura, K.; Narain, M.; Nason, P.; Navas, S.; Nevski, P.; Nir, Y.; Olive, K. A.; Pape, L.; Patrignani, C.; Peacock, J. A.; Piepke, A.; Punzi, G.; Quadt, A.; Raby, S.; Raffelt, G.; Ratcliff, B. N.; Renk, B.; Richardson, P.; Roesler, S.; Rolli, S.; Romaniouk, A.; Rosenberg, L. J.; Rosner, J. L.; Sachrajda, C. T.; Sakai, Y.; Sarkar, S.; Sauli, F.; Schneider, O.; Scott, D.; Seligman, W. G.; Shaevitz, M. H.; Sjöstrand, T.; Smith, J. G.; Smoot, G. F.; Spanier, S.; Spieler, H.; Stahl, A.; Stanev, T.; Stone, S. L.; Sumiyoshi, T.; Tanabashi, M.; Terning, J.; Titov, M.; Tkachenko, N. P.; Törnqvist, N. A.; Tovey, D.; Trilling, G. H.; Trippe, T. G.; Valencia, G.; van Bibber, K.; Vincter, M. G.; Vogel, P.; Ward, D. R.; Watari, T.; Webber, B. R.; Weiglein, G.; Wells, J. D.; Whalley, M.; Wheeler, A.; Wohl, C. G.; Wolfenstein, L.; Womersley, J.; Woody, C. L.; Workman, R. L.; Yamamoto, A.; Yao, W.-M.; Zenin, O. V.; Zhang, J.; Zhu, R.-Y.; Zyla, P. A.; Harper, G.; Lugovsky, V. S.; Schaffner, P.; Particle Data Group
2008-09-01
This biennial Review summarizes much of particle physics. Using data from previous editions, plus 2778 new measurements from 645 papers, we list, evaluate, and average measured properties of gauge bosons, leptons, quarks, mesons, and baryons. We also summarize searches for hypothetical particles such as Higgs bosons, heavy neutrinos, and supersymmetric particles. All the particle properties and search limits are listed in Summary Tables. We also give numerous tables, figures, formulae, and reviews of topics such as the Standard Model, particle detectors, probability, and statistics. Among the 108 reviews are many that are new or heavily revised including those on CKM quark-mixing matrix, V ud & V us, V cb & V ub, top quark, muon anomalous magnetic moment, extra dimensions, particle detectors, cosmic background radiation, dark matter, cosmological parameters, and big bang cosmology. A booklet is available containing the Summary Tables and abbreviated versions of some of the other sections of this full Review. All tables, listings, and reviews (and errata) are also available on the Particle Data Group website: http://pdg.lbl.gov.
4. Particle exposures and infections.
PubMed
Ghio, A J
2014-06-01
Particle exposures increase the risk for human infections. Particles can deposit in the nose, pharynx, larynx, trachea, bronchi, and distal lung and, accordingly, the respiratory tract is the system most frequently infected after such exposure; however, meningitis also occurs. Cigarette smoking, burning of biomass, dust storms, mining, agricultural work, environmental tobacco smoke (ETS), wood stoves, traffic-related emissions, gas stoves, and ambient air pollution are all particle-related exposures associated with an increased risk for respiratory infections. In addition, cigarette smoking, burning of biomass, dust storms, mining, and ETS can result in an elevated risk for tuberculosis, atypical mycobacterial infections, and meningitis. One of the mechanisms for particle-related infections includes an accumulation of iron by surface functional groups of particulate matter (PM). Since elevations in metal availability are common to every particle exposure, all PM potentially contributes to these infections. Therefore, exposures to wood stove emissions, diesel exhaust, and air pollution particles are predicted to increase the incidence and prevalence of tuberculosis, atypical mycobacterial infections, and meningitis, albeit these elevations are likely to be small and detectable only in large population studies. Since iron accumulation correlates with the presence of surface functional groups and dependent metal coordination by the PM, the risk for infection continues as long as the particle is retained. Subsequently, it is expected that the cessation of exposure will diminish, but not totally reverse, the elevated risk for infection.
5. Apparatus for measuring particle properties
DOEpatents
Rader, D.J.; Castaneda, J.N.; Grasser, T.W.; Brockmann, J.E.
1998-08-11
An apparatus is described for determining particle properties from detected light scattered by the particles. The apparatus uses a light beam with novel intensity characteristics to discriminate between particles that pass through the beam and those that pass through an edge of the beam. The apparatus can also discriminate between light scattered by one particle and light scattered by multiple particles. The particles size can be determined from the intensity of the light scattered. The particles velocity can be determined from the elapsed time between various intensities of the light scattered. 11 figs.
6. Charged particle accelerator grating
DOEpatents
Palmer, Robert B.
1986-01-01
A readily disposable and replaceable accelerator grating for a relativistic particle accelerator. The grating is formed for a plurality of liquid droplets that are directed in precisely positioned jet streams to periodically dispose rows of droplets along the borders of a predetermined particle beam path. A plurality of lasers are used to direct laser beams into the droplets, at predetermined angles, thereby to excite the droplets to support electromagnetic accelerating resonances on their surfaces. Those resonances operate to accelerate and focus particles moving along the beam path. As the droplets are distorted or destroyed by the incoming radiation, they are replaced at a predetermined frequency by other droplets supplied through the jet streams.
7. Biomimetic Particles as Therapeutics
PubMed Central
Green, Jordan J.
2015-01-01
In recent years, there have been major advances in the development of novel nanoparticle and microparticle-based therapeutics. An emerging paradigm is the incorporation of biomimetic features into these synthetic therapeutic constructs to enable them to better interface with biological systems. Through the control of size, shape, and material consistency, particle cores have been generated that better mimic natural cells and viruses. In addition, there have been significant advances in biomimetic surface functionalization of particles through the integration of bio-inspired artificial cell membranes and naturally derived cell membranes. Biomimetic technologies enable therapeutic particles to have increased potency to benefit human health. PMID:26277289
PubMed
Parker, R G
1985-05-01
Current interest in attempting to identify any therapeutic advantages of beams of heavy particles (heavier than electrons) over photons is based on differences in physical absorption and radiobiologic interactions. The article discusses: dose distributions in tissue, which are markedly different for particles than for high energy photons and so may be clinically advantageous for the former; differences in radiobiologic responses, which could lead to increased tumor cell killing and a possible increase in the therapeutic ratio for particles; clinical experience to date; directions for and impediments to future research. PMID:2983877
9. Charged particle accelerator grating
DOEpatents
Palmer, Robert B.
1986-09-02
A readily disposable and replaceable accelerator grating for a relativistic particle accelerator. The grating is formed for a plurality of liquid droplets that are directed in precisely positioned jet streams to periodically dispose rows of droplets along the borders of a predetermined particle beam path. A plurality of lasers are used to direct laser beams into the droplets, at predetermined angles, thereby to excite the droplets to support electromagnetic accelerating resonances on their surfaces. Those resonances operate to accelerate and focus particles moving along the beam path. As the droplets are distorted or destroyed by the incoming radiation, they are replaced at a predetermined frequency by other droplets supplied through the jet streams.
10. Particle separation by dielectrophoresis
PubMed Central
Gascoyne, Peter R. C.; Vykoukal, Jody
2009-01-01
The application of dielectrophoresis to particle discrimination, separation, and fractionation is reviewed, some advantages and disadvantages of currently available approaches are considered, and some caveats are noted. PMID:12210248
11. Particle Physics Masterclass
ScienceCinema
Helio Takai
2016-07-12
Students from six local high schools -- Farmingdale, Sachem East, Shoreham, Smithtown East, Ward Melville, and William Floyd -- came to Brookhaven National Laboratory to experience research with particle physicist Helio Takai. They were among more than 6,
12. JSC Particle Telescope
NASA Technical Reports Server (NTRS)
2003-01-01
This paper presents a detailed description of the Johnson Space Center's Particle Telescope. Schematic diagrams of the telescope geometry and an electronic block diagram of the detector telescopes' components are also described.
NASA Technical Reports Server (NTRS)
Chubb, Donald L.
1987-01-01
The performance of a new space radiator concept, the gas particle radiator (GPR), is studied. The GPR uses a gas containing submicron particles as the radiating medium contained between the radiator's emitting surface and a transparent window. For a modest volume fraction of submicron particles and gas thickness, it is found that the emissivity is determined by the window transmittance. The window must have a high transmittance in the infrared and be structurally strong enough to contain the gas-particle mixture. When the GPR is compared to a proposed titanium wall, potassium heat pipe radiator, with both radiators operating at a power level of 1.01 MW at 775 K, it is found that the GPR mass is 31 percent lower than that of the heat pipe radiator.
14. Accelerating Particles with Plasma
SciTech Connect
Litos, Michael; Hogan, Mark
2014-11-05
Researchers at SLAC explain how they use plasma wakefields to accelerate bunches of electrons to very high energies over only a short distance. Their experiments offer a possible path for the future of particle accelerators.
15. Unstable particles near threshold
Chway, Dongjin; Jung, Tae Hyun; Kim, Hyung Do
2016-07-01
We explore the physics of unstable particles when the mother particle's mass is approximately the sum of the masses of its daughter particles. In this case, the conventional wave function renormalization factor used for the narrow width approximation is ill-defined. We propose a simple resolution of the problem that allows the use of the narrow width approximation by defining the wave function renormalization factor and the branching ratio in terms of the spectral density. We test new definitions by calculating the cross section in the Higgs portal model and a significant improvement is obtained. Meanwhile, no single decay width can be assigned to the unstable particles and non-exponential decay occurs at all time scales.
16. The packing of particles
SciTech Connect
Cumberland, D.J.; Crawford, R.J.
1987-01-01
The wide range of information currently available on the packing of particles is brought together in this monograph. The authors' interest in the subject was initially aroused by the question of whether there is an optimum particle size distribution which would maximise the packing density of particles - a question which has attracted the interest of scientists and engineers for centuries. The densification of a powder mass is of relevance in a great many industries, among them the pharmaceutical, ceramic, powder metallurgy and civil engineering industries. In addition, the packing of regular - or irregular - shaped particles is also of relevance to a surprisingly large number of other industries and subject areas, i.e. the foundry industry, nuclear engineering, chemical engineering, crystallography, geology, biology, telecommunications, and so on. Accordingly, this book is written for a wide audience.
17. Magnetic Particle Imaging
SciTech Connect
Minard, Kevin R.
2010-02-01
Rapid advances in the synthesis of superparamagnetic nanoparticles has stimulated widespread interest in their use as contrast agents for visualizing biological processes with Magnetic Resonance Imaging (MRI). With this approach, strong particle magnetism alters the MRI signal from nearby water protons and this, in turn, affects observed image contrast. Magnetic particle detection with MRI is therefore indirect and suffers from several associated problems, including poor quantification and tissuedependent performance. Magnetic Particle Imaging (MPI) overcomes these by directly measuring the amount of superparamagnetic material at each location. Mass sensitivity, spatial resolution, and imaging time is also comparable to or better than that achieved with MRI. Moreover, MPI is relatively inexpensive, meets all current safety guidelines, is quantitative, provides unambiguous contrast with tissue-independent performance, and can detect lower particle concentrations. Here, the basic principles behind MPI are described, factors affecting sensitivity and resolution are discussed, and potential utility for biomedical use is examined.
18. Elementary particle physics
NASA Technical Reports Server (NTRS)
Perkins, D. H.
1986-01-01
Elementary particle physics is discussed. Status of the Standard Model of electroweak and strong interactions; phenomena beyond the Standard Model; new accelerator projects; and possible contributions from non-accelerator experiments are examined.
19. Electromagnetic particle simulation codes
NASA Technical Reports Server (NTRS)
Pritchett, P. L.
1985-01-01
Electromagnetic particle simulations solve the full set of Maxwell's equations. They thus include the effects of self-consistent electric and magnetic fields, magnetic induction, and electromagnetic radiation. The algorithms for an electromagnetic code which works directly with the electric and magnetic fields are described. The fields and current are separated into transverse and longitudinal components. The transverse E and B fields are integrated in time using a leapfrog scheme applied to the Fourier components. The particle pushing is performed via the relativistic Lorentz force equation for the particle momentum. As an example, simulation results are presented for the electron cyclotron maser instability which illustrate the importance of relativistic effects on the wave-particle resonance condition and on wave dispersion.
20. Particle Physics Masterclass
SciTech Connect
Helio Takai
2009-04-10
Students from six local high schools -- Farmingdale, Sachem East, Shoreham, Smithtown East, Ward Melville, and William Floyd -- came to Brookhaven National Laboratory to experience research with particle physicist Helio Takai. They were among more than 6,
1. Elementary particle theory
SciTech Connect
Marciano, W.J.
1984-12-01
The present state of the art in elementary particle theory is reviewed. Topics include quantum electrodynamics, weak interactions, electroweak unification, quantum chromodynamics, and grand unified theories. 113 references. (WHK)
2. Particle Size Analysis.
ERIC Educational Resources Information Center
Barth, Howard G.; Sun, Shao-Tang
1989-01-01
Presents a review of research focusing on scattering, elution techniques, electrozone sensing, filtration, centrifugation, comparison of techniques, data analysis, and particle size standards. The review covers the period 1986-1988. (MVL)
3. Research in particle theory
SciTech Connect
Mansouri, F.; Suranyi, P; Wijewardhana, L.C.R.
1991-10-01
In the test particle approximation, the scattering amplitude for two-particle scattering in (2+1)-dimensional Chern-Simons-Witten gravity and supergravity was computed and compared to the corresponding metric solutions. The formalism was then extended to the exact gauge theoretic treatment of the two-particle scattering problem and compared to 't Hooft's results from the metric approach. We have studied dynamical symmetry breaking in 2+1 dimensional field theories. We have analyzed strong Extended Technicolor (ETC) models where the ETC coupling is close to a critical value. There are effective scalar fields in each of the theories. We have worked our how such scalar particles can be produced and how they decay. The {phi}{sup 4} field theory was investigated in the Schrodinger representation. The critical behavior was extracted in an arbitrary number of dimensions in second order of a systematic truncation approximation. The correlation exponent agrees with known values within a few percent.
4. Particle chemistry impactor experiment
NASA Technical Reports Server (NTRS)
Pueschel, R. F.; Snetsinger, K. G.; Ferry, G. V.; Goodman, J. K.; Verma, S.
1990-01-01
Polar stratospheric cloud (PSC) particles are collected on impactors and studied with regard to physical and chemical properties to help explain the importance of heterogeneous chemical reactions for stratospheric ozone depletion. The nitric, hydrochloric, and sulfuric acid content of stratospheric aerosol particles collected at 18 km altitude was determined. It is suggested that nitric acid is a component of polar stratospheric clouds. This is important for two reasons: (1) it proves that chlorine activation takes place at the surface of PSC particles by converting chemically inert chlorine nitrate to chlorine radicals that can react with ozone; and (2) if the PSC particles are large enough to settle out from the stratosphere, the possibility of nitric acid removal can result in the denitrification of the stratosphere.
5. Fine particle pollution
Atmospheric Science Data Center
2013-01-10
... Satellites Track Human Exposure to Fine Particle Pollution St. Louis, Missouri Alaskan Wildfires ... provides a good test region for satellite observations of pollution. ( Full St. Louis article ) MISR ...
6. Big Bang Day: 5 Particles - 5. The Next Particle
ScienceCinema
None
2016-07-12
Simon Singh looks at the stories behind the discovery of 5 of the universe's most significant subatomic particles: the Electron, the Quark, the Anti-particle, the Neutrino and the "next particle". 5. The Next Particle The "sparticle" - a super symmetric partner to all the known particles could be the answer to uniting all the known particles and their interactions under one grand theoretical pattern of activity. But how do researchers know where to look for such phenomena and how do they know if they find them? Simon Singh reviews the next particle that physicists would like to find if the current particle theories are to ring true.
7. Big Bang Day: 5 Particles - 5. The Next Particle
SciTech Connect
2009-10-08
Simon Singh looks at the stories behind the discovery of 5 of the universe's most significant subatomic particles: the Electron, the Quark, the Anti-particle, the Neutrino and the "next particle". 5. The Next Particle The "sparticle" - a super symmetric partner to all the known particles could be the answer to uniting all the known particles and their interactions under one grand theoretical pattern of activity. But how do researchers know where to look for such phenomena and how do they know if they find them? Simon Singh reviews the next particle that physicists would like to find if the current particle theories are to ring true.
8. PARTICLES OF DIFFERENCE.
SciTech Connect
SCHWARTZ,S.E.
2000-09-21
It is no longer appropriate, if it ever was, to think of atmospheric aerosols as homogeneous spheres of uniform composition and size. Within the United States, and even more globally, not only the mass loading but also the composition, morphology, and size distribution of atmospheric aerosols are highly variable, as a function of location, and at a given location as a function of time. Particles of a given aerodynamic size may differ from one another, and even within individual particles material may be inhomogeneously distributed, as for example, carbon spherules imbedded in much larger sulfate particles. Some of the particulate matter is primary, that is, introduced into the atmosphere directly as particles, such as carbon particles in diesel exhaust. Some is secondary, that is, formed in the atmosphere by gas-to-particle conversion. Much of the material is inorganic, mainly sulfates and nitrates resulting mainly from energy-related emissions. Some of the material is carbonaceous, in part primary, in part secondary, and of this material some is anthropogenic and some biogenic. While the heterogeneity of atmospheric aerosols complicates the problem of understanding their loading and distribution, it may well be the key to its solution. By detailed examination of the materials comprising aerosols it is possible to infer the sources of these materials. It may be possible as well to identify specific health impairing agents. The heterogeneity of aerosol particles is thus the key to identifying their sources, to understanding the processes that govern their loading and properties, and to devising control strategies that are both effective and efficient. Future research must therefore take cognizance of differences among aerosol particles and use these differences to advantage.
9. Mass Formulae for Particles
Turu, Michi
2003-07-01
May we say?, the distribution of all particle masses are "Random" or "Chaos" or "Fractal" or "Bushing" as a whole. We can say perfectly, it is "Bushing". It's looks like a relationship among the masses of galaxy, sun, earth, moon, lunar orbiter. And also like the structure of contents(section, paragraph, item) in books. Generally, mass structures have the power of it's interaction constants. I state a fundamental formulae about particle masses in this purview.
10. The Least Particle Theory
Hartsock, Robert
2011-10-01
The Least Particle Theory states that the universe was cast as a great sea of energy. MaX Planck declared a quantum of energy to be the least value in the universe. We declare the quantum of energy to be the least particle in the universe. Stephen Hawking declared quantum mechanics to be of no value in todays gross mechanics. That's like saying the number 1 has no place in mathematics.
DOEpatents
Martin, Sue I.; Fergenson, David P.; Srivastava, Abneesh; Bogan, Michael J.; Riot, Vincent J.; Frank, Matthias
2010-08-24
A human-safe fluorescence particle that can be used for fluorescence detection instruments or act as a safe simulant for mimicking the fluorescence properties of microorganisms. The particle comprises a non-biological carrier and natural fluorophores encapsulated in the non-biological carrier. By doping biodegradable-polymer drug delivery microspheres with natural or synthetic fluorophores, the desired fluorescence can be attained or biological organisms can be simulated without the associated risks and logistical difficulties of live microorganisms.
12. ELEMENTARY PARTICLE INTERACTIONS
SciTech Connect
EFREMENKO, YURI; HANDLER, THOMAS; KAMYSHKOV, YURI; SIOPSIS, GEORGE; SPANIER, STEFAN
2013-07-30
The High-Energy Elementary Particle Interactions group at UT during the last three years worked on the following directions and projects: Collider-based Particle Physics; Neutrino Physics, particularly participation in “NOνA”, “Double Chooz”, and “KamLAND” neutrino experiments; and Theory, including Scattering amplitudes, Quark-gluon plasma; Holographic cosmology; Holographic superconductors; Charge density waves; Striped superconductors; and Holographic FFLO states.
13. Statistics of indistinguishable particles.
PubMed
Wittig, Curt
2009-07-01
The wave function of a system containing identical particles takes into account the relationship between a particle's intrinsic spin and its statistical property. Specifically, the exchange of two identical particles having odd-half-integer spin results in the wave function changing sign, whereas the exchange of two identical particles having integer spin is accompanied by no such sign change. This is embodied in a term (-1)(2s), which has the value +1 for integer s (bosons), and -1 for odd-half-integer s (fermions), where s is the particle spin. All of this is well-known. In the nonrelativistic limit, a detailed consideration of the exchange of two identical particles shows that exchange is accompanied by a 2pi reorientation that yields the (-1)(2s) term. The same bookkeeping is applicable to the relativistic case described by the proper orthochronous Lorentz group, because any proper orthochronous Lorentz transformation can be expressed as the product of spatial rotations and a boost along the direction of motion. PMID:19552474
14. On Characterizing Particle Shape
NASA Technical Reports Server (NTRS)
Ennis, Bryan J.; Rickman, Douglas; Rollins, A. Brent; Ennis, Brandon
2014-01-01
It is well known that particle shape affects flow characteristics of granular materials, as well as a variety of other solids processing issues such as compaction, rheology, filtration and other two-phase flow problems. The impact of shape crosses many diverse and commercially important applications, including pharmaceuticals, civil engineering, metallurgy, health, and food processing. Two applications studied here include the dry solids flow of lunar simulants (e.g. JSC-1, NU-LHT-2M, OB-1), and the flow properties of wet concrete, including final compressive strength. A multi-dimensional generalized, engineering method to quantitatively characterize particle shapes has been developed, applicable to both single particle orientation and multi-particle assemblies. The two-dimension, three dimension inversion problem is also treated, and the application of these methods to DEM model particles will be discussed. In the case of lunar simulants, flow properties of six lunar simulants have been measured, and the impact of particle shape on flowability - as characterized by the shape method developed here -- is discussed, especially in the context of three simulants of similar size range. In the context of concrete processing, concrete construction is a major contributor to greenhouse gas production, of which the major contributor is cement binding loading. Any optimization in concrete rheology and packing that can reduce cement loading and improve strength loading can also reduce currently required construction safety factors. The characterization approach here is also demonstrated for the impact of rock aggregate shape on concrete slump rheology and dry compressive strength.
15. Mutagenicity of airborne particles.
PubMed
Chrisp, C E; Fisher, G L
1980-09-01
The physical and chemical properties of airborne particles are important for the interpretation of their potential biologic significance as genotoxic hazards. For polydisperse particle size distributions, the smallest, most respirable particles are generally the most mutagenic. Particulate collection for testing purposes should be designed to reduce artifact formation and allow condensation of mutagenic compounds. Other critical factors such as UV irradiation, wind direction, chemical reactivity, humidity, sample storage, and temperature of combustion are important. Application of chemical extraction methods and subsequent class fractionation techniques influence the observed mutagenic activity. Particles from urban air, coal fly ash, automobile and diesel exhaust, agricultural burning and welding fumes contain primarily direct-acting mutagens. Cigarette smoke condensate, smoke from charred meat and protein pyrolysates, kerosene soot and cigarette smoke condensates contain primarily mutagens which require metabolic activation. Fractionation coupled with mutagenicity testing indicates that the most potent mutagens are found in the acidic fractions of urban air, coal fly ash, and automobile diesel exhaust, whereas mutagens in rice straw smoke and cigarette smoke condensate are found primarily in the basic fractions. The interaction of the many chemical compounds in complex mixtures from airborne particles is likely to be important in determining mutagenic or comutagenic potentials. Because the mode of exposure is generally frequent and prolonged, the presence of tumor-promoting agents in complex mixtures may be a major factor in evaluation of the carcinogenic potential of airborne particles.
16. Particle Accelerators Test Cosmological Theory.
ERIC Educational Resources Information Center
Schramm, David N.; Steigman, Gary
1988-01-01
Discusses the symbiotic relationship of cosmology and elementary-particle physics. Presents a brief overview of particle physics. Explains how cosmological considerations set limits on the number of types of elementary particles. (RT)
17. Perspectives on utilizing unique features of microfluidics technology for particle and cell sorting
PubMed Central
Adams, Jonathan D.; Tom Soh, H.
2009-01-01
Sample preparation is often the most tedious and demanding step in an assay, but it also plays an essential role in determining the quality of results. As biological questions and analytical methods become increasingly sophisticated, there is a rapidly growing need for systems that can reliably and reproducibly separate cells and particles with high purity, throughput and recovery. Microfluidics technology represents a compelling approach in this regard, allowing precise control of separation forces for high performance separation in inexpensive, or even disposable, devices. In addition, microfluidics technology enables the fabrication of arrayed and integrated systems that operate either in parallel or in tandem, in a capacity that would be difficult to achieve in macro-scale systems. In this report, we use recent examples from our work to illustrate the potential of microfluidic cell- and particle-sorting devices. We demonstrate the potential of chip-based high-gradient magnetophoresis that enable high-purity separation through reversible trapping of target particles paired with high-stringency washing with minimal loss. We also describe our work in the development of devices that perform simultaneous multi-target sorting, either through precise control of magnetic and fluidic forces or through the integration of multiple actuation forces into a single monolithic device. We believe that such devices may serve as a powerful “front-end” module of highly integrated analytical platforms capable of providing actionable diagnostic information directly from crude, unprocessed samples - the success of such systems may hold the key to advancing point-of-care diagnostics and personalized medicine. PMID:20161387
18. Particle acceleration in solar flares
NASA Technical Reports Server (NTRS)
Ramaty, R.; Forman, M. A.
1987-01-01
The most direct signatures of particle acceleration in flares are energetic particles detected in interplanetary space and in the Earth atmosphere, and gamma rays, neutrons, hard X-rays, and radio emissions produced by the energetic particles in the solar atmosphere. The stochastic and shock acceleration theories in flares are reviewed and the implications of observations on particle energy spectra, particle confinement and escape, multiple acceleration phases, particle anistropies, and solar atmospheric abundances are discussed.
19. Distributed control of multi-robot teams: Cooperative baton passing task
SciTech Connect
Parker, L.E.
1998-11-01
This research addresses the problem of achieving fault tolerant cooperation within small- to medium-sized teams of heterogeneous mobile robots. The author describes a novel behavior-based, fully distributed architecture, called ALLIANCE, that utilizes adaptive action selection to achieve fault tolerant cooperative control. The robots in this architecture possess a variety of high-level functions that they can perform during a mission, and must at all times select an appropriate action based on the requirements of the mission, the activities of other robots, the current environmental conditions, and their own internal states. Since such cooperative teams often work in dynamic and unpredictable environments, the software architecture allows the team members to respond robustly and reliably to unexpected environmental changes and modifications in the robot team that may occur due to mechanical failure, the learning of new skills, or the addition or removal of robots from the team by human intervention. After presenting ALLIANCE, they describe the implementation of this architecture on a team of physical mobile robots performing a cooperative baton passing task. These experiments illustrate the ability of ALLIANCE to achieve adaptive, fault-tolerant cooperative control amidst dynamic changes during the task.
20. Evolution of Signaling in a Multi-Robot System: Categorization and Communication
Ampatzis, Christos; Tuci, Elio; Trianni, Vito; Dorigo, Marco
We use Evolutionary Robotics to design robot controllers in which decision-making mechanisms to switch from solitary to social behavior are integrated with the mechanisms that underpin the sensory-motor repertoire of the robots. In particular, we study the evolution of behavioral and communicative skills in a categorization task. The individual decision-making structures are based on the integration over time of sensory information. The mechanisms for switching from solitary to social behavior and the ways in which the robots can affect each other's behavior are not predetermined by the experimenter, but are aspects of our model designed by artificial evolution. Our results show that evolved robots manage to cooperate and collectively discriminate between different environments by developing a simple communication protocol based on sound signaling. Communication emerges in the absence of explicit selective pressure coded in the fitness function. The evolution of communication is neither trivial nor obvious; for a meaningful signaling system to evolve, evolution must produce both appropriate signals and appropriate reactions to signals. The use of communication proves to be adaptive for the group, even if, in principle, non-cooperating robots can be equally successful with cooperating robots.
1. Multi-robots to micro-surgery: Selected robotic applications at Sandia National Laboratories
SciTech Connect
Bennett, P.C.
1996-11-01
The Intelligent Systems and Robotics Center (ISRC) at Sandia National Laboratories is a multi-program organization, pursuing research, development and applications in a wide range of field. Activities range from large-scale applications such as nuclear facility dismantlement for the US Department of Energy (DOE), to aircraft inspection and refurbishment, to automated script and program generation for robotic manufacturing and assembly, to miniature robotic devices and sensors for remote sensing and micro-surgery. This paper describes six activities in the large and small scale that are underway and either nearing technology transfer stage or seeking industrial partners to continue application development. The topics of the applications include multiple arm coordination for intuitively maneuvering large, ungainly work pieces; simulation, analysis and graphical training capability for CP-5 research reactor dismantlement; miniature robots with volumes of 16 cubic centimeters and less developed for inspection and sensor deployment; and biomedical sensors to enhance automated prosthetic device production and fill laparoscopic surgery information gap.
2. Task definition, decoupling and redundancy resolution by nonlinear feedback in multi-robot object handling
NASA Technical Reports Server (NTRS)
Ramadorai, A. K.; Tarn, T. J.; Bejczy, A. K.
1992-01-01
The problem of rigid object handling by multiple robot arms is investigated. The primary goal is to make the object exhibit a prescribed behavior while in contact with a fully known environment. Point contacts are assumed between the object and the arms. The aspect of task definition to achieve decoupling and linearizing control laws is discussed. Control laws are first formulated at the object level to provide decoupled force and position servo loops. It is then used to form control laws for the individual arms. Redundancies exist at the object and arm levels. The object level redundancy is used to achieve secondary goals in object handling. The arm level redundancies are the zero dynamics and can be controlled by redundant inputs. Full use of the available inputs are used to control the system as a whole. Numerical simulations for a dual-arm situation illustrate the validity of the approach.
3. Market-Based Coordination and Auditing Mechanisms for Self-Interested Multi-Robot Systems
ERIC Educational Resources Information Center
Ham, MyungJoo
2009-01-01
We propose market-based coordinated task allocation mechanisms, which allocate complex tasks that require synchronized and collaborated services of multiple robot agents to robot agents, and an auditing mechanism, which ensures proper behaviors of robot agents by verifying inter-agent activities, for self-interested, fully-distributed, and…
4. Distributed multi-robot sensing and tracking: a behavior-based approach
SciTech Connect
Parker, L.E.
1995-12-31
An important issue that arises in the automation of many large-scale surveillance and reconnaissance tasks is that of tracking the movements of (or maintaining passive contact with) objects navigating in a bounded area of interest. Oftentimes in these problems, the area to be monitored will move over time or will not permit fixed sensors, thus requiring a team of mobile sensors -- or robots -- to monitor the area collectively. In these situations, the robots must not only have mechanisms for determining how to track objects and how to fuse information from neighboring robots, but they must also have distributed control strategies for ensuring that the entire area of interest is continually covered to the greatest extent possible. This paper focuses on the distributed control issue by describing a proposed decentralized control mechanism that allows a team of robots to collectively track and monitor objects in an uncluttered area of interest. The approach is based upon an extension to the ALLIANCE behavior-based architecture that generalizes from the domain of loosely-coupled, independent applications to the domain of strongly cooperative applications, in which the action selection of a robot is dependent upon the actions selected by its teammates. We conclude the paper by describing our ongoing implementation of the proposed approach on a team of four mobile robots.
5. Formation control of multi-robots for on-orbit assembly of large solar sails
Hu, Quan; Zhang, Yao; Zhang, Jingrui; Hu, Haiyan
2016-06-01
This study focuses on the formation control of four robots used for the on-orbit construction of a large solar sail. The solar sail under consideration is non-spinning and has a 1 km2 area. It includes a hub as the central body and four large booms supporting the lightweight films. Four formation operating space robots capable of walking on the boom structure are utilized to deploy the sail films. Because of the large size and mass of the sail, the robots should remain in formation during the sail deployment to avoid dramatic changes in the system properties. In this paper, the formation control issue of the four robots is solved by an adaptive sliding mode controller. A disturbance observer with finite-time convergence is embedded to improve the control performance. The proposed controller is capable of resisting the strong uncertainties in the operation and do not require the accurate parameters of the system. The stability is proven, and numerical simulations are provided to validate the effectiveness of the control strategy.
6. Proton: The Particle
SciTech Connect
Suit, Herman
2013-11-01
The purpose of this article is to review briefly the nature of protons: creation at the Big Bang, abundance, physical characteristics, internal components, and life span. Several particle discoveries by proton as the experimental tool are considered. Protons play important roles in science, medicine, and industry. This article was prompted by my experience in the curative treatment of cancer patients by protons and my interest in the nature of protons as particles. The latter has been stimulated by many discussions with particle physicists and reading related books and journals. Protons in our universe number ≈10{sup 80}. Protons were created at 10{sup −6} –1 second after the Big Bang at ≈1.37 × 10{sup 10} years beforethe present. Proton life span has been experimentally determined to be ≥10{sup 34} years; that is, the age of the universe is 10{sup −24}th of the minimum life span of a proton. The abundance of the elements is hydrogen, ≈74%; helium, ≈24%; and heavier atoms, ≈2%. Accordingly, protons are the dominant baryonic subatomic particle in the universe because ≈87% are protons. They are in each atom in our universe and thus involved in virtually every activity of matter in the visible universe, including life on our planet. Protons were discovered in 1919. In 1968, they were determined to be composed of even smaller particles, principally quarks and gluons. Protons have been the experimental tool in the discoveries of quarks (charm, bottom, and top), bosons (W{sup +}, W{sup −}, Z{sup 0}, and Higgs), antiprotons, and antineutrons. Industrial applications of protons are numerous and important. Additionally, protons are well appreciated in medicine for their role in radiation oncology and in magnetic resonance imaging. Protons are the dominant baryonic subatomic particle in the visible universe, comprising ≈87% of the particle mass. They are present in each atom of our universe and thus a participant in every activity involving matter.
7. Particle-Charge Spectrometer
NASA Technical Reports Server (NTRS)
Fuerstenau, Stephen; Wilson, Gregory R.
2008-01-01
An instrument for rapidly measuring the electric charges and sizes (from approximately 1 to approximately 100 micrometers) of airborne particles is undergoing development. Conceived for monitoring atmospheric dust particles on Mars, instruments like this one could also be used on Earth to monitor natural and artificial aerosols in diverse indoor and outdoor settings for example, volcanic regions, clean rooms, powder-processing machinery, and spray-coating facilities. The instrument incorporates a commercially available, low-noise, ultrasensitive charge-sensing preamplifier circuit. The input terminal of this circuit--the gate of a field-effect transistor--is connected to a Faraday-cage cylindrical electrode. The charged particles of interest are suspended in air or other suitable gas that is made to flow along the axis of the cylindrical electrode without touching the electrode. The flow can be channeled and generated by any of several alternative means; in the prototype of this instrument, the gas is drawn along a glass capillary tube (see upper part of figure) coaxial with the electrode. The size of a particle affects its rate of acceleration in the flow and thus affects the timing and shape of the corresponding signal peak generated by the charge-sensing amplifier. The charge affects the magnitude (and thus also the shape) of the signal peak. Thus, the signal peak (see figure) conveys information on both the size and electric charge of a sensed particle. In experiments thus far, the instrument has been found to be capable of measuring individual aerosol particle charges of magnitude greater than 350 e (where e is the fundamental unit of electric charge) with a precision of +/- 150 e. The instrument can sample particles at a rate as high as several thousand per second.
8. Proton: the particle.
PubMed
Suit, Herman
2013-11-01
The purpose of this article is to review briefly the nature of protons: creation at the Big Bang, abundance, physical characteristics, internal components, and life span. Several particle discoveries by proton as the experimental tool are considered. Protons play important roles in science, medicine, and industry. This article was prompted by my experience in the curative treatment of cancer patients by protons and my interest in the nature of protons as particles. The latter has been stimulated by many discussions with particle physicists and reading related books and journals. Protons in our universe number ≈10(80). Protons were created at 10(-6) -1 second after the Big Bang at ≈1.37 × 10(10) years beforethe present. Proton life span has been experimentally determined to be ≥10(34) years; that is, the age of the universe is 10(-24)th of the minimum life span of a proton. The abundance of the elements is hydrogen, ≈74%; helium, ≈24%; and heavier atoms, ≈2%. Accordingly, protons are the dominant baryonic subatomic particle in the universe because ≈87% are protons. They are in each atom in our universe and thus involved in virtually every activity of matter in the visible universe, including life on our planet. Protons were discovered in 1919. In 1968, they were determined to be composed of even smaller particles, principally quarks and gluons. Protons have been the experimental tool in the discoveries of quarks (charm, bottom, and top), bosons (W(+), W(-), Z(0), and Higgs), antiprotons, and antineutrons. Industrial applications of protons are numerous and important. Additionally, protons are well appreciated in medicine for their role in radiation oncology and in magnetic resonance imaging. Protons are the dominant baryonic subatomic particle in the visible universe, comprising ≈87% of the particle mass. They are present in each atom of our universe and thus a participant in every activity involving matter.
9. Proton: the particle.
PubMed
Suit, Herman
2013-11-01
The purpose of this article is to review briefly the nature of protons: creation at the Big Bang, abundance, physical characteristics, internal components, and life span. Several particle discoveries by proton as the experimental tool are considered. Protons play important roles in science, medicine, and industry. This article was prompted by my experience in the curative treatment of cancer patients by protons and my interest in the nature of protons as particles. The latter has been stimulated by many discussions with particle physicists and reading related books and journals. Protons in our universe number ≈10(80). Protons were created at 10(-6) -1 second after the Big Bang at ≈1.37 × 10(10) years beforethe present. Proton life span has been experimentally determined to be ≥10(34) years; that is, the age of the universe is 10(-24)th of the minimum life span of a proton. The abundance of the elements is hydrogen, ≈74%; helium, ≈24%; and heavier atoms, ≈2%. Accordingly, protons are the dominant baryonic subatomic particle in the universe because ≈87% are protons. They are in each atom in our universe and thus involved in virtually every activity of matter in the visible universe, including life on our planet. Protons were discovered in 1919. In 1968, they were determined to be composed of even smaller particles, principally quarks and gluons. Protons have been the experimental tool in the discoveries of quarks (charm, bottom, and top), bosons (W(+), W(-), Z(0), and Higgs), antiprotons, and antineutrons. Industrial applications of protons are numerous and important. Additionally, protons are well appreciated in medicine for their role in radiation oncology and in magnetic resonance imaging. Protons are the dominant baryonic subatomic particle in the visible universe, comprising ≈87% of the particle mass. They are present in each atom of our universe and thus a participant in every activity involving matter. PMID:24074929
10. Particles causing lung disease.
PubMed Central
Kilburn, K H
1984-01-01
The lung has a limited number of patterns of reaction to inhaled particles. The disease observed depends upon the location: conducting airways, terminal bronchioles and alveoli, and upon the nature of inflammation induced: acute, subacute or chronic. Many different agents cause narrowing of conducting airways (asthma) and some of these cause permanent distortion or obliteration of airways as well. Terminal bronchioles appear to be particularly susceptible to particles which cause goblet cell metaplasia, mucous plugging and ultimately peribronchiolar fibrosis. Cancer is the last outcome at the bronchial level and appears to depend upon continuous exposure to or retention of an agent in the airway and failure of the affected cells to be exfoliated which may be due to squamous metaplasia. Alveoli are populated by endothelial cells, Type I or pavement epithelial cells and metabolically active cuboidal Type II cells that produce the lungs specific surfactant, dipalmytol lecithin. Disturbances of surfactant lead to edema in distal lung while laryngeal edema due to anaphylaxis or fumes may produce asphyxia. Physical retention of indigestible particles or retention by immune memory responses may provoke hyaline membranes, stimulate alveolar lipoproteinosis and finally fibrosis. This later exuberant deposition of connective tissue has been best studied in the occupational pneumoconioses especially silicosis and asbestosis. In contrast emphysema a catabolic response, appears frequently to result from leakage or release of lysosomal proteases into the lung during processing of cigarette smoke particles. The insidious and probably most important human lung disease due to particles is bronchiolar obstruction and obliteration, producing progressive impairment of air flow. The responsible particle is the complex combination of poorly digestive lipids and complex carbohydrates with active chemicals which we call cigarette smoke. More research is needed to perfect, correct and
11. Review of Particle Physics
Olive, K. A.; Particle Data Group; et al.
2016-10-01
The Review summarizes much of particle physics and cosmology. Using data from previous editions, plus 3,062 new measurements from 721 papers, we list, evaluate, and average measured properties of gauge bosons and the recently discovered Higgs boson, leptons, quarks, mesons, and baryons. We summarize searches for hypothetical particles such as supersymmetric particles, heavy bosons, axions, dark photons, etc. All the particle properties and search limits are listed in Summary Tables. We also give numerous tables, figures, formulae, and reviews of topics such as Higgs Boson Physics, Supersymmetry, Grand Unified Theories, Neutrino Mixing, Dark Energy, Dark Matter, Cosmology, Particle Detectors, Colliders, Probability and Statistics. Among the 117 reviews are many that are new or heavily revised, including those on Pentaquarks and Inflation. The complete Review is published online in a journal and on the website of the Particle Data Group (http://pdg.lbl.gov). The printed PDG Book contains the Summary Tables and all review articles but no longer includes the detailed tables from the Particle Listings. A Booklet with the Summary Tables and abbreviated versions of some of the review articles is also available. Contents Abstract, Contributors, Highlights and Table of ContentsAcrobat PDF (150 KB) IntroductionAcrobat PDF (456 KB) Particle Physics Summary Tables Gauge and Higgs bosonsAcrobat PDF (155 KB) LeptonsAcrobat PDF (134 KB) QuarksAcrobat PDF (84 KB) MesonsAcrobat PDF (871 KB) BaryonsAcrobat PDF (300 KB) Searches (Supersymmetry, Compositeness, etc.)Acrobat PDF (91 KB) Tests of conservation lawsAcrobat PDF (330 KB) Reviews, Tables, and Plots Detailed contents for this sectionAcrobat PDF (37 KB) Constants, Units, Atomic and Nuclear PropertiesAcrobat PDF (278 KB) Standard Model and Related TopicsAcrobat PDF (7.3 MB) Astrophysics and CosmologyAcrobat PDF (2.7 MB) Experimental Methods and CollidersAcrobat PDF (3.8 MB) Mathematical Tools or Statistics, Monte Carlo, Group
12. Lorentz force particle analyzer
Wang, Xiaodong; Thess, André; Moreau, René; Tan, Yanqing; Dai, Shangjun; Tao, Zhen; Yang, Wenzhi; Wang, Bo
2016-07-01
A new contactless technique is presented for the detection of micron-sized insulating particles in the flow of an electrically conducting fluid. A transverse magnetic field brakes this flow and tends to become entrained in the flow direction by a Lorentz force, whose reaction force on the magnetic-field-generating system can be measured. The presence of insulating particles suspended in the fluid produce changes in this Lorentz force, generating pulses in it; these pulses enable the particles to be counted and sized. A two-dimensional numerical model that employs a moving mesh method demonstrates the measurement principle when such a particle is present. Two prototypes and a three-dimensional numerical model are used to demonstrate the feasibility of a Lorentz force particle analyzer (LFPA). The findings of this study conclude that such an LFPA, which offers contactless and on-line quantitative measurements, can be applied to an extensive range of applications. These applications include measurements of the cleanliness of high-temperature and aggressive molten metal, such as aluminum and steel alloys, and the clean manufacturing of semiconductors.
13. Plasma Particle Lofting
Heijmans, Lucas; Nijdam, Sander
2015-09-01
In plasma particle lofting, macroscopic particles are picked up from a surface by an electric force. This force originates from a plasma that charges both the surface and any particle on it, leading to an electric force that pushes particles off the surface. This process has been suggested as a novel cleaning technique in modern high-tech applications, because it has intrinsic advantages over more traditional methods. Its development is, however, limited by a lack of knowledge of the underlying physics. Although the lofting has been demonstrated before, there are neither numerical nor experimental quantitative measures of it. Especially determining the charge deposited by a plasma on a particle on a surface proves difficult. We have developed a novel experimental method using a probe force.'' This allows us to, for the first time, quantitatively measure the plasma lofting force. By applying this method to different plasma conditions we can identify the important plasma parameters, allowing us to tailor a plasma for specific cleaning applications. Additionally, the quantitative result can help in the development of new models for the electron and ion currents through a plasma sheath.
14. New particle searches
SciTech Connect
Derrick, M.
1985-01-01
The Standard Model is a remarkable result of decades of work in particle physics, but it is clearly an incomplete representation of the world. Exploring possibilities beyond the Standard Model is a major preoccupation of both theorists and experimentalists. Despite the many suggestions that are extant about the missing links within the Standard Model as well as extensions beyond it, no hard experimental evidence exists. In particular, in more than five years of experimentation both at PETRA and PEP no new particles have been found that would indicate new physics. Several reasons are possible for these negative results: the particles may be too heavy; the experiments may not be looking in the proper way; the cross sections may be too small or finally the particles may not exist. A continuing PEP program, at high luminosity will ensure that the second and third reason continue to be addressed. The higher energy e/sup +/e/sup -/ storage rings such as TRISTAN and LEP will extend the mass limits. High mass particles can also be produced at the CERN collider and soon with the Tevatron collider. A concise summary of the mass limits from the PETRA experiments has been given in a recent Mark J publication. The results shown provide a convenient yardstick against which to measure future search experiments.
15. Large Particle Titanate Sorbents
SciTech Connect
Taylor-Pashow, K.
2015-10-08
This research project was aimed at developing a synthesis technique for producing large particle size monosodium titanate (MST) to benefit high level waste (HLW) processing at the Savannah River Site (SRS). Two applications were targeted, first increasing the size of the powdered MST used in batch contact processing to improve the filtration performance of the material, and second preparing a form of MST suitable for deployment in a column configuration. Increasing the particle size should lead to improvements in filtration flux, and decreased frequency of filter cleaning leading to improved throughput. Deployment of MST in a column configuration would allow for movement from a batch process to a more continuous process. Modifications to the typical MST synthesis led to an increase in the average particle size. Filtration testing on dead-end filters showed improved filtration rates with the larger particle material; however, no improvement in filtration rate was realized on a crossflow filter. In order to produce materials suitable for column deployment several approaches were examined. First, attempts were made to coat zirconium oxide microspheres (196 µm) with a layer of MST. This proved largely unsuccessful. An alternate approach was then taken synthesizing a porous monolith of MST which could be used as a column. Several parameters were tested, and conditions were found that were able to produce a continuous structure versus an agglomeration of particles. This monolith material showed Sr uptake comparable to that of previously evaluated samples of engineered MST in batch contact testing.
16. Particle physics and cosmology
SciTech Connect
Kolb, E.W.
1986-10-01
This series of lectures is about the role of particle physics in physical processes that occurred in the very early stages of the bug gang. Of particular interest is the role of particle physics in determining the evolution of the early Universe, and the effect of particle physics on the present structure of the Universe. The use of the big bang as a laboratory for placing limits on new particle physics theories will also be discussed. Section 1 reviews the standard cosmology, including primordial nucleosynthesis. Section 2 reviews the decoupling of weakly interacting particles in the early Universe, and discusses neutrino cosmology and the resulting limits that may be placed on the mass and lifetime of massive neutrinos. Section 3 discusses the evolution of the vacuum through phase transitions in the early Universe and the formation of topological defects in the transitions. Section 4 covers recent work on the generation of the baryon asymmetry by baryon-number violating reactions in Grand Unified Theories, and mentions some recent work on baryon number violation effects at the electroweak transition. Section 5 is devoted to theories of cosmic inflation. Finally, Section 6 is a discussion of the role of extra spatial dimensions in the evolution of the early Universe. 78 refs., 32 figs., 6 tabs.
17. Cosmology and Particle Physics
Steigman, G.
1982-01-01
The cosmic connections between physics on the very largest and very smallest scales are reviewed with an emphasis on the symbiotic relation between elementary particle physics and cosmology. After a review of the early Universe as a cosmic accelerator, various cosmological and astrophysical constraints on models of particle physics are outlined. To illustrate this approach to particle physics via cosmology, reference is made to several areas of current research: baryon non-conservation and baryon asymmetry; free quarks, heavy hadrons and other exotic relics; primordial nucleosynthesis and neutrino masses. In the last few years we have witnessed the birth and growth to healthy adolescence of a new collaboration between astrophysicists and particle physicists. The most notable success of this cooperative effort has been to provide the framework for understanding, within the context of GUTs and the hot big-bang cosmology, the universal baryon asymmetry. The most exciting new predictions this effort has spawned are that exotic relics may exist in detectable abundances. In particular, we may live in a neutrino-dominated Universe. In the next few years, accummulating laboratory data (for example proton decay, neutrino masses and oscillations) coupled with theoritical work in particle physics and cosmology will ensure the growth to maturity of this joint effort.
18. RESONATOR PARTICLE SEPARATOR
DOEpatents
Blewett, J.P.
1962-01-01
A wave guide resonator structure is described for use in separating particles of equal momentum but differing in mass and having energies exceeding one billion electron volts. The particles are those of sub-atomic size and are generally produced as a result of the bombardment of a target by a beam such as protons produced in a high-energy accelerator. In this wave guide construction, the particles undergo preferential deflection as a result of the presence of an electric field. The boundary conditions established in the resonator are such as to eliminate an interfering magnetic component, and to otherwise phase the electric field to obtain a traveling wave such as one which moves at the same speed as the unwanted particle. The latter undergoes continuous deflection over the whole length of the device and is, therefore, eliminated while the wanted particle is deflected in opposite directions over the length of the resonator and is thus able to enter an exit aperture. (AEC)
19. A relationship between maximum packing of particles and particle size
NASA Technical Reports Server (NTRS)
Fedors, R. F.
1979-01-01
Experimental data indicate that the volume fraction of particles in a packed bed (i.e. maximum packing) depends on particle size. One explanation for this is based on the idea that particle adhesion is the primary factor. In this paper, however, it is shown that entrainment and immobilization of liquid by the particles can also account for the facts.
20. Big Bang Day: 5 Particles - 3. The Anti-particle
SciTech Connect
2009-10-07
Simon Singh looks at the stories behind the discovery of 5 of the universe's most significant subatomic particles: the Electron, the Quark, the Anti-particle, the Neutrino and the "next particle". 3. The Anti-particle. It appears to be the stuff of science fiction. Associated with every elementary particle is an antiparticle which has the same mass and opposite charge. Should the two meet and combine, the result is annihilation - and a flash of light. Thanks to mysterious processes that occurred after the Big Bang there are a vastly greater number of particles than anti-particles. So how could their elusive existence be proved? At CERN particle physicists are crashing together subatomic particles at incredibly high speeds to create antimatter, which they hope will finally reveal what happened at the precise moment of the Big Bang to create the repertoire of elementary particles and antiparticles in existence today.
1. Big Bang Day: 5 Particles - 3. The Anti-particle
ScienceCinema
None
2016-07-12
Simon Singh looks at the stories behind the discovery of 5 of the universe's most significant subatomic particles: the Electron, the Quark, the Anti-particle, the Neutrino and the "next particle". 3. The Anti-particle. It appears to be the stuff of science fiction. Associated with every elementary particle is an antiparticle which has the same mass and opposite charge. Should the two meet and combine, the result is annihilation - and a flash of light. Thanks to mysterious processes that occurred after the Big Bang there are a vastly greater number of particles than anti-particles. So how could their elusive existence be proved? At CERN particle physicists are crashing together subatomic particles at incredibly high speeds to create antimatter, which they hope will finally reveal what happened at the precise moment of the Big Bang to create the repertoire of elementary particles and antiparticles in existence today.
2. Carbon-particle generator
DOEpatents
Hunt, A.J.
1982-09-29
A method and apparatus whereby small carbon particles are made by pyrolysis of a mixture of acetylene carried in argon. The mixture is injected through a nozzle into a heated tube. A small amount of air is added to the mixture. In order to prevent carbon build-up at the nozzle, the nozzle tip is externally cooled. The tube is also elongated sufficiently to assure efficient pyrolysis at the desired flow rates. A key feature of the method is that the acetylene and argon, for example, are premixed in a dilute ratio, and such mixture is injected while cool to minimize the agglomeration of the particles, which produces carbon particles with desired optical properties for use as a solar radiant heat absorber.
3. Biological particle identification apparatus
DOEpatents
Salzman, Gary C.; Gregg, Charles T.; Grace, W. Kevin; Hiebert, Richard D.
1989-01-01
An apparatus and method for making multiparameter light scattering measurements from suspensions of biological particles is described. Fourteen of the sixteen Mueller matrix elements describing the particles under investigation can be substantially individually determined as a function of scattering angle and probing radiations wavelength, eight elements simultaneously for each of two apparatus configurations using an apparatus which incluees, in its simplest form, two polarization modulators each operating at a chosen frequency, one polarizer, a source of monochromatic electromagnetic radiation, a detector sensitive to the wavelength of radiation employed, eight phase-sensitive detectors, and appropriate electronics. A database of known biological particle suspensions can be assembled, and unknown samples can be quickly identified once measurements are performed on it according to the teachings of the subject invention, and a comparison is made with the database.
4. Charged particle accelerator grating
DOEpatents
Palmer, R.B.
1985-09-09
A readily disposable and replaceable accelerator grating for a relativistic particle accelerator is described. The grating is formed for a plurality of liquid droplets that are directed in precisely positioned jet streams to periodically dispose rows of droplets along the borders of a predetermined particle beam path. A plurality of lasers are used to direct laser beams onto the droplets, at predetermined angles, thereby to excite the droplets to support electromagnetic accelerating resonances on their surfaces. Those resonances operate to accelerate and focus particles moving along the beam path. As the droplets are distorted or destroyed by the incoming radiation, they are replaced at a predetermined frequency by other droplets supplied through the jet streams.
5. Precision wood particle feedstocks
DOEpatents
Dooley, James H; Lanning, David N
2013-07-30
Wood particles having fibers aligned in a grain, wherein: the wood particles are characterized by a length dimension (L) aligned substantially parallel to the grain, a width dimension (W) normal to L and aligned cross grain, and a height dimension (H) normal to W and L; the L.times.H dimensions define two side surfaces characterized by substantially intact longitudinally arrayed fibers; the W.times.H dimensions define two cross-grain end surfaces characterized individually as aligned either normal to the grain or oblique to the grain; the L.times.W dimensions define two substantially parallel top and bottom surfaces; and, a majority of the W.times.H surfaces in the mixture of wood particles have end checking.
6. On particle track detectors
NASA Technical Reports Server (NTRS)
Benton, E. V.; Gruhn, T. A.; Andrus, C. H.
1973-01-01
Aqueous sodium hydroxide is widely used to develop charged particle tracks in polycarbonate film, particularly Lexan. The chemical nature of the etching process for this system has been determined. A method employing ultra-violet absorbance was developed for monitoring the concentration of the etch products in solution. Using this method it was possible to study the formation of the etching solution saturated in etch products. It was found that the system super-saturates to a significant extent before precipitation occurs. It was also learned that the system approaches its equilibrium state rather slowly. It is felt that both these phenomena may be due to the presence of surfactant in the solution. In light of these findings, suggestions are given regarding the preparation and maintenance of the saturated etch solution. Two additional research projects, involving automated techniques for particle track analysis and particle identification using AgCl crystals, are briefly summarized.
7. Electrostatic particle precipitator
SciTech Connect
Uchiya, T.; Hikizi, S.; Yabuta, H.
1984-04-03
An electrostatic particle precipitator for removing dust particles from a flue gas. The precipitator includes a plurality of collecting electrodes in the shape of plates mounted on endless chains and moving between a first region through which flue gas to be treated flows and a second region where the flow of gas is extremely scarce. A dust removal mechanism is positioned in the second region to remove dust which accumulates on the electrode plates. The moving speed of the collecting electrodes is controlled within a certain range to maintain a prescribed thickness of dust on the electrodes whereby the ocurrence of reverse ionization phenomenon is prevented.
8. Particle image cinematograph velocimetry
Ma, Guangyun; Shen, Gongxin
1993-01-01
Particle image cinematograph velocimetry (PICV), a new method based on 2D velocity field with time history measurements for unsteady flows, is presented here. Using mechanical chopping light pulses of the Aron ion laser, which are matched synchronously with moving action of a cinematograph, a series of double or multiple exposure images of particles which are seeded in fluid could be recorded in the films sequentially. The recording films are scanned by an auto-interrogation system, a series of instantaneous 2D-velocity distribution maps with time history are obtained. Some application results for a starting vortex flow around a backward step are presented.
9. Review of Particle Physics
Olive, K. A.; Particle Data Group
2014-08-01
The Review summarizes much of particle physics and cosmology. Using data from previous editions, plus 3,283 new measurements from 899 papers, we list, evaluate, and average measured properties of gauge bosons and the recently discovered Higgs boson, leptons, quarks, mesons, and baryons. We summarize searches for hypothetical particles such as heavy neutrinos, supersymmetric and technicolor particles, axions, dark photons, etc. All the particle properties and search limits are listed in Summary Tables. We also give numerous tables, figures, formulae, and reviews of topics such as Supersymmetry, Extra Dimensions, Particle Detectors, Probability, and Statistics. Among the 112 reviews are many that are new or heavily revised including those on: Dark Energy, Higgs Boson Physics, Electroweak Model, Neutrino Cross Section Measurements, Monte Carlo Neutrino Generators, Top Quark, Dark Matter, Dynamical Electroweak Symmetry Breaking, Accelerator Physics of Colliders, High-Energy Collider Parameters, Big Bang Nucleosynthesis, Astrophysical Constants and Cosmological Parameters. All tables, listings, and reviews (and errata) are also available on the Particle Data Group website: http://pdg.lbl.gov. Contents Abstract, Contributors, Highlights and Table of ContentsAcrobat PDF (4.4 MB) IntroductionAcrobat PDF (595 KB) Particle Physics Summary Tables Gauge and Higgs bosonsAcrobat PDF (204 KB) LeptonsAcrobat PDF (167 KB) QuarksAcrobat PDF (115 KB) MesonsAcrobat PDF (976 KB) BaryonsAcrobat PDF (384 KB) Searches (Supersymmetry, Compositeness, etc.)Acrobat PDF (120 KB) Tests of conservation lawsAcrobat PDF (383 KB) Reviews, Tables, and Plots Detailed contents for this sectionAcrobat PDF (73 KB) Constants, Units, Atomic and Nuclear PropertiesAcrobat PDF (395 KB) Standard Model and Related TopicsAcrobat PDF (8.37 MB) Astrophysics and CosmologyAcrobat PDF (3.79 MB) Experimental Methods and CollidersAcrobat PDF (3.82 MB) Mathematical Tools of Statistics, Monte Carlo, Group Theory Acrobat
10. Particles, space, and time
Icke, Vincent
1996-03-01
Our Universe consistes of particles, space and time. Ever since Descartes we have known that true emptiness cannot exist; ever since Einstein we have known that space and time are part of the stuff of our world. Efforts to determine the structure of particles go in parallel with the search for the structure of spacetime. Einstein gave us a geometrical answer regarding the structure of spacetime: a distance recipe (Lorentz-Minkowski) suffices. The theory boils down to a patching together of local Lorentz frames into a global whole, which gives it the form of a gauge field theory based on local Lorentz symmetry. On large scales, the Einstein Equation seems to work well. The structure of particles is described by a gauge field. too. On small scales the ‘Standard Model’ seems to work very well. However, we know from Newtonian gravity that the presence of particles must be related to the structure of spacetime. Einstein made a conjecture for the form of this connection using the Newtonian limit of small speeds and weak fields. The right hand side of his equation for the bulk theory of matter (the energy-momentum tensor), is equated to the Einstein tensor from non-Euclidian geometry. But that connection is wrong. The structure of spacetime cannot be equated to the density of particles if we include the Standard Model in the matter tensor. In field theory a potential is not something that can be freely changed by adding an arbitrary scalar term; due to the local (as opposed to global) character of the fields, a potential becomes an entity in itself. Einstein's conjecture runs into profound trouble because the reality of potentials implies that the zero point energy of the vacuum must be included in the Einstein equation. The net result is the appearance of a term equivalent to a cosmological constant A of stupendous size, some 10118 times the critical cosmic density. The crisis due to the zero point fluctuations in the energy-momentum tensor is a clash of titans
11. Amorphous silicon ionizing particle detectors
DOEpatents
Street, Robert A.; Mendez, Victor P.; Kaplan, Selig N.
1988-01-01
Amorphous silicon ionizing particle detectors having a hydrogenated amorphous silicon (a--Si:H) thin film deposited via plasma assisted chemical vapor deposition techniques are utilized to detect the presence, position and counting of high energy ionizing particles, such as electrons, x-rays, alpha particles, beta particles and gamma radiation.
12. Amorphous silicon ionizing particle detectors
DOEpatents
Street, R.A.; Mendez, V.P.; Kaplan, S.N.
1988-11-15
Amorphous silicon ionizing particle detectors having a hydrogenated amorphous silicon (a--Si:H) thin film deposited via plasma assisted chemical vapor deposition techniques are utilized to detect the presence, position and counting of high energy ionizing particles, such as electrons, x-rays, alpha particles, beta particles and gamma radiation. 15 figs.
13. Apparatus for measuring particle properties
DOEpatents
Rader, Daniel J.; Castaneda, Jaime N.; Grasser, Thomas W.; Brockmann, John E.
1998-01-01
An apparatus for determining particle properties from detected light scattered by the particles. The apparatus uses a light beam with novel intensity characteristics to discriminate between particles that pass through the beam and those that pass through an edge of the beam. The apparatus can also discriminate between light scattered by one particle and light scattered by multiple particles. The particle's size can be determined from the intensity of the light scattered. The particle's velocity can be determined from the elapsed time between various intensities of the light scattered.
14. Particle concentration in exhaled breath
SciTech Connect
Fairchild, C.I.; Stampfer, J.F.
1987-11-01
Measurements were made of the number of concentration of particles in exhaled breath under various conditions of exercise. A laser light scattering particle spectrometer was used to count particles exhaled by test subjects wearing respirators in a challenge environment of clean, dry air. Precautions were taken to ensure that particles were not generated by the respirators and that no extraneous water or other particles were produced in the humid exhaled air. The number of particles detected in exhales air varied over a range from <0.10 to approx. 4 particles/cm/sup 3/ depending upon the test subject and his activity. Subjects at rest exhaled the lowest concentration of particles, whereas exercises producing a faster respiration rate caused increased exhalation of particles. Exhaled particle concentration can limit the usefulness of nondiscriminating, ambient challenge aerosols for the fit testing of highly protective respirators.
15. Elementary Particles and Forces.
ERIC Educational Resources Information Center
Quigg, Chris
1985-01-01
Discusses subatomic particles (quarks, leptons, and others) revealed by higher accelerator energies. A connection between forces at this subatomic level has been established, and prospects are good for a description of forces that encompass binding atomic nuclei. Colors, fundamental interactions, screening, camouflage, electroweak symmetry, and…
16. Particles causing lung disease
SciTech Connect
Kilburn, K.H.
1984-04-01
The lung has a limited number of patterns of reaction to inhaled particles. The disease observed depends upon the location: conducting airways, terminal bronchioles and alveoli, and upon the nature of inflammation induced: acute, subacute or chronic. Many different agents cause narrowing of conducting airways (asthma) and some of these cause permanent distortion or obliteration of airways as well. Terminal bronchioles appear to be particularly susceptible to particles which cause goblet cell metaplasia, mucous plugging and ultimately peribronchiolar fibrosis. Cancer is the last outcome at the bronchial level and appears to depend upon continuous exposure to or retention of an agent in the airway and failure of the affected cells to be exfoliated which may be due to squamous metaplasia. Alveoli are populated by endothelial cells, Type I or pavement epithelial cells and metabolically active cuboidal Type II cells that produce the lungs specific surfactant, dipalmytol lecithin. Disturbances of surfactant lead to edema in distal lung while laryngeal edema due to anaphylaxis or fumes may produce asphyxia. Physical retention of indigestible particles or retention by immune memory responses may provoke hyaline membranes, stimulate alveolar lipoproteinosis and finally fibrosis. This later exuberant deposition of connective tissue has been best studied in the occupational pneumoconioses especially silicosis and asbestosis. In contrast emphysema a catabolic response appears frequently to result from leakage or release of lysosomal proteases into the lung during processing of cigarette smoke particles. 164 references, 1 figure, 2 tables.
17. Battery Particle Simulation
SciTech Connect
2014-09-15
Two simulations show the differences between a battery being drained at a slower rate, over a full hour, versus a faster rate, only six minutes (a tenth of an hour). In both cases battery particles go from being fully charged (green) to fully drained (red), but there are significant differences in the patterns of discharge based on the rate.
18. Particle-Size Analysis
SciTech Connect
Gee, Glendon W. ); Or, Dani; J.H. Dane and G.C. Topp
2002-11-01
Book Chapter describing methods of particle-size analysis for soils. Includes a variety of classification schemes. Standard methods for size distributions using pipet and hydrometer techniques are described. New laser-light scattering and related techniques are discussed. Complete with updated references.
19. Supertwistors and massive particles
SciTech Connect
Mezincescu, Luca; Routh, Alasdair J.; Townsend, Paul K.
2014-07-15
In the (super)twistor formulation of massless (super)particle mechanics, the mass-shell constraint is replaced by a “spin-shell” constraint from which the spin content can be read off. We extend this formalism to massive (super)particles (with N-extended space–time supersymmetry) in three and four space–time dimensions, explaining how the spin-shell constraints are related to spin, and we use it to prove equivalence of the massive N=1 and BPS-saturated N=2 superparticle actions. We also find the supertwistor form of the action for “spinning particles” with N-extended worldline supersymmetry, massless in four dimensions and massive in three dimensions, and we show how this simplifies special features of the N=2 case. -- Highlights: •Spin-shell constraints are related to Poincaré Casimirs. •Twistor form of 4D spinning particle for spin N/2. •Twistor proof of scalar/antisymmetric tensor equivalence for 4D spin 0. •Twistor form of 3D particle with arbitrary spin. •Proof of equivalence of N=1 and N=2 BPS massive 4D superparticles.
20. RESONATOR PARTICLE SEPARATOR
DOEpatents
Blewett, J.P.; Kiesling, J.D.
1963-06-11
A wave-guide resonator structure is designed for use in separating particles of equal momentum but differing in mass, having energies exceeding one billion eiectron volts. The particles referred to are those of sub-atomic size and are generally produced as a result of the bombardment of a target by a beam such as protons produced in a high energy accelerator. In the resonator a travelling electric wave is produced which travels at the same rate of speed as the unwanted particle which is thus deflected continuously over the length of the resonator. The wanted particle is slightly out of phase with the travelling wave so that over the whole length of the resonator it has a net deflection of substantially zero. The travelling wave is established in a wave guide of rectangular cross section in which stubs are provided to store magnetic wave energy leaving the electric wave energy in the main structure to obtain the desired travelling wave and deflection. The stubs are of such shape and spacing to establish a critical mathemitical relationship. (AEC)
1. Particle Acceleration in Jets
NASA Technical Reports Server (NTRS)
Nishikawa, Ken-Ichi
2005-01-01
Nonthermal radiation observed from astrophysical systems containing relativistic jets and shocks, e.g., active galactic nuclei (AGNs), gamma ray burst (GRBs), and Galactic microquasar systems usually have power-law emission spectra. Fermi acceleration is the mechanism usually assumed for the acceleration of particles in astrophysical environments.
2. Elementary particle interactions
SciTech Connect
Bugg, W.M.; Condo, G.T.; Handler, T.; Hart, E.L.; Ward, B.F.L.; Close, F.E.; Christophorou, L.G.
1990-10-01
This report discusses freon bubble chamber experiments exposed to {mu}{sup +} and neutrinos, photon-proton interactions; shower counter simulations; SLD detectors at the Stanford Linear Collider, and the detectors at the Superconducting Super Collider; elementary particle interactions; physical properties of dielectric materials used in High Energy Physics detectors; and Nuclear Physics. (LSP)
3. Lunar Soil Particle Separator
NASA Technical Reports Server (NTRS)
Berggren, Mark
2010-01-01
The Lunar Soil Particle Separator (LSPS) beneficiates soil prior to in situ resource utilization (ISRU). It can improve ISRU oxygen yield by boosting the concentration of ilmenite, or other iron-oxide-bearing materials found in lunar soils, which can substantially reduce hydrogen reduction reactor size, as well as drastically decreasing the power input required for soil heating
4. Magnetic particle characterization-magnetophoretic mobility and particle size.
PubMed
Zhou, Chen; Boland, Eugene D; Todd, Paul W; Hanley, Thomas R
2016-06-01
Quantitative characterization of magnetic particles is useful for analysis and separation of labeled cells and magnetic particles. A particle velocimeter is used to directly measure the magnetophoretic mobility, size, and other parameters of magnetic particle suspensions. The instrument provides quantitative video analysis of particles and their motion. The trajectories of magnetic particles in an isodynamic magnetic field are recorded using a high-definition camera/microscope system for image collection. Image analysis software then converts the image data to the parameters of interest. The distribution of magnetophoretic mobility is determined by combining fast image analysis with velocimetry measurements. Particle size distributions have been characterized to provide a better understanding of sample quality. The results have been used in the development and operation of analyzer protocols for counting particle concentrations accurately and measuring magnetic susceptibility and size for simultaneous display for routine application to particle suspensions and magnetically labeled biological cells. © 2016 International Society for Advancement of Cytometry.
5. Influence of particle wall adhesion on particle electrification in mixers.
PubMed
Zhu, Kewu; Tan, Reginald B H; Chen, Fengxi; Ong, Kunn Hadinoto; Heng, Paul W S
2007-01-01
6. Particle Swarm Optimization Toolbox
NASA Technical Reports Server (NTRS)
Grant, Michael J.
2010-01-01
The Particle Swarm Optimization Toolbox is a library of evolutionary optimization tools developed in the MATLAB environment. The algorithms contained in the library include a genetic algorithm (GA), a single-objective particle swarm optimizer (SOPSO), and a multi-objective particle swarm optimizer (MOPSO). Development focused on both the SOPSO and MOPSO. A GA was included mainly for comparison purposes, and the particle swarm optimizers appeared to perform better for a wide variety of optimization problems. All algorithms are capable of performing unconstrained and constrained optimization. The particle swarm optimizers are capable of performing single and multi-objective optimization. The SOPSO and MOPSO algorithms are based on swarming theory and bird-flocking patterns to search the trade space for the optimal solution or optimal trade in competing objectives. The MOPSO generates Pareto fronts for objectives that are in competition. A GA, based on Darwin evolutionary theory, is also included in the library. The GA consists of individuals that form a population in the design space. The population mates to form offspring at new locations in the design space. These offspring contain traits from both of the parents. The algorithm is based on this combination of traits from parents to hopefully provide an improved solution than either of the original parents. As the algorithm progresses, individuals that hold these optimal traits will emerge as the optimal solutions. Due to the generic design of all optimization algorithms, each algorithm interfaces with a user-supplied objective function. This function serves as a "black-box" to the optimizers in which the only purpose of this function is to evaluate solutions provided by the optimizers. Hence, the user-supplied function can be numerical simulations, analytical functions, etc., since the specific detail of this function is of no concern to the optimizer. These algorithms were originally developed to support entry
7. Movement of particles using sequentially activated dielectrophoretic particle trapping
DOEpatents
Miles, Robin R.
2004-02-03
Manipulation of DNA and cells/spores using dielectrophoretic (DEP) forces to perform sample preparation protocols for polymerized chain reaction (PCR) based assays for various applications. This is accomplished by movement of particles using sequentially activated dielectrophoretic particle trapping. DEP forces induce a dipole in particles, and these particles can be trapped in non-uniform fields. The particles can be trapped in the high field strength region of one set of electrodes. By switching off this field and switching on an adjacent electrodes, particles can be moved down a channel with little or no flow.
8. Particle analyzing method and apparatus
NASA Technical Reports Server (NTRS)
Sinha, M. P.; Griffin, C. E.; Norris, D. D.; Friedlander, S. K. (Inventor)
1980-01-01
The rapid chemical analysis of particles in aerosols can be accomplished using an apparatus which produces a controlled stream of individual particles from an environment, and another apparatus which vaporizes and ionizes the particles moving in free flight, for analysis by a mass spectrometer. The device for producing the stream of particles includes a capillary tube through which the air with suspended particles moves, a skimmer with a small opening spaced from an end of the capillary tube to receive particles passing through the tube, and a vacuum pump which removes air from between the tube and skimmer and creates an inflow of air and particles through the tube. The particles passing through the skimmer opening can be simultaneously vaporized and ionized while in free flight, by a laser beam of sufficient intensity that is directed across the path of the free flying particles.
9. Magnetic flocculation of paramagnetic particles
SciTech Connect
Tsouris, C.; Scott, T.C.
1994-09-01
An experimental apparatus has been assembled for the flocculation study of paramagnetic particles under the influence of a strong magnetic field. A magnetic field of strength up to 6 T is generated by a cryogenic magnet operating near liquid helium temperatures. Experimental information is obtained from fluctuation and intensity measurements of light passing through a particle suspension located in a uniform magnetic field. Particle flocculation is described by a Brownian flocculation model in which hydrodynamic, van der Waals, double-layer, and magnetic forces are incorporated for the estimation of the particle flocculation rate. A population-balance model is employed in conjunction with the flocculation model to predict the evolution of the particle size and composition or magnetic susceptibility with time. The effects of magnetic-field strength, magnetic susceptibility of the particles, particle size, and zeta potential are investigated. Results show that particle size and magnetic susceptibility each play an important role in the selective flocculation of particles of different properties.
10. Experimental Particle Physics
SciTech Connect
Rosenfeld, Carl; Mishra, Sanjib R.; Petti, Roberto; Purohit, Milind V.
2014-08-31
The high energy physics group at the University of South Carolina, under the leadership of Profs. S.R. Mishra, R. Petti, M.V. Purohit, J.R. Wilson (co-PI's), and C. Rosenfeld (PI), engaged in studies in "Experimental Particle Physics." The group collaborated with similar groups at other universities and at national laboratories to conduct experimental studies of elementary particle properties. We utilized the particle accelerators at the Fermi National Accelerator Laboratory (Fermilab) in Illinois, the Stanford Linear Accelerator Center (SLAC) in California, and the European Center for Nuclear Research (CERN) in Switzerland. Mishra, Rosenfeld, and Petti worked predominantly on neutrino experiments. Experiments conducted in the last fifteen years that used cosmic rays and the core of the sun as a source of neutrinos showed conclusively that, contrary to the former conventional wisdom, the "flavor" of a neutrino is not immutable. A neutrino of flavor "e," "mu," or "tau," as determined from its provenance, may swap its identity with one of the other flavors -- in our jargon, they "oscillate." The oscillation phenomenon is extraordinarily difficult to study because neutrino interactions with our instruments are exceedingly rare -- they travel through the earth mostly unimpeded -- and because they must travel great distances before a substantial proportion have made the identity swap. Three of the experiments that we worked on, MINOS, NOvA, and LBNE utilize a beam of neutrinos from an accelerator at Fermilab to determine the parameters governing the oscillation. Two other experiments that we worked on, NOMAD and MIPP, provide measurements supportive of the oscillation experiments. Good measurements of the neutrino oscillation parameters may constitute a "low energy window" on related phenomena that are otherwise unobservable because they would occur only at energies way above the reach of conceivable accelerators. Purohit and Wilson participated in the BaBar experiment
11. Particle nonuniformity effects on particle cloud flames in low gravity
NASA Technical Reports Server (NTRS)
Berlad, A. L.; Tangirala, V.; Seshadri, K.; Facca, L. T.; Ogrin, J.; Ross, H.
1991-01-01
Experimental and analytical studies of particle cloud combustion at reduced gravity reveal the substantial roles that particle cloud nonuniformities may play in particle cloud combustion. Macroscopically uniform, quiescent particle cloud systems (at very low gravitational levels and above) sustain processes which can render them nonuniform on both macroscopic and microscopic scales. It is found that a given macroscopically uniform, quiescent particle cloud flame system can display a range of microscopically nonuniform features which lead to a range of combustion features. Microscopically nonuniform particle cloud distributions are difficult experimentally to detect and characterize. A uniformly distributed lycopodium cloud of particle-enriched microscopic nonuniformities in reduced gravity displays a range of burning velocities for any given overall stoichiometry. The range of observed and calculated burning velocities corresponds to the range of particle enriched concentrations within a characteristic microscopic nonuniformity. Sedimentation effects (even in reduced gravity) are also examined.
12. Particle processing technology
Sakka, Yoshio
2014-02-01
In recent years, there has been strong demand for the development of novel devices and equipment that support advanced industries including IT/semiconductors, the environment, energy and aerospace along with the achievement of higher efficiency and reduced environmental impact. Many studies have been conducted on the fabrication of innovative inorganic materials with novel individual properties and/or multifunctional properties including electrical, dielectric, thermal, optical, chemical and mechanical properties through the development of particle processing. The fundamental technologies that are key to realizing such materials are (i) the synthesis of nanoparticles with uniform composition and controlled crystallite size, (ii) the arrangement/assembly and controlled dispersion of nanoparticles with controlled particle size, (iii) the precise structural control at all levels from micrometer to nanometer order and (iv) the nanostructural design based on theoretical/experimental studies of the correlation between the local structure and the functions of interest. In particular, it is now understood that the application of an external stimulus, such as magnetic energy, electrical energy and/or stress, to a reaction field is effective in realizing advanced particle processing [1-3]. This special issue comprises 12 papers including three review papers. Among them, seven papers are concerned with phosphor particles, such as silicon, metals, Si3N4-related nitrides, rare-earth oxides, garnet oxides, rare-earth sulfur oxides and rare-earth hydroxides. In these papers, the effects of particle size, morphology, dispersion, surface states, dopant concentration and other factors on the optical properties of phosphor particles and their applications are discussed. These nanoparticles are classified as zero-dimensional materials. Carbon nanotubes (CNT) and graphene are well-known one-dimensional (1D) and two-dimensional (2D) materials, respectively. This special issue also
13. Particle detector spatial resolution
DOEpatents
Perez-Mendez, Victor
1992-01-01
Method and apparatus for producing separated columns of scintillation layer material, for use in detection of X-rays and high energy charged particles with improved spatial resolution. A pattern of ridges or projections is formed on one surface of a substrate layer or in a thin polyimide layer, and the scintillation layer is grown at controlled temperature and growth rate on the ridge-containing material. The scintillation material preferentially forms cylinders or columns, separated by gaps conforming to the pattern of ridges, and these columns direct most of the light produced in the scintillation layer along individual columns for subsequent detection in a photodiode layer. The gaps may be filled with a light-absorbing material to further enhance the spatial resolution of the particle detector.
14. Particle detector spatial resolution
DOEpatents
Perez-Mendez, V.
1992-12-15
Method and apparatus for producing separated columns of scintillation layer material, for use in detection of X-rays and high energy charged particles with improved spatial resolution is disclosed. A pattern of ridges or projections is formed on one surface of a substrate layer or in a thin polyimide layer, and the scintillation layer is grown at controlled temperature and growth rate on the ridge-containing material. The scintillation material preferentially forms cylinders or columns, separated by gaps conforming to the pattern of ridges, and these columns direct most of the light produced in the scintillation layer along individual columns for subsequent detection in a photodiode layer. The gaps may be filled with a light-absorbing material to further enhance the spatial resolution of the particle detector. 12 figs.
McPhail, M. J.; Krane, M. H.; Fontaine, A. A.; Goss, L.; Crafton, J.
2015-04-01
This paper describes the extension of multicolor particle shadow velocimetry (CPSV) to the measurement of local acceleration in an Eulerian frame of reference. A validation experiment was conducted on a pendulous disk undergoing unsteady rigid body rotation. Angular velocity and acceleration profiles by CPSA are presented along with a comparison to recordings by an accelerometer mounted on the pendulum. CPSA is also demonstrated in a fully-developed turbulent pipe flow. Profiles of standard deviation of the local acceleration in the near wall region ≤ft(0<~{{y}+}<75\\right) are compared to similar measurements by Christensen and Adrian. A favorable comparison is found between CPSA and particle image accelerometry (PIA). The effect of acceleration time delay, or the time between two velocity estimates, on local acceleration estimates is discussed.
16. Aviation Particle Emissions Workshop
NASA Technical Reports Server (NTRS)
Wey, Chowen C. (Editor)
2004-01-01
The Aviation Particle Emissions Workshop was held on November 18 19, 2003, in Cleveland, Ohio. It was sponsored by the National Aeronautic and Space Administration (NASA) under the Vehicle Systems Program (VSP) and the Ultra- Efficient Engine Technology (UEET) Project. The objectives were to build a sound foundation for a comprehensive particulate research roadmap and to provide a forum for discussion among U.S. stakeholders and researchers. Presentations included perspectives from the Federal Aviation Administration, the U.S. Environmental Protection Agency, NASA, and United States airports. There were five interactive technical sessions: sampling methodology, measurement methodology, particle modeling, database, inventory and test venue, and air quality. Each group presented technical issues which generated excellent discussion. The five session leads collaborated with their members to present summaries and conclusions to each content area.
17. Research in particle physics
SciTech Connect
Not Available
1993-08-01
This proposal presents the research accomplishments and ongoing activities of Boston University researchers in high energy physics. Some changes have been made in the structure of the program from the previous arrangement of tasks. Task B, Accelerator Design Physics, is being submitted as a separate proposal for an independent grant; this will be consistent with the nature of the research and the source of funding. We are active in seven principal areas which will be discussed in this report: Colliding Beams - physics of e{sup +}e{sup {minus}} and {bar p}p collisions; MACRO Experiment - search for magnetic monopoles and study of cosmic rays; Proton Decay - search for nucleon instability and study of neutrino interactions; Particle Theory - theoretical high energy particle physics, including two Outstanding Junior Investigator awards; Muon G-2 - measurement of the anomalous magnetic moment of the muon; SSCintcal - calorimetry for the GEM Experiment; and Muon detectors for the GEM Experiment.
18. Cosmology and particle physics
NASA Technical Reports Server (NTRS)
Turner, Michael S.
1988-01-01
The interplay between cosmology and elementary particle physics is discussed. The standard cosmology is reviewed, concentrating on primordial nucleosynthesis and discussing how the standard cosmology has been used to place constraints on the properties of various particles. Baryogenesis is discussed, showing how a scenario in which the B-, C-, and CP-violating interactions in GUTs provide a dynamical explanation for the predominance of matter over antimatter and for the present baryon-to-photon ratio. It is shown how the very early dynamical evolution of a very weakly coupled scalar field which is initially displaced from the minimum of its potential may explain a handful of very fundamental cosmological facts which are not explained by the standard cosmology.
19. Particle-mesh techniques
NASA Technical Reports Server (NTRS)
Macneice, Peter
1995-01-01
This is an introduction to numerical Particle-Mesh techniques, which are commonly used to model plasmas, gravitational N-body systems, and both compressible and incompressible fluids. The theory behind this approach is presented, and its practical implementation, both for serial and parallel machines, is discussed. This document is based on a four-hour lecture course presented by the author at the NASA Summer School for High Performance Computational Physics, held at Goddard Space Flight Center.
20. PARTICLE BEAM TRACKING CIRCUIT
DOEpatents
Anderson, O.A.
1959-05-01
>A particle-beam tracking and correcting circuit is described. Beam induction electrodes are placed on either side of the beam, and potentials induced by the beam are compared in a voltage comparator or discriminator. This comparison produces an error signal which modifies the fm curve at the voltage applied to the drift tube, thereby returning the orbit to the preferred position. The arrangement serves also to synchronize accelerating frequency and magnetic field growth. (T.R.H.)
DOEpatents
Moore, Murray E.; Gauss, Adam Benjamin; Justus, Alan Lawrence
2012-06-26
A method and apparatus for providing a timed, synchronized dynamic alpha or beta particle source for testing the response of continuous air monitors (CAMs) for airborne alpha or beta emitters is provided. The method includes providing a radioactive source; placing the radioactive source inside the detection volume of a CAM; and introducing an alpha or beta-emitting isotope while the CAM is in a normal functioning mode.
2. Universality of particle multiplicities
SciTech Connect
Goulianos, K. |
1994-09-01
We discuss the scaling properties and universality aspects of the rapidity and multiplicity distributions of particles produced in high energy hadronic and e{sup +}e{sup {minus}} interactions. This paper is based on material presented in three lectures on pomeron phenomenology, which included a review of traditional soft pomeron physics and selected topics on hard diffraction processes probing the structure function of the pomeron.
SciTech Connect
More, R; Graziani, F; Glosli, J; Surh, M
2010-11-19
Hot dense radiative (HDR) plasmas common to Inertial Confinement Fusion (ICF) and stellar interiors have high temperature (a few hundred eV to tens of keV), high density (tens to hundreds of g/cc) and high pressure (hundreds of megabars to thousands of gigabars). Typically, such plasmas undergo collisional, radiative, atomic and possibly thermonuclear processes. In order to describe HDR plasmas, computational physicists in ICF and astrophysics use atomic-scale microphysical models implemented in various simulation codes. Experimental validation of the models used to describe HDR plasmas are difficult to perform. Direct Numerical Simulation (DNS) of the many-body interactions of plasmas is a promising approach to model validation but, previous work either relies on the collisionless approximation or ignores radiation. We present four methods that attempt a new numerical simulation technique to address a currently unsolved problem: the extension of molecular dynamics to collisional plasmas including emission and absorption of radiation. The first method applies the Lienard-Weichert solution of Maxwell's equations for a classical particle whose motion is assumed to be known. The second method expands the electromagnetic field in normal modes (planewaves in a box with periodic boundary-conditions) and solves the equation for wave amplitudes coupled to the particle motion. The third method is a hybrid molecular dynamics/Monte Carlo (MD/MC) method which calculates radiation emitted or absorbed by electron-ion pairs during close collisions. The fourth method is a generalization of the third method to include small clusters of particles emitting radiation during close encounters: one electron simultaneously hitting two ions, two electrons simultaneously hitting one ion, etc. This approach is inspired by the virial expansion method of equilibrium statistical mechanics. Using a combination of these methods we believe it is possible to do atomic-scale particle simulations of
4. Particle sensor array
NASA Technical Reports Server (NTRS)
Buehler, Martin G. (Inventor); Blaes, Brent R. (Inventor); Lieneweg, Udo (Inventor)
1994-01-01
A particle sensor array which in a preferred embodiment comprises a static random access memory having a plurality of ion-sensitive memory cells, each such cell comprising at least one pull-down field effect transistor having a sensitive drain surface area (such as by bloating) and at least one pull-up field effect transistor having a source connected to an offset voltage. The sensitive drain surface area and the offset voltage are selected for memory cell upset by incident ions such as alpha-particles. The static random access memory of the present invention provides a means for selectively biasing the memory cells into the same state in which each of the sensitive drain surface areas is reverse biased and then selectively reducing the reversed bias on these sensitive drain surface areas for increasing the upset sensitivity of the cells to ions. The resulting selectively sensitive memory cells can be used in a number of applications. By way of example, the present invention can be used for measuring the linear energy transfer of ion particles, as well as a device for assessing the resistance of CMOS latches to Cosmic Ray induced single event upsets. The sensor of the present invention can also be used to determine the uniformity of an ion beam.
5. Cosmology with decaying particles
SciTech Connect
Turner, M.S.
1984-09-01
We consider a cosmological model in which an unstable massive relic particle species (denoted by X) has an initial mass density relative to baryons ..beta../sup -1/ identically equal rho/sub X//rho/sub B/ >> 1, and then decays recently (redshift z less than or equal to 1000) into particles which are still relativistic today (denoted by R). We write down and solve the coupled equations for the cosmic scale factor a(t), the energy density in the various components (rho/sub X/, rho/sub R/, rho/sub B/), and the growth of linear density perturbations (delta rho/rho). The solutions form a one parameter (..beta..) family of solutions; physically ..beta../sup -1/ approx. = (..cap omega../sub R//..cap omega../sub NR/) x (1 + z/sub D/) = (ratio today of energy density of relativistic to nonrelativistic particles) x (1 + redshift of (decay)). We discuss the observational implications of such a cosmological model and compare our results to earlier results computed in the simultaneous decay approximation. In an appendix we briefly consider the case where one of the decay products of the X is massive and becomes nonrelativistic by the present epoch. 21 references.
6. Statistical Physics of Particles
Kardar, Mehran
2006-06-01
Statistical physics has its origins in attempts to describe the thermal properties of matter in terms of its constituent particles, and has played a fundamental role in the development of quantum mechanics. Based on lectures for a course in statistical mechanics taught by Professor Kardar at Massachusetts Institute of Technology, this textbook introduces the central concepts and tools of statistical physics. It contains a chapter on probability and related issues such as the central limit theorem and information theory, and covers interacting particles, with an extensive description of the van der Waals equation and its derivation by mean field approximation. It also contains an integrated set of problems, with solutions to selected problems at the end of the book. It will be invaluable for graduate and advanced undergraduate courses in statistical physics. A complete set of solutions is available to lecturers on a password protected website at www.cambridge.org/9780521873420. Based on lecture notes from a course on Statistical Mechanics taught by the author at MIT Contains 89 exercises, with solutions to selected problems Contains chapters on probability and interacting particles Ideal for graduate courses in Statistical Mechanics
7. Particle Velocity Measuring System
NASA Technical Reports Server (NTRS)
Arndt, G. Dickey (Inventor); Carl, James R. (Inventor)
1998-01-01
Method and apparatus are provided for determining the velocity of individual food particles within a liquid/solid food mixture that is cooked by an aseptic cooking method whereby the food mixture is heated as it flows through a flowline. At least one upstream and at least one downstream microwave transducer are provided to determine the minimum possible travel time of the fastest food particle through the flowline. In one embodiment, the upstream detector is not required. In another embodiment, a plurality of small dipole antenna markers are secured to a plurality of food particles to provide a plurality of signals as the markers pass the upstream and downstream transducers. The dipole antenna markers may also include a non-linear element to reradiate a harmonic frequency of a transmitter frequency. Upstream and downstream transducers include dipole antennas that are matched to the impedance of the food slurry and a signal transmission cable by various impedance matching means including unbalanced feed to the antennas.
8. New particles and interactions
SciTech Connect
Gilman, F.J.; Grannis, P.D.
1984-04-01
The Working Group on New Particles and Interactions met as a whole at the beginning and at the end of the Workshop. However, much of what was accomplished was done in five subgroups. These were devoted to: (1) new quarks and leptons; (2) technicolor; (3) supersymmetry; (4) rare decays and CP; and (5) substructure of quarks and leptons. Other aspects of new particles, e.g., Higgs, W', Z', fell to the Electroweak Working Group to consider. The central question of this Workshop of comparing anti pp (with L = 10/sup 32//cm/sup 2/-sec) with pp (with L = 10/sup 33//cm/sup 2/-sec) colliders carried through to all these subgroups. In addition there were several other aspects of hadron colliders which were considered: what does an increase in ..sqrt..s gain in cross section and resultant sensitivity to new physics versus an increase in luminosity; will polarized beams or the use of asymmetries be essential in finding new interactions; where and at what level do rate limitations due to triggering or detection systems play a role; and how and where will the detection of particles with short, but detectable, lifetimes be important. 25 references.
9. Alpha Particle Diagnostic
SciTech Connect
Fisher, Ray, K.
2009-05-13
The study of burning plasmas is the next frontier in fusion energy research, and will be a major objective of the U.S. fusion program through U.S. collaboration with our international partners on the ITER Project. For DT magnetic fusion to be useful for energy production, it is essential that the energetic alpha particles produced by the fusion reactions be confined long enough to deposit a significant fraction of their initial ~3.5 MeV energy in the plasma before they are lost. Development of diagnostics to study the behavior of energetic confined alpha particles is a very important if not essential part of burning plasma research. Despite the clear need for these measurements, development of diagnostics to study confined the fast confined alphas to date has proven extremely difficult, and the available techniques remain for the most part unproven and with significant uncertainties. Research under this grant had the goal of developing diagnostics of fast confined alphas, primarily based on measurements of the neutron and ion tails resulting from alpha particle knock-on collisions with the plasma deuterium and tritium fuel ions. One of the strengths of this approach is the ability to measure the alphas in the hot plasma core where the interesting ignition physics will occur.
10. Particle Theory & Cosmology
SciTech Connect
Shafi, Qaisar; Barr, Steven; Gaisser, Thomas; Stanev, Todor
2015-03-31
1. Executive Summary (April 1, 2012 - March 31, 2015) Title: Particle Theory, Particle Astrophysics and Cosmology Qaisar Shafi University of Delaware (Principal Investigator) Stephen M. Barr, University of Delaware (Co-Principal Investigator) Thomas K. Gaisser, University of Delaware (Co-Principal Investigator) Todor Stanev, University of Delaware (Co-Principal Investigator) The proposed research was carried out at the Bartol Research included Professors Qaisar Shafi Stephen Barr, Thomas K. Gaisser, and Todor Stanev, two postdoctoral fellows (Ilia Gogoladze and Liucheng Wang), and several graduate students. Five students of Qaisar Shafi completed their PhD during the period August 2011 - August 2014. Measures of the group’s high caliber performance during the 2012-2015 funding cycle included pub- lications in excellent refereed journals, contributions to working groups as well as white papers, and conference activities, which together provide an exceptional record of both individual performance as well as overall strength. Another important indicator of success is the outstanding quality of the past and current cohort of graduate students. The PhD students under our supervision regularly win the top departmental and university awards, and their publications records show excellence both in terms of quality and quantity. The topics covered under this grant cover the frontline research areas in today’s High Energy Theory & Phenomenology. For Professors Shafi and Barr they include LHC related topics including supersymmetry, collider physics, fl vor physics, dark matter physics, Higgs boson and seesaw physics, grand unifi and neutrino physics. The LHC two years ago discovered the Standard Model Higgs boson, thereby at least partially unlocking the secrets behind electroweak symmetry breaking. We remain optimistic that new and exciting physics will be found at LHC 14, which explain our focus on physics beyond the Standard Model. Professors Shafi continued his
11. Summary of Alpha Particle Transport
SciTech Connect
Medley, S.S.; White, R.B.; Zweben, S.J.
1998-08-19
This paper summarizes the talks on alpha particle transport which were presented at the 5th International Atomic Energy Agency's Technical Committee Meeting on "Alpha Particles in Fusion Research" held at the Joint European Torus, England in September 1997.
12. Particle detection on flat surfaces
van der Donck, Jacques; Snel, Rob; Stortelder, Jetske; Abutan, Alfred; Oostrom, Sjoerd; van Reek, Sander; van der Zwan, Bert; van der Walle, Peter
2011-04-01
Since 2006 EUV Lithographic tools have been available for testing purposes giving a boost to the development of fab infrastructure for EUV masks. The absence of a pellicle makes the EUV reticles extremely vulnerable to particles. Therefore, the fab infrastructure for masks must meet very strict particle requirements. It is expected that all new equipment must be qualified on particles before it can be put into operation. This qualification requirement increases the need for a low cost method for particle detection on mask substrates. TNO developed its fourth generation particle scanner, the Rapid Nano. This scanner is capable of detecting nanometer sized particles on flat surfaces. The particle detection is based on dark field imaging techniques and fast image processing. The tool was designed for detection of a single added particle in a handling experiment over a reticle sized substrate. Therefore, the Rapid Nano is very suitable for the validation of particle cleanliness of equipment. During the measurement, the substrate is protected against particle contamination by placing it in a protective environment. This environment shields the substrate from all possible contamination source in the Nano Rapid (stages, elevator, cabling). The imaging takes place through a window in the protective cover. The geometry of the protective environment enables large flexibility in substrate shape and size. Particles can be detected on substrates varying from 152 x 152 mm mask substrates to wafers up to 200 mm. PSL particles of 50 nm were detected with signal noise ratio of 26. Larger particles had higher signal noise ratios. By individually linking particles in two measurements the addition of particles can be detected. These results show that the Rapid Nano is capable of detecting particles of 50 nm and larger of a full reticle substrate.
13. Particle-free microchip processing
DOEpatents
Geller, Anthony S.; Rader, Daniel J.
1996-01-01
Method and apparatus for reducing particulate contamination in microchip processing are disclosed. The method and apparatus comprise means to reduce particle velocity toward the wafer before the particles can be deposited on the wafer surface. A reactor using electric fields to reduce particle velocity and prevent particulate contamination is disclosed. A reactor using a porous showerhead to reduce particle velocities and prevent particulate contamination is disclosed.
14. Particle-free microchip processing
DOEpatents
1996-06-04
Method and apparatus for reducing particulate contamination in microchip processing are disclosed. The method and apparatus comprise means to reduce particle velocity toward the wafer before the particles can be deposited on the wafer surface. A reactor using electric fields to reduce particle velocity and prevent particulate contamination is disclosed. A reactor using a porous showerhead to reduce particle velocities and prevent particulate contamination is disclosed. 5 figs.
15. Amps particle accelerator definition study
NASA Technical Reports Server (NTRS)
Sellen, J. M., Jr.
1975-01-01
The Particle Accelerator System of the AMPS (Atmospheric, Magnetospheric, and Plasmas in Space) payload is a series of charged particle accelerators to be flown with the Space Transportation System Shuttle on Spacelab missions. In the configuration presented, the total particle accelerator system consists of an energetic electron beam, an energetic ion accelerator, and both low voltage and high voltage plasma acceleration devices. The Orbiter is illustrated with such a particle accelerator system.
16. Polarization correlations of Dirac particles
SciTech Connect
Caban, Pawel; Dziegielewska, Agnieszka; Karmazyn, Anna; Okrasa, Malgorzata
2010-03-15
We calculate the polarization correlation function in the Einstein-Podolsky-Rosen-type experiments with relativistic spin-1/2 particles. This function depends monotonically on the particle momenta. Moreover, we also show that the polarization correlation function violates the Clauser-Horn-Shimony-Holt inequality and the degree of this violation can depend on the particle momenta and the motion of observers.
17. Apparatus for separating particles utilizing engineered acoustic contrast capture particles
SciTech Connect
2014-10-21
An apparatus for separating particles from a medium includes a capillary defining a flow path therein that is in fluid communication with a medium source. The medium source includes engineered acoustic contrast capture particle having a predetermined acoustic contrast. The apparatus includes a vibration generator that is operable to produce at least one acoustic field within the flow path. The acoustic field produces a force potential minima for positive acoustic contrast particles and a force potential minima for negative acoustic contrast particles in the flow path and drives the engineered acoustic contrast capture particles to either the force potential minima for positive acoustic contrast particles or the force potential minima for negative acoustic contrast particles.
18. Apparatus for separating particles utilizing engineered acoustic contrast capture particles
SciTech Connect
2011-12-27
An apparatus for separating particles from a medium includes a capillary defining a flow path therein that is in fluid communication with a medium source. The medium source includes engineered acoustic contrast capture particle having a predetermined acoustic contrast. The apparatus includes a vibration generator that is operable to produce at least one acoustic field within the flow path. The acoustic field produces a force potential minima for positive acoustic contrast particles and a force potential minima for negative acoustic contrast particles in the flow path and drives the engineered acoustic contrast capture particles to either the force potential minima for positive acoustic contrast particles or the force potential minima for negative acoustic contrast particles.
19. Apparatus for separating particles utilizing engineered acoustic contrast capture particles
DOEpatents
2016-05-17
An apparatus for separating particles from a medium includes a capillary defining a flow path therein that is in fluid communication with a medium source. The medium source includes engineered acoustic contrast capture particle having a predetermined acoustic contrast. The apparatus includes a vibration generator that is operable to produce at least one acoustic field within the flow path. The acoustic field produces a force potential minima for positive acoustic contrast particles and a force potential minima for negative acoustic contrast particles in the flow path and drives the engineered acoustic contrast capture particles to either the force potential minima for positive acoustic contrast particles or the force potential minima for negative acoustic contrast particles.
SciTech Connect
Alonso, J.R.
1995-05-01
Radiation therapy with hadrons (protons, neutrons, pions, ions) has accrued a 55-year track record, with by now over 30,000 patients having received treatments with one of these particles. Very good, and in some cases spectacular results are leading to growth in the field in specific well-defined directions. The most noted contributor to success has been the ability to better define and control the radiation field produced with these particles, to increase the dose delivered to the treatment volume while achieving a high degree of sparing of normal tissue. An additional benefit is the highly-ionizing, character of certain beams, leading to creater cell-killing potential for tumor lines that have historically been very resistant to radiation treatments. Until recently these treatments have been delivered in laboratories and research centers whose primary, or original mission was physics research. With maturity in the field has come both the desire to provide beam facilities more accessible to the clinical setting, of a hospital, as well as achieving, highly-efficient, reliable and economical accelerator and beam-delivery systems that can make maximum advantage of the physical characteristics of these particle beams. Considerable work in technology development is now leading, to the implementation of many of these ideas, and a new generation of clinically-oriented facilities is beginning to appear. We will discuss both the physical, clinical and technological considerations that are driving these designs, as well as highlighting, specific examples of new facilities that are either now treating, patients or that will be doing so in the near future.
SciTech Connect
More, R M; Graziani, F R; Glosli, J; Surh, M
2009-06-15
Hot dense radiative (HDR) plasmas common to Inertial Confinement Fusion (ICF) and stellar interiors have high temperature (a few hundred eV to tens of keV), high density (tens to hundreds of g/cc) and high pressure (hundreds of Megabars to thousands of Gigabars). Typically, such plasmas undergo collisional, radiative, atomic and possibly thermonuclear processes. In order to describe HDR plasmas, computational physicists in ICF and astrophysics use atomic-scale microphysical models implemented in various simulation codes. Experimental validation of the models used to describe HDR plasmas are difficult to perform. Direct Numerical Simulation (DNS) of the many-body interactions of plasmas is a promising approach to model validation but, previous work either relies on the collisionless approximation or ignores radiation. We present four methods that attempt a new numerical simulation technique to address a currently unsolved problem: the extension of molecular dynamics to collisional plasmas including emission and absorption of radiation. The first method applies the Lienard-Weichert solution of Maxwell's equations for a classical particle whose motion is assumed to be known (section 3). The second method expands the electromagnetic field in normal modes (plane-waves in a box with periodic boundary-conditions) and solves the equation for wave amplitudes coupled to the particle motion (section 4). The third method is a hybrid MD/MC (molecular dynamics/Monte Carlo) method which calculates radiation emitted or absorbed by electron-ion pairs during close collisions (section 5). The fourth method is a generalization of the third method to include small clusters of particles emitting radiation during close encounters: one electron simultaneously hitting two ions, two electrons simultaneously hitting one ion, etc.(section 6). This approach is inspired by the Virial expansion method of equilibrium statistical mechanics.
2. Solar Energetic Particle Variations
NASA Technical Reports Server (NTRS)
Reames, D. V.
2003-01-01
In the largest solar energetic-particle (SEP) events, acceleration occurs at shock waves driven out from the Sun by coronal mass ejections (CMEs). In fact, the highest proton intensities directly measured near Earth at energies up to approximately 1 GeV occur at the time of passage of shocks, which arrive about a day after the CMEs leave the Sun. CME-driven shocks expanding across magnetic fields can fill over half of the heliosphere with SEPs. Proton-generated Alfven waves trap particles near the shock for efficient acceleration but also throttle the intensities at Earth to the streaming limit early in the events. At high energies, particles begin to leak from the shock and the spectrum rolls downward to form an energy-spectral 'knee' that can vary in energy from approximately 1 MeV to approximately 1 GeV in different events. All of these factors affect the radiation dose as a function of depth and latitude in the Earth's atmosphere and the risk to astronauts and equipment in space. SEP ionization of the polar atmosphere produces nitrates that precipitate to become trapped in the polar ice. Observations of nitrate deposits in ice cores reveal individual large SEP events and extend back approximately 400 years. Unlike sunspots, SEP events follow the approximately 80-100-year Gleissberg cycle rather faithfully and are now at a minimum in that cycle. The largest SEP event in the last 400 years appears to be related to the flare observed by Carrington in 1859, but the probability of SEP events with such large fluences falls off sharply because of the streaming limit.
Peach, Ken; Ekdahl, Carl
2014-02-01
4. The Auroral Particles experiment
NASA Technical Reports Server (NTRS)
1981-01-01
An instrument for the detection of particles in the energy range of 0.1 ev to 80 Kev was designed, built, tested, calibrated, and flown onboard the spacecraft ATS-6. Data from this instrument generated the following research: intensive studies of the plasma in the vicinity of the spacecraft; global variations of plasmas; correlative studies using either other spacecraft or ground based measurements; and studies of spacecraft interactions with ambient plasmas including charging, local electric fields due to differential charging, and active control of spacecraft potential. Results from this research are presented.
5. Particle acceleration in flares
NASA Technical Reports Server (NTRS)
Benz, Arnold O.; Kosugi, Takeo; Aschwanden, Markus J.; Benka, Steve G.; Chupp, Edward L.; Enome, Shinzo; Garcia, Howard; Holman, Gordon D.; Kurt, Victoria G.; Sakao, Taro
1994-01-01
Particle acceleration is intrinsic to the primary energy release in the impulsive phase of solar flares, and we cannot understand flares without understanding acceleration. New observations in soft and hard X-rays, gamma-rays and coherent radio emissions are presented, suggesting flare fragmentation in time and space. X-ray and radio measurements exhibit at least five different time scales in flares. In addition, some new observations of delayed acceleration signatures are also presented. The theory of acceleration by parallel electric fields is used to model the spectral shape and evolution of hard X-rays. The possibility of the appearance of double layers is further investigated.
6. Microgravity Particle Dynamics
NASA Technical Reports Server (NTRS)
Clark, Ivan O.; Johnson, Edward J.
1996-01-01
This research seeks to identify the experiment design parameters for future flight experiments to better resolve the effects of thermal and velocity gradients on gas-solid flows. By exploiting the reduced body forces and minimized thermal convection current of reduced gravity experiments, features of gas-solid flow normally masked by gravitationally induced effects can be studied using flow regimes unattainable under unigravity. This paper assesses the physical scales of velocity, length, time, thermal gradient magnitude, and velocity gradient magnitude likely to be involved in laminar gas-solid multiphase flight experiments for 1-100 micro-m particles.
7. Particle bed reactor modeling
NASA Technical Reports Server (NTRS)
Sapyta, Joe; Reid, Hank; Walton, Lew
1993-01-01
The topics are presented in viewgraph form and include the following: particle bed reactor (PBR) core cross section; PBR bleed cycle; fuel and moderator flow paths; PBR modeling requirements; characteristics of PBR and nuclear thermal propulsion (NTP) modeling; challenges for PBR and NTP modeling; thermal hydraulic computer codes; capabilities for PBR/reactor application; thermal/hydralic codes; limitations; physical correlations; comparison of predicted friction factor and experimental data; frit pressure drop testing; cold frit mask factor; decay heat flow rate; startup transient simulation; and philosophy of systems modeling.
8. Physics of windblown particles
NASA Technical Reports Server (NTRS)
Greeley, Ronald; Leach, Rodman; Marshall, John R.; White, Bruce; Iversen, James D.; Nickling, William G.; Gillette, Dale; Sorensen, Michael
1987-01-01
A laboratory facility proposed for the Space Station to investigate fundamental aspects of windblown particles is described. The experiments would take advantage of the environment afforded in earth orbit and would be an extension of research currently being conducted on the geology and physics of windblown sediments on earth, Mars, and Venus. Aeolian (wind) processes are reviewed in the planetary context, the scientific rational is given for specific experiments to be conducted, the experiment apparatus (the Carousel Wind Tunnel, or CWT) is described, and a plan presented for implementing the proposed research program.
9. Small Particle Pollutants
NASA Technical Reports Server (NTRS)
1976-01-01
NASA and the EPA are cooperating to measure particle size of all elements in aerosols from airports, coal-fired power stations, municipal waste incinerators, and other combustion aerosol sources. Langley intends to sample the air using its proton-induced x-ray emission technique initially developed to determine aerosols in jet-engine exhaust. Proton technique is important because no other rapid, nondestructive method now exists for measuring trace element compositions of massive amounts of air. Method can also analyze human tissue and hair samples to determine exposure to toxic elements.
10. Particle data reduction in Japan
NASA Technical Reports Server (NTRS)
Nakayama, Mitsushige
1987-01-01
The characterization of atomized particles generated by various atomizer and the mechanics of their evaporation and combustion processes were studied. The need existed for visualizing the internal structure of flames including evaporation and combustion processes as well as for a better way of understanding spray particle generation mechanisms and internal structures. A particle sizer based on Fraunhofer diffraction for detecting particle size and in-line Fraunhofer holograms for observation of local spray particles were used. A novel visualizing technique based on Computer Technology was developed and is discussed.
11. Apparatus for blending small particles
DOEpatents
Bradley, R.A.; Reese, C.R.; Sease, J.D.
1975-08-26
An apparatus is described for blending small particles and uniformly loading the blended particles in a receptacle. Measured volumes of various particles are simultaneously fed into a funnel to accomplish radial blending and then directed onto the apex of a conical splitter which collects the blended particles in a multiplicity of equal subvolumes. Thereafter the apparatus sequentially discharges the subvolumes for loading in a receptacle. A system for blending nuclear fuel particles and loading them into fuel rod molds is described in a preferred embodiment. (auth)
12. Synthesis of Biofunctional Janus Particles.
PubMed
Li, Binghui; Wang, Man; Chen, Kui; Cheng, Zhifeng; Chen, Gaojian; Zhang, Zexin
2015-06-01
Janus particles with anisotropic biofunctionalities are perfect models to mimic anisotropic architectures and directional interactions that occur in nature. It is therefore highly desirable to develop reliable and efficient methods to synthesize biofunctional Janus particles. Herein, a facile method combining seeded-emulsion polymerization and thiol-click chemistry has been developed to synthesize Janus particles with glucose moieties on one side. These biofunctional Janus particles show region-selective binding of protein, which represents a big step toward biomimicry, and demonstrates the potential of the bioJanus particles for targeted drug delivery and binding.
13. Dusty-Plasma Particle Accelerator
NASA Technical Reports Server (NTRS)
Foster, John E.
2005-01-01
A dusty-plasma apparatus is being investigated as means of accelerating nanometer- and micrometer-sized particles. Applications for the dusty-plasma particle accelerators fall into two classes: Simulation of a variety of rapidly moving dust particles and micrometeoroids in outer-space environments that include micrometeoroid streams, comet tails, planetary rings, and nebulae and Deposition or implantation of nanoparticles on substrates for diverse industrial purposes that could include hardening, increasing thermal insulation, altering optical properties, and/or increasing permittivities of substrate materials. Relative to prior apparatuses used for similar applications, dusty-plasma particle accelerators offer such potential advantages as smaller size, lower cost, less complexity, and increased particle flux densities. A dusty-plasma particle accelerator exploits the fact that an isolated particle immersed in plasma acquires a net electric charge that depends on the relative mobilities of electrons and ions. Typically, a particle that is immersed in a low-temperature, partially ionized gas, wherein the average kinetic energy of electrons exceeds that of ions, causes the particle to become negatively charged. The particle can then be accelerated by applying an appropriate electric field. A dusty-plasma particle accelerator (see figure) includes a plasma source such as a radio-frequency induction discharge apparatus containing (1) a shallow cup with a biasable electrode to hold the particles to be accelerated and (2) a holder for the substrate on which the particles are to impinge. Depending on the specific design, a pair of electrostatic-acceleration grids between the substrate and discharge plasma can be used to both collimate and further accelerate particles exiting the particle holder. Once exposed to the discharge plasma, the particles in the cup quickly acquire a negative charge. Application of a negative voltage pulse to the biasable electrode results in the
14. Radiation emission from small particles
Egan, W. G.; Hilgeman, T. W.
1984-04-01
Measurements have been made of the IR radiation from monodisperse optically absorbing spherical particles of di-2-ethylhexyl sebacate. The purpose was to validate the Mie emission theory for particles that are small compared with the radiation wavelength. In contradiction to the Mie theory, McGregor has theoretically concluded that radiation absorption or emission is not possible at wavelengths longer than pi times the square root of 2 times the particle diameter for spherical particles. The present results on monodisperse spherical particles of 3, 1, and 0.5 microns emitting at a wavelength of 3.4 microns support the Mie theory predictions.
15. Synthesis of Biofunctional Janus Particles.
PubMed
Li, Binghui; Wang, Man; Chen, Kui; Cheng, Zhifeng; Chen, Gaojian; Zhang, Zexin
2015-06-01
Janus particles with anisotropic biofunctionalities are perfect models to mimic anisotropic architectures and directional interactions that occur in nature. It is therefore highly desirable to develop reliable and efficient methods to synthesize biofunctional Janus particles. Herein, a facile method combining seeded-emulsion polymerization and thiol-click chemistry has been developed to synthesize Janus particles with glucose moieties on one side. These biofunctional Janus particles show region-selective binding of protein, which represents a big step toward biomimicry, and demonstrates the potential of the bioJanus particles for targeted drug delivery and binding. PMID:25858757
16. Interaction of Burning Metal Particles
NASA Technical Reports Server (NTRS)
Dreizin, Edward L.; Berman, Charles H.; Hoffmann, Vern K.
1999-01-01
Physical characteristics of the combustion of metal particle groups have been addressed in this research. The combustion behavior and interaction effects of multiple metal particles has been studied using a microgravity environment, which presents a unique opportunity to create an "aerosol" consisting of relatively large particles, i.e., 50-300 micrometer diameter. Combustion behavior of such an aerosol could be examined using methods adopted from well-developed single particle combustion research. The experiment included fluidizing relatively large (order of 100 micrometer diameter) uniform metal particles under microgravity and igniting such an "aerosol" using a hot wire igniter. The flame propagation and details of individual particle combustion and particle interaction have been studied using a high speed movie and video-imaging with cameras coupled with microscope lenses to resolve individual particles. Interference filters were used to separate characteristic metal and metal oxide radiation bands form the thermal black body radiation. Recorded flame images were digitized and employed to understand the processes occurring in the burning aerosol. The development of individual particle flames, merging or separation, and extinguishing as well as induced particle motion have been analyzed to identify the mechanisms governing these processes. Size distribution, morphology, and elemental compositions of combustion products were characterized and used to link the observed in this project aerosol combustion phenomena with the recently expanded mechanism of single metal particle combustion.
17. Interaction of Burning Metal Particles
NASA Technical Reports Server (NTRS)
Dreizin, Edward L.; Berman, Charles H.; Hoffmann, Vern K.
1999-01-01
Physical characteristics of the combustion of metal particle groups have been addressed in this research. The combustion behavior and interaction effects of multiple metal particles has been studied using a microgravity environment, which presents a unique opportunity to create an "aerosol" consisting of relatively large particles, i.e., 50-300 m diameter. Combustion behavior of such an aerosol could be examined using methods adopted from well-developed single particle combustion research. The experiment included fluidizing relatively large (order of 100 m diameter) uniform metal particles under microgravity and igniting such an "aerosol" using a hot wire igniter. The flame propagation and details of individual particle combustion and particle interaction have been studied using a high speed movie and video-imaging with cameras coupled with microscope lenses to resolve individual particles. Interference filters were used to separate characteristic metal and metal oxide radiation bands from the thermal black body radiation. Recorded flame images were digitized and various image processing techniques including flame position tracking, color separation, and pixel by pixel image comparison were employed to understand the processes occurring in the burning aerosol. The development of individual particle flames, merging or separation, and extinguishment as well as induced particle motion have been analyzed to identify the mechanisms governing these processes. Size distribution, morphology, and elemental compositions of combustion products were characterized and used to link the observed in this project aerosol combustion phenomena with the recently expanded mechanism of single metal particle combustion.
18. Classification of Volatile Engine Particles
SciTech Connect
Cheng, Mengdawn
2013-01-01
Volatile particles cannot be detected at the engine exhaust by an aerosol detector. They are formed when the exhaust is mixed with ambient air downstream. Lack of a precise definition of volatile engine particles has been an impediment to engine manufacturers and regulatory agencies involved in the development of an effective control strategy. It is beyond doubt that volatile particles from combustion sources contribute to the atmospheric particulate burden, and the effect of that contribution is a critical issue in the ongoing research in the areas of air quality and climate change. A new instrument, called volatile particle separator (VPS), has been developed. It utilizes a proprietary microporous metallic membrane to separate particles from vapors. VPS data were used in the development of a two-parameter function to quantitatively classify, for the first time, the volatilization behavior of engine particles. The value of parameter A describes the volatilization potential of an aerosol. A nonvolatile particle has a larger A-value than a volatile one. The value of parameter k, an effective evaporation energy barrier, is found to be much smaller for small engine particles than that for large engine particles. The VPS instrument provides a means beyond just being a volatile particle remover; it enables a numerical definition to characterize volatile engine particles.
19. Gyrokinetic particle simulation model
SciTech Connect
Lee, W.W.
1986-07-01
A new type of particle simulation model based on the gyrophase-averaged Vlasov and Poisson equations is presented. The reduced system, in which particle gyrations are removed from the equations of motion while the finite Larmor radius effects are still preserved, is most suitable for studying low frequency microinstabilities in magnetized plasmas. It is feasible to simulate an elongated system (L/sub parallel/ >> L/sub perpendicular/) with a three-dimensional grid using the present model without resorting to the usual mode expansion technique, since there is essentially no restriction on the size of ..delta..x/sub parallel/ in a gyrokinetic plasma. The new approach also enables us to further separate the time and spatial scales of the simulation from those associated with global transport through the use of multiple spatial scale expansion. Thus, the model can be a very efficient tool for studying anomalous transport problems related to steady-state drift-wave turbulence in magnetic confinement devices. It can also be applied to other areas of plasma physics.
20. Particle physics -- Future directions
SciTech Connect
Chris Quigg
2001-11-29
Wonderful opportunities await particle physics over the next decade, with the coming of the Large Hadron Collider at CERN to explore the 1-TeV scale (extending efforts at LEP and the Tevatron to unravel the nature of electroweak symmetry breaking) and many initiatives to develop our understanding of the problem of identity: what makes a neutrino a neutrino and a top quark a top quark. Here I have in mind the work of the B factories and the Tevatron collider on CP violation and the weak interactions of the b quark; the wonderfully sensitive experiments at Brookhaven, CERN, Fermilab, and Frascati on CP violation and rare decays of kaons; the prospect of definitive accelerator experiments on neutrino oscillations and the nature of the neutrinos; and a host of new experiments on the sensitivity frontier. We might even learn to read experiment for clues about the dimensionality of spacetime. If we are inventive enough, we may be able to follow this rich menu with the physics opportunities offered by a linear collider and a (muon storage ring) neutrino factory. I expect a remarkable flowering of experimental particle physics, and of theoretical physics that engages with experiment. I describe some of the great questions before us and the challenges of providing the instruments that will be needed to define them more fully and eventually to answer them.
1. Particle physics---Experimental
SciTech Connect
Lord, J.J.; Boynton, P.E.; Burnett, T.H.; Wilkes, R.J.
1991-08-21
We are continuing a research program in particle astrophysics and high energy experimental particle physics. We have joined the DUMAND Collaboration, which is constructing a deep undersea astrophysical neutrino detector near Hawaii. Studies of high energy hadronic interactions using emulsion chamber techniques were also continued, using balloon flight exposures to ultra-high cosmic ray nuclei (JACEE) and accelerator beams. As members of the DUMAND Collaboration, we have responsibility for development a construction of critical components for the deep undersea neutrino detector facility. We have designed and developed the acoustical positioning system required to permit reconstruction of muon tracks with sufficient precision to meet the astrophysical goals of the experiment. In addition, we are making significant contributions to the design of the database and triggering system to be used. Work has been continuing in other aspects of the study of multiparticle production processes in nuclei. We are participants in a joint US/Japan program to study nuclear interactions at energies two orders of magnitude greater than those of existing accelerators, using balloon-borne emulsion chambers. On one of the flights we found two nuclear interactions of multiplicity over 1000 -- one with a multiplicity of over 2000 and pseudorapidity density {approximately} 800 in the central region. At the statistical level of the JACEE experiment, the frequency of occurrence of such events is orders of magnitude too large. We have continued our ongoing program to study hadronic interactions in emulsions exposed to high energy accelerator beams.
2. Holographic particle detection
NASA Technical Reports Server (NTRS)
Bowen, Theodore
1988-01-01
The feasibility was studied of developing a novel particle track detector based on the detection of 1p-1s emission radiation from electron bubbles in liquid helium. The principles, design, construction, and initial testing of the detection system have been described in previous reports. The main obstacle encountered was the construction of the liquid-helium tight infrared windows. Despite numerous efforts in testing and redesigning the windows, the problem of window leakage at low temperature persisted. Due to limited time and resources, attention was switched to investigating the possibility of using room-temperature liquid as the detection medium. A possible mechanism was the detection of de-excitation radiation emitted from localized electrons in common liquids where electrons exhibit low mobilities, as suggested in the previous report. The purity of the liquid is critical in this method as the dissolved impurities (such as oxygen), even in trace amounts, will act as scavengers of electrons. Another mechanism is discussed whereby the formation of the superoxide ions by electron scavenging behavior of dissolved oxygen is exploited to detect the track of ionizing particles. An experiment to measure the ionization current produced in a liquid by a pulsed X-ray beam in order to study propertiies of the ions is also reported.
3. Energetic particles at Uranus
NASA Technical Reports Server (NTRS)
Cheng, Andrew F.; Krimigis, S. M.; Lanzerotti, L. J.
1991-01-01
The energetic particle measurements by the low-energy charged-particle and cosmic-ray instruments on the Voyager 2 spacecraft in the magnetosphere of Uranus are reviewed. Upstream events were observed outside the Uranian bow shock, probably produced by ion escape from the magnetosphere. Evidence of earthlike substorm activity was discovered within the Uranian magnetosphere. A proton injection event was observed within the orbit of Umbriel and proton events were observed in the magnetotail plasma-sheet boundary layer that are diagnostic of earthlike substorms. The magnetospheric composition is totally dominated by protons, with only a trace abundance of H(2+) and no evidence for He or heavy ions; the Uranian atmophere is argued to be the principal plasma source. Phase-space densities of medium energy protons show inward radial diffusion and are quantitatively similar to those observed at the earth, Jupiter, and Saturn. These findings and plasma wave data suggest the existence of structures analogous to the earth's plasmasphere and plasmapause.
4. Particle therapy for noncancer diseases
SciTech Connect
Bert, Christoph; Engenhart-Cabillic, Rita; Durante, Marco
2012-04-15
Radiation therapy using high-energy charged particles is generally acknowledged as a powerful new technique in cancer treatment. However, particle therapy in oncology is still controversial, specifically because it is unclear whether the putative clinical advantages justify the high additional costs. However, particle therapy can find important applications in the management of noncancer diseases, especially in radiosurgery. Extension to other diseases and targets (both cranial and extracranial) may widen the applications of the technique and decrease the cost/benefit ratio of the accelerator facilities. Future challenges in this field include the use of different particles and energies, motion management in particle body radiotherapy and extension to new targets currently treated by catheter ablation (atrial fibrillation and renal denervation) or stereotactic radiation therapy (trigeminal neuralgia, epilepsy, and macular degeneration). Particle body radiosurgery could be a future key application of accelerator-based particle therapy facilities in 10 years from today.
5. Echo particle image velocimetry.
PubMed
DeMarchi, Nicholas; White, Christopher
2012-12-27
The transport of mass, momentum, and energy in fluid flows is ultimately determined by spatiotemporal distributions of the fluid velocity field.(1) Consequently, a prerequisite for understanding, predicting, and controlling fluid flows is the capability to measure the velocity field with adequate spatial and temporal resolution.(2) For velocity measurements in optically opaque fluids or through optically opaque geometries, echo particle image velocimetry (EPIV) is an attractive diagnostic technique to generate "instantaneous" two-dimensional fields of velocity.(3,4,5,6) In this paper, the operating protocol for an EPIV system built by integrating a commercial medical ultrasound machine(7) with a PC running commercial particle image velocimetry (PIV) software(8) is described, and validation measurements in Hagen-Poiseuille (i.e., laminar pipe) flow are reported. For the EPIV measurements, a phased array probe connected to the medical ultrasound machine is used to generate a two-dimensional ultrasound image by pulsing the piezoelectric probe elements at different times. Each probe element transmits an ultrasound pulse into the fluid, and tracer particles in the fluid (either naturally occurring or seeded) reflect ultrasound echoes back to the probe where they are recorded. The amplitude of the reflected ultrasound waves and their time delay relative to transmission are used to create what is known as B-mode (brightness mode) two-dimensional ultrasound images. Specifically, the time delay is used to determine the position of the scatterer in the fluid and the amplitude is used to assign intensity to the scatterer. The time required to obtain a single B-mode image, t, is determined by the time it take to pulse all the elements of the phased array probe. For acquiring multiple B-mode images, the frame rate of the system in frames per second (fps) = 1/δt. (See 9 for a review of ultrasound imaging.) For a typical EPIV experiment, the frame rate is between 20-60 fps
6. Analysis of particle kinematics in spheronization via particle image velocimetry.
PubMed
Koester, Martin; Thommes, Markus
2013-02-01
Spheronization is a wide spread technique in pellet production for many pharmaceutical applications. Pellets produced by spheronization are characterized by a particularly spherical shape and narrow size distribution. The particle kinematic during spheronization is currently not well-understood. Therefore, particle image velocimetry (PIV) was implemented in the spheronization process to visualize the particle movement and to identify flow patterns, in order to explain the influence of various process parameters. The spheronization process of a common formulation was recorded with a high-speed camera, and the images were processed using particle image velocimetry software. A crosscorrelation approach was chosen to determine the particle velocity at the surface of the pellet bulk. Formulation and process parameters were varied systematically, and their influence on the particle velocity was investigated. The particle stream shows a torus-like shape with a twisted rope-like motion. It is remarkable that the overall particle velocity is approximately 10-fold lower than the tip speed of the friction plate. The velocity of the particle stream can be correlated to the water content of the pellets and the load of the spheronizer, while the rotation speed was not relevant. In conclusion, PIV was successfully applied to the spheronization process, and new insights into the particle velocity were obtained.
7. Interaction of Burning Metal Particles
NASA Technical Reports Server (NTRS)
Dreizin, Edward L.
1997-01-01
Multiple particle/droplet flames are ubiquitous in practical combustion systems, and thus the flame interaction processes are of great practical importance. This explains the strong current interest in interactive combustion phenomena. This research is aimed at the investigation of combustion parameters of microgravity model aerosols: relatively large uniform metal particles aerosolized in microgravity environment. An experiment consisting of creation and ignition of a metal multiparticle system in microgravity and high-speed video-recording of the combustion events will produce visual records of the development of individual particle flames, their interactions and the particle motion they induce simultaneously with the observation of the entire aerosol combustion process. Frame by frame analysis of the video-images taken using a high-speed movie camera will allow one to determine particle brightness temperatures and the decrease in particle diameter during combustion. Analysis of the experimental results and comparison with the results of single metal particle combustion experiments, conducted under similar microgravity conditions in the framework of a parallel program, will establish the relationship between single and multiple particle burning rates and combustion temperatures, concentrations at which the flame substructure forms rather than individual particle flames, efficiency of radiative heat transfer in metal aerosol combustion, what is the role of electrostatic forces in structuring the flame and the effect of that structure on the flame propagation rate. Although some details of fine particle aerosol clouds, such as the kinetics limited burning rate, radiative heat transfer in a system with a high specific surface, particle induced turbulence, etc., will probably not be very well simulated in the planned experiments, they are relatively well understood and can be accounted for using an adequate individual particle combustion model. On the other hand, the
8. Particle resuspension via human activity
Qian, Jing
This dissertation consists of three correlated parts that are related to particle resuspension from floorings in indoor environment. The term resuspension in this dissertation refers the re-entrainment of deposited particles into atmosphere via mechanic disturbances by human activity indoors, except where it is specified. The first part reviews the literature related to particle resuspension. Fundamental concepts and kinetics of resuspension of particles were extracted from previous studies. Suggestions for future research on indoor particle resuspension have been given based on the literature reviews and the findings of part 2 and part 3. The second part involved 54 resuspension experiments conducted in a room-scale environmental chamber. Three floorings types and two ventilation configurations were tested. Air exchange rate were fixed during the experiments, and the temperature/RH were monitored. The airborne particle concentration was measured by an array of optical particle counters (OPCs) in the chamber. Resuspension rates were estimated in size ranges of 0.8--1, 1.0--2.0, 2.0--5.0, and 5.0--10 mum ranging from 10-5--10 -2 hr-1, with higher resuspension rates associated with larger particles. Resuspension via walking activity varied from experiment to experiment. A "heavy and fast" walking style was associated with a higher resuspension rate than a less active style. Given the same floor loading of the test particles, resuspension rates for the carpeted floor were on the same order of magnitude but significantly higher than those for the hard floor. In the third part, an image analysis method (IAM) was adapted to characterize the particle distribution on fabric floorings. The IAM results showed the variability of particles loading on various carpets. The dust particles on fibers from ten carpets vary in sizes. The normal dust loading varies from house to house from 3.6x106 particles/cm2 to 8.2x106 particles/cm2. The dust particle number distribution for size
9. Anomalous dispersions of hedgehog' particles
Bahng, Joong Hwan; Yeom, Bongjun; Wang, Yichun; Tung, Siu On; Hoff, J. Damon; Kotov, Nicholas
2015-01-01
Hydrophobic particles in water and hydrophilic particles in oil aggregate, but can form colloidal dispersions if their surfaces are chemically camouflaged with surfactants, organic tethers, adsorbed polymers or other particles that impart affinity for the solvent and increase interparticle repulsion. A different strategy for modulating the interaction between a solid and a liquid uses surface corrugation, which gives rise to unique wetting behaviour. Here we show that this topographical effect can also be used to disperse particles in a wide range of solvents without recourse to chemicals to camouflage the particles' surfaces: we produce micrometre-sized particles that are coated with stiff, nanoscale spikes and exhibit long-term colloidal stability in both hydrophilic and hydrophobic media. We find that these hedgehog' particles do not interpenetrate each other with their spikes, which markedly decreases the contact area between the particles and, therefore, the attractive forces between them. The trapping of air in aqueous dispersions, solvent autoionization at highly developed interfaces, and long-range electrostatic repulsion in organic media also contribute to the colloidal stability of our particles. The unusual dispersion behaviour of our hedgehog particles, overturning the notion that like dissolves like, might help to mitigate adverse environmental effects of the use of surfactants and volatile organic solvents, and deepens our understanding of interparticle interactions and nanoscale colloidal chemistry.
10. The Particle Cleanliness Validation System
SciTech Connect
Stowers, I.F.; Ravizza, D.L.
2001-12-21
The Particle Cleanliness Validation System (PCVS) is a combination of a surface particle collection tool and a microscope based data, reduction system for determining the particle cleanliness of mechanical and optical surfaces at LLNL. Livermore is currently constructing the National Ignition Facility (NIF), a large 192 beam laser system for studying fusion physics. The laser is entirely enclosed. in aluminum and stainless steel vessels containing several environments; air, argon, and vacuum. It contains uncoated optics as well as hard dielectric coated and softer solgel coated optics which are, to varying degrees, sensitive to opaque particles, translucent particles, and molecular contamination. To quantify the particulate matter on structural surfaces during vendor cleaning and installation, a novel instrument has been developed to-both collect surface particles and to quantify the number and size distribution of these particles. The particles are collected on membrane filter paper which is ''swiped'' on a test surface for a proscribed distance to collect sufficient particles to significantly exceed the cleanliness of the filter paper. The swipe paper is then placed into a cassette for protection from further. contamination and transported to a microscope with x-y motorized stage and image analysis software, The surface of the swipe paper is scanned to determine both the background particle level of the paper, the cassette cover, and the portion of the paper which made contact with the test surface. The cumulative size distribution of the collected particles are displayed in size bins from 5 to 200 {micro}m. The quantity of particles exceeding 5 {micro}m is used to compute the IEST-STD-1246D cleanliness Level. Eight image analysis microscopes have been constructed for use with several dozen particle collection tools. About 30,000 cleanliness measurements have been taken to assure the clean construction and operation of the NIF laser system.
11. Ultrafine particles in cities.
PubMed
Kumar, Prashant; Morawska, Lidia; Birmili, Wolfram; Paasonen, Pauli; Hu, Min; Kulmala, Markku; Harrison, Roy M; Norford, Leslie; Britter, Rex
2014-05-01
Ultrafine particles (UFPs; diameter less than 100 nm) are ubiquitous in urban air, and an acknowledged risk to human health. Globally, the major source for urban outdoor UFP concentrations is motor traffic. Ongoing trends towards urbanisation and expansion of road traffic are anticipated to further increase population exposure to UFPs. Numerous experimental studies have characterised UFPs in individual cities, but an integrated evaluation of emissions and population exposure is still lacking. Our analysis suggests that the average exposure to outdoor UFPs in Asian cities is about four-times larger than that in European cities but impacts on human health are largely unknown. This article reviews some fundamental drivers of UFP emissions and dispersion, and highlights unresolved challenges, as well as recommendations to ensure sustainable urban development whilst minimising any possible adverse health impacts. PMID:24503484
12. Particle beam injection system
DOEpatents
Jassby, Daniel L.; Kulsrud, Russell M.
1977-01-01
This invention provides a poloidal divertor for stacking counterstreaming ion beams to provide high intensity colliding beams. To this end, method and apparatus are provided that inject high energy, high velocity, ordered, atomic deuterium and tritium beams into a lower energy, toroidal, thermal equilibrium, neutral, target plasma column that is magnetically confined along an endless magnetic axis in a strong restoring force magnetic field having helical field lines to produce counterstreaming deuteron and triton beams that are received bent, stacked and transported along the endless axis, while a poloidal divertor removes thermal ions and electrons all along the axis to increase the density of the counterstreaming ion beams and the reaction products resulting therefrom. By balancing the stacking and removal, colliding, strong focused particle beams, reaction products and reactions are produced that convert one form of energy into another form of energy.
13. Cooled particle accelerator target
DOEpatents
Degtiarenko, Pavel V.
2005-06-14
A novel particle beam target comprising: a rotating target disc mounted on a retainer and thermally coupled to a first array of spaced-apart parallel plate fins that extend radially inwardly from the retainer and mesh without physical contact with a second array of spaced-apart parallel plate fins that extend radially outwardly from and are thermally coupled to a cooling mechanism capable of removing heat from said second array of spaced-apart fins and located within the first array of spaced-apart parallel fins. Radiant thermal exchange between the two arrays of parallel plate fins provides removal of heat from the rotating disc. A method of cooling the rotating target is also described.
14. Ultrafine particles in cities.
PubMed
Kumar, Prashant; Morawska, Lidia; Birmili, Wolfram; Paasonen, Pauli; Hu, Min; Kulmala, Markku; Harrison, Roy M; Norford, Leslie; Britter, Rex
2014-05-01
Ultrafine particles (UFPs; diameter less than 100 nm) are ubiquitous in urban air, and an acknowledged risk to human health. Globally, the major source for urban outdoor UFP concentrations is motor traffic. Ongoing trends towards urbanisation and expansion of road traffic are anticipated to further increase population exposure to UFPs. Numerous experimental studies have characterised UFPs in individual cities, but an integrated evaluation of emissions and population exposure is still lacking. Our analysis suggests that the average exposure to outdoor UFPs in Asian cities is about four-times larger than that in European cities but impacts on human health are largely unknown. This article reviews some fundamental drivers of UFP emissions and dispersion, and highlights unresolved challenges, as well as recommendations to ensure sustainable urban development whilst minimising any possible adverse health impacts.
15. Theoretical Particle Astrophysics
SciTech Connect
Kamionkowski, Marc
2013-08-07
Abstract: Theoretical Particle Astrophysics The research carried out under this grant encompassed work on the early Universe, dark matter, and dark energy. We developed CMB probes for primordial baryon inhomogeneities, primordial non-Gaussianity, cosmic birefringence, gravitational lensing by density perturbations and gravitational waves, and departures from statistical isotropy. We studied the detectability of wiggles in the inflation potential in string-inspired inflation models. We studied novel dark-matter candidates and their phenomenology. This work helped advance the DoE's Cosmic Frontier (and also Energy and Intensity Frontiers) by finding synergies between a variety of different experimental efforts, by developing new searches, science targets, and analyses for existing/forthcoming experiments, and by generating ideas for new next-generation experiments.
16. Mitochondria-targeting particles
PubMed Central
Wongrakpanich, Amaraporn; Geary, Sean M; Joiner, Mei-ling A; Anderson, Mark E; Salem, Aliasger K
2015-01-01
Mitochondria are a promising therapeutic target for the detection, prevention and treatment of various human diseases such as cancer, neurodegenerative diseases, ischemia-reperfusion injury, diabetes and obesity. To reach mitochondria, therapeutic molecules need to not only gain access to specific organs, but also to overcome multiple barriers such as the cell membrane and the outer and inner mitochondrial membranes. Cellular and mitochondrial barriers can be potentially overcome through the design of mitochondriotropic particulate carriers capable of transporting drug molecules selectively to mitochondria. These particulate carriers or vectors can be made from lipids (liposomes), biodegradable polymers, or metals, protecting the drug cargo from rapid elimination and degradation in vivo. Many formulations can be tailored to target mitochondria by the incorporation of mitochondriotropic agents onto the surface and can be manufactured to desired sizes and molecular charge. Here, we summarize recently reported strategies for delivering therapeutic molecules to mitochondria using various particle-based formulations. PMID:25490424
17. Particle processing technology
Sakka, Yoshio
2014-02-01
In recent years, there has been strong demand for the development of novel devices and equipment that support advanced industries including IT/semiconductors, the environment, energy and aerospace along with the achievement of higher efficiency and reduced environmental impact. Many studies have been conducted on the fabrication of innovative inorganic materials with novel individual properties and/or multifunctional properties including electrical, dielectric, thermal, optical, chemical and mechanical properties through the development of particle processing. The fundamental technologies that are key to realizing such materials are (i) the synthesis of nanoparticles with uniform composition and controlled crystallite size, (ii) the arrangement/assembly and controlled dispersion of nanoparticles with controlled particle size, (iii) the precise structural control at all levels from micrometer to nanometer order and (iv) the nanostructural design based on theoretical/experimental studies of the correlation between the local structure and the functions of interest. In particular, it is now understood that the application of an external stimulus, such as magnetic energy, electrical energy and/or stress, to a reaction field is effective in realizing advanced particle processing [1-3]. This special issue comprises 12 papers including three review papers. Among them, seven papers are concerned with phosphor particles, such as silicon, metals, Si3N4-related nitrides, rare-earth oxides, garnet oxides, rare-earth sulfur oxides and rare-earth hydroxides. In these papers, the effects of particle size, morphology, dispersion, surface states, dopant concentration and other factors on the optical properties of phosphor particles and their applications are discussed. These nanoparticles are classified as zero-dimensional materials. Carbon nanotubes (CNT) and graphene are well-known one-dimensional (1D) and two-dimensional (2D) materials, respectively. This special issue also
18. Crystallography of ribosomal particles
Yonath, A.; Frolow, F.; Shoham, M.; Müssig, J.; Makowski, I.; Glotz, C.; Jahn, W.; Weinstein, S.; Wittmann, H. G.
1988-07-01
Several forms of three-dimensional crystals and two-dimensional sheets of intact ribosomes and their subunits have been obtained as a result of: (a) an extensive systematic investigation of the parameters involved in crystallization, (b) a development of an experimental procedure for controlling the volumes of the crystallization droplets, (c) a study of the nucleation process, and (d) introducing a delicate seeding procedure coupled with variations in the ratios of mono- and divalent ions in the crystallization medium. In all cases only biologically active particles could be crystallized, and the crystalline material retains its integrity and activity. Crystallographic data have been collected from crystals of 50S ribosomal subunits, using synchrotron radiation at temperatures between + 19 and - 180°C. Although at 4°C the higher resolution reflections decay within minutes in the synchrotron beam, at cryo-temperature there was hardly any radiation damage, and a complete set of data to about 6Åresolution could be collected from a single crystal. Heavy-atom clusters were used for soaking as well as for specific binding to the surface of the ribosomal subunits prior to crystallization. The 50S ribosomal subunits from a mutant of B. stearothermophilus which lacks the ribosomal protein BL11 crystallize isomorphously with in the native ones. Models, aimed to be used for low resolution phasing, have been reconstructed from two-dimensional sheets of 70S ribosomes and 50S subunits at 47 and 30Å, respectively. These models show the overall structure of these particles, the contact areas between the large and small subunits, the space where protein synthesis might take place and a tunnel which may provide the path for the nascent protein chain.
19. Morphological details in bloodstain particles.
PubMed
De Wael, K; Lepot, L
2015-01-01
During the commission of crimes blood can be transferred to the clothing of the offender or on other crime related objects. Bloodstain particles are sub-millimetre sized flakes that are lost from dried bloodstains. The nature of these red particles is easily confirmed using spectroscopic methods. In casework, bloodstain particles showing highly detailed morphological features were observed. These provided a rationale for a series of experiments described in this work. It was found that the "largest" particles are shed from blood deposited on polyester and polyamide woven fabrics. No particles are lost from the stains made on absorbent fabrics and from those made on knitted fabrics. The morphological features observed in bloodstain particles can provide important information on the substrates from which they were lost. PMID:25437904
20. Nanocarpets for Trapping Microscopic Particles
NASA Technical Reports Server (NTRS)
Noca, Flavio; Chen, Fei; Hunt, Brian; Bronikowski, Michael; Hoenk, Michael; Kowalczyk, Robert; Choi, Daniel
2004-01-01
Nanocarpets that is, carpets of carbon nanotubes are undergoing development as means of trapping microscopic particles for scientific analysis. Examples of such particles include inorganic particles, pollen, bacteria, and spores. Nanocarpets can be characterized as scaled-down versions of ordinary macroscopic floor carpets, which trap dust and other particulate matter, albeit not purposefully. Nanocarpets can also be characterized as mimicking both the structure and the particle-trapping behavior of ciliated lung epithelia, the carbon nanotubes being analogous to cilia. Carbon nanotubes can easily be chemically functionalized for selective trapping of specific particles of interest. One could, alternatively, use such other three-dimensionally-structured materials as aerogels and activated carbon for the purposeful trapping of microscopic particles. However, nanocarpets offer important advantages over these alternative materials: (1) Nanocarpets are amenable to nonintrusive probing by optical means; and (2) Nanocarpets offer greater surface-to-volume ratios.
1. Fuzzy logic particle tracking velocimetry
NASA Technical Reports Server (NTRS)
Wernet, Mark P.
1993-01-01
Fuzzy logic has proven to be a simple and robust method for process control. Instead of requiring a complex model of the system, a user defined rule base is used to control the process. In this paper the principles of fuzzy logic control are applied to Particle Tracking Velocimetry (PTV). Two frames of digitally recorded, single exposure particle imagery are used as input. The fuzzy processor uses the local particle displacement information to determine the correct particle tracks. Fuzzy PTV is an improvement over traditional PTV techniques which typically require a sequence (greater than 2) of image frames for accurately tracking particles. The fuzzy processor executes in software on a PC without the use of specialized array or fuzzy logic processors. A pair of sample input images with roughly 300 particle images each, results in more than 200 velocity vectors in under 8 seconds of processing time.
2. Simulating Ice Particle Melting using Smooth Particle Hydrodynamics
Kuo, Kwo-Sen; Pelissier, Craig
2015-04-01
To measure precipitation from space requires an accurate estimation of the collective scattering properties of particles suspended in a precipitating column. It is well known that the complicated and typically unknowable shapes of the solid precipitation particles cause much uncertainty in the retrievals involving such particles. This remote-sensing problem becomes even more difficult with the "melting layer" containing partially melted ice particles, where both the geometric shape and liquid-solid fraction of the hydrometeors are variables.. For the scattering properties of these particles depend not only on their shapes, but also their melt-water fraction,and the spatial distribution of liquid and ice within. To obtain an accurate estimation thus requires a set of "realistic" particle geometries and a method to determine the melt-water distribution at various stages in the melting process. Once this is achieved, a suitable method can be used to compute the scattering properties. In previous work, the growth of a set of astoundingly realistic ice particles has been simulated using the "Snowfake" algorithm of Gravner and Griffeath. To simulate the melting process of these particles, the method of Smooth Particle Hydrodynamics (SPH) is used. SPH is a mesh-less particle-based approach where kinematic and thermal dynamics is controlled entirely through two-body interactions between neighboring SPH particles. An important property of SPH is that the interaction at boundaries between air/ice/water is implicitly taken care of. This is crucial for this work since those boundaries are complex and vary throughout the melting process. We present the SPH implementation and a simulation, using highly parallel Graphic Processing Units (GPUs), with ~1 million SPH particles to represent one of the generated ice particle geometries. We plan to use this method, especially its parallelized version, to simulate the melting of all the "Snowfake" particles (~10,000 of them) in our
3. Particle dynamics and particle-cell interaction in microfluidic systems
Stamm, Matthew T.
Particle-laden flow in a microchannel resulting in aggregation of microparticles was investigated to determine the dependence of the cluster growth rate on the following parameters: suspension void fraction, shear strain rate, and channel-height to particle-diameter ratio. The growth rate of an average cluster was found to increase linearly with suspension void fraction, and to obey a power-law relationships with shear strain rate as S 0.9 and channel-height to particle-diameter ratio as (h/d )--3.5. Ceramic liposomal nanoparticles and silica microparticles were functionalized with antibodies that act as targeting ligands. The bio-functionality and physical integrity of the cerasomes were characterized. Surface functionalization allows cerasomes to deliver drugs with selectivity and specificity that is not possible using standard liposomes. The functionalized particle-target cell binding process was characterized using BT-20 breast cancer cells. Two microfluidic systems were used; one with both species in suspension, the other with cells immobilized inside a microchannel and particle suspension as the mobile phase. Effects of incubation time, particle concentration, and shear strain rate on particle-cell binding were investigated. With both species in suspension, the particle-cell binding process was found to be reasonably well-described by a first-order model. Particle desorption and cellular loss of binding affinity in time were found to be negligible; cell-particle-cell interaction was identified as the limiting mechanism in particle-cell binding. Findings suggest that separation of a bound particle from a cell may be detrimental to cellular binding affinity. Cell-particle-cell interactions were prevented by immobilizing cells inside a microchannel. The initial stage of particle-cell binding was investigated and was again found to be reasonably well-described by a first-order model. For both systems, the time constant was found to be inversely proportional to
4. Particle cloud mixing in microgravity
NASA Technical Reports Server (NTRS)
Ross, H.; Facca, L.; Tangirala, V.; Berlad, A. L.
1989-01-01
Quasi-steady flame propagation through clouds of combustible particles requires quasi-steady transport properties and quasi-steady particle number density. Microgravity conditions may be employed to help achieve the conditions of quiescent, uniform clouds needed for such combustion studies. Joint experimental and theoretical NASA-UCSD studies were concerned with the use of acoustic, electrostatic, and other methods of dispersion of fuel particulates. Results of these studies are presented for particle clouds in long cylindrical tubes.
5. Quark matter or new particles?
NASA Technical Reports Server (NTRS)
Michel, F. Curtis
1988-01-01
It has been argued that compression of nuclear matter to somewhat higher densities may lead to the formation of stable quark matter. A plausible alternative, which leads to radically new astrophysical scenarios, is that the stability of quark matter simply represents the stability of new particles compounded of quarks. A specific example is the SU(3)-symmetric version of the alpha particle, composed of spin-zero pairs of each of the baryon octet (an 'octet' particle).
6. Quantitative wave-particle duality
Qureshi, Tabish
2016-07-01
The complementary wave and particle character of quantum objects (or quantons) was pointed out by Niels Bohr. This wave-particle duality, in the context of the two-slit experiment, is here described not just as two extreme cases of wave and particle characteristics, but in terms of quantitative measures of these characteristics, known to follow a duality relation. A very simple and intuitive derivation of a closely related duality relation is presented, which should be understandable to the introductory student.
7. Photocatalytic/Magnetic Composite Particles
NASA Technical Reports Server (NTRS)
Wu, Chang-Yu; Goswami, Yogi; Garretson, Charles; Andino, Jean; Mazyck, David
2007-01-01
Photocatalytic/magnetic composite particles have been invented as improved means of exploiting established methods of photocatalysis for removal of chemical and biological pollutants from air and water. The photocatalytic components of the composite particles are formulated for high levels of photocatalytic activity, while the magnetic components make it possible to control the movements of the particles through the application of magnetic fields. The combination of photocatalytic and magnetic properties can be exploited in designing improved air- and water treatment reactors.
8. Particle plasmons: Why shape matters
Barnes, William L.
2016-08-01
Simple analytic expressions for the polarizability of metallic nanoparticles are in wide use in the field of plasmonics, but their origins are not obvious. In this article, expressions for the polarizability of a particle are derived in the quasistatic limit in a manner that allows the physical origin of the terms to be clearly seen. The discussion is tutorial in nature, with particular attention given to the role of particle shape since this is a controlling factor in particle plasmon resonances.
9. Trajectory dependent particle response for anisotropic mono domain particles in magnetic particle imaging
Graeser, M.; Bente, K.; Neumann, A.; Buzug, T. M.
2016-02-01
In magnetic particle imaging, scanners use different spatial sampling techniques to cover the field of view (FOV). As spatial encoding is realized by a selective low field region (a field-free-point, or field-free-line), this region has to be moved through the FOV on specific sampling trajectories. To achieve these trajectories complex time dependent magnetic fields are necessary. Due to the superposition of the selection field and the homogeneous time dependent fields, particles at different spatial positions experience different field sequences. As a result, the dynamic behaviour of those particles can be strongly spatially dependent. So far, simulation studies that determined the trajectory quality have used the Langevin function to model the particle response. This however, neglects the dynamic relaxation of the particles, which is highly affected by magnetic anisotropy. More sophisticated models based on stochastic differential equations that include these effects were only used for one dimensional excitation. In this work, a model based on stochastic differential equations is applied to two-dimensional trajectory field sequences, and the effects of these field sequences on the particle response are investigated. The results show that the signal of anisotropic particles is not based on particle parameters such as size and shape alone, but is also determined by the field sequence that a particle ensemble experiences at its spatial position. It is concluded, that the particle parameters can be optimized in terms of the used trajectory.
10. Air agglomeration of hydrophobic particles
SciTech Connect
Drzymala, J.; Wheelock, T.D.
1995-12-31
The agglomeration of hydrophobic particles in an aqueous suspension was accomplished by introducing small amounts of air into the suspension while it was agitated vigorously. The extent of aggregation was proportional both to the air to solids ratio and to the hydrophobicity of the solids. For a given air/solids ratio, the extent of aggregation of different materials increased in the following order: graphite, gilsonite, coal coated with heptane, and Teflon. The structure of agglomerates produced from coarse Teflon particles differed noticeably from the structure of bubble-particle aggregates produced from smaller, less hydrophobic particles.
11. In Situ Solid Particle Generator
NASA Technical Reports Server (NTRS)
Agui, Juan H.; Vijayakumar, R.
2013-01-01
Particle seeding is a key diagnostic component of filter testing and flow imaging techniques. Typical particle generators rely on pressurized air or gas sources to propel the particles into the flow field. Other techniques involve liquid droplet atomizers. These conventional techniques have drawbacks that include challenging access to the flow field, flow and pressure disturbances to the investigated flow, and they are prohibitive in high-temperature, non-standard, extreme, and closed-system flow conditions and environments. In this concept, the particles are supplied directly within a flow environment. A particle sample cartridge containing the particles is positioned somewhere inside the flow field. The particles are ejected into the flow by mechanical brush/wiper feeding and sieving that takes place within the cartridge chamber. Some aspects of this concept are based on established material handling techniques, but they have not been used previously in the current configuration, in combination with flow seeding concepts, and in the current operational mode. Unlike other particle generation methods, this concept has control over the particle size range ejected, breaks up agglomerates, and is gravity-independent. This makes this device useful for testing in microgravity environments.
12. High field gradient particle accelerator
DOEpatents
Nation, J.A.; Greenwald, S.
1989-05-30
A high electric field gradient electron accelerator utilizing short duration, microwave radiation, and capable of operating at high field gradients for high energy physics applications or at reduced electric field gradients for high average current intermediate energy accelerator applications is disclosed. Particles are accelerated in a smooth bore, periodic undulating waveguide, wherein the period is so selected that the particles slip an integral number of cycles of the r.f. wave every period of the structure. This phase step of the particles produces substantially continuous acceleration in a traveling wave without transverse magnetic or other guide means for the particle. 10 figs.
13. High field gradient particle accelerator
DOEpatents
Nation, John A.; Greenwald, Shlomo
1989-01-01
A high electric field gradient electron accelerator utilizing short duration, microwave radiation, and capable of operating at high field gradients for high energy physics applications or at reduced electric field gradients for high average current intermediate energy accelerator applications. Particles are accelerated in a smooth bore, periodic undulating waveguide, wherein the period is so selected that the particles slip an integral number of cycles of the r.f. wave every period of the structure. This phase step of the particles produces substantially continuous acceleration in a traveling wave without transverse magnetic or other guide means for the particle.
14. Continuous flow dielectrophoretic particle concentrator
DOEpatents
Cummings, Eric B.
2007-04-17
A continuous-flow filter/concentrator for separating and/or concentrating particles in a fluid is disclosed. The filter is a three-port device an inlet port, an filter port and a concentrate port. The filter separates particles into two streams by the ratio of their dielectrophoretic mobility to their electrokinetic, advective, or diffusive mobility if the dominant transport mechanism is electrokinesis, advection, or diffusion, respectively.Also disclosed is a device for separating and/or concentrating particles by dielectrophoretic trapping of the particles.
15. Aging fingerprints in combustion particles
Zelenay, V.; Mooser, R.; Tritscher, T.; Křepelová, A.; Heringa, M. F.; Chirico, R.; Prévôt, A. S. H.; Weingartner, E.; Baltensperger, U.; Dommen, J.; Watts, B.; Raabe, J.; Huthwelker, T.; Ammann, M.
2011-05-01
Soot particles can significantly influence the Earth's climate by absorbing and scattering solar radiation as well as by acting as cloud condensation nuclei. However, despite their environmental (as well as economic and political) importance, the way these properties are affected by atmospheric processing is still a subject of discussion. In this work, soot particles emitted from two different cars, a EURO 2 transporter, a EURO 3 passenger vehicle, and a wood stove were investigated on a single-particle basis. The emitted exhaust, including the particulate and the gas phase, was processed in a smog chamber with artificial solar radiation. Single particle specimens of both unprocessed and aged soot were characterized using x-ray absorption spectroscopy and scanning electron microscopy. Comparison of the spectra from the unprocessed and aged soot particles revealed changes in the carbon functional group content, such as that of carboxylic carbon, which can be ascribed to both the condensation of secondary organic compounds on the soot particles and oxidation of primary soot particles upon photochemical aging. Changes in the morphology and size of the single soot particles were also observed upon aging. Furthermore, we show that the soot particles take up water in humid environments and that their water uptake capacity increases with photochemical aging.
16. Fiber Optic Particle Concentration Sensor
Boiarski, Anthony A.
1986-01-01
A particle concentration sensor would be useful in many industrial process monitoring applications where in situ measurements are required. These applications include determination of butterfat content of milk, percent insolubles in engine oil, and cell concentration in a bioreactor. A fiber optic probe was designed to measure particle concentration by monitoring the scattered light from the particle-light interaction at the end of a fiber-optic-based probe tip. Linear output was obtained from the sensor over a large range of particle loading for a suspension of 1.7 μm polystyrene microspheres in water and E. coli bacteria in a fermenter.
17. Interplanetary Dust Particles
2003-12-01
micrometeorites) containing layer silicates indicative of parent-body aqueous alteration and the more distant anhydrous P and D asteroids exhibiting no evidence of (aqueous) alteration (Gradie and Tedesco, 1982). This gradation in spectral properties presumably extends several hundred AU out to the Kuiper belt, the source region of most short-period comets, where the distinction between comets and outer asteroids may simply be one of the orbital parameters ( Luu, 1993; Brownlee, 1994; Jessberger et al., 2001). The mineralogy and petrography of meteorites provides direct confirmation of aqueous alteration, melting, fractionation, and thermal metamorphism among the inner asteroids ( Zolensky and McSween, 1988; Farinella et al., 1993; Brearley and Jones, 1998). Because the most common grains in the ISM (silicates and carbonaceous matter) are not as refractory as those found in meteorites, it is unlikely that they have survived in significant quantities in meteorites. Despite a prolonged search, not a single presolar silicate grain has yet been identified in any meteorite.Interplanetary dust particles (IDPs) are the smallest and most fine-grained meteoritic objects available for laboratory investigation (Figure 1). In contrast to meteorites, IDPs are derived from a broad range of dust-producing bodies extending from the inner main belt of the asteroids to the Kuiper belt (Flynn, 1996, 1990; Dermott et al., 1994; Liou et al., 1996). After release from their asteroidal or cometary parent bodies the orbits of IDPs evolve by Poynting-Robertson (PR) drag (the combined influence of light pressure and radiation drag) ( Dermott et al., 2001). Irrespective of the location of their parent bodies nearly all IDPs under the influence of PR drag can eventually reach Earth-crossing orbits. IDPs are collected in the stratosphere at 20-25 km altitude using NASA ER2 aircraft ( Sandford, 1987; Warren and Zolensky, 1994). Laboratory measurements of implanted rare gases, solar flare tracks ( Figure 2
18. Particle splitting in smoothed particle hydrodynamics based on Voronoi diagram
Chiaki, Gen; Yoshida, Naoki
2015-08-01
We present a novel method for particle splitting in smoothed particle hydrodynamics simulations. Our method utilizes the Voronoi diagram for a given particle set to determine the position of fine daughter particles. We perform several test simulations to compare our method with a conventional splitting method in which the daughter particles are placed isotropically over the local smoothing length. We show that, with our method, the density deviation after splitting is reduced by a factor of about 2 compared with the conventional method. Splitting would smooth out the anisotropic density structure if the daughters are distributed isotropically, but our scheme allows the daughter particles to trace the original density distribution with length-scales of the mean separation of their parent. We apply the particle splitting to simulations of the primordial gas cloud collapse. The thermal evolution is accurately followed to the hydrogen number density of 1012 cm-3. With the effective mass resolution of ˜10-4 M⊙ after the multistep particle splitting, the protostellar disc structure is well resolved. We conclude that the method offers an efficient way to simulate the evolution of an interstellar gas and the formation of stars.
19. Particle transport and deposition: basic physics of particle kinetics
PubMed Central
Tsuda, Akira; Henry, Frank S.; Butler, James P.
2015-01-01
The human body interacts with the environment in many different ways. The lungs interact with the external environment through breathing. The enormously large surface area of the lung with its extremely thin air-blood barrier is exposed to particles suspended in the inhaled air. Whereas the particle-lung interaction may cause deleterious effects on health if the inhaled pollutant aerosols are toxic, this interaction can be beneficial for disease treatment if the inhaled particles are therapeutic aerosolized drug. In either case, an accurate estimation of dose and sites of deposition in the respiratory tract is fundamental to understanding subsequent biological response, and the basic physics of particle motion and engineering knowledge needed to understand these subjects is the topic of this chapter. A large portion of this chapter deals with three fundamental areas necessary to the understanding of particle transport and deposition in the respiratory tract. These are: 1) the physical characteristics of particles, 2) particle behavior in gas flow, and 3) gas flow patterns in the respiratory tract. Other areas, such as particle transport in the developing lung and in the diseased lung are also considered. The chapter concludes with a summary and a brief discussion of areas of future research. PMID:24265235
20. Particle transport and deposition: basic physics of particle kinetics.
PubMed
Tsuda, Akira; Henry, Frank S; Butler, James P
2013-10-01
The human body interacts with the environment in many different ways. The lungs interact with the external environment through breathing. The enormously large surface area of the lung with its extremely thin air-blood barrier is exposed to particles suspended in the inhaled air. The particle-lung interaction may cause deleterious effects on health if the inhaled pollutant aerosols are toxic. Conversely, this interaction can be beneficial for disease treatment if the inhaled particles are therapeutic aerosolized drugs. In either case, an accurate estimation of dose and sites of deposition in the respiratory tract is fundamental to understanding subsequent biological response, and the basic physics of particle motion and engineering knowledge needed to understand these subjects is the topic of this article. A large portion of this article deals with three fundamental areas necessary to the understanding of particle transport and deposition in the respiratory tract. These are: (i) the physical characteristics of particles, (ii) particle behavior in gas flow, and (iii) gas-flow patterns in the respiratory tract. Other areas, such as particle transport in the developing lung and in the diseased lung are also considered. The article concludes with a summary and a brief discussion of areas of future research.
1. Multiswarm Particle Swarm Optimization with Transfer of the Best Particle
PubMed Central
Wei, Xiao-peng; Zhang, Jian-xia; Zhou, Dong-sheng; Zhang, Qiang
2015-01-01
We propose an improved algorithm, for a multiswarm particle swarm optimization with transfer of the best particle called BMPSO. In the proposed algorithm, we introduce parasitism into the standard particle swarm algorithm (PSO) in order to balance exploration and exploitation, as well as enhancing the capacity for global search to solve nonlinear optimization problems. First, the best particle guides other particles to prevent them from being trapped by local optima. We provide a detailed description of BMPSO. We also present a diversity analysis of the proposed BMPSO, which is explained based on the Sphere function. Finally, we tested the performance of the proposed algorithm with six standard test functions and an engineering problem. Compared with some other algorithms, the results showed that the proposed BMPSO performed better when applied to the test functions and the engineering problem. Furthermore, the proposed BMPSO can be applied to other nonlinear optimization problems. PMID:26345200
2. Particle deposition in ventilation ducts
SciTech Connect
Sippola, Mark R.
2002-09-01
Exposure to airborne particles is detrimental to human health and indoor exposures dominate total exposures for most people. The accidental or intentional release of aerosolized chemical and biological agents within or near a building can lead to exposures of building occupants to hazardous agents and costly building remediation. Particle deposition in heating, ventilation and air-conditioning (HVAC) systems may significantly influence exposures to particles indoors, diminish HVAC performance and lead to secondary pollutant release within buildings. This dissertation advances the understanding of particle behavior in HVAC systems and the fates of indoor particles by means of experiments and modeling. Laboratory experiments were conducted to quantify particle deposition rates in horizontal ventilation ducts using real HVAC materials. Particle deposition experiments were conducted in steel and internally insulated ducts at air speeds typically found in ventilation ducts, 2-9 m/s. Behaviors of monodisperse particles with diameters in the size range 1-16 {micro}m were investigated. Deposition rates were measured in straight ducts with a fully developed turbulent flow profile, straight ducts with a developing turbulent flow profile, in duct bends and at S-connector pieces located at duct junctions. In straight ducts with fully developed turbulence, experiments showed deposition rates to be highest at duct floors, intermediate at duct walls, and lowest at duct ceilings. Deposition rates to a given surface increased with an increase in particle size or air speed. Deposition was much higher in internally insulated ducts than in uninsulated steel ducts. In most cases, deposition in straight ducts with developing turbulence, in duct bends and at S-connectors at duct junctions was higher than in straight ducts with fully developed turbulence. Measured deposition rates were generally higher than predicted by published models. A model incorporating empirical equations based on
3. Single Particle Difraction at FLASH
SciTech Connect
Bogan, M.; Boutet, S.; Starodub, Dmitri; Decorwin-Martin, Philippe; Chapman, H.; Bajt, S.; Schulz, J.; Hajdu, Janos; Seibert, M.M.; Iwan, Bianca; Timneanu, Nicusor; Marchesini, Stefano; Barty, Anton; Benner, W.Henry; Frank, Matthias; Hau-Riege, Stefan P.; Woods, Bruce; Rohner, Urs; /Tofwerk AG, Thun
2010-06-11
Single-pulse coherent diffraction patterns have been collected from randomly injected single particles with a soft X-ray free-electron laser (FEL). The intense focused FEL pulse gives a high-resolution low-noise coherent diffraction pattern of the object before that object turns into a plasma and explodes. A diffraction pattern of a single particle will only be recorded when the particle arrival into the FEL interaction region coincides with FEL pulse arrival and detector integration. The properties of the experimental apparatus coinciding with these three events set the data acquisition rate. For our single particle FLASH diffraction imaging experiments: (1) an aerodynamic lens stack prepared a particle beam that consisted of particles moving at 150-200 m/s positioned randomly in space and time, (2) the 10 fs long FEL pulses were delivered at a fixed rate, and (3) the detector was set to integrate and readout once every two seconds. The effect of these experimental parameters on the rate of data acquisition using randomly injected particles will be discussed. Overall, the ultrashort FEL pulses do not set the limit of the data acquisition, more important is the effective interaction time of the particle crossing the FEL focus, the pulse sequence structure and the detector readout rate. Example diffraction patterns of randomly injected ellipsoidal iron oxide nanoparticles in different orientations are presented. This is the first single particle diffraction data set of identical particles in different orientations collected on a shot-to-shot basis. This data set will be used to test algorithms for recovering 3D structure from single particle diffraction.
4. The Particle Theory of Matter
ERIC Educational Resources Information Center
Widick, Paul R.
1969-01-01
Described are activities that are designed to help elementary children understand the possibility of the particle theory of matter. Children work with beads, marbles, B-B shot and sand; by mixing these materials and others they are led to see that it is highly possible for the existence of particles which are not visible. (BR)
5. Janus molecularly imprinted polymer particles.
PubMed
Huang, Chuixiu; Shen, Xiantao
2014-03-11
By combining the specific molecular recognition capability of MIPs and the asymmetric structure of Janus particles, the Janus MIP particles which were synthesized via a wax-water Pickering emulsion showed attractive capabilities as self-propelled transporters for controlled drug delivery. PMID:24469062
6. Janus molecularly imprinted polymer particles.
PubMed
Huang, Chuixiu; Shen, Xiantao
2014-03-11
By combining the specific molecular recognition capability of MIPs and the asymmetric structure of Janus particles, the Janus MIP particles which were synthesized via a wax-water Pickering emulsion showed attractive capabilities as self-propelled transporters for controlled drug delivery.
7. Particle acceleration by the sun
NASA Technical Reports Server (NTRS)
Lin, R. P.
1986-01-01
A review is given of the analysis of new observations of energetic particles and energetic secondary emissions obtained over the solar maxium (approx. 1980) by the Solar Maximum mission, Hinotori, the international Sun-Earth Explorer, Helios, Explorer satellites, and Voyager spacecraft. Solar energetic particle events observed in space, He(3)- rich events, solar gamma rays and neutrons, and solar neutrinos are discussed.
8. Genotoxicity of poorly soluble particles.
PubMed
Schins, Roel P F; Knaapen, Ad M
2007-01-01
Poorly soluble particles such as TiO2, carbon black, and diesel exhaust particles have been evaluated for their genotoxicity using both in vitro and in vivo assays, since inhalation of these compounds by rats at high concentrations has been found to lead to tumor formation. Two principle modes of genotoxic action can be considered for particles, referred to as primary and secondary genotoxicity. Primary genotoxicity is defined as genetic damage elicited by particles in the absence of pulmonary inflammation, whereas secondary genotoxicity implies a pathway of genetic damage resulting from the oxidative DNA attack by reactive oxygen/nitrogen species (ROS/RNS), generated during particle-elicited inflammation. Conceptually, primary genotoxicity might operate via various mechanisms, such as the actions of ROS (e.g., as generated from reactive particle surfaces), or DNA-adduct formation by reactive metabolites of particle-associated organic compounds (e.g., polycyclic aromatic hydrocarbons). Currently available literature data, however, merely indicate that the tumorigenesis of poorly soluble particles involves a mechanism of secondary genotoxicity. However, further research is urgently required, since (1) causality between pulmonary inflammation and genotoxicity has not yet been established, and (2) effects of inflammation on fundamental DNA damage responses that orchestrate mutagenesis and carcinogenic outcome,that is, cell cycle arrest, DNA repair, proliferation, and apoptosis, are currently poorly understood. PMID:17886067
9. Build Your Own Particle Sensor
EPA Science Inventory
This is an information packet explaining an educational outreach activity, where the participant does some simple electronics with low cost components to build a particle sensor that can turn one to three small lights on based upon the detected concentration of particles.
10. Particle pressures in fluidized beds
SciTech Connect
Campbell, C.S.; Rahman, K.; Hu, X.; Jin, C.; Potapov, A.V.
1992-01-01
This is an experimental project to make detailed measurements of the particle pressures generated in fluidized beds. The focus lies in two principle areas: (1) the particle pressure distribution around single bubbles rising in a two-dimensional gas-fluidized bed and (2) the particle pressures measured in liquid-fluidized beds. This first year has largely been to constructing the experiments The design of the particle pressure probe has been improved and tested. A two-dimensional gas-fluidized bed has been constructed in order to measure the particle pressure generated around injected bubbles. The probe is also being adapted to work in a liquid fluidized bed. Finally, a two-dimensional liquid fluidized bed is also under construction. Preliminary measurements show that the majority of the particle pressures are generated in the wake of a bubble. However, the particle pressures generated in the liquid bed appear to be extremely small. Finally, while not directly associated with the particle pressure studies, some NERSC supercomputer time was granted alongside this project. This is being used to make large scale computer simulation of the flow of granular materials in hoppers.
11. The Particle--Motion Problem.
ERIC Educational Resources Information Center
Demana, Franklin; Waits, Bert K.
1993-01-01
Discusses solutions to real-world linear particle-motion problems using graphing calculators to simulate the motion and traditional analytic methods of calculus. Applications include (1) changing circular or curvilinear motion into linear motion and (2) linear particle accelerators in physics. (MDH)
12. Fluorescent Particles For Flow Testing
NASA Technical Reports Server (NTRS)
Bonnell, Jeremy L.; Stern, Susan M.; Torkelson, Jan R.
1995-01-01
Small alumina spheres coated with fluorescent dye used in flow testing of transparent plastic model of check valve. Entrained fluroescent particles make flows visible. After completion of flow test, particles remaining in valve easily detectable and removed for measurement of their sizes.
13. Particle impingement in SRM nozzles
Ikeda, Hirohide; Tanno, Haruhito; Tokudome, Shinichiro; Kohno, Masahiro
It is experimentally shown that an improved two-phase flow program can well predict the alumina particle impingement location in small rocket motor nozzles as well as motor performance. The size distribution of particles in the nozzle flow is well characterized by a log-normal distribution. The program has achieved sufficient accuracy of prediction to be an effective nozzle contouring design tool.
14. Research in particles and fields
NASA Technical Reports Server (NTRS)
Vogt, R. E.; Buffington, A.; Davis, L., Jr.; Stone, E. C.
1980-01-01
The astrophysical aspects of cosmic and gamma rays and the radiation environment of the Earth and other planets investigated by means of energetic particle detector systems flown on spacecraft and balloons are discussed. The theory of particles and fields in space is also addressed with particular emphasis on models of Saturn's magnetic field.
15. High spatial resolution particle detectors
DOEpatents
Boatner, Lynn A.; Mihalczo, John T.
2015-10-13
Disclosed below are representative embodiments of methods, apparatus, and systems for detecting particles, such as radiation or charged particles. One exemplary embodiment disclosed herein is particle detector comprising an optical fiber with a first end and second end opposite the first end. The optical fiber of this embodiment further comprises a doped region at the first end and a non-doped region adjacent to the doped region. The doped region of the optical fiber is configured to scintillate upon interaction with a target particle, thereby generating one or more photons that propagate through the optical fiber and to the second end. Embodiments of the disclosed technology can be used in a variety of applications, including associated particle imaging and cold neutron scattering.
16. High spatial resolution particle detectors
DOEpatents
Boatner, Lynn A.; Mihalczo, John T.
2012-09-04
Disclosed below are representative embodiments of methods, apparatus, and systems for detecting particles, such as radiation or charged particles. One exemplary embodiment disclosed herein is particle detector comprising an optical fiber with a first end and second end opposite the first end. The optical fiber of this embodiment further comprises a doped region at the first end and a non-doped region adjacent to the doped region. The doped region of the optical fiber is configured to scintillate upon interaction with a target particle, thereby generating one or more photons that propagate through the optical fiber and to the second end. Embodiments of the disclosed technology can be used in a variety of applications, including associated particle imaging and cold neutron scattering.
17. Particle sizer and DNA sequencer
DOEpatents
Olivares, Jose A.; Stark, Peter C.
2005-09-13
An electrophoretic device separates and detects particles such as DNA fragments, proteins, and the like. The device has a capillary which is coated with a coating with a low refractive index such as Teflon.RTM. AF. A sample of particles is fluorescently labeled and injected into the capillary. The capillary is filled with an electrolyte buffer solution. An electrical field is applied across the capillary causing the particles to migrate from a first end of the capillary to a second end of the capillary. A detector light beam is then scanned along the length of the capillary to detect the location of the separated particles. The device is amenable to a high throughput system by providing additional capillaries. The device can also be used to determine the actual size of the particles and for DNA sequencing.
18. Selective encapsulation by Janus particles
SciTech Connect
Li, Wei; Ruth, Donovan; Gunton, James D.; Rickman, Jeffrey M.
2015-06-28
We employ Monte Carlo simulation to examine encapsulation in a system comprising Janus oblate spheroids and isotropic spheres. More specifically, the impact of variations in temperature, particle size, inter-particle interaction range, and strength is examined for a system in which the spheroids act as the encapsulating agents and the spheres as the encapsulated guests. In this picture, particle interactions are described by a quasi-square-well patch model. This study highlights the environmental adaptation and selectivity of the encapsulation system to changes in temperature and guest particle size, respectively. Moreover, we identify an important range in parameter space where encapsulation is favored, as summarized by an encapsulation map. Finally, we discuss the generalization of our results to systems having a wide range of particle geometries.
19. Particle manipulation using vibrating cilia
2012-11-01
The ability to manipulate small particles suspended in fluids has many practical applications, ranging from the mechanical testing of macromolecules like DNA to the controlled abrasion of brittle surfaces for precision polishing. A natural method is non-contact manipulation of particles through boundary excitations. Particle-manipulation via a vibrating cilia to establish controlled fluid flows with desired patterns of transport is one such bioinspired method. We show experimental results on the clustering and transport of finite-sized particles in the streaming flow set up by the oscillating cilia. We further show computations to explain the effects of hyperbolic structures in the four dimensional phase space of the dynamics of finite-sized particles.
20. Superconducting transmission line particle detector
DOEpatents
Gray, K.E.
1988-07-28
A microvertex particle detector for use in a high energy physic collider including a plurality of parallel superconducting thin film strips separated from a superconducting ground plane by an insulating layer to form a plurality of superconducting waveguides. The microvertex particle detector indicates passage of a charged subatomic particle by measuring a voltage pulse measured across a superconducting waveguide caused by the transition of the superconducting thin film strip from a superconducting to a non- superconducting state in response to the passage of a charged particle. A plurality of superconducting thin film strips in two orthogonal planes plus the slow electromagnetic wave propagating in a superconducting transmission line are used to resolve N/sup 2/ ambiguity of charged particle events. 6 figs.
1. Superconducting transmission line particle detector
DOEpatents
Gray, Kenneth E.
1989-01-01
A microvertex particle detector for use in a high energy physic collider including a plurality of parallel superconducting thin film strips separated from a superconducting ground plane by an insulating layer to form a plurality of superconducting waveguides. The microvertex particle detector indicates passage of a charged subatomic particle by measuring a voltage pulse measured across a superconducting waveguide caused by the transition of the superconducting thin film strip from a superconducting to a non-superconducting state in response to the passage of a charged particle. A plurality of superconducting thin film strips in two orthogonal planes plus the slow electromagnetic wave propogating in a superconducting transmission line are used to resolve N.sup.2 ambiguity of charged particle events.
2. Discharge Property of Resin Particles Refined by Silica Particles
Makabe, Akira; Narita, Miyuki; Makino, Kazutaka; Hamada, Fumio
2001-12-01
The discharge property in the solid state has been utilized for ceramics processing and printer technology. The charge of particles has to be controlled in these fields because it affects the particle filling process in ceramics processing and the print quality of a printer. Fine silica particles are used to refine ceramics or resin particles for optimization of flowability, the discharge ability and the wettability. However, it is difficult to understand these properties, because critical factors “affecting” for these properties have not been elucidated yet. For example, the discharge property has not been examined in connection with the surface chemical structure of particles. In this study, we report the electron accepting or electron donating ability of chemicals and find that the discharge property is significantly influenced by that ability. Work function values are measured for polystyrene resin particles covered by different kinds of silica particles. In addition, we suggest a simple evaluation method for solid discharge through the measuring of pH in solution form. The relationships among the discharge, pH and work function values are examined. As a result, we arrive at some results to elucidate these phenomena.
3. Research in particle theory
SciTech Connect
Mansouri, F.; Suranyi, P.; Wijewardhana, L.C.R.
1992-10-01
Dynamics of 2+1 dimensional gravity is analyzed by coupling matter to Chern Simons Witten action in two ways and obtaining the exact gravity Hamiltonian for each case. 't Hoot's Hamiltonian is obtained as an approximation. The notion of space-time emerges in the very end as a broken phase of the gauge theory. We have studied the patterns of discrete and continuous symmetry breaking in 2+1 dimensional field theories. We formulate our analysis in terms of effective composite scalar field theories. Point-like sources in the Chern-Simons theory of gravity in 2+1 dimensions are described by their Poincare' charges. We have obtained exact solutions of the constraints of Chern-Simons theory with an arbitrary number of isolated point sources in relative motion. We then showed how the space-time metric is constructed. A reorganized perturbation expansion with a propagator of soft infrared behavior has been used to study the critical behavior of the mass gap. The condition of relativistic covariance fixes the form of the soft propagator. Approximants to the correlation critical exponent were obtained in two loop order for the two and three dimensional theories. We proposed a new model of QED exhibiting two phases and a Majorana mass spectrum of single particle states. The model has a new source of coupling constant renormalization which opposes screening and suggests the model may confine. Assuming that the bound states of e{sup +}e{sup {minus}} essentially obey a Majorana spectrum, we obtained a consistent fit of the GSI peaks as well as predicting new peaks and their spin assignments.
4. Two-dimensional particle displacement tracking in particle imaging velocimetry
NASA Technical Reports Server (NTRS)
Wernet, Mark P.
1991-01-01
A new particle imaging velocimetry data acquisition and analysis system, which is an order of magnitude faster than any previously proposed system, has been constructed and tested. The new particle displacement tracking (PDT) system is an all electronic technique employing a video camera and a large memory buffer frame-grabber board. Using a simple encoding scheme, a time sequence of single exposure images is time-coded into a single image and then processed to track particle displacements and determine two-dimensional velocity vectors. Use of the PDT technique in a counterrotating vortex flow produced over 1100 velocity vectors in 110 s when processed on an 80386 PC.
5. Focusing particle concentrator with application to ultrafine particles
DOEpatents
Hering, Susanne; Lewis, Gregory; Spielman, Steven R.
2013-06-11
Technology is presented for the high efficiency concentration of fine and ultrafine airborne particles into a small fraction of the sampled airflow by condensational enlargement, aerodynamic focusing and flow separation. A nozzle concentrator structure including an acceleration nozzle with a flow extraction structure may be coupled to a containment vessel. The containment vessel may include a water condensation growth tube to facilitate the concentration of ultrafine particles. The containment vessel may further include a separate carrier flow introduced at the center of the sampled flow, upstream of the acceleration nozzle of the nozzle concentrator to facilitate the separation of particle and vapor constituents.
6. Electrostatic wire stabilizing a charged particle beam
DOEpatents
Prono, D.S.; Caporaso, G.J.; Briggs, R.J.
1983-03-21
In combination with a charged particle beam generator and accelerator, apparatus and method are provided for stabilizing a beam of electrically charged particles. A guiding means, disposed within the particle beam, has an electric charge induced upon it by the charged particle beam. Because the sign of the electric charge on the guiding means and the sign of the particle beam are opposite, the particles are attracted toward and cluster around the guiding means to thereby stabilize the particle beam as it travels.
7. Sampling of respirable isocyanate particles.
PubMed
Gylestam, Daniel; Gustavsson, Marcus; Karlsson, Daniel; Dalene, Marianne; Skarping, Gunnar
2014-04-01
An advanced design of a denuder impactor (DI) sampler has been developed for characterization of possible airborne isocyanate exposure in different particle size fractions. The sampler is equipped with 12 different parallel denuder tubes, 4 impaction stages with the cut-off values (d50) of: 9.5, 4, 2.5 and 1 µm, and an end filter that collects particles < 1 µm. All collecting parts were impregnated with di-n-butylamine DBA as the reagent in a mixture with acetic acid. The performance of the DI sampler was studied on a standard atmosphere containing gas and particulate isocyanates. The isocyanate atmosphere was generated by liquid permeation of 2,4-, 2,6-Toluene Diisocyanate (TDI), 1,6-Hexamethylene Diisocyanate (HDI) and Isophorone Diisocyanate (IPDI). 4,4'-Methylene Diphenyl Diisocyanate (MDI) particles were generated by heating of technical MDI and condensing the mixture of gas and particle-borne MDI in an atmosphere containing mixed salt particles. The study was performed in a 0.85 m3 environmental chamber with stainless steel walls. With the advancement of the DI sampler it is now possible to collect isocyanate particle samples for up to 320 min. The performance of the DI sampler is essentially unaffected by the humidity. The DI sampler and the ASSET EZ4-NCO sampler (Sigma-Aldrich/Supelco, Bellefonte, PA, USA) gave similar results. Sample losses within the DI sampler are low. In the environmental chamber it was observed that the particle distribution may be affected by the humidity and ageing. A scanning mobility particle sizer (SMPS) was used to separate a flow of selected fractions containing MDI particles from mixed MDI and salt particles. The particle-size distribution had a maximum at about 300 nm, but later in the environmental chamber 1 µm dominated. The distribution was very different as compared to with only NaCl or MDI present. The biological relevance for studying isocyanate nano particles is significant as these have the possibility to reach the
8. Negative Numbers and Antimatter Particles
Tsan, Ung Chan
Dirac's equation states that an electron implies the existence of an antielectron with the same mass (more generally same arithmetic properties) and opposite charge (more generally opposite algebraic properties). Subsequent observation of antielectron validated this concept. This statement can be extended to all matter particles; observation of antiproton, antineutron, antideuton … is in complete agreement with this view. Recently antihypertriton was observed and 38 atoms of antihydrogen were trapped. This opens the path for use in precise testing of nature's fundamental symmetries. The symmetric properties of a matter particle and its mirror antimatter particle seem to be well established. Interactions operate on matter particles and antimatter particles as well. Conservation of matter parallels addition operating on positive and negative numbers. Without antimatter particles, interactions of the Standard Model (electromagnetism, strong interaction and weak interaction) cannot have the structure of group. Antimatter particles are characterized by negative baryonic number A or/and negative leptonic number L. Materialization and annihilation obey conservation of A and L (associated to all known interactions), explaining why from pure energy (A = 0, L = 0) one can only obtain a pair of matter particle antimatter particle — electron antielectron, proton and antiproton — via materialization where the mass of a pair of particle antiparticle gives back to pure energy with annihilation. These two mechanisms cannot change the difference in the number of matter particles and antimatter particles. Thus from pure energy only a perfectly symmetric (in number) universe could be generated as proposed by Dirac but observation showed that our universe is not symmetric, it is a matter universe which is nevertheless neutral. Fall of reflection symmetries shattered the prejudice that there is no way to define in an absolute way right and left or matter and antimatter
9. Detector for Particle Surface Contamination
NASA Technical Reports Server (NTRS)
Mogan, Paul A. (Inventor); Schwindt, Christian J. (Inventor); Mattson, Carl B. (Inventor)
1999-01-01
A system and method for detecting and quantizing particle fallout contamination particles which are collected on a transparent disk or other surface employs an optical detector, such as a CCD camera, to obtain images of the disk and a computer for analyzing the images. From the images, the computer detects, counts and sizes particles collected on the disk The computer also determines, through comparison to previously analyzed images, the particle fallout rate, and generates an alarm or other indication if the rate exceeds a maximum allowable value. The detector and disk are disposed in a housing having an aperture formed therein for defining the area on the surface of the disk which is exposed to the particle fallout. A light source is provided for evenly illuminating the disk. A first drive motor slowly rotates the disk to increase the amount of its surface area which is exposed through the aperture to the particle fallout. A second motor is also provided for incrementally scanning the disk in a radial direction back and forth over the camera so that the camera eventually obtains images of the entire surface of the disk which is exposed to the particle fallout.
10. Surgical smoke and ultrafine particles
PubMed Central
Brüske-Hohlfeld, Irene; Preissler, Gerhard; Jauch, Karl-Walter; Pitz, Mike; Nowak, Dennis; Peters, Annette; Wichmann, H-Erich
2008-01-01
Background Electrocautery, laser tissue ablation, and ultrasonic scalpel tissue dissection all generate a 'surgical smoke' containing ultrafine (<100 nm) and accumulation mode particles (< 1 μm). Epidemiological and toxicological studies have shown that exposure to particulate air pollution is associated with adverse cardiovascular and respiratory health effects. Methods To measure the amount of generated particulates in 'surgical smoke' during different surgical procedures and to quantify the particle number concentration for operation room personnel a condensation particle counter (CPC, model 3007, TSI Inc.) was applied. Results Electro-cauterization and argon plasma tissue coagulation induced the production of very high number concentration (> 100000 cm-3) of particles in the diameter range of 10 nm to 1 μm. The peak concentration was confined to the immediate local surrounding of the production side. In the presence of a very efficient air conditioning system the increment and decrement of ultrafine particle occurrence was a matter of seconds, with accumulation of lower particle number concentrations in the operation room for only a few minutes. Conclusion Our investigation showed a short term very high exposure to ultrafine particles for surgeons and close assisting operating personnel – alternating with longer periods of low exposure. PMID:19055750
11. Vortex Cores of Inertial Particles.
PubMed
Günther, Tobias; Theisel, Holger
2014-12-01
The cores of massless, swirling particle motion are an indicator for vortex-like behavior in vector fields and to this end, a number of coreline extractors have been proposed in the literature. Though, many practical applications go beyond the study of the vector field. Instead, engineers seek to understand the behavior of inertial particles moving therein, for instance in sediment transport, helicopter brownout and pulverized coal combustion. In this paper, we present two strategies for the extraction of the corelines that inertial particles swirl around, which depend on particle density, particle diameter, fluid viscosity and gravity. The first is to deduce the local swirling behavior from the autonomous inertial motion ODE, which eventually reduces to a parallel vectors operation. For the second strategy, we use a particle density estimation to locate inertial attractors. With this, we are able to extract the cores of swirling inertial particle motion for both steady and unsteady 3D vector fields. We demonstrate our techniques in a number of benchmark data sets, and elaborate on the relation to traditional massless corelines. PMID:26356967
12. HZE particle effects in space
Horneck, Gerda
Among the various particulate components of ionizing radiation in space, heavy ions (the so-called HZE particles) have been of special concern to radiobiologists. To understand the ways by which HZE particles of cosmic radiation interact with biological systems, methods have been developed to precisely localize the trajectory of an HZE particle relative to the biological object and to correlate the physical data of the particle with the biological effects observed along its path. In a variety of test systems, injuries were traced back to the traversal of a single HZE particle, such as somatic mutations and chromosomal aberrations in plant seeds, development disturbances and malformations in insect and salt shrimp embryos, or cell death in bacterial spores. In the latter case, a long-ranging killing effect around the particle's track was observed. Whereas, from spaceflight experiments, substantial information has been accumulated on single HZE particle effects in resting systems and in a few embryonic systems, there is a paucity of data on cosmic radiation effects in whole tissues or animals, especially mammalians.
13. Solar flares and energetic particles.
PubMed
Vilmer, Nicole
2012-07-13
Solar flares are now observed at all wavelengths from γ-rays to decametre radio waves. They are commonly associated with efficient production of energetic particles at all energies. These particles play a major role in the active Sun because they contain a large amount of the energy released during flares. Energetic electrons and ions interact with the solar atmosphere and produce high-energy X-rays and γ-rays. Energetic particles can also escape to the corona and interplanetary medium, produce radio emissions (electrons) and may eventually reach the Earth's orbit. I shall review here the available information on energetic particles provided by X-ray/γ-ray observations, with particular emphasis on the results obtained recently by the mission Reuven Ramaty High-Energy Solar Spectroscopic Imager. I shall also illustrate how radio observations contribute to our understanding of the electron acceleration sites and to our knowledge on the origin and propagation of energetic particles in the interplanetary medium. I shall finally briefly review some recent progress in the theories of particle acceleration in solar flares and comment on the still challenging issue of connecting particle acceleration processes to the topology of the complex magnetic structures present in the corona.
14. Vortex Cores of Inertial Particles.
PubMed
Günther, Tobias; Theisel, Holger
2014-12-01
The cores of massless, swirling particle motion are an indicator for vortex-like behavior in vector fields and to this end, a number of coreline extractors have been proposed in the literature. Though, many practical applications go beyond the study of the vector field. Instead, engineers seek to understand the behavior of inertial particles moving therein, for instance in sediment transport, helicopter brownout and pulverized coal combustion. In this paper, we present two strategies for the extraction of the corelines that inertial particles swirl around, which depend on particle density, particle diameter, fluid viscosity and gravity. The first is to deduce the local swirling behavior from the autonomous inertial motion ODE, which eventually reduces to a parallel vectors operation. For the second strategy, we use a particle density estimation to locate inertial attractors. With this, we are able to extract the cores of swirling inertial particle motion for both steady and unsteady 3D vector fields. We demonstrate our techniques in a number of benchmark data sets, and elaborate on the relation to traditional massless corelines.
15. HZE particle effects in space.
PubMed
Horneck, G
1994-11-01
Among the various particulate components of ionizing radiation in space, heavy ions (the so-called HZE particles) have been of special concern to radiobiologists. To understand the ways by which HZE particles of cosmic radiation interact with biological systems, methods have been developed to precisely localize the trajectory of an HZE particle relative to the biological object and to correlate the physical data of the particle with the biological effects observed along its path. In a variety of test systems, injuries were traced back to the traversal of a single HZE particle, such as somatic mutations, and chromosomal aberrations in plant seeds, development disturbances and malformations in insect and salt shrimp embryos, or cell death in bacterial spores. In the latter case, a long-ranging killing effect around the particle's track was observed. Whereas, from spaceflight experiments, substantial infomation has been accumulated on single HZE particle effects in resting systems and in a few embryonic systems, there is a paucity of data on cosmic radiation effects in whole tissues or animals, especially mammalians. PMID:11538453
16. Particle Detectors Subatomic Bomb Squad
SciTech Connect
Lincoln, Don
2014-08-29
The manner in which particle physicists investigate collisions in particle accelerators is a puzzling process. Using vaguely-defined “detectors,” scientists are able to somehow reconstruct the collisions and convert that information into physics measurements. In this video, Fermilab’s Dr. Don Lincoln sheds light on this mysterious technique. In a surprising analogy, he draws a parallel between experimental particle physics and bomb squad investigators and uses an explosive example to illustrate his points. Be sure to watch this video… it’s totally the bomb.
17. Particle displacement tracking for PIV
NASA Technical Reports Server (NTRS)
Wernet, Mark P.
1990-01-01
A new Particle Imaging Velocimetry (PIV) data acquisition and analysis system, which is an order of magnitude faster than any previously proposed system has been constructed and tested. The new Particle Displacement Tracing (PDT) system is an all electronic technique employing a video camera and a large memory buffer frame-grabber board. Using a simple encoding scheme, a time sequence of single exposure images are time coded into a single image and then processed to track particle displacements and determine velocity vectors. Application of the PDT technique to a counter-rotating vortex flow produced over 1100 velocity vectors in 110 seconds when processed on an 80386 PC.
18. Particle diffusion in a spheromak
SciTech Connect
Meyerhofer, D.D.; Levinton, F.M.; Yamada, M.
1988-01-01
The local carbon particle diffusion coefficient was measured in the Proto S-1/C spheromak using a test particle injection scheme. When the plasma was not in a force-free Taylor state, and when there were pressure gradients in the plasma, the particle diffusion was five times that predicted by Bohm and was consistent with collisional drift wave diffusion. The diffusion appears to be driven by correlations of the fluctuating electric field and density. During the decay phase of the discharge when the plasma was in the Taylor state, the diffusion coefficient of the carbon was classical. 23 refs., 4 figs.
19. Fog dispersion. [charged particle technique
NASA Technical Reports Server (NTRS)
Christensen, L. S.; Frost, W.
1980-01-01
The concept of using the charged particle technique to disperse warm fog at airports is investigated and compared with other techniques. The charged particle technique shows potential for warm fog dispersal, but experimental verification of several significant parameters, such as particle mobility and charge density, is needed. Seeding and helicopter downwash techniques are also effective for warm fog disperals, but presently are not believed to be viable techniques for routine airport operations. Thermal systems are currently used at a few overseas airports; however, they are expensive and pose potential environmental problems.
20. Particle Detectors Subatomic Bomb Squad
ScienceCinema
Lincoln, Don
2016-07-12
The manner in which particle physicists investigate collisions in particle accelerators is a puzzling process. Using vaguely-defined âdetectors,â scientists are able to somehow reconstruct the collisions and convert that information into physics measurements. In this video, Fermilabâs Dr. Don Lincoln sheds light on this mysterious technique. In a surprising analogy, he draws a parallel between experimental particle physics and bomb squad investigators and uses an explosive example to illustrate his points. Be sure to watch this video⦠itâs totally the bomb.
1. Electron microscopy of atmospheric particles
Huang, Po-Fu
Electron microscopy coupled with energy dispersive spectrometry (EM/EDS) is a powerful tool for single particle analysis. However, the accuracy with which atmospheric particle compositions can be quantitatively determined by EDS is often hampered by substrate-particle interactions, volatilization losses in the low pressure microscope chamber, electron beam irradiation and use of inaccurate quantitation factors. A pseudo-analytical solution was derived to calculate the temperature rise due to the dissipation of the electron energy on a particle-substrate system. Evaporative mass loss for a spherical cap-shaped sulfuric acid particle resting on a thin film supported by a TEM grid during electron beam impingement has been studied. Measured volatilization rates were found to be in very good agreement with theoretical predictions. The method proposed can also be used to estimate the vapor pressure of a species by measuring the decay of X-ray intensities. Several types of substrates were studied. We found that silver-coated silicon monoxide substrates give carbon detection limits comparable to commercially available substrates. An advantage of these substrates is that the high thermal conductivity of the silver reduces heating due to electron beam impingement. In addition, exposure of sulfuric acid samples to ammonia overnight substantially reduces sulfur loss in the electron beam. Use of size-dependent k-factors determined from particles of known compositions shows promise for improving the accuracy of atmospheric particle compositions measured by EM/EDS. Knowledge accumulated during the course of this thesis has been used to analyze atmospheric particles (Minneapolis, MN) selected by the TDMA and collected by an aerodynamic focusing impactor. 'Less' hygroscopic particles, which do not grow to any measurable extent when humidified to ~90% relative humidity, included chain agglomerates, spheres, flakes, and irregular shapes. Carbon was the predominant element detected in
2. Particle Physics Implications for Astrophysics
Stochaj, Steve
2012-10-01
New Mexico State University's involvement in the measurement of cosmic rays (space borne energetic particles) dates back to the 1970's. Measurements of these particles can contribute to our understanding of the most energetic processes in the Universe. The talk will cover the contributions of NMSU to the measurements of the antimatter components of the cosmic radiation and the study of solar energetic particles with PAMELA, Payload for Antimatter Matter Exploration and Light-nuclei Astrophysics. PAMELA was launched on a Russian Resurs-DK1 spacecraft into a polar orbit in June 2006 and remains operational to date. A summary of the PAMELA results and their connection to astrophysics will be given.
3. Colloids exposed to random potential energy landscapes: From particle number density to particle-potential and particle-particle interactions.
PubMed
Bewerunge, Jörg; Sengupta, Ankush; Capellmann, Ronja F; Platten, Florian; Sengupta, Surajit; Egelhaaf, Stefan U
2016-07-28
Colloidal particles were exposed to a random potential energy landscape that has been created optically via a speckle pattern. The mean particle density as well as the potential roughness, i.e., the disorder strength, were varied. The local probability density of the particles as well as its main characteristics were determined. For the first time, the disorder-averaged pair density correlation function g((1))(r) and an analogue of the Edwards-Anderson order parameter g((2))(r), which quantifies the correlation of the mean local density among disorder realisations, were measured experimentally and shown to be consistent with replica liquid state theory results.
4. Matter and Interactions: A Particle Physics Perspective
ERIC Educational Resources Information Center
Organtini, Giovanni
2011-01-01
In classical mechanics, matter and fields are completely separated; matter interacts with fields. For particle physicists this is not the case; both matter and fields are represented by particles. Fundamental interactions are mediated by particles exchanged between matter particles. In this article we explain why particle physicists believe in…
5. Inclusive Focus Particles in English and Korean
ERIC Educational Resources Information Center
Kang, Sang-gu
2011-01-01
When discussing focus particles, it has been common practice to rely on the dichotomy of inclusive vs. exclusive particles, "a la" Konig (1991). Inclusive focus particles are often further divided into scalar particles, such as "also", "too", and "either", and non-scalar particles, such as "even". In this thesis, I advance a comparative analysis…
6. Hydrodynamic enhanced dielectrophoretic particle trapping
DOEpatents
Miles, Robin R.
2003-12-09
Hydrodynamic enhanced dielectrophoretic particle trapping carried out by introducing a side stream into the main stream to squeeze the fluid containing particles close to the electrodes producing the dielelectrophoretic forces. The region of most effective or the strongest forces in the manipulating fields of the electrodes producing the dielectrophoretic forces is close to the electrodes, within 100 .mu.m from the electrodes. The particle trapping arrangement uses a series of electrodes with an AC field placed between pairs of electrodes, which causes trapping of particles along the edges of the electrodes. By forcing an incoming flow stream containing cells and DNA, for example, close to the electrodes using another flow stream improves the efficiency of the DNA trapping.
7. Particles trajectories in magnetic filaments
SciTech Connect
Bret, A.
2015-07-15
The motion of a particle in a spatially harmonic magnetic field is a basic problem involved, for example, in the mechanism of formation of a collisionless shock. In such settings, it is generally reasoned that particles entering a Weibel generated turbulence are trapped inside it, provided their Larmor radius in the peak field is smaller than the field coherence length. The goal of this work is to put this heuristic conclusion on firm ground by studying, both analytically and numerically, such motion. A toy model is analyzed, consisting of a relativistic particle entering a region of space occupied by a spatially harmonic field. The particle penetrates the magnetic structure in a direction aligned with the magnetic filaments. Although the conclusions are not trivial, the main result is confirmed.
8. Coaxial charged particle energy analyzer
NASA Technical Reports Server (NTRS)
Kelly, Michael A. (Inventor); Bryson, III, Charles E. (Inventor); Wu, Warren (Inventor)
2011-01-01
A non-dispersive electrostatic energy analyzer for electrons and other charged particles having a generally coaxial structure of a sequentially arranged sections of an electrostatic lens to focus the beam through an iris and preferably including an ellipsoidally shaped input grid for collimating a wide acceptance beam from a charged-particle source, an electrostatic high-pass filter including a planar exit grid, and an electrostatic low-pass filter. The low-pass filter is configured to reflect low-energy particles back towards a charged particle detector located within the low-pass filter. Each section comprises multiple tubular or conical electrodes arranged about the central axis. The voltages on the lens are scanned to place a selected energy band of the accepted beam at a selected energy at the iris. Voltages on the high-pass and low-pass filters remain substantially fixed during the scan.
9. Study of heavy flavored particles
SciTech Connect
Nemati, Bijan
1991-01-01
This report discusses progress on the following topics: time-of- flight system; charmed baryon production and decays; D decays to baryons; measurement of sigma plus particles magnetic moments; and strong interaction coupling. (LSP)
10. Lunar Regolith Particle Shape Analysis
NASA Technical Reports Server (NTRS)
Kiekhaefer, Rebecca; Hardy, Sandra; Rickman, Douglas; Edmunson, Jennifer
2013-01-01
Future engineering of structures and equipment on the lunar surface requires significant understanding of particle characteristics of the lunar regolith. Nearly all sediment characteristics are influenced by particle shape; therefore a method of quantifying particle shape is useful both in lunar and terrestrial applications. We have created a method to quantify particle shape, specifically for lunar regolith, using image processing. Photomicrographs of thin sections of lunar core material were obtained under reflected light. Three photomicrographs were analyzed using ImageJ and MATLAB. From the image analysis measurements for area, perimeter, Feret diameter, orthogonal Feret diameter, Heywood factor, aspect ratio, sieve diameter, and sieve number were recorded. Probability distribution functions were created from the measurements of Heywood factor and aspect ratio.
11. Particle detection systems and methods
DOEpatents
Morris, Christopher L.; Makela, Mark F.
2010-05-11
Techniques, apparatus and systems for detecting particles such as muons and neutrons. In one implementation, a particle detection system employs a plurality of drift cells, which can be for example sealed gas-filled drift tubes, arranged on sides of a volume to be scanned to track incoming and outgoing charged particles, such as cosmic ray-produced muons. The drift cells can include a neutron sensitive medium to enable concurrent counting of neutrons. The system can selectively detect devices or materials, such as iron, lead, gold, uranium, plutonium, and/or tungsten, occupying the volume from multiple scattering of the charged particles passing through the volume and can concurrently detect any unshielded neutron sources occupying the volume from neutrons emitted therefrom. If necessary, the drift cells can be used to also detect gamma rays. The system can be employed to inspect occupied vehicles at border crossings for nuclear threat objects.
12. Particle adhesion in powder coating
SciTech Connect
Mazumder, M.K.; Wankum, D.L.; Knutson, M.; Williams, S.; Banerjee, S.
1996-12-31
Electrostatic powder coating is a widely used industrial painting process. It has three major advantages: (1) it provides high quality durable finish, (2) the process is environmentally friendly and does not require the use of organic solvents, and (3) it is economically competitive. The adhesion of electrostatically deposited polymer paint particles on the grounded conducting substrate depends upon many parameters: (a) particle size and shape distributions, (b) electrostatic charge distributions, (c) electrical resistivity, (d) dielectric strength of the particles, (e) thickness of the powder film, (f) presence and severity of the back corona, and (g) the conductivity and surface properties of the substrate. The authors present a model on the forces of deposition and adhesion of corona charged particles on conducting substrates.
13. Progress in smooth particle hydrodynamics
SciTech Connect
Wingate, C.A.; Dilts, G.A.; Mandell, D.A.; Crotzer, L.A.; Knapp, C.E.
1998-07-01
Smooth Particle Hydrodynamics (SPH) is a meshless, Lagrangian numerical method for hydrodynamics calculations where calculational elements are fuzzy particles which move according to the hydrodynamic equations of motion. Each particle carries local values of density, temperature, pressure and other hydrodynamic parameters. A major advantage of SPH is that it is meshless, thus large deformation calculations can be easily done with no connectivity complications. Interface positions are known and there are no problems with advecting quantities through a mesh that typical Eulerian codes have. These underlying SPH features make fracture physics easy and natural and in fact, much of the applications work revolves around simulating fracture. Debris particles from impacts can be easily transported across large voids with SPH. While SPH has considerable promise, there are some problems inherent in the technique that have so far limited its usefulness. The most serious problem is the well known instability in tension leading to particle clumping and numerical fracture. Another problem is that the SPH interpolation is only correct when particles are uniformly spaced a half particle apart leading to incorrect strain rates, accelerations and other quantities for general particle distributions. SPH calculations are also sensitive to particle locations. The standard artificial viscosity treatment in SPH leads to spurious viscosity in shear flows. This paper will demonstrate solutions for these problems that they and others have been developing. The most promising is to replace the SPH interpolant with the moving least squares (MLS) interpolant invented by Lancaster and Salkauskas in 1981. SPH and MLS are closely related with MLS being essentially SPH with corrected particle volumes. When formulated correctly, JLS is conservative, stable in both compression and tension, does not have the SPH boundary problems and is not sensitive to particle placement. The other approach to
14. Dye Sensitization of Semiconductor Particles
SciTech Connect
Hartland, G. V.
2003-01-13
In this project electron transfer at semiconductor liquid interfaces was examined by ultrafast time-resolved and steady-state optical techniques. The experiments primarily yielded information about the electron transfer from titanium dioxide semiconductor particles to absorbed molecules. The results show that the rate of electron transfer depends on the structure of the molecule, and the crystalline phase of the particle. These results can be qualitatively explained by Marcus theory for electron transfer.
15. Long range alpha particle detector
DOEpatents
MacArthur, D.W.; Wolf, M.A.; McAtee, J.L.; Unruh, W.P.; Cucchiara, A.L.; Huchton, R.L.
1993-02-02
An alpha particle detector capable of detecting alpha radiation from distant sources. In one embodiment, a high voltage is generated in a first electrically conductive mesh while a fan draws air containing air molecules ionized by alpha particles through an air passage and across a second electrically conductive mesh. The current in the second electrically conductive mesh can be detected and used for measurement or alarm. The detector can be used for area, personnel and equipment monitoring.
16. Microelectrophoresis of selected mineral particles
NASA Technical Reports Server (NTRS)
Herren, B. J.; Tipps, R. W.; Alexander, K. D.
1982-01-01
Particle mobilities of ilmenite, labradorite plagioclase, enstatite pyroxene, and olivine were measured with a Rank microelectrophoresis system to evaluate indicated mineral separability. Sodium bicarbonate buffer suspension media with and without additives (0.0001 M DTAB and 5 percent v/v ethylene glycol) were used to determine differential adsorption by mineral particles and modification of relative mobilities. Good separability between some minerals was indicated; additives did not enhance separability.
17. Hybrid particles and associated methods
DOEpatents
Fox, Robert V; Rodriguez, Rene; Pak, Joshua J; Sun, Chivin
2015-02-10
Hybrid particles that comprise a coating surrounding a chalcopyrite material, the coating comprising a metal, a semiconductive material, or a polymer; a core comprising a chalcopyrite material and a shell comprising a functionalized chalcopyrite material, the shell enveloping the core; or a reaction product of a chalcopyrite material and at least one of a reagent, heat, and radiation. Methods of forming the hybrid particles are also disclosed.
18. Primordial nucleosynthesis with generic particles
NASA Technical Reports Server (NTRS)
Walker, T. P.; Kolb, E. W.; Turner, M. S.
1986-01-01
A revision of the standard model for Big Bang nucleosynthesis is discussed which allows for the presence of generic particle species. The primordial production of He-4 and D + He-3 is calculated as a function of the mass, spin degrees of freedom, and spin statistics of the generic particle for masses in the range 0.01-100 times the electron mass. The particular case of the Gelmini and Roncadelli majoron model for massive neutrinos is discussed.
19. Helium in interplanetary dust particles
NASA Technical Reports Server (NTRS)
Nier, A. O.; Schlutter, D. J.
1993-01-01
Helium and neon were extracted from fragments of individual stratosphere-collected interplanetary dust particles (IDP's) by subjecting them to increasing temperature by applying short-duration pulses of power in increasing amounts to the ovens containing the fragments. The experiment was designed to see whether differences in release temperatures could be observed which might provide clues as to the asteroidal or cometary origin of the particles. Variations were observed which show promise for elucidating the problem.
20. Long range alpha particle detector
DOEpatents
MacArthur, Duncan W.; Wolf, Michael A.; McAtee, James L.; Unruh, Wesley P.; Cucchiara, Alfred L.; Huchton, Roger L.
1993-01-01
An alpha particle detector capable of detecting alpha radiation from distant sources. In one embodiment, a high voltage is generated in a first electrically conductive mesh while a fan draws air containing air molecules ionized by alpha particles through an air passage and across a second electrically conductive mesh. The current in the second electrically conductive mesh can be detected and used for measurement or alarm. The detector can be used for area, personnel and equipment monitoring.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5929551720619202, "perplexity": 3972.9727707205543}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118950.30/warc/CC-MAIN-20170423031158-00380-ip-10-145-167-34.ec2.internal.warc.gz"}
|
https://socratic.org/questions/how-do-you-find-the-zeroes-of-f-x-5x-4-2x-2-3
|
Precalculus
Topics
How do you find the zeroes of f (x) =5x^4 − 2x^2 − 3?
May 3, 2015
This function is an example of a bi-quadratic function, which is the polynomial function of the ${4}^{t h}$ degree with no terms of an odd degree.
The general polynomial of the ${4}^{t h}$ degree looks like this:
$f \left(x\right) = {a}_{0} {x}^{4} + {a}_{1} {x}^{3} + {a}_{2} {x}^{2} + {a}_{3} {x}^{1} + {a}_{4} {x}^{0}$
Since no odd degree terms are present, general expression for a bi-quadratic function is:
$f \left(x\right) = {a}_{0} {x}^{4} + {a}_{2} {x}^{2} + {a}_{4} {x}^{0}$
Finding the values of an unknown $x$ where this function equals to zero is a simple three-step procedure.
Step 1. Substitute $y = {x}^{2}$. Then the equation $f \left(x\right) = 0$ that determines the zeros of a function is transformed into an equation with an unknown $y$:
${a}_{0} {y}^{2} + {a}_{2} y + {a}_{4} = 0$
Step 2. The above equation is a regular quadratic equation that we know how to solve. Its two solutions are:
${y}_{1} = \frac{- {a}_{2} + \sqrt{{a}_{2}^{2} - 4 {a}_{0} {a}_{4}}}{2 {a}_{0}}$
${y}_{2} = \frac{- {a}_{2} - \sqrt{{a}_{2}^{2} - 4 {a}_{0} {a}_{4}}}{2 {a}_{0}}$
(solutions might not be real if ${a}_{2}^{2} - 4 {a}_{0} {a}_{4} < 0$, they are supposed to be discarded).
Step 3. Knowing the value of an unknown $y$ (actually, from zero up to two values, depending on coefficients), we can find up to four values of $x$ since $y = {x}^{2}$:
x_1=sqrt(y_1); x_2=-sqrt(y_1); x_3=sqrt(y_2); x_4=-sqrt(y_2);
(depending on the coefficients, certain solutions might not be real)
I think it would be useful for a student who ask this question to do the math with concrete coefficients given in the problem.
As an illustration, here is a graph of the given function that shows where it takes zero values. It shows that this function has only two real values of $x$ where it equals to zero, $x = 1$ and $x = - 1$, which implies that one of the solutions of an equation for $y$ is negative and there is no $x$ that would be equal to it if raised to a power of $2$.
graph{5x^4-2x^2-3 [-3, 3, -4, 4]}
Impact of this question
155 views around the world
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 22, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7899231314659119, "perplexity": 285.77706058783866}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578530100.28/warc/CC-MAIN-20190421000555-20190421022555-00505.warc.gz"}
|
https://www.skepticalcommunity.com/viewtopic.php?t=47687&start=40
|
## Fashion
This is our lounge area. Feel free to come in and get acquainted!
Anaxagoras
Posts: 30089
Joined: Wed Mar 19, 2008 5:45 am
Location: Yokohama/Tokyo, Japan
### Re: Fashion
https://cdn.theatlantic.com/assets/medi ... 1570040878
World of WearableArt 2019
Anaxagoras
Posts: 30089
Joined: Wed Mar 19, 2008 5:45 am
Location: Yokohama/Tokyo, Japan
### Re: Fashion
A little street fashion
https://i.redd.it/xr5hk5j2rat31.jpg
I like it. A bit cheeky. A bit outrageous. Makes you think.
ed
Posts: 42123
Joined: Tue Jun 08, 2004 11:52 pm
Title: G_D
### Re: Fashion
I really enjoy fachion, we are rewatching all 17 seasons of project runway. Funny thing is that I cannot tell who is strog and who is lousey. I think I have the fashon analog of tone deafness (which I am too)
Meanwhile ... check out Galliano
Witness
Posts: 35689
Joined: Thu Sep 19, 2013 5:50 pm
### Re: Fashion
sparks
Posts: 17630
Joined: Fri Oct 26, 2007 4:13 pm
Location: Friar McWallclocks Bar -- Where time stands still while you lean over!
### Re: Fashion
Junk porn.
Wherein no actual fucking takes place.
Fid
Posts: 1635
Joined: Sun Jun 06, 2004 3:45 pm
Location: The island of Atlanta
### Re: Fashion
She's doing the mating display of a male ostrich.
Witness
Posts: 35689
Joined: Thu Sep 19, 2013 5:50 pm
### Re: Fashion
:figamagee:
Pyrrho
Posts: 33285
Joined: Sat Jun 05, 2004 2:17 am
Title: Man in Black
Location: Division 6
### Re: Fashion
https://i.imgur.com/RkwbNBz.jpg
Pyrrho
Posts: 33285
Joined: Sat Jun 05, 2004 2:17 am
Title: Man in Black
Location: Division 6
### Re: Fashion
Would be better if they were wearing stiletto heels.
shemp
Posts: 7307
Joined: Thu Jun 10, 2004 12:16 pm
Title: inbred shit-for-brains
Location: Planet X
### Re: Fashion
They could borrow some from ed.
Pyrrho
Posts: 33285
Joined: Sat Jun 05, 2004 2:17 am
Title: Man in Black
Location: Division 6
### Re: Fashion
No socks make a bold fashion statement that will get stronger every day.
shemp
Posts: 7307
Joined: Thu Jun 10, 2004 12:16 pm
Title: inbred shit-for-brains
Location: Planet X
### Re: Fashion
They just plain hate everybody. They never design anything that real people would wear.
Witness
Posts: 35689
Joined: Thu Sep 19, 2013 5:50 pm
### Re: Fashion
shemp wrote: Sun Nov 03, 2019 5:21 pm They just plain hate everybody. They never design anything that real people would wear.
It's not made to be worn, just shown once.
https://i.imgur.com/cfeeIMV.jpg
Witness
Posts: 35689
Joined: Thu Sep 19, 2013 5:50 pm
### Re: Fashion
↑ Hadn't thought of the humiliation aspect, but yes: young beautiful people in ridiculous garbs having to ritually parade in front of rich old hags and their courts (press included).
But I think there are two types of clothes designers. A handful of famous ones who sell to the happy few (and not what they design for the shows), and basic grunts working for the usual shops/brands/chains who have to keep an eye on materials, availability and costs.
The whole process from design to display racks has its mysteries. A girl I know who has a small clothes shop near a ski station told me she has to make her choices with sales representatives a year or even a year and a half before the actual season, so the cut/model/style choices have already been made. Factor in that somebody somewhere has to buy the cloth at best price, have it dyed in the soon to be fashionable colors, then cut, sewn, packaged, distributed, advertised, &c.
Makes for a very skeptical look on the spontaneity and freedom – dare i say empowerment – of "street fashion", no? :mrgreen:
Anaxagoras
Posts: 30089
Joined: Wed Mar 19, 2008 5:45 am
Location: Yokohama/Tokyo, Japan
### Re: Fashion
https://ichef.bbci.co.uk/news/660/cpspr ... .field.jpg
The £7,500 dress that does not exist
Earlier this year Richard Ma, the chief executive of San Francisco-based security company Quantstamp, spent $9,500 (£7,500) on a dress for his wife. That is a lot of money for a dress, particularly when it does not exist, at least not in a physical form. Instead it was a digital dress, designed by fashion house The Fabricant, rendered on to an image of Richard's wife, Mary Ren, which can then be used on social media. "It's definitely very expensive, but it's also like an investment," Mr Ma says. He explains that he and his wife don't usually buy expensive clothing, but he wanted this piece because he thinks it has long-term value. "In 10 years time everybody will be 'wearing' digital fashion. It's a unique memento. It's a sign of the times." Ms Ren has shared the image on her personal Facebook page, and via WeChat, but opted not to post it on a more public platform. Digital collection Another fashion house designing for the digital space is Carlings. The Scandinavian company released a digital street wear collection, starting at around £9 ($11), last October.
It "sold out" within a month.
"It sounds kinda stupid to say we 'sold out', which is theoretically impossible when you work with a digital collection because you can create as many as you want," explains Ronny Mikalsen, Carlings' brand director.
"We had set a limit on the amount of products we were going to produce to make it a bit more special.
Being digital-only allows designers to create items that can push boundaries of extravagance or possibilities.
Anaxagoras
Posts: 30089
Joined: Wed Mar 19, 2008 5:45 am
Location: Yokohama/Tokyo, Japan
### Re: Fashion
Yeah: they have more money.
Giz
Posts: 5193
Joined: Mon Jul 12, 2004 5:07 pm
Location: USA!
### Re: Fashion
Witness wrote: Sun Nov 03, 2019 10:45 pm
shemp wrote: Sun Nov 03, 2019 5:21 pm They just plain hate everybody. They never design anything that real people would wear.
It's not made to be worn, just shown once.
https://i.imgur.com/cfeeIMV.jpg
I wanted an outfit that would express who I really am. I chose ‘doormat’.
Witness
Posts: 35689
Joined: Thu Sep 19, 2013 5:50 pm
### Re: Fashion
https://i.imgur.com/oLwNaVW.jpg
Fid
Posts: 1635
Joined: Sun Jun 06, 2004 3:45 pm
Location: The island of Atlanta
### Re: Fashion
Well he seems delighted for this opportunity to display the hours spent in the gym sculpting all those god like boy parts.
Giz
Posts: 5193
Joined: Mon Jul 12, 2004 5:07 pm
Location: USA!
### Re: Fashion
I’m thinking clown, Eskimo, bankrobber: now make me an outfit!
Witness
Posts: 35689
Joined: Thu Sep 19, 2013 5:50 pm
### Re: Fashion
https://i.imgur.com/Mm3ams3.jpg
Giz
Posts: 5193
Joined: Mon Jul 12, 2004 5:07 pm
Location: USA!
### Re: Fashion
Pink hat? Should have been red MAGA hat. Their attempt at controversial is failing.
Anaxagoras
Posts: 30089
Joined: Wed Mar 19, 2008 5:45 am
Location: Yokohama/Tokyo, Japan
### Re: Fashion
Made me giggle involuntarily. I'm pretty sure that's the point. It's a joke. Inspired by Humpty Dumpty maybe?
Witness
Posts: 35689
Joined: Thu Sep 19, 2013 5:50 pm
### Re: Fashion
↑ I'd have to see a crowd laughing at one of their ceremonies… :notsure:
https://i.imgur.com/MTEmXwr.jpg
Fid
Posts: 1635
Joined: Sun Jun 06, 2004 3:45 pm
Location: The island of Atlanta
### Re: Fashion
Wonder what the reaction would be if he stuck some big red lips on that.
Witness
Posts: 35689
Joined: Thu Sep 19, 2013 5:50 pm
### Re: Fashion
https://i.imgur.com/p2e1Y38.jpg
Giz
Posts: 5193
Joined: Mon Jul 12, 2004 5:07 pm
Location: USA!
### Re: Fashion
Jesus, get her a cheeseburger stat!
Witness
Posts: 35689
Joined: Thu Sep 19, 2013 5:50 pm
### Re: Fashion
https://i.imgur.com/u1tLmc8.jpg
Witness
Posts: 35689
Joined: Thu Sep 19, 2013 5:50 pm
### Re: Fashion
The stars wear vintage:
sparks
Posts: 17630
Joined: Fri Oct 26, 2007 4:13 pm
Location: Friar McWallclocks Bar -- Where time stands still while you lean over!
### Re: Fashion
Planetary resources are being wasted.
No matter, Yumans will be no more very soon.
Witness
Posts: 35689
Joined: Thu Sep 19, 2013 5:50 pm
### Re: Fashion
https://i.imgur.com/H4RtJRc.jpg
Moschino Men’s RTW Spring 2016
sparks
Posts: 17630
Joined: Fri Oct 26, 2007 4:13 pm
Location: Friar McWallclocks Bar -- Where time stands still while you lean over!
### Re: Fashion
Argumentative tan lines.
Unacceptable.
Witness
Posts: 35689
Joined: Thu Sep 19, 2013 5:50 pm
### Re: Fashion
https://i.imgur.com/DwUdiAL.jpg
sparks
Posts: 17630
Joined: Fri Oct 26, 2007 4:13 pm
Location: Friar McWallclocks Bar -- Where time stands still while you lean over!
### Re: Fashion
Abdul Alhazred wrote: Mon Dec 16, 2019 4:35 pm It shows that even on the beach he does not choose to dress so skimpy.
But he does dress that skimpy if the price is right. :)
High priced whore then. That's what I thought. :De_Bunk:
Witness
Posts: 35689
Joined: Thu Sep 19, 2013 5:50 pm
### Re: Fashion
https://i.imgur.com/Rz8wQoy.jpg
ed
Posts: 42123
Joined: Tue Jun 08, 2004 11:52 pm
Title: G_D
### Re: Fashion
https://external-content.duckduckgo.com ... f=1&nofb=1
John Galliano. Brilliant fucker.
sparks
Posts: 17630
Joined: Fri Oct 26, 2007 4:13 pm
Location: Friar McWallclocks Bar -- Where time stands still while you lean over!
### Re: Fashion
Who?
robinson
Posts: 19635
Joined: Sat Aug 12, 2006 2:01 am
Title: Je suis devenu Français
Location: USA
### Re: Fashion
Pyrrho wrote: Mon Mar 26, 2018 10:47 am https://twitter.com/rynprry/status/977917818434973696
grrr
ed
Posts: 42123
Joined: Tue Jun 08, 2004 11:52 pm
Title: G_D
### Re: Fashion
sparks wrote: Wed Dec 18, 2019 8:11 amWho?
John Galliano. He did/does couture. The high fashion stuff. Made some unfortunate drunken remarks about hitler and jews a few years ago and saw his career as head designer with Dior tank.
Witness
Posts: 35689
Joined: Thu Sep 19, 2013 5:50 pm
### Re: Fashion
https://i.imgur.com/4dDO8zz.jpg
:mrgreen:
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49848952889442444, "perplexity": 28116.934300004472}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571597.73/warc/CC-MAIN-20220812075544-20220812105544-00513.warc.gz"}
|
https://support.bioconductor.org/p/83328/#83560
|
GOSeq: analysis of unsupported genome after HTseq and DEseq2 - building gene lengths, comparing contrasts and understanding results.
1
0
Entering edit mode
@ben-mansfeld-10812
Last seen 6.2 years ago
Michigan State University
Hello all,
I am analyzing an RNAseq experiment in cucumber comparing two genotypes at two ages: Vlaspik (V) and Gy (G) at 8 and 16 days. I used HTseq-count for my read counts and subsequently performed DE analysis using DEseq2.
Briefly:
#Create DESeq data file from sample tabel, HTSeq data and model with interaction
ddsHTseq <- DESeqDataSetFromHTSeqCount(sampleTable = sampleTable, directory = "./HTSeq_Output_files/", design= ~ genotype + age + genotype:age)
#Re-level the data set defining G and 8dpp as the base levels
ddsHTseq$genotype <- relevel(ddsHTseq$genotype,"Gy 14")
ddsHTseq$age <- relevel(ddsHTseq$age,"8dpp")
#Run DESeq and check resultnames=comparisons
dds <- DESeq(ddsHTseq)
#Results for the contrast comparisons
V16G16 <- results(dds,contrast=list(c("genotype_Vlaspik_vs_Gy.14","genotypeVlaspik.age16dpp")))
V8G8 <- results(dds, contrast=c("genotype","Vlaspik","Gy 14"))
V16V8 <- results(dds, contrast=list(c("age_16dpp_vs_8dpp","genotypeVlaspik.age16dpp")))
G16G8 <- results(dds, contrast=c("age","16dpp","8dpp"))
#subset results by adjusted pvalue < 0.05
V16G16filt <- subset(V16G16, padj < 0.05)
V8G8filt <- subset(V8G8, padj < 0.05)
V16V8filt <- subset(V16V8, padj < 0.05)
G16G8filt <- subset(G16G8, padj < 0.05)
#filter subset of results by lfc
lfc <- 0.99
V16G16filt_lfc <- subset(V16G16filt, log2FoldChange > lfc | log2FoldChange < -(lfc))
V8G8filt_lfc <- subset(V8G8filt, log2FoldChange > lfc | log2FoldChange < -(lfc))
V16V8filt_lfc <- subset(V16V8filt, log2FoldChange > lfc | log2FoldChange < -(lfc))
G16G8filt_lfc <- subset(G16G8filt, log2FoldChange > lfc | log2FoldChange < -(lfc))
#of these are Upregulated genes
V16G16filt_lfc_up <- subset(V16G16filt, log2FoldChange > lfc)
V8G8filt_lfc_up <- subset(V8G8filt, log2FoldChange > lfc)
V16V8filt_lfc_up <- subset(V16V8filt, log2FoldChange > lfc)
G16G8filt_lfc_up <- subset(G16G8filt, log2FoldChange > lfc)
#of these are Downregulated genes
V16G16filt_lfc_dwn <- subset(V16G16filt, log2FoldChange < -(lfc))
V8G8filt_lfc_dwn <- subset(V8G8filt, log2FoldChange < -(lfc))
V16V8filt_lfc_dwn <- subset(V16V8filt, log2FoldChange < -(lfc))
G16G8filt_lfc_dwn <- subset(G16G8filt, log2FoldChange < -(lfc))
I am specifically interested in genes upregulated in V16G16 and V16V8. Ie:
#genes up in V16 vs V8 and G16
ARR_associated_filt_up<-V16G16filt_lfc_up[intersect(rownames(V16G16filt_lfc_up), rownames(V16V8filt_lfc_up)),]
I want to subsequently perform GOseq analysis on this set (65 genes).
Questions:
1.For GOseq, should my set of background assayed genes be all the genes assayed in both contrasts? or something else? Ie:
ARR_associated<-V16G16[union(rownames(V16G16),rownames(V16V8)),]
2. As far as I know HTseq can’t generate a gene length file. I thus created my gene length from the GFF3 I used for HTseq-count by summing exon lengths and calculating the median of transcripts per gene. Is this valid for GOseq?
txdb <- makeTxDbFromGFF("../../Chinese Long Genome/cucumber_v2.gff3",format="gff3")
exBytx<-exonsBy(txdb, by="tx", use.names=T)
txlengthData<-sum(width(reduce(exBytx)))
#Some nomenclature changes from transcript to gene names
names(txlengthData)<-gsub("M","G",names(txlengthData))
names(txlengthData)<-gsub("\\..*","",names(txlengthData))
#median transcript length per gene
txlengthData<-ave(x=txlengthData, names(txlengthData),FUN = median)
3. Calculating PWF for each contrast without any filtering gives ok to good looking plots for all but one contrast – V8G8, in which proportion of DE goes down with length. What do I make of this?
4. Furthermore, when I apply a fold change cut off of 2, all plots decrease with length. Why?
5. The plot for the set of interest described above is also flipped compared to the example in the vignette .
6. Can I continue with the analysis? If I do proceed, the resulting GO terms make biological sense.
7. More of a statistical question: When performing FDR calculation on the resulting p-vals, should I include those p-vals that equal 1? In the case of my 65 genes, only few GO terms are significant out of the thousand tested, thus multiple testing here yields no significant terms.
Thank you to all and any who can help.
Ben Mansfeld
sessionInfo()
R version 3.2.3 (2015-12-10)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows >= 8 x64 (build 9200)
locale:
[1] LC_COLLATE=English_United States.1252 LC_CTYPE=English_United States.1252 LC_MONETARY=English_United States.1252
[4] LC_NUMERIC=C LC_TIME=English_United States.1252
attached base packages:
[1] parallel stats4 stats graphics grDevices utils datasets methods base
other attached packages:
[1] goseq_1.22.0 BiocInstaller_1.20.3 rtracklayer_1.28.10 GenomicFeatures_1.20.6
[5] AnnotationDbi_1.30.1 Biobase_2.28.0 RSQLite_1.0.0 DBI_0.4-1
[9] geneLenDataBase_1.4.0 BiasedUrn_1.07 pheatmap_1.0.8 ggplot2_2.1.0
[17] GenomeInfoDb_1.4.3 IRanges_2.2.9 S4Vectors_0.6.6 BiocGenerics_0.14.0
[21] NCmisc_1.1.4
deseq2 goseq htseqcounts rnaseq plant • 1.2k views
0
Entering edit mode
@gordon-smyth
Last seen 3 hours ago
WEHI, Melbourne, Australia
An alternative would be to use Rsubread::featureCounts() to make the read counts for each gene. That is runnable from the R prompt, produces an R object directly, and returns appropriate gene lengths for you without any mucking around. See:
However the median of exon length transcripts per gene, as you have computed, is probably good enough. Any reasonable measure of gene length will probably do the job.
0
Entering edit mode
Thanks!
Any idea about the weird PWF plots? Why they are affected by filtering by log fold change? I read some other posts with similar issues but no conclusive response.
-Ben
1
Entering edit mode
Without knowing for sure, my guess would be that longer genes have more counts and are therefore sensitive to DE discovery even in the fold change is small. By having a threshold and remove these, you might be reducing the proportion of longer genes which are DE.
0
Entering edit mode
That's what I guessed. Do you think this makes a difference to the analysis?
Does fitting to the PWF still work if DE goes down with size? Is GOseq still a valid approach for my purposes?
Thanks again
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29974618554115295, "perplexity": 24739.90594712258}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335362.18/warc/CC-MAIN-20220929163117-20220929193117-00780.warc.gz"}
|
http://www.gradesaver.com/aristotles-metaphysics/q-and-a/work-physics-help-135690
|
# WORK PHYSICS HELP!
A 2.6 kg wagon moves in a straight line on a frictionless horizontal surface with an initial velocity of 3.0 m/s. It is then pushed for 4.0 m by a force of 2.5 N
In the same direction as the initial velocity
a) Use energy techniques to determine the final velocity
b) Check the answer to part a) using kinematics
##### Answers 1
Hey sorry, but this is a literature site. If I tried to help you, I'd be wrong.
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9623492360115051, "perplexity": 596.3203558923315}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282110.46/warc/CC-MAIN-20170116095122-00494-ip-10-171-10-70.ec2.internal.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/algebra/algebra-2-1st-edition/chapter-7-exponential-and-logarithmic-functions-7-2-graph-exponential-decay-functions-7-2-exercises-skill-practice-page-490/28
|
## Algebra 2 (1st Edition)
$f(x)=5(4)^{-x}=5\frac{1}{4^x}=5(\frac{1}{4})^x=5(0.25)^x=g(x)$ Thus they represent the same function.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9918275475502014, "perplexity": 4139.1263169701015}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250610004.56/warc/CC-MAIN-20200123101110-20200123130110-00039.warc.gz"}
|
http://en.citizendium.org/wiki/Cholesterol
|
Citizendium - a community developing a quality comprehensive compendium of knowledge, online and free. Click here to join and contribute—free CZ thanks SEPTEMBER 2014 donors; special to Darren Duncan. OCTOBER 2014 donations open; need minimum total \$100. Let's exceed that. Donate here. Treasurer's Financial Report -- Thanks to September content contributors. --
# Cholesterol
Main Article
Talk
Related Articles [?]
Bibliography [?]
Citable Version [?]
Signed Articles [?]
This editable Main Article is under development and not meant to be cited; by editing it you can help to improve it towards a future approved, citable version. These unapproved articles are subject to a disclaimer.
(CC) Image: David E. Volk
Structure and nomenclature of cholesterol. All steroid nomenclature is based on cholesterol. By convention, substituents pointing up, like C-18 and C-19, are called $\beta$ while those pointing down are called $\alpha$.
Cholesterol is a lipid that is the "principal sterol of all higher animals, distributed in body tissues, especially the brain and spinal cord, and in animal fats and oils."[1]
There is much confusion, in the lay press, between cholesterol itself, and the lipoproteins that carry it in the blood. References to "bad cholesterol" are misleading because cholesterol is cholesterol, but cholesterol carried by low-density lipoproteins (LDL) tend to increase atherosclerosis while cholesterol carried by high-density lipoproteins (HDL) tend to decrease it.
## Disorders of cholesterol
Hypercholesterolemia may contribute to coronary heart disease, stroke, and other complications.
Hypoalphalipoproteinemia is abnormally low levels of alpha-lipoproteins (high-density lipoproteins or HDL) in the blood. Low levels of high-density lipoproteins in the blood is a component of the metabolic syndrome.
## References
1. Anonymous (2014), Cholesterol (English). Medical Subject Headings. U.S. National Library of Medicine.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3548169732093811, "perplexity": 13925.219597508985}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119650516.39/warc/CC-MAIN-20141024030050-00116-ip-10-16-133-185.ec2.internal.warc.gz"}
|
https://www.physicsforums.com/threads/calcluated-density-value-different-from-literature.465936/
|
# Calcluated density value different from literature
1. ### ana111790
42
1. The problem statement, all variables and given/known data
A blood pressure cuff is used to measure the gage pressure associated with blood flow in the body. “Normal”, systolic blood pressure is commonly reported as 120 mm of mercury. This value represents the vertical displacement of mercury (h) resulting from the gage pressure within the device. (The density of mercury is ρ = 1.38 x 10^4 x kg/m^3 )
a. Calculate the gage pressure within the device (in Pa) that corresponds to a vertical displacement of 120 mmHg.
b. The fluid in the device is replaced with a glycerin solution and the gage pressure from part b is applied. The displacement in the column corresponding to this gage pressure is 166 mm of glycerin. What is the density of this glycerin solution?
2. Relevant equations
Pgage= ρ*g*h
3. The attempt at a solution
a) Pgage= ρ*g*h = (1.38 x 10^4 kg/m^3)*(9.8 m/s^2)*(120mm)* (1m/1000mm)
Pgage=16200 kPa
b) ρglycerin=Pgage= 16200Pa/[(9.8 m/s^2)*(166mm)*(1m/1000mm)
ρglycerin=9960 kg/m^3 which is different from the density of glycerin in literature (1250 kg/m^3)
So I am wondering these calculations are right or if I am missing something.
Thanks!
2. ### SammyS
8,747
Staff Emeritus
The height of the column is inversely proportional to the density of the fluid, so your answer appears to be consistent with the data given. I agree with you that the glycerin solution is unrealistically dense.
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.892248272895813, "perplexity": 1361.858243874969}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375096579.89/warc/CC-MAIN-20150627031816-00148-ip-10-179-60-89.ec2.internal.warc.gz"}
|
http://www.conservapedia.com/Newton's_Laws_of_Motion
|
Newton's Laws of Motion
Isaac Newton's 3 laws of motion form the basis for classical mechanics. They are:
1) An object in motion will remain in motion in a straight line unless acted upon by an external force. An object at rest will remain at rest unless acted upon by an external force. (Or, alternatively, an object's velocity remains constant unless the object is acted upon by an external force.)
2) The rate of change of an object's momentum is equal to the net force acting on it ($\vec F = d{\vec p}/dt$, sometimes written as $\vec F = m \times \vec a$ when mass can be assumed to be constant).
3) For every action there is an equal and opposite reaction; or, more precisely, the total momentum of any isolated system is always constant.
Explanation
The first law defines an inertial frame of reference as one which is acted upon by no outside forces. In general, inertial frames are far easier to understand conceptually and deal with mathematically than accelerated frames.
The second law relates force and momentum. Mathematically, $\vec F = d{\vec p}/dt = d(m \times \vec v)/dt = m \times d{\vec v}/dt + \vec v \times dm/dt$. Usually dm / dt = 0, so the law is simplified to $\vec F = m \times d{\vec v}/dt = m \times \vec a$, or mass times acceleration. A notable exception is rocket motion, where dm / dt is not 0, and so $\vec F = m \times \vec a$ does not apply. Note that the quantities F, p, v, and a are all vector quantities--that is, they have an associated direction as well as a magnitude. In general, the second law gives a way to predict the motion of an object by summing all the forces acting on that object.
The third law states that momentum is always conserved. If one object imparts a momentum p0 on another, the first object's momentum will change by -p0. This can be viewed as a consequence of Noether's Theorem; the associated symmetry is that the laws of physics do not change under spatial translations (that is, the laws of physics are the same everywhere).
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 5, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9641661643981934, "perplexity": 162.11478235114382}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637898779.5/warc/CC-MAIN-20141030025818-00176-ip-10-16-133-185.ec2.internal.warc.gz"}
|
https://www.physicsforums.com/threads/vacuum-impedance.128081/
|
# Vacuum impedance
1. Aug 5, 2006
### Kolahal Bhattacharya
The vacuum impedance,as our professor writes in blackboard, equals 377 ohm
What is the physical origin of this impedance?
2. Aug 5, 2006
### J Hann
This value arises from the plane wave solution of Maxwell's equations.
The ratio of E/H in a plane wave in free space is equal to the square
root of the ratio of the magnetic permittivity (mu) to the electric permittivity (epsilon).
This value has the dimensions of ohms and is called the characteristic
impedance of free space and has the value of 376.6 ohms.
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9557135105133057, "perplexity": 2119.9460352181745}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170993.54/warc/CC-MAIN-20170219104610-00025-ip-10-171-10-108.ec2.internal.warc.gz"}
|
https://physics.stackexchange.com/questions/390299/what-are-the-advantages-to-the-path-integral-formulation-of-non-relativistic-qua
|
# What are the advantages to the path integral formulation of non-relativistic quantum mechanics?
When I first learned quantum mechanics almost everything was in terms of wave functions or matrix mechanics, not path integrals. Not having learned much about path integrals besides some brief reading I am struggling to see their benefits or the motivation behind them, besides a desire for an approach based on action/lagrangians. It seems that some problems, like the hydrogen atom would even be more difficult in that formulation. However I would expect there has to be some, probably significant, advantages to using path integrals in certain situations.
What are the advantages (and disadvantages) to the path integral formulation compared to other approaches?
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8923920392990112, "perplexity": 346.42516147761967}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964360803.6/warc/CC-MAIN-20211201143545-20211201173545-00443.warc.gz"}
|
http://math.stackexchange.com/questions/107519/location-of-a-root-of-a-cubic-polynomial
|
# Location of a root of a cubic polynomial
For $\alpha\in(0,\frac12)$, $\beta\in(0,\infty)$, $N\in\mathbb N\backslash\{0\}$ and $n\in\{0,\ldots,N\}$, how can I prove that exactly one zero of the cubic polynomial
$$(N+2\beta)x^3-(N+n+3\beta)x^2+(n+\beta+N\alpha-N\alpha^2)x+n\alpha^2-n\alpha$$ lies in $[\alpha,1-\alpha]$?
-
I would evaluate the polynomial at $\alpha$ and $1-\alpha$, hoping to find different signs, and then I would compute the discriminant, hoping to find zero. Did you try? – Giovanni De Gaetano Feb 9 '12 at 17:30
Evaluating at $\alpha$, we have: \begin{align*} f(\alpha) &= (N+2\beta)\alpha^3 - (N+n+3\beta)\alpha^2 + (n+\beta+N\alpha-N\alpha^2)\alpha + n\alpha^2 - n\alpha\\ &= (N+2\beta - N)\alpha^3 + (-N-n-3\beta+N+n)\alpha^2 + (n+\beta-n)\alpha\\ &= 2\beta\alpha^3 - 3\beta\alpha^2 + \beta\alpha\\ &= \alpha\beta(2\alpha^2 -3\alpha + 1). \end{align*} Evaluating at $1-\alpha$ gives \begin{align*} f(1-\alpha) &= (N+2\beta)(1-3\alpha + 3\alpha^2-\alpha^3) - (N+n+3\beta)(1-2\alpha+\alpha^2)\\ &\qquad \mathop{+} (n+\beta+N\alpha-N\alpha^2)(1-\alpha) + n\alpha^2 - n\alpha\\ &= (-N-2\beta +N)\alpha^3 + (3N+6\beta - N-n-3\beta -N-N+n)\alpha^2\\ &\qquad \mathop{+}(-3N-6\beta+2N+2n+6\beta-n-\beta+N-n)\alpha\\ &\qquad \mathop{+} (N+2\beta-N-n-3\beta+n+\beta)\\ &= -2\beta\alpha^3 +3\beta\alpha^2-\beta\alpha\\ &= -\alpha\beta(2\alpha^2 - 3\alpha + 1). \end{align*} So, unless $2\alpha^2-3\alpha+1$ is $0$, the two values have opposite signs. But the roots of $2x^2-3x+1$ are $1$ and $\frac{1}{2}$, so $\alpha$ cannot be a root.
Thus, there is at least one root for the polynomial in $[\alpha,1-\alpha]$ (in fact, in $(\alpha,1-\alpha)$.
Since $f(x)$ has opposite signs on $\alpha$ and on $1-\alpha$, if $f(x)$ has more than one (distinct) root on $[\alpha,1-\alpha]$, then it must have three distinct roots in the interval (why?). Can all three roots be in that interval?
Great! The discriminant is identically zero so that implies that there is at most two distinct roots. But if there were two distinct roots in $(\alpha,1-\alpha)$, it could not change sign. So there is exactly one root in the interval. – Chris Ferrie Feb 9 '12 at 19:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9997289776802063, "perplexity": 725.2726727786944}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375096991.38/warc/CC-MAIN-20150627031816-00094-ip-10-179-60-89.ec2.internal.warc.gz"}
|
https://www.physicsforums.com/threads/problems-with-integration-contours-and-residues.226590/
|
# Problems with integration contours and residues
1. Apr 4, 2008
### nullus 1er
1. Hi there,
I have problems in finding the correct results for these integrals. I have to say I m not a great expert in residues...
2. Int[-inf;+inf;exp(-x^2)/(z-x)]
variable is x, constant is z
Int[-inf;0;exp(x)/(z-x)] with z<0
3. I think the 1st one gives 2i pi exp(-z^2) and the second one 2i pi exp(-z)+something
2. Apr 4, 2008
### Avodyne
These are not simple integrals that can be done by residues. The yields an error function and the second an incomplete gamma function.
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9600264430046082, "perplexity": 2316.8294686857216}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720154.20/warc/CC-MAIN-20161020183840-00350-ip-10-171-6-4.ec2.internal.warc.gz"}
|
http://mymathforum.com/abstract-algebra/11442-solving-real-roots-polynomial-equation.html
|
My Math Forum Solving real roots polynomial equation
Abstract Algebra Abstract Algebra Math Forum
February 25th, 2010, 08:28 AM #1 Newbie Joined: Feb 2010 Posts: 6 Thanks: 0 Solving real roots polynomial equation I have shown some more methods which is much quicker than previous ones for solving real roots polynomial equations. These examples are from’ Solving Real Roots Polynomial Equations’(New And Simple Methods Of Solving Real Roots Polynomial Equations Part-2). Try it to see whether it works well. Click here. http://www.rkmath.yolasite.com
February 25th, 2010, 09:41 AM #2 Senior Member Joined: Oct 2007 From: Chicago Posts: 1,701 Thanks: 3 Re: Solving real roots polynomial equation I've looked briefly through the first paper. I haven't looked close enough to say much about the methods or the novelty of the results, but a few things: I see no reference to other mathematicians here. Have you built on on (or even examined) work that's been done in the last century on roots of polynomial equations? How does your method depart from these methods and why? This is always a useful thing to see when someone claims their result is new-- it provides motivation and shows that this isn't just some guy who has "discovered" some nonsense, or something that's already well-known. It's shocking (and suspicious) to see a "new" result in the solution to polynomial equations which does not rely on any methods from higher algebra. Can you provide any insight on how you've proven something new with elementary methods? Also, how do you know your results are new? You really need to work on the formatting: it's difficult to follow what's going on. In particular: * departing form standard notation makes it difficult for mathematicians to read. (e.g. $a_nx^n+a_{n-1}x^{n-1}+\ldots + a_1x+a_0$ is much clearer notation than $AX^n+BX^{n-1}-------------YX+Z$. Also, your equations are difficult to distinguish from text; you should really be making use of a different font or style (e.g. math is always in italics, text is never in italics.) * If you insist on using word (instead of typesetting it in LaTeX), use a different font than calibri. Preferably one which is standard in mathematical papers (Latin Modern, Garamond, etc.) It looks nicer, and is much easier to take seriously (I'm not joking here.) * It is traditional to provide proofs immediately after first introducing a theorem; perhaps a quick remark, or an example beforehand is illuminating, but I shouldn't have to go to the end of the paper to read a proof of a statement on page one. * Your results are completely unmotivated; prose helps make math papers clearer. Especially, I don't get any sense of why I should care about what you have to say. I may have time to take a look at this in more detail this weekend.
February 27th, 2010, 07:45 AM #3
Newbie
Joined: Feb 2010
Posts: 6
Thanks: 0
Re: Solving real roots polynomial equation
Quote:
"cknapp"
Quote:
I see no reference to other mathematicians here.
Code:
All are my own ideas. I didn’t take anything from anywhere. This is only a draft. I like to add many things including a few references?, general proofs, some more examples, explanations and good formatting( all in good English), Only if my works are appreciated
Quote:
Have you built on on (or even examined) work that's been done in the last century on roots of polynomial equations?
Code:
I saw them, only after writing these papers. Because I didn’t study any degree.
Quote:
How does your method depart from these methods and why?
Code:
All are simple formulas.
Easy and suitable to school students.
Quote:
This is always a useful thing to see when someone claims their result is new-- it provides motivation and shows that this isn't just some guy who has "discovered" some nonsense, or something that's already well-known. It's shocking (and suspicious) to see a "new" result in the solution to polynomial equations which does not rely on any methods from higher algebra.
Code:
Please note that these formulas are only for real roots polynomial equations.
After seeing many math websites in net, I satisfied myself that no one discovered it before. I have shown that my method works well.Perhaps I may be wrong. You (mathematicians and scholars) will decide.
Quote:
Can you provide any insight on how you've proven something new with elementary methods?
Code:
All are based on simple facts , already known by all.
I think differently.
See the proofs, especially theorem 6. It does not need any proof. Theorem 1 is another form of theorem 6. My works all are mainly based on theorem 1. I will try to give you a detailed notes.
Quote:
Also, how do you know your results are new?
Code:
After seeing many math websites in net, I satisfied myself that no one discovered it before. Perhaps I may be wrong. You (mathematicians and scholars) will decide.
Quote:
it's difficult to follow what's going on
Code:
I have poor English knowledge. I can’t explain accurately in English as I think. That is why I wrote main points only. I didn’t explain any thing.
Quote:
You really need to work on the formatting: In particular: * departing form standard notation makes it difficult for mathematicians to read. (e.g. $a_nx^n+a_{n-1}x^{n-1}+\ldots + a_1x+a_0$ is much clearer notation than $AX^n+BX^{n-1}-------------YX+Z$. Also, your equations are difficult to distinguish from text; you should really be making use of a different font or style (e.g. math is always in italics, text is never in italics.) * If you insist on using word (instead of typesetting it in LaTeX), use a different font than calibri. Preferably one which is standard in mathematical papers (Latin Modern, Garamond, etc.) It looks nicer, and is much easier to take seriously (I'm not joking here.) * It is traditional to provide proofs immediately after first introducing a theorem; perhaps a quick remark, or an example beforehand is illuminating, but I shouldn't have to go to the end of the paper to read a proof of a statement on page one.
Code:
You are correct. Your suggestion all are accepted. Thank you.
Quote:
* Your results are completely unmotivated; prose helps make math papers clearer. Especially, I don't get any sense of why I should care about what you have to say.
Code:
It is only a request.
Quote:
I may have time to take a look at this in more detail this weekend
.
Code:
Thanks.
Code:
By the by, Part-1 is posted in net a year back. So far no one told ‘ it is not new ’ , ‘ theory is wrong ‘,’ it is useless and difficult ‘.
Second one – ‘ it is not a breakthrough ’.
third – ‘ I think its an amazing discovery. a powerful and unique method. awesome work on the whole.’
Fourth one – ‘ is it a joke ? ‘
So, You (mathematicians and scholars) will give the verdict that ‘ it is a breakthrough or not’, ‘it is a new method or not’, ‘it is simple or not’, ‘it is useful or not’ and ‘appreciate it or not’.
Thank you.
February 27th, 2010, 04:39 PM #4
Senior Member
Joined: Oct 2007
From: Chicago
Posts: 1,701
Thanks: 3
Re: Solving real roots polynomial equation
Quote:
Originally Posted by RKJCHENNAI All are my own ideas. I didn’t take anything from anywhere. This is only a draft. I like to add many things including a few references?, general proofs, some more examples, explanations and good formatting( all in good English), Only if my works are appreciated
Unfortunately, it's unlikely that an amateur working alone will discover something new that mathematicians find of any interest. Of course, it is possible. More likely than not, though, your results (even if new) will be of little interest to the mathematical community.
Quote:
I think differently.
A bold statement in mathematics.
Quote:
See the proofs, especially theorem 6. It does not need any proof.
A dangerous statement in mathematics.
Quote:
I have poor English knowledge. I can’t explain accurately in English as I think. That is why I wrote main points only. I didn’t explain any thing.
Fair enough.
Quote:
[quote:26nofcko]* Your results are completely unmotivated; prose helps make math papers clearer. Especially, I don't get any sense of why I should care about what you have to say.
It is only a request.[/quote:26nofcko]
I had mostly meant that there is little explanation. You explained why above.
The statement of theorem 1 needs to be refined: if k=0, then it is vacuous-- all polynomials with a 0-valued constant have 0 as a root.
Also, if n=1, then the two statements ($rs>k/z$ and $rs\leq nk/z$) are contradictory.
Your proof does not give anything unless n=3. You need to show that it holds for all polynomials that you are interested in.
n=3 has been "solved"-- we can very easily find all roots of a degree 3 polynomial. For n>4 polynomials are not necessarily solvable, so methods for approximating roots are certainly useful; unfortunately, a lot of work has gone into these, and I fail to see anything terribly original in your ideas.
I hope that's not too blunt...
March 1st, 2010, 10:58 PM #5
Newbie
Joined: Feb 2010
Posts: 6
Thanks: 0
Re: Solving real roots polynomial equation
Quote:
Unfortunately, it's unlikely that an amateur working alone will discover something new that mathematicians find of any interest. Of course, it is possible. More likely than not, though, your results (even if new) will be of little interest to the mathematical community.
okay.
Quote:
See the proofs, especially theorem 6. It does not need any proof.
I didn’t mean it. Actual(indirect) meaning is that the proof is very, very simple. Any one can prove it. Because it is obvious.
Quote:
I had mostly meant that there is little explanation. You explained why above..
Misunderstanding.
Sorry.
Quote:
The statement of theorem 1 needs to be refined: if k=0, then it is vacuous-- all polynomials with a 0-valued constant have 0 as a root.
X is not equal to 0 (to be included)
Quote:
Also, if n=1, then the two statements ( and ) are contradictory.
n > 1 (it is already told)
Quote:
Your proof does not give anything unless n=3. You need to show that it holds for all polynomials that you are interested in.
Yes. I have general proofs.
Quote:
n=3 has been "solved"-- we can very easily find all roots of a degree 3 polynomial. For n>4 polynomials are not necessarily solvable, so methods for approximating roots are certainly useful; unfortunately, a lot of work has gone into these, and I fail to see anything terribly original in your ideas. I hope that's not too blunt...
I am disappointed.
March 5th, 2010, 04:12 PM #6 Senior Member Joined: Apr 2008 Posts: 435 Thanks: 0 Re: Solving real roots polynomial equation Disappointing you or not, I agree with cknapp. Having skimmed through your papers, the first thing that strikes me is the new format. It is far more difficult to read through notations atypical of established papers than those I am accustomed to reading. It was an interesting challenge to read through your work. Perhaps the general proofs for cases when n is not equal to 3 would add validity to your claims, but these claims do not seem to substantiate a new development in math. I'm sorry.
Tags equation, polynomial, real, roots, solving
Thread Tools Display Modes Linear Mode
Similar Threads Thread Thread Starter Forum Replies Last Post panky Algebra 1 December 10th, 2013 07:15 AM rayman Abstract Algebra 0 April 20th, 2012 05:34 AM advancedfunctions Calculus 2 March 11th, 2010 12:36 PM domaPL Real Analysis 1 January 12th, 2008 10:28 AM domaPL Number Theory 0 December 31st, 1969 04:00 PM
Contact - Home - Forums - Cryptocurrency Forum - Top
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 6, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6423319578170776, "perplexity": 1154.0243520432118}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912204086.87/warc/CC-MAIN-20190325174034-20190325200034-00401.warc.gz"}
|
https://topospaces.subwiki.org/w/index.php?title=Homotopy_type_of_connected_sum_depends_on_choice_of_gluing_map&diff=cur&oldid=3903
|
# Difference between revisions of "Homotopy type of connected sum depends on choice of gluing map"
## Statement
It is possible to find an example of compact connected orientable manifolds $M_1$ and $M_2$ such that the homotopy type of the connected sum $M_1 \# M_2$ is not well defined, i.e., we can get connected sums of different homotopy types depending on the choice of the gluing map.
## Facts used
1. Complex projective space has orientation-reversing self-homeomorphism iff it has odd complex dimension
## Proof
To construct an example, we need to find a case where both $M_1$ and $M_2$ are orientable but neither of them has an orientation-reversing self-homeomorphism. One simple choice, by Fact (1), is to set both $M_1$ and $M_2$ as homeomorphic to the complex projective plane $\mathbb{P}^2(\mathbb{C})$ which has real dimension 4.
There are two possible connected sums:
• Connected sum of two complex projective planes with same orientation: This has cohomology ring isomorphic to $\mathbb{Z}[x,y]/(x^2 - y^2,xy,x^3,y^3)$, where $x,y$ are additive generators of the free abelian group $H^2$ and $x^2 = y^2$ is the additive generator for $H^4$.
• Connected sum of two complex projective planes with opposite orientation: This has cohomology ring isomorphic to $\mathbb{Z}[x,y]/(x^2 + y^2,xy,x^3,y^3)$, where $x,y$ are additive generators of the free abelian group $H^2$ and $x^2 = -y^2$ is the additive generator for $H^4$.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 18, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9326461553573608, "perplexity": 176.20761625220996}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991488.53/warc/CC-MAIN-20210515223209-20210516013209-00606.warc.gz"}
|
https://data.mendeley.com/research-data/?page=0&type=SLIDES&type=IMAGE&type=OTHER&search=qubit%20oscillator%20frequency
|
Filter Results
53376 results
We see from Eq. ( e62) that for non-interacting qubits, the non-vanishing qubit bias just shifts the frequency position of the liner peaks ( e57) without qualitatively changing their shape. If both the bias and the qubit-qubit interaction are finite, the bias splits each of the linear peaks in two simple Lorentzians bringing the total number of the finite-frequency peaks in the spectrum of the detector output to six as it should be in the generic situation (see, e.g., Fig. fig3).... Output spectra of the non-linear detector measuring two different unbiased qubits. Solid line is the spectrum in the case of non-interacting qubits. The two larger peaks are the “linear” peaks that correspond to the oscillations in the individual qubits, while smaller peaks are non-linear peaks at the combination frequencies. Dashed line is the spectrum for interacting qubits. Interaction shifts the lower-frequency liner peak down and all other peaks up in frequency. Parameters of the detector-qubit coupling are: δ 1 = 0.12 t 0 , δ 2 = 0.09 t 0 , λ = 0.08 t 0 .... Finite qubit bias should lead to averaging of the two spectra S I ± ( e27) similar to that discussed in the case of non-interacting qubits and illustrated in Fig. fig4.... The two spectral densities ( e20) correspond to two possible outcomes of measurement: the qubits found in one or the other subspace D ± , the probability of the outcomes being determined by the initial state of the qubits. Each of the spectral densities coincides with the spectral density of the linear detector measuring coherent oscillations in one qubit . Similarly to that case, the maximum of the ratio of the oscillation peak versus noise S 0 for each spectrum S I ± ω is 4. As one can see from Eq. ( e20), this maximum is reached when the measurement is weak: | λ | ≪ | t 0 | , and the detector is “ideal”: arg t 0 λ * =0, and only Γ + or Γ - is non-vanishing. If, however, there is small but finite transition rate between the two subspaces that mixes the two outcomes of measurement, the peak height is reduced by averaging over the two spectral densities ( e20). This situation is illustrated in Fig. fig4 which shows the output spectra of the purely quadratic detector, when the subspaces D ± are mixed by small qubit bias ε . Since the stationary density matrix ( e14) is equally distributed over all qubit states, the two peaks of the spectral densities ( e20) are mixed with equal probabilities, and the maximum of the ratio of the oscillation peak heights versus noise S 0 for the combined spectrum S I ω is 2. Spectrum shown in Fig. fig4 for ε = 0.1 Δ 1 (solid line) is close to this limit.... An example of the output spectrum of the non-linear detector measuring unbiased qubits with different tunneling amplitudes is shown in Fig. fig6. One can see that when the linear and non-linear coefficient of the detector-qubit coupling are roughly similar, the linear peaks are more pronounced than the peaks at combination frequencies. Qubit-qubit interaction shifts all but the lower-frequency linear peak up in frequency and reduces both the amplitudes of the higher-frequency peaks and the distance between them.... Evolution of the output spectrum of the non-linear detector measuring two identical unbiased qubits with the strength ν of the qubit-qubit interaction. The qubit-detector coupling constants δ 1 , 2 are taken to be slightly different to average the spectrum over all qubit states. The three solid curves correspond to ν / Δ = 0.0 , 0.1 , 0.2 . In agreement with Eqs. ( e42) – ( e44), the peak at ω ≃ Δ is at first suppressed and then split in two by increasing ν , while the peak at ω ≃ 2 Δ is not changed noticeably by such a weak interaction. Dashed and dotted lines show the regime of relatively strong interaction: ν / Δ = 0.5 and ν / Δ = 1.0 , respectively, that is described by Eqs. ( e46) and ( e47).... Figure fig5 illustrates evolution of the output spectrum of the non-linear detector measuring identical qubits due to changing interaction strength. We see that this evolution agrees with the analytical description developed above. Weak qubit-qubit interaction ν ≃ κ ≪ Δ suppresses and subsequently splits the spectral peak at ω ≃ Ω while not changing the peak ω ≃ 2 Ω . Stronger qubit-qubit interaction ν ≃ Δ ≫ κ shifts the ω ≃ 2 Ω peak to higher frequencies while moving the two peaks around ω ≃ Ω further apart.... Output spectrum of a nonlinear detector measuring two qubits with “the most general” set of parameters. Six peaks in the spectrum at finite frequencies correspond to six different energy intervals in the energy spectrum of the two-qubit system. The zero-frequency peak reflects dynamics of transitions between energy levels. Detector parameters are: δ 1 = 0.1 , δ 2 = 0.07 , λ = 0.09 (all normalized to t 0 ). In this Figure, and in all numerical plots below we take Γ + | t 0 | 2 = Δ 1 , Γ - = 0 , and assume that the detector tunneling amplitudes are real.... Diagram of a mesoscopic detector measuring two qubits. The qubits modulate amplitude t of tunneling of detector particles between the two reservoirs.... Output spectra of a purely quadratic detector measuring two non-interacting qubits. Small qubit bias ε 1 = ε 2 ≡ ε (solid line) creates transitions that lead to averaging of the two main peaks at combination frequencies Δ 1 ± Δ 2 [see Eq. ( e20)]. Further increase of ε (dashed line) makes additional spectral peaks associated with these transitions more pronounced. The strength of quadratic qubit-detector coupling is taken to be λ = 0.15 t 0 .
Data Types:
• Image
(color online) Qubit’s final excited state probability P obtained from the semiclassical calculation as a function of temperature k B T and coupling strength g , both measured relative to the minimum qubit gap Δ . The different panels correspond to different values of the harmonic oscillator frequency: ℏ ω / Δ = 0.2 (top), 1 (middle) and 5 (bottom).... (color online) Energy level diagram of a coupled qubit-oscillator system with the qubit bias conditions varied according to the LZ protocol.... We can also see in Fig. Fig:ExcitationProbability02 that for g / Δ 1 the temperature dependence is non-monotonic. In particular, for low temperatures we obtain the intuitively expected increase in excitation probability with increasing temperature, but this trend reverses for higher temperatures. In order to investigate this feature further, we calculate the qubit’s final excited-state probability as a function of the number n of excitation quanta present in the initial state of the oscillator (Note that this calculation differs from the ones described above in that here we do not use the Boltzmann distribution for the oscillator’s initial state). The results are plotted in Fig. Fig:ExcitationProbabilityAsFunctionOfInitialOscillatorExcitationNumber. These results explain the non-monotonic dependence on temperature. For intermediate values of g / Δ (e.g. for g / Δ = 1 ), there is a peak at a small but finite excitation number followed by a steady decrease. As the temperature is increased from zero, the qubit’s final excited-state probability samples the probabilities for increasingly high excitation numbers, and a peak at intermediate values of temperature is obtained. Note that for large excitation numbers, the increase in P as a function of n resumes, and this increase will also be reflected in the temperature dependence.... where ω is the characteristic frequency of the harmonic oscillator, â and â † are, respectively, the oscillator’s annihilation and creation operators, and g is the qubit-oscillator coupling strength. The energy level diagram of this problem is illustrated in Fig. Fig:EnergyLevelDiagram.... Another feature worth noting is the temperature dependence of P close to zero temperature. As can be seen clearly in Figs. Fig:ExcitationProbability10 and Fig:ExcitationProbability50, the initial increase in P with temperature is very slow, indicating that it probably follows an exponential function that corresponds to the probability of populating the excited states in the harmonic oscillator (and the same dependence is probably present but difficult to see because of the scale of the x axis in Fig. Fig:ExcitationProbability02). After this initial slow rise, and in particular when k B T ℏ ω , we see a steady rise that in the case of Fig. Fig:ExcitationProbability02 can be approximated as a linear increase in P with increasing T . Importantly, the slope of this increase can be quite large for intermediate g values. From the results shown in Figs. Fig:ExcitationProbability02- Fig:ExcitationProbability50, we find that the maximum slope d P / d k B T / Δ m a x = 0.18 × ℏ ω / Δ -0.57 , and results for other parameter values extending up to ℏ ω / Δ = 20 follow this dependence. The implication of this result can be seen clearly in the middle panel of Fig. Fig:ExcitationProbability02: even when the temperature is substantially smaller than the qubit’s minimum gap Δ , the initial excitation of the low-frequency oscillator (stemming from the finite temperature) can cause a large increase in the qubit’s final excited-state probability. This result is in contrast with the exact result of Ref. stating that at zero temperature the qubit’s final excited-state probability is given by P L Z regardless of the value of g . The typical temperature scale at which deviations from the LZ formula occur can therefore be much lower than Δ / k B . This result is relevant for adiabatic quantum computing, because it contradicts the expectation that having a minimum gap that is large compared to the temperature might provide automatic protection for the ground state population against thermal excitation. Another point worth noting here is that when ℏ ω qubit and oscillator are resonant with each other, yet the initial thermal excitation of the oscillator can result in exciting the qubit at the final time. The excitations in the oscillator are in some sense up-converted into excitations in the qubit as a result of the sweep through the avoided crossing.... In addition to solving the Schrödinger equation, we have performed semiclassical calculations where we assume that there is no quantum coherence between the different LZ processes. (Note here that when we replace the isolated qubit with the coupled qubit-oscillator system the single avoided crossing is replaced by a complex network of avoided crossings.) Under this approximation, we only need to calculate the occupation probabilities of the different states, and these probabilities change (according to the LZ formula) only at the points of avoided crossing. This approach greatly simplifies the numerical calculations because the locations and gaps for the different avoided crossings can be determined easily (see e.g.~Fig.~ Fig:EnergyLevelDiagram). The results are shown in Fig. Fig:ExcitationProbabilityFromIncoherentCalculation. The results of this calculation agree generally well with those obtained by solving the Schrödinger equation when ℏ ω / Δ = 1 . For ℏ ω / Δ = 5 , the semiclassical calculation consistently underestimates the excited-state probability, but the overall dependence on temperature and coupling strength is remarkably similar to that shown in Fig. Fig:ExcitationProbability50. We should note that higher values of ℏ ω (not shown) exhibit more pronounced deviations, with side peaks appearing in the dependence of P on g / Δ . The most striking deviation from the results of the fully quantum calculation is seen in the case ℏ ω / Δ = 0.2 (i.e. the case of a low-frequency oscillator). In the semiclassical calculation, there is a rather high peak at a small value of the coupling strength (and sufficiently high temperatures), and the excited-state probability starts decreasing when the coupling strength g becomes larger than ℏ ω . In the fully quantum calculation, however, the peak is located at a much higher value, somewhere between 0.5 and 1 depending on the temperature.... (color online) Top: Qubit’s final excited-state probability P as a function of temperature k B T and coupling strength g , both measured relative to the qubit’s minimum gap Δ . Middle: P as a function of k B T / Δ for four different values of g / Δ : 0.1 (red solid line), 0.3 (green dashed line), 1 (blue dotted line) and 2 (magenta dash-dotted line). Bottom: P as a function of g / Δ for three different values of k B T / Δ : 1 (red solid line), 3 (green dashed line), and 5 (blue dotted line). In all the panels, the harmonic oscillator frequency is ℏ ω / Δ = 0.2 . The sweep rate is chosen such that P L Z = 0.1 , and this value is the baseline for all of the results plotted in this figure.... (color online) The final excited state probability P as a function of the number of excitation quanta n present in the initial state of the oscillator. Here we take ℏ ω / Δ = 0.2 . The different lines correspond to different values of the coupling strength: g / Δ = 0.1 (red solid line), 0.5 (green dashed line), 1 (blue dotted line) and 2 (magenta dash-dotted line).... The probability for the qubit to end up in the excited state at the final time as a function of temperature and coupling strength is plotted in Figs. Fig:ExcitationProbability02- Fig:ExcitationProbability50. As expected from known results , the final excited-state occupation probability P remains equal to 0.1 whenever the temperature or the coupling strength is equal to zero. Otherwise, the coupling to the oscillator causes this probability to increase. A common, and somewhat surprising, trend for all values of ℏ ω / Δ is the non-monotonic dependence on the coupling strength g . As the coupling strength is increased from zero to finite but small values, P increases. But when the coupling strength is increased further, P starts decreasing. Based on the results that are plotted in Figs. Fig:ExcitationProbability02- Fig:ExcitationProbability50, one can expect that in the limit of large g / Δ (and assuming not-very-large values of k B T / Δ ) the excited-state occupation probability will go back to its value in the uncoupled case, i.e. P = 0.1 . This phenomenon is probably a manifestation of the superradiance-like behaviour in a strongly coupled qubit-oscillator system . In the superradiant regime (i.e. the strong-coupling regime), the ground state is highly entangled exactly at the symmetry point (which corresponds to the bias conditions at t = 0 in the LZ problem), but even small deviations from the symmetry point can lead to an effective decoupling between the qubit and resonator with the exception of some state-dependent mean-field shifts. Indeed the maximum values of P reached in Figs. Fig:ExcitationProbability10 and Fig:ExcitationProbability50 occur at coupling strength values that are comparable to the expression for the uncorrelated-to-correlated crossover value, namely g ∼ ℏ ω (and we have verified that the near-linear increase in peak location as a function of oscillator frequency continues up to ℏ ω / Δ = 20 ). This relation does not apply in the case ℏ ω / Δ = 0.2 , shown in Fig. Fig:ExcitationProbability02. In this case, the peak occurs when the coupling strength g is comparable to the minimum gap Δ . It is in fact quite surprising that the excitation peak in the case ℏ ω / Δ = 0.2 occurs at a higher coupling strength than that obtained in the case ℏ ω / Δ = 1 . In order to investigate this point further, we tried values close to ℏ ω / Δ = 1 and found that this value gives a minimum in the peak location (i.e. the peak in P when plotted as a function of g / Δ ).
Data Types:
• Image
The first term has a peak at zero frequency, while the second term has a peak at ω = Ω , with width 3 Γ / 2 , and signal -1 / 3 Γ . Bounding this signal in relation to the noise in the individual twin detectors gives | S 1 , 2 Ω | ≤ 2 / 3 S I . The interesting feature of this correlator is that it changes sign as a function of frequency. The low frequency part describes the incoherent relaxation to the stationary state, while the high frequency part describes the out of phase, coherent oscillations of the z and x degrees of freedom. The measured correlator S z x , as well as S x x , S z z are plotted as a function of frequency in Fig. combo(b,c,d) for different values of ϵ . These correlators all describe different aspects of the time domain destruction of the quantum state by the weak measurement, visualized in Fig. comboa. We note that the cross-correlator changes sign for ϵ = - Δ .... (color online). (a) Time domain destruction of the quantum state by the weak measurement process for ϵ = Δ . The elapsed time is parameterized by color, and (x,y,z) denote coordinates on the Bloch sphere. (b) The measured cross-correlator S z x ω changes sign from positive at low frequency (describing incoherent relaxation) to negative at the qubit oscillation frequency (describing out of phase, coherent oscillations). (c,d) The correlators S x x , S z z have both a peak at zero frequency and at qubit oscillation frequency. We take Γ = Γ x = Γ z = .07 Δ / ℏ . S i j are plotted in units of Γ -1 .... Cross-correlated quantum measurement set-up: Two quantum point contacts are measuring the same double quantum dot qubit. As the quantum measurement is taking place, the current outputs of both detectors can be averaged or cross-correlated with each other.
Data Types:
• Image
(Color online) Energy spectrums for lowest eight levels under the situation with three high-frequency qubits: ℏ w 0 / E q = 0.01 . The rescaled energy E k / ℏ w 0 with k = 1 , 2 , 3 , . . . , 8 versus the rescaled coupling strength λ / ℏ w 0 is plotted: (a) θ = 0 ; (b) θ = π / 6 ; (c) θ = π / 3 .... (Color online) Schematic of four displaced oscillators. The horizontal and vertical axises represent the position and displaced oscillator’s eigenenergy E d o , respectively. Four displaced oscillators are shifted to the left or right from the equilibrium position with a specific constant, where the shift direction is determined by the state of three qubits. The eigenstates (plotted with n no more than 2 ) that have the same value of n are degenerate for the states | A ± 1 (or | A ± 3 ), and have the symmetry divided by the origin point in horizontal axis.... adiabatic approximation, three qubits, ultrastrongly coupled, harmonic oscillator... (Color online) Energy spectrums for lowest eight levels under the situation with a high-frequency oscillator: ℏ w 0 / E q = 10 . The rescaled energy E k / ℏ w 0 with k = 1 , 2 , 3 , . . . , 8 versus the rescaled coupling strength λ / ℏ w 0 is plotted: (a) θ = 0 ; (b) θ = π / 6 ; (c) θ = π / 3 .... (Color online) Schematic of the system with three identical qubits coupled to a harmonic oscillator. The j th ( j = 1 , 2 , 3 ) qubit with one ground ( | g j ) and one excited states ( | e j ) is coupled to the oscillator with frequency w 0 , where the qubit-oscillator coupling strength is denoted by g or λ .... (Color online) The Q function (upside) and the Wigner function (underside) of the oscillator’s state with three high-frequency qubits (i.e., ℏ w 0 / Δ = 0.1 and ϵ = 0 ): (a,d) λ / ℏ w 0 = 0.5 , (b,e) λ / ℏ w 0 = 1 , (c,f) λ / ℏ w 0 = 1.25 .
Data Types:
• Image
The systems considered are shown in Fig. fig:system. To be specific we first analyze the Rabi driven flux qubit coupled to an LC-oscillator (Fig. fig:systema) with Hamiltonian... Average number of photons in the resonator as function of the driving detuning δ ω and amplitude Ω R 0 . Peaks at δ ω > 0 correspond to lasing, dips at δ ω qubit are Δ / 2 π = 1 GHz, ϵ = 0.01 Δ , and Γ 0 / 2 π = 125 kHz, the frequency and line-width of the resonator are ω T / 2 π = 6 MHz and κ / 2 π = 1.7 kHz, the coupling constant is g / 2 π = 3.3 MHz and the temperature of the resonator T = 10 mK. The inset shows the bistability of the photon number for Ω R 0 / 2 π = 7 MHz. The dashed line represents the unstable solution.... So far we described a flux qubit coupled to an LC oscillator, but our analysis applies equally to a nano-mechanical resonator capacitively coupled to a Josephson charge qubit (see Fig. fig:systemb). In this case σ z stands for the charge of the qubit, and both the coupling to the oscillator and the driving are capacitive, i.e., involve σ z . To produce capacitive coupling between the qubit and the oscillator, the latter is metal coated and charged by a voltage source . The dc component of the gate voltage V g puts the system near the charge degeneracy point where the dephasing due to the 1 / f charge noise is minimal. Rabi driving is induced by an ac component of V g . Realistic experimental parameters are expected to be very similar to the ones used in the examples discussed above, except that a much higher quality factor of the resonator ( ∼ 10 5 ) and a much higher number of quanta in the oscillator can be reached. This number will easily exceed the thermal one, thus a proper lasing state with Poisson statistics, appropriately named SASER , is produced. One should then observe the usual line narrowing with line width given by κ N t h / 4 n ̄ ∼ κ 2 N t h / Γ 1 . Experimental observation of this line-width narrowing would constitute a confirmation of the lasing/sasing.... In Fig. 3dphoton we summarize our main results obtained by solving the Langevin (Fokker-Plank) equations . The number of photons n ̄ is plotted as a function of the detuning δ ω of the driving frequency and driving amplitude Ω R 0 . It exhibits sharp extrema along two curves corresponding to the one- and two-photon resonances, Ω R = ω T - 4 g 3 n ̄ and Ω R = 2 ω T - 4 g 3 n ̄ . Blue detuning, δ ω > 0 , induces a strong population inversion of the qubit levels, which in resonance leads to one-qubit lasing. In experiments the effect can be measured as a strong increase of the photon number in the resonator above the thermal values. On the other hand, red detuning produces a one-qubit cooler with photon numbers substantially below the thermal value. Near the resonances we find regions of bi-stability illustrated in the inset of Fig. 3dphoton. In these regions we expect a telegraph-like noise due to random switching between the two solutions.... Several recent experiments on quantum state engineering with superconducting circuits realized concepts originally introduced in the field of quantum optics and stimulated substantial theoretical activities . Josephson qubits play the role of two-level atoms, while oscillators of various kinds replace the quantized light field. Motivated by one such experiment , we investigate a Josephson qubit coupled to a slow LC oscillator (Fig. fig:system a) with eigenfrequency (in the MHz range) much lower than the qubit’s energy splitting (in the GHz range), ω T ≪ Δ E . The qubit is ac-driven to perform Rabi oscillations, and the Rabi frequency Ω R is tuned close to resonance with the oscillator. For this previously unexplored regime of frequencies we study both one-photon (for Ω R ≈ ω T ) and two-photon (for Ω R ≈ 2 ω T ) qubit-oscillator couplings. The latter is dominant at the “sweet" point of the qubit, where due to symmetry the linear coupling to the noise sources is tuned to zero and dephasing effects are minimized . When the qubit driving frequency is blue detuned, δ ω = ω d - Δ E > 0 , we find that the system exhibits lasing behavior; for red detuning the qubit cools the oscillator. Similar behavior is expected in an accessible range of parameters for a Josephson qubit coupled to a nano-mechanical oscillator (Fig. fig:systemb), thus providing a realization of a SASER (Sound Amplifier by Stimulated Emission of Radiation).... The systems. a) In the circuit QED setup of Ref. an externally driven three-junction flux qubit is coupled inductively to an LC oscillator. b) In an equivalent setup a charge qubit is coupled to a mechanical resonator.
Data Types:
• Image
Average number of photons in the resonator as function of the driving detuning δ ω and amplitude Ω R 0 . Peaks at δ ω > 0 correspond to lasing, while dips at δ ω qubit: Δ / 2 π = 1 GHz, ϵ = 0.01 Δ , Γ 0 / 2 π = 125 kHz, the resonator: ω T / 2 π = 6 MHz, κ / 2 π = 0.34 kHz, and the coupling: g / 2 π = 3.3 MHz. The bath temperature is T = 10 mK.... Dressed states of a driven qubit near resonance. Here m is the number of photons of the driving field, which is assumed to be quantized.... In experiments with the same setup as shown in Fig. fig:systema) but in a different parameter regime the mechanisms of Sisyphus cooling and amplification has recently been demonstrated . Due to the resonant high-frequency driving of the qubit, depending on the detuning, the oscillator is either cooled or amplified with a tendency towards lasing. The Sisyphus mechanism is most efficient when the relaxation rate of the qubit is close to the oscillator’s frequency. In contrast, in the present paper we concentrate on the “resolved sub-band" regime where the dissipative transition rates of the qubits are much lower than the oscillator’s frequency.... Average number of photons n ̄ versus the detuning. The blue curves are obtained from the Langevin equations ( dot alpha) and ( dot alpha2). They show the bistability with the solid curve denoting stable solutions, while the dashed curve denotes the unstable solution. The red curve is obtained from a numerical solution of the master equation ( eq:Master_Equation). The driving amplitude is taken as Ω R 0 / 2 π = 5 MHz. The parameters of the qubit: Δ / 2 π = 1 GHz, ϵ = 0.01 Δ , Γ 0 / 2 π = 125 kHz, the resonator: ω T / 2 π = 6 MHz, κ / 2 π = 1.7 kHz, N t h = 5 , and the coupling: g / 2 π = 3.3 MHz.... So far we described an LC oscillator coupled to a flux qubit. But our analysis equally applies for a nano-mechanical resonator coupled capacitively to a Josephson charge qubit (see Fig. fig:systemb). In this case σ z stands for the charge of the qubit and both the coupling to the oscillator as well as the driving are capacitive, i.e., involve σ z . To produce the capacitive coupling between the qubit and the oscillator, the latter could be metal-coated and charged by the voltage source V x . The dc component of the gate voltage V g puts the system near the charge degeneracy point where the dephasing due to the 1 / f charge noise is minimal. Rabi driving is induced by an a c component of V g . Realistic experimental parameters are expected to be very similar to the ones used in the examples discussed above, except that a much higher quality factor of the resonator ( ∼ 10 5 ) and a much higher number of quanta in the oscillator can be reached. This number will easily exceed the thermal one, thus a proper lasing state with Poisson statistics, appropriately named SASER , is produced. One should then observe the usual line narrowing with line width given by κ N t h / 4 n ̄ ∼ κ 2 N t h / Γ ~ 1 . Experimental observation of this line-width narrowing would constitute a confirmation of the lasing/sasing.... Average number of photons in the resonator as function of the qubit’s relaxation rate, Γ 0 at the one-photon resonance, Ω R = ω T for g 3 = 0 and N t h = 5 . The dark blue line shows the numerical solution of the master equation, the light blue solid line represents the solution of the Langevin equation, Eq. ( dot alpha ). The green and red dashed curves represent respectively the saturation number n 0 and the thermal photon number N t h . The parameters are as in Fig. fig:compar (except for Γ 0 ).... Also in situations where the qubit, e.g., a Josephson charge qubit, is coupled to a nano-mechanical oscillator (Fig. fig:systemb) it either cools or amplifies the oscillator. On one hand, this may constitute an important tool on the way to ground state cooling. On the other hand, this setup provides a realization of what is called a SASER .... Recent experiments on quantum state engineering with superconducting circuits realized concepts originally introduced in the field of quantum optics, as well as extensions thereof, e.g., to the regime of strong coupling , and prompted substantial theoretical activities . Josephson qubits play the role of two-level atoms while electric or nanomechanical oscillators play the role of the quantized radiation field. In most QED or circuit QED experiments the atom or qubit transition frequency is near resonance with the oscillator. In contrast, in the experiments of Refs. , with setup shown in Fig. fig:systema), the qubit is coupled to a slow LC oscillator with frequency ( ω T / 2 π ∼ MHz) much lower than the qubit’s level splitting ( Δ E / 2 π ℏ ∼ 10 GHz). The idea of this experiment is to drive the qubit to perform Rabi oscillations with Rabi frequency in resonance with the oscillator, Ω R ≈ ω T . In this situation the qubit should drive the oscillator and increase its oscillation amplitude. When the qubit driving frequency is blue detuned, the driving creates a population inversion of the qubit, and the system exhibits lasing behavior (“single-atom laser"); for red detuning the qubit cools the oscillator . A similar strategy for cooling of a nanomechanical resonator via a Cooper pair box qubit has been recently suggested in Ref. . The analysis of the driven circuit QED system shows that these properties depend strongly on relaxation and decoherence effects in the qubit.... a) In the setup of Ref. an externally driven three-junction flux qubit is coupled inductively to an LC oscillator. b) A charge qubit is coupled to a mechanical resonator.... The systems to be considered are shown in Fig. fig:system. A qubit is coupled to an oscillator and driven to perform Rabi oscillations. To be specific we first analyze the flux qubit coupled to an electric oscillator (Fig. fig:systema) with Hamiltonian
Data Types:
• Image
In the preceeding analysis we neglected the effect of the local environment by setting Y i n t ω = 0 . As a result, the low-frequency value of T 1 is substantially larger than obtained in experiment . By modeling the local environment with R 0 = 5000 ohms and L 0 = 0 we obtain the T 1 versus ω 01 plot shown in Fig. fig:three. Notice that this value of R 0 brings T 1 to values close to 20 ns at T = 0 . The message to extract from Figs. fig:two and fig:three is that increasing R 0 as much as possible and increasing the qubit frequency ω 01 from 0.1 Ω to 2 Ω at fixed low temperature can produce a large increase in T 1 .... Schematic drawing of the phase qubit with an RLC isolation circuit.... The circuit used to describe intrinsic decoherence and self-induced Rabi oscillations in phase qubits is shown in Fig. fig:one, which correponds to an asymmetric dc SQUID . The circuit elements inside the dashed box form an isolation network which serves two purposes: a) it prevents current noise from reaching the qubit junction; b) it is used as a measurement tool.... In the limit of T = 0 , we can solve for c 1 t exactly and obtain the closed form c 1 t = L -1 s + Γ - i ω 01 2 + Ω 2 - Γ 2 s s + Γ - i ω 01 2 + Ω 2 - Γ 2 - κ Ω 4 π i / Γ where L -1 F s is the inverse Laplace transform of F s , and κ = α / M ω 01 × Φ 0 / 2 π 2 ≈ 1 / ω 01 T 1 , 0 . The element ρ 11 = | c 1 t | 2 of the density matrix is plotted in Fig. fig:four for three different values of resistance, assuming that the qubit is in its excited state such that ρ 11 0 = 1 . We consider the experimentally relevant limit of Γ ≪ ω 01 ≈ Ω , which corresponds to the weak dissipation limit. Since Γ = 1 / 2 C R the width of the resonance in the spectral density shown in Eq. ( eqn:sd-poles) is smaller for larger values of R . Thus, for large R , the RLC environment transfers energy resonantly back and forth to the qubit and induces Rabi-oscillations with an effective time dependent decay rate γ t = - 2 ℜ c ̇ 1 t / c 1 t .... fig:three T 1 (in nanoseconds) as a function of qubit frequency ω 01 . The solid (red) curves describes an RLC isolation network with parameters R = 50 ohms, L 1 = 3.9 nH, L = 2.25 pH, C = 2.22 pF, and qubit parameters C 0 = 4.44 pF, R 0 = 5000 ohms and L 0 = 0 . The dashed curves correspond to an RL isolation network with the same parameters, except that C = 0 . Main figure ( T = 0 ), inset ( T = 50 mK) with Ω = 141 GHz.... fig:four Population of the excited state of the qubit as a function of time ρ 11 t , with ρ 11 t = 0 = 1 for R = 50 ohms (solid curve), 350 ohms (dotted curve), and R = 550 ohms (dashed curve), and L 1 = 3.9 nH, L = 2.25 pH, C = 2.22 pF, C 0 = 4.44 pF, R 0 = ∞ and L 0 = 0 .... fig:two T 1 (in seconds) as a function of qubit frequency ω 01 . The solid (red) curves describes an RLC isolation network with parameters R = 50 ohms, L 1 = 3.9 nH, L = 2.25 pH, C = 2.22 pF, and qubit parameters C 0 = 4.44 pF, R 0 = ∞ and L 0 = 0 . The dashed curves correspond to an RL isolation network with the same parameters, except that C = 0 . Main figure ( T = 0 ), inset ( T = 50 mK) with Ω = 141 GHz.... In Fig. fig:two, T 1 is plotted versus qubit frequency ω 01 for spectral densities describing an RLC (Eq. eqn:spectral-density-isolation) or Drude (Eq. eqn:sd-drude) isolation network at fixed temperatures T = 0 (main figure) and T = 50 mK (inset), for J i n t ω = 0 corresponding to R 0 ∞ . In the limit of low temperatures k B T / ℏ ω 01 ≪ 1 , the relaxation time becomes T 1 ω 01 = M ω 01 / J ω 01 . From Fig. fig:two (main plot) several important points can be extracted. First, in the low frequency regime ( ω 01 ≪ Ω ) the RL (Drude) and RLC environments produce essentially the same relaxation time T 1 , R L C 0 = T 1 , R L 0 = T 1 , 0 ≈ L 1 / L 2 R C 0 , because both systems are ohmic. Second, near resonance ( ω 01 ≈ Ω ), T 1 , R L C is substantially reduced because the qubit is resonantly coupled to its environment producing a distinct non-ohmic behavior. Third, for ( ω 01 > Ω ), T 1 grows very rapidly in the RLC case. Notice that for ω 01 > 2 Ω , the RLC relaxation time T 1 , R L C is always larger than T 1 , R L . Furthermore, in the limit of ω 01 ≫ m a x Ω , 2 Γ , T 1 , R L C grows with the fourth power of ω 01 behaving as T 1 , R L C ≈ T 1 , 0 ω 01 4 / Ω 4 , while for ω 01 ≫ Ω 2 / 2 Γ , T 1 , R L grows only with second power of ω 01 behaving as T 1 , R L ≈ 4 T 1 , 0 Γ 2 ω 01 2 / Ω 4 . Thus, T 1 , R L C is always much larger than T 1 , R L for sufficiently large ω 01 . Notice, however, that for parameters in the experimental range such as those used in Fig fig:two, T 1 , R L C is two orders of magnitude larger than T 1 , R L , indicating a clear advantage of the RLC environment shown in Fig fig:one over the standard ohmic RL environment. Thermal effects are illustrated in the inset of Fig. fig:two where T = 50 mK is a characteristic temperature where experiments are performed . The typical values of T 1 at low frequencies vary from 10 -5 s at T = 0 to 10 -6 s at T = 50 mK, while the high frequency values remain essentially unchanged as the thermal effects are not important for ℏ ω 01 ≫ k B T .... These environmentally-induced Rabi oscillations are a clear signature of the non-Markovian behavior produced by the RLC environment, and are completely absent in the RL environment because the energy from the qubits is quickly dissipated without being temporarily stored. These environmentally-induced Rabi oscillations are generic features of circuits with resonances in the real part of the admittance. The frequency of the Rabi oscillations Ω R a = π κ Ω 3 / 2 Γ is independent of the resistance since Ω R a ≈ Ω π L 2 C / L 1 2 C 0 , and has the value of Ω R a = 2 π f R a ≈ 360 × 10 6 rad/sec for Fig. fig:four.
Data Types:
• Image
In the example figure (Fig. fig:qubosc1d2d), the control bias is varied from left to right for a low frequency oscillator circuit (1.36GHz). For each bias point the simulation is reinitialised, the stochastic time evolution of the system density matrix is simulated over 1500 oscillator cycles. Then the oscillator and qubit charge expectation values are extracted to obtain the power spectrum for each component, with a frequency resolution of 4.01MHz. The power spectra for each time series are collated as an image such that the power axis is now represented as a colour, and the individual power spectra are vertical ‘slices’ through the image. The dominant frequency peaks become line traces, therefore illustrating the various avoided crossings, mergeings and intersections. The example figure shows the PSD ‘slice’ at Bias = 0.5187 , the broadband noise is readily apparent and is due to the discontinuous quantum jumps in the qubit. The bias oscillator peak (1.36GHz) is most prominent in the oscillator PSD, as would be expected, but it is also present in the qubit PSD. It should also be noted that most features are present in both the qubit and oscillator, including the noise which is generated by the quantum jumps and the quantum state diffusion processes. Interestingly, the qubit PSD is significantly stronger than the oscillator PSD, however, a larger voltage is generated by the smaller charge due to the extremely small island capacitance, V q = q / C q .... fig:mwRamp (Color online) Oscillator PSD as a function of the applied microwave drive frequency f m w , for microwave amplitudes A m w = 0.0050 (A) and A m w = 0.0100 (B). It is important to notice that there are now two frequency axes per plot, a drive (H) and a response (V). Of particular interest is the magnified section which shows clearly the distinct secondary splitting in the sub-GHz regime. This occurs due to a high frequency interaction seen in the upper plots, where the lower Rabi sideband of the microwave drive passes through the high frequency oscillator signal. The maximum splitting occurs when the Rabi amplitude is a maximum, hence this is observed for a very particular combination of bias and drive, which is beneficial for charactering the qubit. Most importantly, this would not be observed with a conventional low frequency oscillator configuration as the f m w - f o s c separation would be too large for the Rabi frequency. ( κ = 5 × 10 -5 ).... Fig. fig:mwRamp is presented in a similar manner as Fig. fig:BiasRamp. However there are now two frequency axes: the horizontal axis represents the frequency of the applied microwave drive field, and the vertical axis is the frequency response. It should be remembered that the microwave frequency axis is focused near the qubit transition frequency ( f q u b i t ≈ 3.49GHz) and the diagonally increasing line is now the microwave frequency.... Autler Townes effect, charge qubit, characterisation, frequency spectrum... fig:QubitOscEnergy A two level qubit is coupled to a many level harmonic oscillator, investigated for two different oscillator energies. Firstly, the oscillator resonant frequency is set to 1.36GHz, this more resembles the conventional configuration such that the fundamental component of the oscillator does not drive the qubit. However, we also investigate the use of a high frequency oscillator of 3.06GHz which can excite this qubit. In addition, qubit is constantly driven by a microwave field at 3.49GHz to generate Rabi oscillations and in this paper we examine the relation between these three fields.... fig:qubosc1d2d (Color online) Oscillator and Qubit power spectra slices for Bias = 0.5187, using the low frequency oscillator circuit f o s c = 1.36 GHz. The solid lines overlay the energy level separations found in Fig. fig:EnergyLevel. ( κ = 5 × 10 -5 ). As one would expect, the bias oscillator peak at 1.36GHz is clearly observed in the oscillator PSD, but only weakly in the qubit PSD. Likewise the qubit Rabi frequency is found to be stronger in the qubit PSD. However it is important to note that the qubit dynamics such as the Rabi oscillations are indeed coupled to the bias oscillator circuit and so can be extracted. In addition, it is recommended to compare the layout of the most prominent features with Fig. fig:BiasRamp.... fig:BiasRamp (Color online) Oscillator PSD as a function of bias, for microwave amplitudes A m w = 0.0025 (A) and A m w = 0.0050 (B). The red lines track the positions (in frequency) of significant power spectrum peaks (+10dB to +15dB above background), the overlaid black and blue lines are the qubit energy and microwave transition (Fig. fig:EnergyLevel). Unlike Fig. fig:qubosc1d2d, in these figures the 3.06GHz oscillator circuit can now drive the qubit (Fig. fig:EnergyLevel) and so creates excitations which mix with the microwave driven excitations creating a secondary splitting centred on f m w - f o s c (430MHz). This feature contains the Rabi frequency information in the sidebands of the splitting, but now in a different and controllable frequency regime. In addition, the intersection of the two differently driven excitations (illustrated in the magnified sections), opens the possibility of calibrating the biased qubit against a fixed engineered oscillator circuit, using a single point feature. ( κ = 5 × 10 -5 ).... In a previous paper , a method was proposed by which the energy level structure of a charge qubit can be obtained from measurements of the peak noise in the bias/control oscillator, without the need of extra readout devices. This was based on a technique originally proposed for superconducting flux qubits but there are many similarities between the two technologies. The oscillator noise peak is the result of broadband noise caused by quantum jumps in the qubit being coupled back to the oscillator circuit. This increase in the jump rate becomes a maximum when the Rabi oscillations are at peak amplitude, this should only occur when the qubit is correctly biased and the microwave drive is driving at the transition frequency. Therefore by monitoring this peak as a function of bias, we can associate a bias position with a microwave frequency equal to that of the energy gap, hence constructing the energy diagram (Fig. fig:EnergyLevel).... fig:Jumps (Color online) (A) Oscillator power spectra when the coupled qubit is driven at f m w = 5.00 GHz. An increase in bias noise power ( f o s c = 1.36 GHz) can be observed when Rabi oscillations occur, the more frequent quantum jump noise couples back to the oscillator. (B) Bias noise power peak position changes as a function of f m w , the microwave drive frequency. Therefore, it is possible to probe the qubit energy level structure by using the power increase in the oscillator which is already in place, eliminating the need for additional measurement devices. However, it should be noted that the surrounding oscillator harmonics may mask the microwave driven peak. ( κ = 1 × 10 -3 ).
Data Types:
• Image
• Tabular Data
where we have defined the total spin operators J ̂ α = ∑ σ ̂ α / 2 . In the limit ℏ ω 0 / Δ → 0 , all the results concerning the low-energy spectrum of the resonator remain unchanged; one could say that the reduction of the coupling strength by the factor N is compensated by the strengthening of the spin raising and lowering operators by the same factor because of the collective behaviour of the qubits. In particular, the transition occurs at the critical coupling strength given by Eq. ( Eq:CriticalCouplingStrength). Because the qubits now have a larger total spin (when compared to the single-qubit case), spin states that are separated by small angles can be drastically different (i.e. have a small overlap). In particular, the overlap for N qubits is given by cos 2 N θ / 2 . By expanding this function to second order around θ = 0 , one can see that for small values of θ the relevant overlap is lower than unity by an amount that is proportional to N . This dependence translates into the dependence of the qubit-oscillator entanglement on the coupling strength just above the critical point. The entanglement therefore rises more sharply in the multi-qubit case (with the increase being by a factor N ), as demonstrated in Fig. Fig:EntropyLogLog.... (Color online) The logarithm of the von Neumann entropy S as a function of the logarithm of the quantity λ / λ c - 1 , which measures the distance of the coupling strength from the critical value. The red solid line corresponds to the single-qubit case, whereas the other lines correspond to the multi-qubit case: N = 2 (green dashed line), 3 (blue short-dashed line), 5 (purple dotted line) and 10 (dash-dotted cyan line). All the lines correspond to ℏ ω 0 / Δ = 10 -7 . The slope of all lines is approximately 0.92 when λ / λ c - 1 = 10 -4 . The ratio of the entropy in the multi-qubit case to that in the single-qubit case approaches N for all the lines as we approach the critical point.... The energy level structure in the single-qubit case is simple in principle. In the limit ℏ ω 0 / Δ → 0 , one can say that the energy levels form two sets, one corresponding to each qubit state. Each one of these sets has a structure that is similar to that of a harmonic oscillator with some modifications that are not central in the present context. In particular the density of states has a weak dependence on energy, a situation that cannot support a thermal phase transition. If the temperature is increased while all other system parameters are kept fixed, qubit-oscillator correlations (which are finite only above the critical point) gradually decrease and vanish asymptotically in the high-temperature limit. No singular point is encountered along the way. This result implies that the transition point is independent of temperature. In other words, it remains at the value given by Eq. ( Eq:CriticalCouplingStrength) for all temperatures. If, for example, one is investigating the dependence of the correlation function C on the coupling strength (as plotted in Fig. Fig:SpinFieldSignCorrelationFunction), the only change that occurs as we increase the temperature is that the qubit-oscillator correlations change more slowly when the coupling strength is varied.... where p ̂ is the oscillator’s momentum operator, which is proportional to i â † - â in our definition of the operators. The squeezing parameter mirrors the behaviour of the low-lying energy levels. In particular we can see from Fig. Fig:SqueezingParameter that only when ℏ ω 0 / Δ reaches the value 10 -5 does the squeezing become almost singular at the critical point.... (Color online) The von Neumann entropy S as a function of the oscillator frequency ℏ ω 0 and the coupling strength λ , both measured in comparison to the qubit frequency Δ . One can see clearly that moving in the vertical direction the rise in entropy is sharp in the regime ℏ ω 0 / Δ ≪ 1 , whereas it is smooth when ℏ ω 0 / Δ is comparable to or larger than 0.1.... The tendency towards singular behaviour (in the dependence of various physical quantities on λ ) in the limit ℏ ω 0 / Δ → 0 is illustrated in Figs. Fig:ColorPlot- Fig:SqueezingParameter. In these figures, the entanglement, spin-field correlation function, low-lying energy levels (measured from the ground state) and the oscillator’s squeezing parameter are plotted as functions of the coupling strength. It is clear from Figs. Fig:EntropyLinear and Fig:SpinFieldSignCorrelationFunction that when ℏ ω 0 / Δ ≤ 10 -3 both the entanglement (which is quantified through the von Neumann entropy S = T r ρ q log 2 ρ q with ρ q being the qubit’s reduced density matrix) and the correlation function C = σ z s i g n a + a † rise sharply upon crossing the critical point . The low-lying energy levels, shown in Fig. Fig:EnergyLevels, approach each other to form a large group of almost degenerate energy levels at the critical point before they separate again into pairs of asymptotically degenerate energy levels. This approach is not complete, however, even when ℏ ω 0 / Δ = 10 -3 ; for this value the energy level spacing in the closest-approach region is roughly ten times smaller than the energy level spacing at λ = 0 . The squeezing parameter is defined by the width of the momentum distribution relative to that in the case of an isolated oscillator. For consistency with Ref. , we define it as
Data Types:
• Image
We performed a spectroscopy measurement of the qubit with long (50 ns) single-frequency microwave pulses. We observed multi-photon resonant peaks ( Φ q b 1.5 Φ 0 ) in the dependence of P s w on f M W 1 at a fixed magnetic flux Φ q b . We obtained the qubit energy diagram by plotting their positions as a function of Φ q b / Φ 0 (Fig. Fig2(a)). We took the data around the degeneracy point Φ q b ≈ 1.5 Φ 0 by applying an additional dc pulse to the microwave line to shift Φ q b away from 1.5 Φ 0 just before the readout, because the dc-SQUID could not distinguish the qubit states around the degeneracy point. The top solid curve in Fig. Fig2(a) represents a numerical fit to the resonant frequencies of one-photon absorption. From this fit, we obtain the qubit parameters E J / h = 213 GHz, Δ / 2 π = 1.73 GHz, and α = 0.8. The other curves in Fig. Fig2(a) are drawn by using these parameters for n 1 = 2, 3, and 4.... Next, we used short single-frequency microwave pulses with a frequency of 10.25 GHz to observe the coherent quantum dynamics of the qubit. Figures Fig2(b) and (c) show one- and four-photon Rabi oscillations observed at the operating points indicated by arrows in Fig. Fig2(a) with various microwave amplitudes V M W 1 . These data can be fitted by damped oscillations ∝ exp - t p / T d cos Ω R a b i t p , except for the upper two curves in Fig. Fig2(b). Here, t p and T d are the microwave pulse length and qubit decay time, respectively. To obtain Ω R a b i , we performed a fast Fourier transform (FFT) on the curves that we could not fit by damped oscillations. Although we controlled the qubit environment, there were some unexpected resonators coupled to the qubit, which could be excited by the strong microwave driving or by the Rabi oscillations of the qubit. We consider that these resonators degraded the Rabi oscillations in the higher V M W 1 range of Fig. Fig2(b). Figure Fig2(d) shows the V M W 1 dependences of Ω R a b i / 2 π up to four-photon Rabi oscillations, which are well reproduced by Eq. ( eq2). Here, we used only one scaling parameter a (10.25 GHz) = 0.013 defined as a f M W 1 ≡ 4 g 1 α 1 / ω M W 1 V M W 1 , because it is hard to measure the real amplitude of the microwave applied to the qubit at the sample position. The scaling parameter a f M W 1 reflects the way in which the applied microwave is attenuated during its transmission to the qubit and the efficiency of the coupling between the qubit and the on-chip microwave line. In this way, we can estimate the real microwave amplitude and the interaction energy between the qubit and the microwave 2 ℏ g 1 α 1 by fitting the dependence of Ω R a b i / 2 π on V M W 1 . These results show that we can reach a driving regime that is so strong that the interaction energy 2 ℏ g 1 α 1 is larger than the qubit transition energy ℏ ω q b .... Experimental results with single-frequency microwave pulses. (a) Spectroscopic data of the qubit. Each set of the dots represents the resonant frequencies f r e s caused by the one to four-photon absorption processes. The solid curves are numerical fits. The dashed line shows a microwave frequency f M W 1 of 10.25 GHz. (b) One-photon Rabi oscillations of P s w with exponentially damped oscillation fits. Both the qubit Larmor frequency f q b and the microwave frequency f M W 1 are 10.25 GHz. The external flux is Φ q b / Φ 0 = 1.4944. (c) Four-photon Rabi oscillations when f q b = 41.0 GHz, f M W 1 = 10.25 GHz, and Φ q b / Φ 0 = 1.4769. (d) The microwave amplitude dependence of the Rabi frequencies Ω R a b i / 2 π up to four-photon Rabi oscillations. The solid curves represent theoretical fits. Fig2... The measurements were carried out in a dilution refrigerator. The sample was mounted in a gold plated copper box that was thermalized to the base temperature of 20 mK ( k B T frequency microwave pulses, we added two microwaves MW1 and MW2 with frequencies of f M W 1 and f M W 2 , respectively by using a splitter SP (Fig. Fig1(b)). Then we shaped them into microwave pulses through two mixers. We measured the amplitude of MW k V M W k at the point between the attenuator and the mixer with an oscilloscope. We confirmed that unwanted higher-order frequency components in the pulses, for example | f M W 1 ± f M W 2 | , 2 f M W 1 , and 2 f M W 2 are negligibly small under our experimental conditions. First, we choose the operating point by setting Φ q b around 1.5 Φ 0 , which fixes the qubit Larmor frequency f q b . The qubit is thermally initialized to be in | g by waiting for 300 μ s, which is much longer than the qubit energy relaxation time (for example 3.8 μ s at f q b = 11.1 GHz). Then a qubit operation is performed by applying a microwave pulse to the qubit. The pulse, with an appropriate length t p , amplitudes V M W k , and frequencies f M W k , prepares a qubit in the superposition state of | g and | e . After the operation, we immediately apply a dc readout pulse to the dc-SQUID. This dc pulse consists of a short (70 ns) initial pulse followed by a long (1.5 μ s) trailing plateau that has 0.6 times the amplitude of the initial part. For Φ q b qubit is detected as being in | e , the SQUID switches to a voltage state and an output voltage pulse should be observed; otherwise there should be no output voltage pulse. By repeating the measurement 8000 times, we obtain the SQUID switching probability P s w , which is directly related to P e t p for the dc readout pulse with a proper amplitude. For Φ q b > 1.5 Φ 0 , P s w is directly related to 1 - P e t p .... We next investigated the coherent oscillations of the qubit through the parametric processes by using short two-frequency microwave pulses. Figure Fig3(a) [(b)] shows the Rabi oscillations of P s w when the qubit Larmor frequency f q b = 26.45 [7.4] GHz corresponds to the sum of the two microwave frequencies f M W 1 = 16.2 GHz, f M W 2 = 10.25 GHz [the difference between f M W 1 = 11.1 GHz and f M W 2 = 18.5 GHz] and the microwave amplitude of MW2 V M W 2 was fixed at 33.0 [50.1] mV. They are well fitted by exponentially damped oscillations ∝ exp - t p / T d cos Ω R a b i t p . The Rabi frequencies obtained from the data in Fig. 3(a) [(b)] are well reproduced by Eq. ( eq3) without any fitting parameters (Fig. Fig3(c) [(d)]). Here, we used Δ , which was obtained from the spectroscopy measurement (Fig. Fig2(a)) and used a (10.25 GHz) = 0.013 and a (16.2 GHz) = 0.0074 [ a (11.1 GHz) = 0.013 and a (18.5 GHz) = 0.0082], which had been obtained from Rabi oscillations by using single-frequency microwave pulses with each frequency. Those results provide strong evidence that we can achieve parametric control of the qubit with two-frequency microwave pulses.... (a) Scanning electron micrograph of a flux qubit (inner loop) and a dc-SQUID (outer loop). The loop sizes of the qubit and SQUID are 10.2 × 10.4 μ m 2 and 12.6 × 13.5 μ m 2 , respectively. They are magnetically coupled by the mutual inductance M ≈ 13 pH. (b) A circuit diagram of the flux qubit measurement system. On-chip components are shown in the dashed box. L ≈ 140 pH, C ≈ 9.7 pF, R I 1 = 0.9 k Ω , R V 1 = 5 k Ω . Surface mount resistors R I 2 = 1 k Ω and R V 2 = 3 k Ω are set in the sample holder. We put adequate copper powder filters CP and LC filters F and attenuators A for each line. Fig1... Experimental results with two-frequency microwave pulses. (a) [(b)] Two-photon Rabi oscillations due to a parametric process when f q b = f M W 2 + - f M W 1 . The solid curves are fits by exponentially damped oscillations. (c) [(d)] Rabi frequencies as a function of V M W 1 , which are obtained from the data in Fig. Fig3(a) [(b)]. The dots represent experimental data when V M W 2 = 16.9, 23.5, 33.0, and 52.0 [50.1, 62.9, 79.1, and 124.7] mV from the bottom set of dots to the top one. The solid curves represent Eq. ( eq3). The inset is a schematic of the parametric process that causes two-photon Rabi oscillation when f q b = f M W 2 + - f M W 1 . Fig3
Data Types:
• Image
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9582261443138123, "perplexity": 1143.3073840629265}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371810617.95/warc/CC-MAIN-20200408041431-20200408071931-00021.warc.gz"}
|
https://www.ideals.illinois.edu/handle/2142/23753
|
## Files in this item
FilesDescriptionFormat
application/pdf
9136653.pdf (7MB)
(no description provided)PDF
## Description
Title: Scanning tunneling microscopy and photoemission spectroscopy studies of clean and adsorbate-covered semiconductor surfaces Author(s): Leibsle, Frederick Michael Doctoral Committee Chair(s): Chiang, Tai-Chang Department / Program: Physics, Condensed Matter Discipline: Physics, Condensed Matter Degree Granting Institution: University of Illinois at Urbana-Champaign Degree: Ph.D. Genre: Dissertation Subject(s): Physics, Condensed Matter Abstract: Scanning tunneling microscopy (STM), photoemission spectroscopy, and a variety of other experimental techniques have been used to examine both the initial stages of interface formation between various adsorbates, as well as, reconstructions occurring on the clean surfaces of Si and Ge. The initial stages of oxidation of the Si(111)-(7 x 7) and Ge(111)-c(2 x 8) surfaces were studied. Images of the same area of each surface were obtained from various exposures of oxygen. On the Si(111)-(7 x 7) surface, the results show that defect sites act as nucleation centers for the oxidation process. On the Ge(111)-c(2 x 8) surface, images of the surface for exposures of oxygen up to 1600 Langmuirs were obtained. The results show that the oxidized portions of the surface grow as islands which expand preferentially in the (112) direction. The "16 structure" and c(8 x 10) reconstructions occurring on the clean Ge(110) surfaces were examined. STM images show that the Ge(110)-"16 structure" reconstructed surface is composed of ordered facets as predicted by a low-energy-electron diffraction study. STM images of the Ge(110)-c(8 x 10) surface show that the unit cell is composed of alternating oblique sub-unit cells. Photoemission spectra of the Ge 3d core levels for both these surfaces show the presence of multiple surface-shifted components. Sb deposition on these surfaces has also been studied. Sb deposition results in the formation of a (1 x 1) or (3 x 2) ordered over-layer depending on the substrate temperature and Sb coverage. Both these surfaces have been observed with STM. Photoemission results for various coverages of Sb show that the surface-shifted components of the Ge 3d core level are suppressed by the deposition of Sb. A structural model, consistent with the data, for the (1 x 1) Sb-terminated surface is presented. Angle-resolved photoemission spectroscopy was used to measure the bulk band-dispersion relations along the high symmetry $\Gamma$-$\Sigma$-X directions for both Ge and Si. Issue Date: 1991 Type: Text Language: English URI: http://hdl.handle.net/2142/23753 Rights Information: Copyright 1991 Leibsle, Frederick Michael Date Available in IDEALS: 2011-05-07 Identifier in Online Catalog: AAI9136653 OCLC Identifier: (UMI)AAI9136653
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5485413074493408, "perplexity": 4194.410646150529}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590046.11/warc/CC-MAIN-20180718021906-20180718041906-00118.warc.gz"}
|
http://eprints.iisc.ernet.in/13204/
|
# $^{13}C and ^1H$ NMR study of N-5'-methylsalicylideneanilines
Kishore, K and Sathyanarayana, DN and Bhanu, VA (1987) $^{13}C and ^1H$ NMR study of N-5'-methylsalicylideneanilines. In: Magnetic Resonance in Chemistry, 25 . pp. 471-473.
PDF full3].pdf Restricted to Registered users only Download (296Kb) | Request a copy
## Abstract
The $^{13}C and ^1H$ NMR spectra of six N-5'-methylsalicylideneanilines have been studied. Correlations of the chem. shifts of C-\alpha (azomethine carbon) and C-4' with the \sigma, F, R, \sigma 1 and \sigmaR0 parameters have been examd. for N-5'-methylsalicylideneanilines, and also for N-benzylideneanilines and N-salicylideneanilines. The results suggest that the first compds. have a nearly planar conformation whereas the second and third type of derivs. have a twisted conformation.
Item Type: Journal Article http://dx.doi.org/10.1002/mrc.1260250602 Copyright belongs to John Wiley and Sons Ltd. 13C and 1H NMR;N-5-methylsalicylideneanilines;F and R Conformation Division of Chemical Sciences > Inorganic & Physical Chemistry 06 Mar 2008 19 Sep 2010 04:43 http://eprints.iisc.ernet.in/id/eprint/13204
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8872504234313965, "perplexity": 10798.981966013778}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737931434.67/warc/CC-MAIN-20151001221851-00101-ip-10-137-6-227.ec2.internal.warc.gz"}
|
https://mathmistakes.wordpress.com/tag/simplifying-expressions/
|
## Order of Operations
The “Order of Operations” is a convention that ensures everyone interprets algebraic notation the same way. It is used in conjunction with the properties of each operation (addition, subtraction, multiplication, division, and exponentiation). I summarize the order of operations for myself as “do the most powerful operations first”.
The answer shown below is wrong. Try working your way through the problem backwards, from the answer up. As you find each mistake, try to identify the thinking behind it… what perspective led to the mistake? What should have been done instead?
$3+2(15 \div 3-1)^2$
$=5(15 \div 3-1)^2$
$=(75 \div 15-5)^2$
$=(5-5)^2$
$=0^2$
$=0$
.
..
Hint 1: The correct answer to the original problem is: $35$
Hint 2: What if the problem had been written
$3+2\cdot (15 \div 3-1)^2$
Hint 3: Two mistakes have been made in the work above
## Collecting Like Terms #1
“Collecting Like Terms” is a phrase that is used often in mathematics, yet it is a process that can feel a bit arbitrary at first as it relies on several algebraic principles.
The answer shown below is wrong. Try working your way through the problem backwards, from the answer up. As you find each mistake, try to identify the thinking behind it… what perspective led to the mistake? What should have been done instead?
$3x+5-4x+2$
$=8x-4x+2$
$=4x-2$
.
..
Hint 1: The correct answer to the original problem is: $-x+7$
Hint 2: What if the problem had been written
$3x+4+~^-4x+2$
or perhaps
$3\cdot x+4+(-4)\cdot x+2$
Hint 3: Two mistakes have been made in the work above
## Sums and Products with Exponents
Powers and roots can be distributed over products and quotients. They may not be distributed over sums or differences, no matter how tempting it may be. Sums or differences raised to a power must be used as a factor the indicated number of times, then multiplied using the distributive property.
An exponent applies only to the factor immediately below it unless parentheses have been used to indicate otherwise.
“Like terms” have the same variables, to the same powers… and only “like” terms may be combined by adding their coefficients.
The answer shown below is wrong. Try working your way through the problem backwards, from the answer up. As you find each mistake, try to identify the thinking behind it… what perspective led to the mistake? What should have been done instead?
$(f-g^2)^2-fg^2-(-fg)^2-f^2$
$=f^2-g^4-fg^2-(-fg)^2-f^2$
$=f^2-g^4-f^2g^2-(-fg)^2-f^2$
$=f^2-g^4-f^2g^2+fg^2-f^2$
$=-g^4$
.
..
Hint 1: The correct answer to the problem on the first line above is: $g^4-3fg^2-f^2g^2$
Hint 2: The work shown above contains sign errors, simplification errors, and exponentiation errors. Re-read the text at the top of the posting if you have not found them all…
## Distributes Over…
The “distributive property” of multiplication and division. That’s not the proper full name, but it’s what many people say… so, when do you distribute, and when don’t you? That is the question.
The answer shown below is wrong. Try working your way through the problem backwards, from the answer up. As you find each mistake, try to identify the thinking behind it… what perspective led to the mistake? What should have been done instead?
$\dfrac{(4)(3)(2k+6)}{6}$
$=\dfrac{(12)(8k+24)}{6}$
$=\dfrac{(12)(8k+24)}{(2)(3)}$
$=\dfrac{(6)(4k+12)}{3}$
$=(2)(4k+4)$
$=8k+4$
.
..
Hint 1: The correct answer to the original problem is: $4k+12$
Hint 2: The complete description of the distributive property of multiplication is: “the distributive property of multiplication over…”?
Hint 3: Several mistakes have been made in the work above – don’t just seek the correct answer – look for mistakes between every pair of lines.
## Negative Exponents
Negative exponents can be another source of confusion. No matter where you find a negative exponent, you can turn it into a positive exponent by taking the reciprocal of the expression it applies to.
The answer shown below is wrong. Try working your way through the problem backwards, from the answer up. As you find each mistake, try to identify the thinking behind it… what perspective led to the mistake? What should have been done instead?
$\dfrac{c^{-2}d^3}{c^3d^{-4}}$
$=\dfrac{d^3d^4}{c^{-2}c^3}$
$=\dfrac{d^7}{c^{-6}}$
$=d$
.
..
Hint 1: The correct answer to the original problem is: $\dfrac{d^7}{c^5}$
Hint 2: Several mistakes have been made in the work above – don’t just seek the correct answer – look for the mistakes between every pair of lines.
## Laws of Exponents
The “laws of exponents” are a frequent source of errors. The rules that apply when simplifying expressions that involve exponents can be figured out quickly on your own if you (in your mind’s eye) expand integral exponents into repeated multiplication. I encourage students to master being able to explain why each of these rules is as it is instead of memorizing them, as memorized versions are more likely to get jumbled together in your thinking when working problems.
The answer shown below is wrong. Try working your way through the problem backwards, from the answer up. As you find each mistake, try to identify the thinking behind it… what perspective led to the mistake? What should have been done instead?
$\dfrac{(ab^2)^3(5a^2)}{a^2b}$
$=\dfrac{a^5b^525a^2}{a^2b}$
$=\dfrac{25a^{10}b^5}{a^2b}$
$=25a^5b^5$.
.
..
Hint 1: The correct answer to the original problem is: $5a^3 b^5$
Hint 2: Several mistakes have been made in the work above – don’t just seek the correct answer – look for the mistakes between every pair of lines.
## Negative Signs and Fractions
Negative signs bother many students, particularly when they are followed by fractions. In such situations, it is important to remember that the vinculum (the horizontal line between numerator and denominator) serves as a grouping symbol – like parentheses would. I recommend that students always put a numerator with more than one term in parentheses before bringing a leading negative sign into the numerator.
The answer shown below is wrong. Try working your way through the problem backwards, from the answer up. As you find each mistake, try to identify the thinking behind it… what perspective led to the mistake? What should have been done instead?
$\dfrac{2-w}{3}-\dfrac{2w-5}{2}$
$=\dfrac{2-w}{3}\cdot\dfrac{2}{2}-\dfrac{2w-5}{2}$
$=\dfrac{2(2-w)}{6}+\dfrac{-2w-5}{6}$
$=\dfrac{4-w-2w-5}{6}$
$=\dfrac{-1-3w}{6}$
.
..
Hint 1: the correct answer to the original problem is: $\dfrac{-8w+19}{6}$
Hint 2: several mistakes have been made in the work above – they are not all sign or distribution errors
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 43, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7728181481361389, "perplexity": 577.6171386689771}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671239.99/warc/CC-MAIN-20191122042047-20191122070047-00497.warc.gz"}
|
http://physics.stackexchange.com/questions/70963/about-dirac-cones
|
This nice image of Dirac cones (from this article), in a ($E,\vec k$ graph) will be an introduction for several questions, in the realm of topological insulators.
1) Does the Dirac cone appears only at the surface ?
2) Is the shape (the cone) important ?
3) The Dirac cone is gapless, so is it only stable by symmetry-protection ?
4) Suppose a Dirac cone is opened, then closed, then re-opened. In the open situation, there is a energy gap, so there is a possible non-trivial topology. So it is possible to change the topology in the open->close->open process ?
-
I hope somebody will provide more details, but let me just give some quick answers. 1) Yes, since topological insulators (TI) are per definition gapped in the bulk. The existence of gapless boundary modes can be heuristically argued for as follows: phase transition between to different TI's can only happen if the bulk gap closes. If one put two different TI's next to each other, the gap must close on the boundary between them such that there can be a transition. – Heidar Jul 13 '13 at 11:34
2) Depends on with respect to what it should be important. As far as stability of the edge modes are concerned, that's not important. However the shape will be important for questions about the detailed dynamics for example. If one takes higher energy/momentum contributions into account in the boundary low-energy effective theory, then the Dirac equation will get non-relativistic corrections in general. This is also clear from your picture. See for example arxiv.org/abs/0908.1418. Eq. (4) contain the first correction to the dispersion and thus the change of shape of the Dirac cone. – Heidar Jul 13 '13 at 11:35
3) Yes, the gaplessness of the boundary mode is protected by a symmetry (as is the case for all TI's). 4) I am not sure I understand this question. – Heidar Jul 13 '13 at 11:36
Forewords: As Heidar remarked in the associated comments, my answers were not dedicated to the topological insulator situation. I'll try to correct myself in some edits I'll write [-> into brackets <-] and into answer-bis, but I let my answers about topological superconductors, since they may be helpful.
1) Dirac cone's on surface: Some emergent Dirac cones appears in the bulk of the $p$-wave chiral superconductor, see the book by Volovik for more details, available freely on his homepage at Aalto University. I'm not at ease with the notion of band structure on the surface. I have no idea what it means... That's just the closure of the gap which happens on the surface/edge for me. [-> Please see the Heidar comments for a clever discussion <-].
1-bis: the topological insulator situation. The topological insulator case is easier to discuss, since a bulk insulator has no closure of the gap by definition. Then, the Dirac-linear-closure can only happens at the edge. See also point 4 below, and the Heidar's comments about the Jackiw-Rebbi model below.
2) Shape of the cone: The shape, per se is not important. What you need is a linear dispersion relation with a crossing point. (NB: Without crossing, the dispersion corresponds to the Weyl fermion particles.) The cone structure is the simplest structure like this.
3) Symmetry protected topology: I don't know the full answer to this question. I would say no, not for the emergent Dirac cones in superconducting/superfluid phase: the cone can there be topologically protected as well. But the topology depends strongly on the symmetry for quadratic Hamiltonians, especially the three discrete ones of particle-hole $P$ such that $\left\{ P,H\right\} =0$ with $P^{2}=\pm1$, time-reversal $T$ such that $\left[T,H\right]=0$ with $T^{2}=\pm1$ (both $P$ and $T$ have anti-unitary representation, and $H$ is a representation of the Hamiltonian), and the chiral $C\equiv PT$ ones (a situation exists when $C$ is present without neither $P$ nor $T$). This is still troubling for me. I think it's essentially a matter of convention whether you want to call these discrete symmetries some kind of topology (whatever it means) or not. Topology for me means you've got a Chern number $\nu\neq0$, and you will keep it until you change one of the discrete symmetries I mentioned. But some Chern numbers are protected by symmetry as well, so it is a mess to disentangle all these notions at the end.
3-bis: the topological insulator situation. For the topological insulator once again, the situation is easier, since the topological classification is crystal clear: the topological characteristic are provided by symmetry. These symmetries are just the three discrete symmetries I discussed in point 3.
4) Opening <--> closure of the gap I think the answer to this question has been answered long ago by Volkov, and Pankratov, Two-dimensional massless electrons in an inverted contact JETP, 42 178 (1985) (article for free) or I misunderstood it. The answer is yes, and you get an instanton solution at the boundary, as in the Jackiw-Rebbi. Volkov and Pankratov discuss the Dirac dispersion relation, not a relativistic model.
-
1) Since the question was in the context of topological insulators (TI), you cannot have gapless modes in the bulk per definition. If you have, then you are not in the phase of a TI. Its easy to get a Dirac cone in the bulk. Write down a simple model for a TI, say the one in physics.stackexchange.com/questions/3282/… . The low-energy theory will be a massive Dirac equation in the bulk. When the mass is zero and you have a Dirac cone in the bulk. That's the point of phase-transition, and therefore not a TI phase. – Heidar Jul 13 '13 at 12:16
Band structure on the surface actually do make sense. In the above mentioned model, I assumed translational symmetry and thus no edge and therefore $\mathbf k = (k_x,k_y)$ is a good quantum number. The eigenvalues of $H(\mathbf k)$ are the bulk band structure. Now assume that there is an edge at $x = 0$ and $x=L$. Now $k_x$ is not a good quantum number anymore but $k_y$ still is. Fourier transform $k_x$ to real space: $H(k_x,k_y)\rightarrow H_e(k_y)$. Our $2\times 2$ matrix is now turned into a $2L\times 2L$ matrix depending only on $k_y$. – Heidar Jul 13 '13 at 12:24
The eigenvalues of $H_e(k_y)$ (there will be $2L$ of them parametrized by $k_y$) is what you can call the edge band structure (although it also contain the bulk part). There one will see gapless bands, inside many gapped bands. Finding the eigenvector corresponding to the gapless modes, one will find that they are localized at the boundaries. Alternatively one a take the low-energy effective theory of the bulk and do the same, solving the diff equations one will get the boundary modes (similar to the Jakiw-Rebbi analysis). These are sometimes called Kaplan fermions in lattice gauge theory. – Heidar Jul 13 '13 at 12:29
@Heidar Thanks for your comments. I've tried to correct correspondingly. Please tell me if there are still mistakes. Thanks a lot for the bulk-edge-band-structure discussion, too. – FraSchelle Jul 13 '13 at 15:24
@Oaoa : +1 for the detailed answer – Trimok Jul 15 '13 at 8:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8888364434242249, "perplexity": 391.4077341386808}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131299236.74/warc/CC-MAIN-20150323172139-00229-ip-10-168-14-71.ec2.internal.warc.gz"}
|
https://earthscience.stackexchange.com/search?q=user:12218+%5Bnitrogen%5D
|
Search Results
Results tagged with Search options user 12218
2 results
Would the dynamic between soil nitrogen and plant growth be a positive or negative feedback loop I think it is a negative feedback loop because as soil nitrogen increases then plant growth increases … , and as plant growth increases, soil nitrogen decreases, is this correct? Therefore, since $+ × - = -$, it's a negative feedback loop. Also, what about the feedback loop between temperature and soil …
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39923685789108276, "perplexity": 1943.8934673144208}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514575844.94/warc/CC-MAIN-20190923002147-20190923024147-00204.warc.gz"}
|
http://clay6.com/qa/15722/a-small-square-loop-of-wire-of-side-i-is-placed-inside-a-large-square-loop-
|
# A small square loop of wire of side i is placed inside a large square loop of slide $L( L >>1)$. If the loops are coplanar and there centers coinside. the mutual induction of the system is directly proportional to :
$\begin {array} {1 1} (a)\;\frac{L}{l} & \quad (b)\;\frac{l}{L} \\ (c)\;\frac{L^2}{l} & \quad (d)\;\frac{l^2}{L} \end {array}$
$(d)\;\frac{l^2}{L}$
answered Nov 7, 2013 by
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8991680145263672, "perplexity": 748.1868762091913}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125947654.26/warc/CC-MAIN-20180425001823-20180425021823-00503.warc.gz"}
|
http://www.acmerblog.com/hdu-4663-plane-partition-7708.html
|
2015
09-17
# Plane Partition
A plane partition is a two-dimensional array of nonnegative integers ai,j (0<=i<n, 0<=j<m) that satisfies
1. 0<=ai,j<=p
2. ai,j>=ai,j+1
3. ai,j>=ai+1,j
In this problem, we add some additional constrains in the following form:
Given x, y, z, there exists some integer k (may be negative) such that ax+k,y+k=z+k.
Note: For i and j do not satisfy 0<=i<n and 0<=j<m, ai,j does not exist.
Count how many valid plane partitions are there.
First line, number of test cases, T.
Following are T test cases. For each test case, the first line contains four integers, n, m, p, t, where the last one is the number of additional constraints.
Following are t lines, each line contains three integers, x, y, z.
T<=200
1<=n,m,p<=7
0<=x,y,z<=7
It’s possible that there is no valid plane partitions.
First line, number of test cases, T.
Following are T test cases. For each test case, the first line contains four integers, n, m, p, t, where the last one is the number of additional constraints.
Following are t lines, each line contains three integers, x, y, z.
T<=200
1<=n,m,p<=7
0<=x,y,z<=7
It’s possible that there is no valid plane partitions.
2
1 1 1 0
1 1 1 1
1 1 1
2
1
#include<cstdio>
#include<cstring>
#include<cmath>
#include<algorithm>
using namespace std;
const int N = 60;
int n, p, q, x[N], y[N];
double dist[N][N];
double cal(double dx,double dy){
return sqrt(dx*dx + dy*dy);
}
bool done[N];
double d[N];
double solve(){
memset(done,0,sizeof(done));
double res = dist[p][q];
done[p]=done[q]=1;
for(int i=1; i<=n; i++){
d[i] = min(dist[p][i], dist[q][i]);
}
for(int i=3; i<=n; i++){
int j=-1;
for(int k=1; k<=n; k++){
if(done[k]) continue;
if(j==-1 || d[k]<d[j]) j=k;
}
done[j]=1;
res += d[j];
for(int k=1; k<=n; k++){
if(!done[k]) d[k] = min(d[k], dist[j][k]);
}
}
return res;
}
int main(){
while(~scanf("%d", &n) && n){
scanf("%d %d", &p, &q);
for(int i=1; i<=n; i++){
scanf("%d %d", x+i, y+i);
for(int j=1; j<i; j++){
dist[i][j] = dist[j][i] = cal(x[i]-x[j], y[i]-y[j]);
}
dist[i][i]=0.0;
}
printf("%.2lf\n", solve());
}
return 0;
}
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3203743100166321, "perplexity": 8118.045145436767}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128323842.29/warc/CC-MAIN-20170629015021-20170629035021-00253.warc.gz"}
|
http://mathhelpforum.com/calculus/120239-sequence-type-r-n-print.html
|
# is this sequence of the type r^n.
• December 13th 2009, 11:20 AM
swatpup32
is this sequence of the type r^n.
Would this sequence [(1+3/n)^4n] fall under the rule of r^n -1 < r < = 1. If so would it be convergent. If you take the limit it eventually becomes {(1+0)^4n}.
• December 13th 2009, 11:25 AM
skeeter
Quote:
Originally Posted by swatpup32
Would this sequence [(1+3/n)^4n] fall under the rule of r^n -1 < r < = 1. If so would it be convergent. If you take the limit it eventually becomes {(1+0)^4n}.
you should be familiar with this limit ...
$\lim_{n \to \infty} \left(1 + \frac{k}{n}\right)^n = e^k
$
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9803428649902344, "perplexity": 986.2161928548497}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982293692.32/warc/CC-MAIN-20160823195813-00263-ip-10-153-172-175.ec2.internal.warc.gz"}
|
https://arxiv.org/abs/1602.00317
|
### Current browse context:
cond-mat.mes-hall
(what is this?)
# Title: Photoluminescence and the gallium problem for highest-mobility GaAs/AlGaAs-based 2d electron gases
Abstract: The quest for extremely high mobilities of 2d electron gases in MBE-grown heterostructures is hampered by the available purity of the starting materials, particularly of the gallium. Here we compare the role of different Ga lots having nominally the highest possible quality on the mobility and the photoluminescence (PL) of modulation doped single interface structures and find significant differences. A weak exciton PL reveals that the purity of the Ga is insufficient. No high mobility can be reached with such a lot with a reasonable effort. On the other hand, a strong exciton PL indicates a high initial Ga purity, allowing to reach mobilities of 15 million (single interface) or 28 million $cm^2/Vsec$ (doped quantum wells) in our MBE systems. We discuss possible origins of the inconsistent Ga quality. Furthermore, we compare samples grown in different MBE systems over a period of several years and find that mobility and PL is correlated if similar structures and growth procedures are used.
Subjects: Mesoscale and Nanoscale Physics (cond-mat.mes-hall) DOI: 10.1016/j.jcrysgro.2016.02.039 Cite as: arXiv:1602.00317 [cond-mat.mes-hall] (or arXiv:1602.00317v1 [cond-mat.mes-hall] for this version)
## Submission history
From: Werner Dietsche [view email]
[v1] Sun, 31 Jan 2016 20:56:16 GMT (1112kb,D)
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20011195540428162, "perplexity": 6912.761788096813}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687906.71/warc/CC-MAIN-20170921205832-20170921225832-00244.warc.gz"}
|
http://mathhelpforum.com/pre-calculus/223511-trig-question-pre-calc.html
|
# Math Help - Trig question pre calc!
1. ## Trig question pre calc!
Hi there, the question states
Prove algebraically: $\cos\theta/1-sin\theta$ = $sec\theta +(sec\theta)(csc\theta)-cot\theta$
Here are the steps i took to solving this i worked on the right side making it equal the left
1) change all sec/ csc/ cot to their respective forms ( $\1/sin\theta)$ and so forth
and when i get to this point I'm lost $\frac{sin\theta +1 - cos\theta^2} {cos\theta sin\theta}$
After this point I'm lost please help, i apologize if i have asked too many questions but i am trying to study for my final.
Thanks!
2. ## Re: Trig question pre calc!
Hey Gurp925.
Is it meant to be cos(theta) / [1 - sin(theta)] or cos(theta) - sin(theta)?
3. ## Re: Trig question pre calc!
$\frac{sin\theta+1-cos^2\theta}{cos\theta sin\theta}$
$=\frac{sin\theta+sin^2\theta}{cos\theta sin\theta}$
$=\frac{sin\theta(1+sin\theta)}{cos\theta sin\theta}$
$=\frac{1+sin\theta}{cos\theta}$
$=\frac{(1+sin\theta)(1-sin\theta)}{cos\theta(1-sin\theta)}$
$=\frac{1-sin^2\theta}{cos\theta(1-sin\theta)}$
$=\frac{cos^2\theta}{cos\theta(1-sin\theta)}$
...
Hope this helps
4. ## Re: Trig question pre calc!
Hey Acc100jt almost understood the question just wondering how did you multiply the denominator by (1-sin) ?thus multiplying the top? if i understand that then the question is solved. Thanks everyone for their help.
5. ## Re: Trig question pre calc!
Originally Posted by Gurp925
Hey Acc100jt almost understood the question just wondering how did you multiply the denominator by (1-sin) ?thus multiplying the top? if i understand that then the question is solved. Thanks everyone for their help.
Multiplying top and bottom by the bottom's conjugate.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.985832154750824, "perplexity": 2224.725891070796}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637900397.29/warc/CC-MAIN-20141030025820-00169-ip-10-16-133-185.ec2.internal.warc.gz"}
|
http://ronaldocoisanossa.com.br/american-pie-wltq/symbolab-definite-integral-406534
|
# symbolab definite integral
Advanced Math Solutions – Integral Calculator, common functions. Partial fractions decomposition is the opposite of adding fractions, we are trying to break a rational expression... High School Math Solutions – Polynomial Long Division Calculator. View integral x^2(2x^3+3)^3dx - Indefinite Integral Calculator - Symbolab from MATH 122 at Oakland University. Tes Global Ltd is registered in England (Company No 02017289) with its registered office at 26 Red Lion Square London WC1R 4HQ. Now at first this might seem daunting, I have this rational expression, I have xs in the numerators and xs in the denominators, but we just have to remember, we just have to do some algebraic manipulation, and this is going to seem a lot more attractable. Help Deצnite Integral Calculator Solve deﺡnite integrals step by step Enter a topic Algebra Matrices & Vectors Functions & Graphing Like full pad » x 2 x d dx (☐)' x log √☐ √☐ ≤ ≥ ∂ ∂x ∫ ∫ lim ∑ ∞ ( f g) x It highlights that the Integration's variable is x. Advanced Math Solutions – Integral Calculator, integration by parts, Part II. The Integral Calculator supports definite and indefinite integrals (antiderivatives) as well as integrating functions with many variables. Enter your function in line 2 below... 1. f x = xsinx. Conic Sections. In the previous post we covered integrals involving powers of sine and cosine, we now continue with integrals involving... partial\:fractions\:\int_{0}^{1} \frac{32}{x^{2}-64}dx, substitution\:\int\frac{e^{x}}{e^{x}+e^{-x}}dx,\:u=e^{x}. =ln() ∫ | =√. Definite Integral Calculator Added Aug 1, 2010 by evanwegley in Mathematics This widget calculates the definite integral of a single-variable function given certain limits of integration. = limx → b − ( F ( x)) − limx → a + ( F ( x)) Odd function. Definite integral could be represented as the signed area in the XY-plane bounded by the function graph as shown on the image below. Make your first steps in evaluating definite integrals, armed with the Fundamental theorem of calculus. Free Series Comparison Test Calculator - Check convergence of series using the comparison test step-by-step Symbolab: equation search and math solver - solves algebra, trigonometry and calculus problems step by step This website uses cookies to ensure you get the best experience. This website uses cookies to ensure you get the best experience. Free definite integral calculator - solve definite integrals with all the steps. The usual stuff, solve the problems to discover the punchline to the joke. Compute answers using Wolfram's breakthrough technology & knowledgebase, relied on by millions of students & professionals. \int x\left (x^2-3\right)dx ∫ x(x2 −3)dx by applying integration by substitution method (also called U-Substitution). By using this website, you agree to our Cookie Policy. … ∫ is the Integral Symbol and 2x is the function we want to integrate. Definite Integrals Calculator. Related Symbolab blog posts Advanced Math Solutions - Integral Calculator, integration by parts, Part II In the previous post we covered integration by parts ; Free definite integral calculator - solve definite integrals with all the steps. u. u u, instead of. x. x x. U-substitution in definite integrals is just like substitution in indefinite integrals except that, since the variable is changed, the limits of integration must be changed as well. Example: A definite integral of the function f (x) on the interval [a; b] is the limit of integral sums when the diameter of the partitioning tends to zero if it exists independently of the partition and choice of points inside the elementary segments.. change password email address. Free calculus calculator - calculate limits, integrals, derivatives and series step-by-step This website uses cookies to ensure you get the best experience. 2. Pre-Álgebra. Orden (jerarquía) de operaciones Factores y números primos Fracciones Aritmética Decimales Exponentes y radicales Módulo Aritmética con notación científica. Log InorSign Up. Related Symbolab blog posts Advanced Math Solutions – Ordinary Differential Equations Calculator, Bernoulli ODE Last post, we learned about separable differential equations. The definite integral of from to , denoted , is defined to be the signed area between and the axis, from to . setting up the definite integral. Advanced Math Solutions – Integral Calculator, trigonometric substitution. u. u u ), which when substituted makes the integral easier. Homework later than 1 class period won't be accepted. Free definite integral calculator - solve definite integrals with all the steps. ∫ x ( x 2 − 3) d x. 3. Indefinite Integral Calculator - Symbolab Solutions My Notebook Practice Blog English New! If you don’t change the limits of integration, then you’ll need to back-substitute for the original variable at the en Your private math tutor, solves any math problem with steps! Show Instructions. Example: A definite integral of the function f (x) on the interval [a; b] is the limit of integral sums when the diameter of the partitioning tends to zero if it exists independently of the partition and choice of points inside the elementary segments.. To create your new password, just click the link in the email we sent you. Related Symbolab blog posts Advanced Math Solutions – Limits Calculator, Infinite limits In the previous post we covered substitution, where the limit is simply the function value at the point. You can check your own solution or get rid of unnecessary labour-intensive calculations and to confide in a high-tech automated machine when solving the definite integral with us. If one or both integration bounds a and b are not numeric, int assumes that a <= b unless you explicitly specify otherwise. This website uses cookies to ensure you get the best experience. Pre-Álgebra. Enter your function in line 2 below... 1. f x = xsinx. This website and its content is subject to our Terms and Conditions. U-substitution in definite integrals is just like substitution in indefinite integrals except that, since the variable is changed, the limits of integration must be changed as well. Get the free "Triple Integral Calculator" widget for your website, blog, Wordpress, Blogger, or iGoogle. - [Instructor] What we're gonna do in this video is introduce ourselves to the notion of a definite integral and with indefinite integrals and derivatives this is really one of the pillars of calculus and as we'll see, they're all related and we'll see that more and more in future videos and we'll also get a better appreciation for even where the notation of a definite integral comes from. Type in any integral to get the solution, free steps and graph. Both types of integrals are tied together by the fundamental theorem of calculus. this website uses cookies to ensure you get the best experience. Solve definite integrals with us! You can also check your answers! Solved exercises of Improper integrals. History! A free graphing calculator - graph function, examine intersection points, find maximum and minimum and much more Advanced Math Solutions – Integral Calculator, integration by parts Integration by parts is essentially the reverse of the product rule. The Integral Calculator supports definite and indefinite integrals (antiderivatives) as well as integrating functions with many variables. Because integral psychotherapy is a wide philosophy, anyone may opt to practice iteven without formal mental wellness training. $\int_a^bf\left (x\right)dx=F\left (b\right)-F\left (a\right)$. Integrals involving... Advanced Math Solutions – Integral Calculator, advanced trigonometric functions, Part II. Definite Integral Calculator Symbolab Solutions My Notebook Practice Blog English New! ∫abf ( x) dx = F ( b) − F ( a) $=\lim_ {x\to b-}\left (F\left (x\right)\right)-\lim_ {x\to a+}\left (F\left (x\right)\right)$. First of all I would like to start off by asking why do they have different change of variable formulas for definite integrals than indefinite...why cant we just integrate using U substitution as we normally do in indefinite integral and then sub the original U value back and use that integrand for definite integral?. ... Symbolab. Show More Show Less. This calculus video tutorial explains how to evaluate definite integrals using u-substitution. ... Related Symbolab blog posts. ∫ = . 2 ∫ b a f x dx. Type in any integral to get the solution, free steps and grap Also, be careful when you write fractions: 1/x^2 ln (x) is 1 x 2 ln ( … Definite Integrals . … Definite Integral Calculator. Free definite integral calculator - solve definite integrals with all the steps. Keywords Learn how to evaluate the integral of a function. Symbolab Integrals Cheat Sheet. Advanced Math Solutions – Integral Calculator, integration by … Polynomial long division is very similar to numerical long division where you first divide the large part of the... partial\:fractions\:\int_{0}^{1} \frac{32}{x^{2}-64}dx, substitution\:\int\frac{e^{x}}{e^{x}+e^{-x}}dx,\:u=e^{x}. Type in any integral to get the solution, free steps and graph This website uses cookies to ensure you get the best experience. Free Series Integral Test Calculator - Check convergence of series using the integral test step-by-step This website uses cookies to ensure you get the best experience. You da real mvps! We can solve the integral. Free definite integral calculator - solve definite integrals with all the steps. ∫ 1 2 x 2 d x. By using this website, you agree to our Cookie Policy. Definite Integral Calculator. Matrices & … the task is to set up the definite integral. First, we must identify a section within the integral with a new variable (let's call it. 2. Input a function, the integration variable and our math software will give you the value of the integral covering the selected interval (between the lower limit and the upper limit). Summary. Definite Integrals. partial fractions \int_{0}^{1} \frac{32}{x^{2}-64}dx, Please try again using a different payment method. Detailed step by step solutions to your Improper integrals problems online with our math solver and calculator. In general, you can skip the multiplication sign, so 5x is equivalent to 5*x. In the previous posts we covered substitution, but standard substitution is not always enough. For definite integrals, int restricts the integration variable var to the specified integration interval. The definite integral of a non-negative function is always greater than or equal to zero: $${\large\int\limits_a^b\normalsize} {f\left( x \right)dx} \ge 0$$ if $$f\left( x \right) \ge 0 \text{ in }\left[ {a,b} \right].$$ The definite integral of a non-positive function is always less than or equal to zero: Improper integrals Calculator online with solution and steps. By using this website, you agree to our Cookie Policy. In general, you can skip the multiplication sign, so 5 x is equivalent to 5 ⋅ x. High School Math Solutions – Partial Fractions Calculator. Show More Show Less. It is used to transform the integral of a product of functions into an integral that is easier to compute. Integral dx Use latex commands: * is multiplication oo is $\infty$ pi is $\pi$ x^2 is x 2 sqrt(x) is $\sqrt{x}$ sqrt[3](x) is $\sqrt[3]{x}$ (a+b)/(c+d) is $\frac{a+b}{c+d}$ Powered by Sympy. Message received. The calculator will approximate the integral using the trapezoidal rule, with steps shown. Our mission is to provide a free, world-class education to anyone, anywhere. This means . i know for a fact that the answer should be 9/2 because i solved for the horizontal strips. send reset link. This states that if is continuous on and is its continuous indefinite integral, then . Definite Integrals Calculator. Advanced Math Solutions – Integral Calculator, the complete guide. Thanks for the feedback. com, the most comprehensive source for safe, trusted, and spyware-free Symbolab - Math solver. - [Voiceover] So we wanna evaluate the definite integral from negative one to negative two of 16 minus x to the third over x to the third dx. Symbolab – Math solver Pro. Log InorSign Up. Definite Integrals. Functions 3D Plotter is an application to drawing functions of several variables and surface in the space R3 and to calculate indefinite integrals or definite integrals. Interactive graphs/plots help visualize and better understand the functions. Definite integrals calculator. ... Related Symbolab blog posts. The definite integral f(x) from, say, x=a to x= b, is defined as the signed area between f(x) and the x-axis from the point x = a to the point x = b. Definite Integral Calculator. Close. Each part of the symbol makes sense. 2 ∫ b a f x dx. I use for a starter or plenary or occasionally a homework. Show Instructions. Free definite integral calculator - solve definite integrals with all the steps. Input a function, the integration variable and our math software will give you the value of the integral covering the selected interval (between the lower limit and the upper limit). 1. Free definite integral calculator - solve definite integrals with all the steps. Given the condition mentioned above, consider the function F\displaystyle{F}F(upper-case "F") defined as: (Note in the integral we have an upper limit of x\displaystyle{x}x, and we are integrating with respect to variable t\displaystyle{t}t.) The first Fundamental Theorem states that: Proof If you have a table of values, see trapezoidal rule calculator for a table. The integral calculator allows you to solve any integral problems such as indefinite, definite and multiple integrals with all the steps. Advanced Math Solutions – Integral Calculator, integration by parts, Part II. Khan Academy is a 501(c)(3) nonprofit organization. Submit Assignment Start Over Back. - [Voiceover] So we wanna evaluate the definite integral from negative one to negative two of 16 minus x to the third over x to the third dx. When you use IgnoreAnalyticConstraints, int applies these rules: Advanced Math Solutions – Integral Calculator, integration by parts, Part II. The calculus integrals of function f(x) represents the area under the curve from x = a to x = b. Type in any integral to get the solution, free steps and graph. The dx shows the direction alon the x-axis & dy shows the direction along the y-axis. I'm trying to evaluate the following definite integral: $\int_{0}^{1} \frac{x}{\sqrt{x+1}} dx$ Well I put the integral on symbolab to know its value ($4/3$) However when I tried to calculate the Matrices & Vectors * Matrix Add/Subtract Definite Integral Calculator - Symbolab (2 days ago) Free definite integral calculator - solve definite integrals with all the steps. When evaluating definite integrals for practice, you may use your calculator to inspect the answers. 3. i need help. Type in any integral to get the solution, steps and graph This website uses cookies to ensure you get the best experience. ... * Integrals (definite, indefinite, multiple) * Derivatives * Partial derivatives * Series * ODE * Laplace Transform * Inverse Laplace Transform. Summary. We can read the integral sign as a summation, so that we get "add up an infinite number of infinitely skinny rectangles, from x=1 to x=2, with height x^2 times width dx." Line Equations Functions Arithmetic & Comp. the solution shown in the picture is from symbolab. The calculator will evaluate the definite (i.e. Free antiderivative calculator - solve integrals with all the steps. Definite Integrals . Definite integrals calculator. ... Related Symbolab blog posts. Advanced Math Solutions – Integral Calculator, the complete guide. Posted by 4 days ago. This calculator is convenient to use and accessible from any device, and the results of calculations of integrals and solution steps can be easily copied to the clipboard. The definite integral is denoted by a f(x) d(x). Message received. $1 per month helps!! Thanks for the feedback. Example: Proper and improper integrals. There is also the issue that the symbols make more sense in the definite integral. Algorithms. Type in any integral to get the solution, free steps and graph This website uses cookies to ensure you get the best experience. Submit Assignment Start Over Back. Functions. u = sin x. u=\sin {x} u = sinx to find limits of integration in terms of. All online services are accessible even for unregistered users and absolutely free of charge. type in any integral to get the solution, free steps and graph. Orden (jerarquía) de operaciones Factores y números primos Fracciones Aritmética Decimales Exponentes y radicales Módulo Aritmética con notación científica. For more about how to use the Integral Calculator, go to "Help" or take a look at the examples.$\mathrm {If}\:f\left (x\right)=-f\left (-x\right)\Rightarrow\int_ {-a}^af\left (x\right)dx=0$. Definite Integral Boundaries. Thanks to all of you who support me on Patreon. ∫sin() =−cos() ∫cos() =sin() Trigonometric Integrals: The definite integral has both the start value & end value. To create your new password, just click the link in the email we sent you. It is important to note that both the definite and indefinite integrals are interlinked by … Funcions 3D plotter calculates the analytic and numerical integral and too calculates partial derivatives with respect to x and y for 2 variabled functions. Compute answers using Wolfram's breakthrough technology & knowledgebase, relied on by millions of students & professionals. Type in any integral to get the solution, free steps and graph. i have been studying this problem. Definite Integral Calculator Added Aug 1, 2010 by evanwegley in Mathematics This widget calculates the definite integral of a single-variable function given certain limits of integration. Now at first this might seem daunting, I have this rational expression, I have xs in the numerators and xs in the denominators, but we just have to remember, we just have to do some algebraic manipulation, and this is going to seem a lot more attractable. You can learn how to calculate definite integrals by using our free definite integral calculator. Advanced Math Solutions – Integral Calculator, advanced trigonometric functions. Adjust the lower and upper bound of the integral by … setting up the definite integral. ... Symbolab. Please try again using a different payment method. This website uses cookies to ensure you get the best experience. Loading... Definite Integral Calculator. An absolutely free online step-by-step definite and indefinite integrals solver. :) https://www.patreon.com/patrickjmt !! Example: Proper and improper integrals. In this integral equation, dx is the differential of Variable x. ... Related Symbolab blog posts. Definite Integral Calculator computes definite integral of a function over an interval using numerical integration. Find more Mathematics widgets in Wolfram|Alpha. Derivatives Derivative Applications Limits Integrals Integral Applications Riemann Sum Series ODE Multivariable Calculus Laplace Transform Taylor/Maclaurin Series Fourier Series. Common Integrals: ∫−1 =ln() ∫ �� . Definite Integrals Rules. Type in any integral to get the solution, free steps and graph In general, you can skip parentheses, but be very careful: e^3x is e 3 x, and e^ (3x) is e 3 x. with bounds) integral, including improper, with steps shown. Definite and Improper Integral Calculator. The definite integral of the function $$f\left( x \right)$$ over the interval $$\left[ {a,b} \right]$$ is defined as the limit of the integral sum (Riemann sums) as the maximum length … By … Make more sense in the previous posts we covered substitution, but standard symbolab definite integral is not always enough evaluating integrals! Improper, with steps shown trigonometric functions, Part II function graph shown! Between and the axis, from to a + ( f ( x ) any... Use your Calculator to inspect the answers by millions of students & professionals better understand the.! Formal mental wellness training but standard substitution is not always enough and Conditions integrals are tied together by the theorem. May opt to Practice iteven without formal mental wellness training symbolab definite integral Pro & knowledgebase, relied on by millions students! The symbols make more sense in the email we sent you Aritmética con notación científica by. The complete guide integrating functions with many variables you agree to our Cookie Policy world-class education to anyone,.. In the previous posts we covered substitution, but standard substitution is not enough! Are accessible even for unregistered users and absolutely free online step-by-step definite integral of a product of functions into integral. ( Company No 02017289 ) with its registered office at 26 Red Square! Transform the integral using the Comparison Test step-by-step definite and indefinite integrals solver Math Solutions integral. Help visualize and better understand the functions let 's call it use for a table, then i use a... Your private Math tutor, solves any Math problem with steps shown calculus Calculator solve... Help visualize and better understand the functions restricts the integration 's variable is x, is to! X and y for 2 variabled functions post, we learned about separable differential Equations Calculator, common functions is. Has both the start value & end value of calculus evaluate definite with. Is not always enough will approximate the integral Symbol and 2x is the function graph as on! A product of functions into an integral that is easier to compute the dx shows the direction the! * x your website, you agree to our Cookie Policy this... To integrate with our Math solver and Calculator about separable differential Equations know for a.. To provide a free graphing Calculator - Symbolab ( 2 days ago free. Differential Equations Calculator, the complete guide, advanced trigonometric functions ( (... Int restricts the integration 's variable is x is also the issue that the integration variable var to the integration! When substituted makes the integral Symbol and 2x is the differential of variable.... Of students & professionals called U-Substitution ) including improper, with steps shown dx=F\left ( b\right ) -F\left a\right! Parts, Part II take a look at the examples evaluate the integral Symbol and 2x is the integral.! Solution shown in the picture is from Symbolab accessible even for unregistered users and absolutely free of charge improper... Com, the most comprehensive source for safe, trusted, and spyware-free Symbolab - symbolab definite integral. 'S call it, go to help '' or take a at. Symbolab ( 2 days ago ) free definite integral Calculator, go to help '' or take look... Integrals ( antiderivatives ) as well as integrating functions with many variables or a! Website and its content is subject to our Cookie Policy its content is subject to Cookie... Use the integral of from to, denoted, is defined to be signed. With all the steps i know for a fact that the symbols make more sense in the is... Integral using the trapezoidal rule, with steps shown symbolab definite integral, anywhere is subject our! ( f ( x ) with steps call it free online step-by-step definite Calculator! Is its continuous indefinite integral Calculator, the most comprehensive source for safe, trusted, and spyware-free -! By applying integration by parts, Part II with the fundamental theorem of calculus axis! Series step-by-step this website uses cookies to ensure you get the free Triple Calculator... Your private Math tutor, solves any Math problem with steps shown Matrix Add/Subtract this website cookies! Defined to be the signed area between and the axis, from to to set up definite! Website uses cookies to ensure you get the best experience occasionally a homework mission! World-Class education to anyone, anywhere and minimum and much more Symbolab – Math solver Pro guide... Primos Fracciones Aritmética Decimales Exponentes y radicales Módulo Aritmética con notación científica, dx is the of... By … an absolutely free online step-by-step definite integral Calculator, advanced trigonometric functions, Part II more how... Step Solutions to your improper integrals problems online with our Math solver Calculator... Solutions My Notebook Practice Blog English new all the steps than 1 class wo... D ( x 2 − 3 ) d x a f ( x ) ) Odd function indefinite Calculator! Dx by applying integration by parts, Part II that the symbols make more sense in the integral. Wordpress, Blogger, or iGoogle are accessible even for unregistered users and absolutely online! - Symbolab from Math 122 at Oakland University use your Calculator to inspect the answers jerarquía ) operaciones... Is used to transform the integral Calculator - solve definite integrals, armed with the fundamental theorem calculus! Parts, Part II not always enough of function f ( x ) −... First, we learned about separable differential Equations Calculator, Bernoulli ODE post... D x - Math solver and Calculator standard substitution is not always enough intersection points, find maximum and and. ) represents the area under the curve from x = xsinx ( jerarquía ) de Factores., trigonometric substitution into an integral that is easier to compute the XY-plane bounded the. A section within the integral Calculator the direction alon the x-axis & dy shows direction... 5x is equivalent to 5 * x – Math solver Pro by step Solutions your... Use your Calculator to inspect the answers Terms and Conditions variable ( let call! And Conditions dy shows the direction alon the x-axis & dy shows direction... And absolutely free online step-by-step definite and indefinite integrals ( antiderivatives ) as well as integrating functions with many.., anyone may opt to Practice iteven without formal mental wellness training derivatives and series step-by-step this website you... ( b\right ) -F\left ( a\right )$ 5 * x applying integration substitution... Steps in evaluating definite integrals by using this website, you can learn how to evaluate definite integrals all... Calculus video tutorial explains how to use the integral Calculator - graph,... And indefinite integrals ( antiderivatives ) as well symbolab definite integral integrating functions with many variables graphing Calculator - (. Of integrals are tied together by the function we want to integrate graph as shown on image. Integral has both the start value & end value i know for starter. And Conditions with bounds ) integral, then integrals, int restricts integration... Any integral to get the best experience the differential of variable x Comparison Test Calculator - Symbolab from Math at! Advanced trigonometric functions ) with its registered office at 26 Red Lion Square London WC1R 4HQ 02017289 ) its. The XY-plane bounded by the function graph as shown on the image.... Is subject to our Cookie Policy Calculator for a table of values, trapezoidal... Of functions into an integral that is symbolab definite integral to compute start value & end value be! Solution, steps and graph this website uses cookies to ensure you get the best experience common.... Value & end value philosophy, anyone may opt to Practice iteven formal... With bounds ) integral, then 's variable is x armed with the fundamental theorem calculus... I know for a starter or plenary or occasionally a homework 26 Red Lion Square London WC1R 4HQ understand functions! ) ( 3 ) nonprofit organization or occasionally a homework Cookie Policy just click link! Not always enough uses cookies to ensure you get the best experience 3D plotter the... Com, the complete guide private Math tutor, solves any Math with. Integrals ( antiderivatives ) as well as integrating functions with many variables as the signed area and... Relied on by millions of students & professionals an interval using numerical.! Series step-by-step this website uses cookies to ensure you get the best.! Blogger, or iGoogle, with steps are tied together by the fundamental theorem of calculus – differential! Series step-by-step this website uses cookies to ensure you get the best experience services are accessible even for users... Widget for your website, Blog, Wordpress, Blogger, or iGoogle 's breakthrough technology &,! The x-axis & dy shows the direction along the y-axis, relied on by millions of students &.! Into an integral that is easier to symbolab definite integral with all the steps is not always enough safe trusted!, go to help '' or take a look at the examples trapezoidal rule Calculator for a table 122. X restricts the integration variable var to the joke this calculus video tutorial how. Identify a section within the integral using the trapezoidal rule, with steps advanced trigonometric functions, Part.... Integral using the trapezoidal rule Calculator for a fact that the integration 's variable is x is denoted a. 5X is equivalent to 5 * x con notación científica equation, dx is integral! To be the signed area between and the axis, from to, denoted, is to. All online services are accessible even for unregistered users and absolutely free online step-by-step definite integral Calculator Symbolab My! Our mission is to set up the definite integral Calculator supports definite and integrals. The free ` Triple integral Calculator - graph function, examine intersection points, find maximum and and.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9633703231811523, "perplexity": 1224.0719784303735}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046150307.84/warc/CC-MAIN-20210724160723-20210724190723-00438.warc.gz"}
|
https://mathoverflow.net/questions/14404/serres-fac-in-english/14417
|
# Serre's FAC in English
Has somebody translated J.-P. Serre's "Faisceaux algébriques cohérents" into English? At least part of it?
In a fit of enthusiasm, I started translating it and started TeXing. But after section 8, I got tired and stopped.
However if somebody else already took the trouble, I would be most grateful. I do not know a word of French(except maybe faisceau), and forgot whatever I learned in the process of translation very quickly.
This is made community wiki, as I do not want to get into rep issues. Please feel free to close this if you think this qn is inappropriate for MO(I have added my own vote for closing, in case this helps). I would be happy to receive answers in comments.
• At the risk of getting booed, I would suggest just reading it in French. After all, it won't be the last time you'll need to read something published in French, and in my opinion translating things to English is probably less productive than just trying to read them and thinking about the words. I left the U.S. for France after my undergraduate years, never having studied or spoken French in my life before that, and I was able to get up to speed reasonably quickly by doing this. Besides, if you want to follow any of the references, you'll have to translate those too ! May 4 '10 at 9:12
• No booing at all -- if a paper as fantastic as GAGA doesn't provide enough inspiration to learn basic math French (which is really not hard; I can't read a French menu and have read thousands of pages of math French), then what will such a person do when confronted with a less dramatic paper which has to be read? Serre has written books in English and French. Compare them side by side, make a list, practice, learn to read. It's easier than the math! May 6 '10 at 1:49
• I won't boo, but my view is that the more translations the better. Someone whose field has virtually no papers in French but wants to cite something from FAC may want to look at the original without having to learn French. mathoverflow.net/questions/43147/… Some people may have more trouble than average learning foreign languages. And the existence of a translation does not prevent anyone from ignoring it and struggling with the French original if they want. How can it be bad to provide the math community with more options? Oct 25 '10 at 14:22
• Some of the comments on a similar type of MO question mathoverflow.net/questions/33348/theorem-of-borel-and-tits are relevant here, I think. Translations are convenient when they exist and are well done, but lots of important French mathematics won't get translated. Oct 25 '10 at 16:26
Together with some help from my friend, I translated FAC into English. I didn't have so much time to proofread it, so probably there are some mistakes.
It can be found here: FAC, Source.
• Very impressive. This is a great contribution from you to all students of algebraic geometry. Thank you very much. Bless your kind soul! Wish you all the best. Oct 26 '10 at 18:06
• Just started reading your translation. Thanks! Page 9, the paragraph above Proposition 1, I believe you want $t \in \mathscr{F}_U$. May 10 '18 at 6:44
• Proposition 2 page 25 is missing a " ' " Dec 17 '18 at 22:05
• Proposition 3 page 26 there is a subscript "j" that appears twice that should presumable be an "h" Jan 8 '19 at 23:43
• page 27 n23, $(\phi \circ \psi)^{*}$ is missing its equality Jan 21 '19 at 23:59
This is not an answer to your question, but I can't resist, especially with community wiki, pointing you to a (in my novice opinion) good translation of GAGA here by my former office mate Trevor. He probably knows if there is a FAC translation, but he is not on MO. You can try to email him.
• Oh GAGA translation already exists! Sir, if we were talking in real life, I would have jumped on you and hugged you! Thanks a lot! :) Sorry I can't upvote you now, in an urge for earning the "Civic Duty" badge, I spent all my votes for today... :( Feb 6 '10 at 17:51
• @Anweshi: No problem! Feb 6 '10 at 17:55
• Damn/Woohoo! I was halfway through my own GAGA translation. Feb 19 '10 at 0:54
There is another translation by Andy McLennan that comes with a lot of background material, the actual translation starts at page 235. I'm not really competent to make any comparisons.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2806529104709625, "perplexity": 944.5066357347459}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300574.19/warc/CC-MAIN-20220117151834-20220117181834-00416.warc.gz"}
|
http://deltasdnd.blogspot.com.es/2009/
|
## Sunday, December 27, 2009
### Spells in Chainmail/ D&D
Hey, I just realized for the first time that all of the spells in Chainmail were converted to spells in OD&D, with the "complexity" of the former turning exactly into the "level" of the latter.
With one exception: Darkness, which didn't appear until D&D Sup-I, and then at a different level. Huh.
## Thursday, December 24, 2009
### Gygax on Miniatures in D&D
I don't usually employ miniatures in my RPG play. We ceased that when we moved from CHAINMAIL Fantasy to D&D.
I have nothing against the use of miniatures, but they are generally impractical for long and free-wheeling campaign play where the scene and opponents can vary wildly in the course of but an hour.
The GW folks use them a lot, but they are fighting set-piece battles as is usual with miniatures gaming.
I don't believe that fantasy miniatures are good or bad for FRPGs in general. If the GM sets up gaming sessions based on their use, the resulting play is great from my standpoint. It is mainly a matter of having the painted figures and a big tabletop to play on.
- Gary Gygax on ENWorld, 2003 (
http://www.enworld.org/forum/1263669-post6.html)
I've heard in the past that Gygax didn't use miniatures in D&D, but it's interesting to hear it in his own voice. "We ceased that when we moved from CHAINMAIL Fantasy to D&D." This makes sense in a lot of ways.
One of the things it helps make sense is how the rules for use of miniatures in AD&D don't (to be frank) make single lick of sense. Consider how miniatures don't physically fit on a map at 1" = 10 feet scale, and the truly crazy stuff on DMG p. 10 (make maps at 1" = 3⅓ feet). The reason? Well, Gygax had ceased actually using them as soon as the RPG itself came into existence. The ranges and movements are copy-and-pasted from Chainmail, but he wasn't actually using them directly. In other words, the use of minis became a vestigial, unusable appendage in D&D.
Consider stuff like this. A fireball in Chainmail & OD&D has a fixed range of 24". In AD&D that gets changed to a caster-dependent range of 10" + 1" per level. Repeat that for every spell's range and area in the entire book. Why the enormous increase in complexity (requiring math on the fly just to find any spell's area and range)? Especially when Gygax wasn't using miniatures or a game map in any way himself?
Knowing how Gary would write, I can almost hear how he'd answer this. "It's self-evident that more powerful casters will have greater efficacy, and rules for minatures were included for the kind of person who would enjoy that sort of thing." Something like that. Kind of dodging the fact that AD&D is shot full of complicated rules, everywhere, that he neither used nor playtested; looking good on paper but not playing out so well. (Funny, too, that he's licensing and promoting "OFFICIAL ADVANCED DUNGEONS & DRAGONS miniature figures" [DMG p. 11] simultaneously with abandoning their use in his own play.)
Now, there are other things that this does helps rationalize. One is that it's an excellent point that, in a game with lots of random encounter tables, you really would be hamstrung if you have to go running for different types of miniatures every time a new encounter pops up. Another thing is the need for AD&D rules to now specify random contacts in combat. "Discharge of missiles into an existing melee is easily handled... Assign probabilities to each participant in the melee or target group according to sheer numbers." (DMG p. 63). "As with missile fire, it is generally not possible to select a specific opponent in a mass melee. If this is the case, simply use some random number generation to find out which attacks are upon which opponents..." (DMG p. 70). That sort of stuff.
But the one thing this egregiously overlooks is the interaction of area-effect spells (fireball and all the rest). If melee is an entirely abstracted, Pigpen-like dustup, how do you determine who gets hit by an area-effect spell? Everyone, friend and foe? Whoever you want? Just the bad guys? Random determination? To-hit rolls or Intelligence checks? In all of OD&D and AD&D, I'm pretty sure there's not a single line addressing this question, leaving it entirely ambiguous.
A critical history of D&D would include the following -- Start with Chainmail historical mass rules at 1:20 scale (1 turn = 1 minute, 1" = 10 yards); this includes catapult-fire where players declare the range shot without measurement. Then Gygax develops man-to-man combat, including jousting and the fantasy supplement (using the moves and ranges from 1:20 scale, but never addressing what the new scale is or what should change in that regard); a wizard's fireball simply refers back to the catapult rules.
Now OD&D comes out, and in large part it refers back to Chainmail for combat. "Special Ability functions are generally as indicated in CHAINMAIL where not contradictory to the information stated hereinafter", stuff like that (Vol. 2, p. 5). Of course, the game creator himself is not using miniatures anymore. Questions of scale are given only the most cursory treatment: the combat turn is still 1 minute (fixed from Chainmail's 1:20 mass combat), and not until Vol. 3 are we told, "In the underworld all distances are in feet, so wherever distances are given in inches convert them to tens of feet." (Vol. 3, p. 8). Perhaps that's all the attention you need to the issue if combat has been entirely abstracted at this point.
If I had been more observant, a few years ago when Gary was still with us and generously answering questions in several different Q&A questions online, I really wish I'd asked him this: "What do you do in D&D to adjudicate the effect of area spells like a fireball, et. al?" Are we to assume that OD&D's reference back to Chainmail Fantasy, which in turn references back to Chainmail mass combat's catapult rules, requires declaration of shot range? (In neither of the former cases is it explicitly stated.) Or are we to assume that since miniatures are no longer actually in use on a map, that the determination is done by caster fiat or some random method?
Once I noticed this, it seems funny how much ink is spent in AD&D covering the new hit-the-caster-lose-the-spell rules, and never once addressing who-gets-hit-by-the-spell in its new, purely abstract combat system. Take-away here is two things, I think. (1) Gygax never actually played the game with miniatures at the alleged 1" = 10 feet scale (or 1" = 3⅓ feet, or the stated ranges for moves, missiles, and spells, or anything else), and (2) Area-of-effect spells are left entirely unaddressed in classic D&D, and definitely require some novel, independent adjudication on who gets hit by each individual DM.
Addendum: Another Q&A post by Gygax rules that targets for a sleep spell would be randomly chosen, so perhaps that aims us in a suggestive direction. ( http://www.enworld.org/forum/1972519-post68.html )
Virel: Say a sleep spell is cast at a group of ten characters... Can the caster specifically select the six creatures or six levels he or she wants to be effected?
Gygax: No. Six of the 1st level NPCs would be affected at random.
## Monday, December 21, 2009
### Gygax on Chainmail's Fantasy Scale
Regarding Chainmail's Fantasy man-to-man scale, here's an exchange between Gary Gygax and our friend RFisher, from ENWorld in 2005 ( http://www.enworld.org/forum/2069195-post152.html ):
RFisher: A couple of Chainmail questions: When the combat tables say "1 die per man", do they mean 1 die per man (20 dice per figure) or 1 die per figure (1 die per 20 men)? (I've known people to interpret it both ways.)
Gygax: Read "man" as "figure" and you have it. One die is just that...
RFisher: Under Heroes, does "They have the fighting ability of four figures" mean that they are equivalent to 4 men or 80 men?
Gygax: Heroes are used only in Man-to-Man play, so one is equal to four normal men.
RFisher: I understand that hero v. hero would be resolved on the Fantasy Combat Table. Hero v. normal forces would be resolved on the regular Combat Table. (The hero being classed as heavy foot, armored foot, light horse, &c. as fit the particular hero.) But were heroes & other things from the Fantasy Supplement ever used with the man-to-man rules? If so, how?
Gygax: I am quite at a loss to answer that, as the Hero and all the other Fantasy supplement figures were employed only in the play of Man-to-Man games, never in the mass system where one figure equalled 20.
And just so we don't forget, Gygax was very consistent on this point over the years. From the Swords & Spells Introduction in 1976 (p. 1):
The FANTASY SUPPLEMENT written for CHAINMAIL assumed a man-for-man situation.
From the original Dungeons & Dragons Vol. 3, Land Combat in 1974 (p. 25):
The basic system is that from CHAINMAIL, with one figure representing one man or creature.
And from Wargamer's Newsletter #127 in 1972, where early play of the game is discussed purely in terms of man-to-man play:
http://grognardia.blogspot.com/2009/12/1972-gygax-article.html
And also from The Strategic Review #2 in 1975 (p. 3):
CHAINMAIL is primarily a system for 1:20 combat, although it provides a basic understanding for man-to-man fighting also. The "Man-To-Man" and "Fantasy Supplement" sections of Chainmail provide systems for table-top actions of small size.
It's funny how many of us (definitely me included!) were tricked into the illusion that Chainmail Fantasy apparently supported mixed 1:20 and 1:1 scale play, when it really doesn't.
### Saves as Severity
Saves in classic D&D come in 5 categories, and the exact categories evolved a bit from OD&D to AD&D, BXCMI, etc. Some thought has gone into tracking the different save categories, how they were meant to be distinguished, what they "mean", etc. Here's an observation I haven't seen expressed before: The save categories in OD&D are most easily interpreted as just levels-of-severity. Consider the following:
The first category is "Death Ray or Poison", and receives a +4 bonus in comparison to generic Spell saves for fighters (which we'll take as our baseline). Obviously, this is a category which represents instant death. In order to give our characters a fighting chance, a fairly hefty bonus is given.
Second is "All Wands -- Including Polymorph or Paralization". The emphasis on polymorph & paralyzation is interesting: these are effects that don't cause literal instant death, but do render the victim effectively helpless and subject to a follow-up coup de grace. Hence a relative +3 bonus is given to avoid these effects.
Third is "Stone", i.e., turn-to-stone (petrification). Similar to the preceding, a victim of stoning is immediately and permanently hors de combat. However, the victim is not quite so immediately subject to death, as the stony form doesn't allow an immediate dagger death-stroke. Presumably some amount of labor could break up the stone form, but that's a far more involved process. Bonus is +2 here compared to baseline.
Fourth is "Dragon Breath", which is not instant elimination from a failed save, but (obviously) pretty bad, major business. Bonus is effectively +1 in this case.
Fifth and finally you have "Staves & Spells" which is in some sense "everything else", i.e. non immediate death or incapacitation. This is our baseline, hardest to avoid, i.e., +0 bonus in similar terms.
Now, the counter-argument to all this is the position of "Wands" and "Staves" on the chart with the former (weaker) given an easier save, and the latter (stronger) given a more difficult save, which is counter to the observations above.
But more generally, you could use these principles for judgements on the fly about the severity of effect: basically you're awarding between +0 and +4 to the save, with more heinous effects given a more generous save (again, just to give the characters a fair, fighting chance). For example, I would consider giving the save for sleep (recall, "no-save" language doesn't yet exist in the OD&D LBBs) the same category as "paralization", since the effects are so similar. A bad falling-stone trap might be worth a save vs. dragon breath, whereas an instant pit-into-lava trap should be worth a save vs. death. Et. al.
## Friday, December 11, 2009
### No Scale in Man-to-Man Combat
I've said it before, but I've had it highlighted for me again recently. There are no scales given in Chainmail Man-to-Man Combat (corollary: same for the Fantasy Supplement), and I think that it's Gygax's single most fundamental oversight in the design of the original game. In any other miniatures context he's careful to specify the key 3 scales (figure, distance, and time). But the whole issue is conspicuously absent from the Chainmail Man-to-Man rules.
This caused a lot of grief over the years, IMO. First of all, the lack of figure scale made it unclear that the Chainmail Fantasy Supplement was actually for man-to-man action only (1:1), not mass combat, and thereby contributed to the faulty notion that original D&D had a mass-land-combat wargame included. Secondly, the lack of a distance scale precipitated a pretty short-sighted hack to OD&D that 1" = 10 feet (an evolution of Chainmail's 1" = 10 yards); this resulted in 25mm miniatures not actually fitting into a proportional space on a map, and really crazy convolutions like AD&D's DMG p. 10 (where it is specified that when using miniatures, ground scale should actually be mapped out at 1" = 3⅓ feet!). Thirdly, the lack of a time scale likewise caused Chainmail's mass combat turn (1 turn = 1 minute) to be carried over directly to D&D; and while this was entirely reasonable for Chainmail's 1:20 scale, it was frankly entirely unreasonable for man-to-man (1:1) swordplay and bowfire, again resulting in long-winded and unconvincing justification attempts in places like AD&D's DMG p. 61.
As much as I try to give priority to the original versions of the D&D rules, and hew to them as closely as possible, this trio of scaling issues is the #1 item that I simply cannot accept in OD&D and/or AD&D. It's the principal reason that miniatures never really worked all that well in classic D&D tactical play. I do feel that Holmes went in the right direction with a 1 round = 10 seconds time scale, and 3E was in fact a relief to allow 1" = 5 feet ground scale, thereby matching the miniatures we've always used. If Gary had only considered the issue more carefully when writing the original 2 pages of Chainmail Man-To-Man Combat, or sometime shortly thereafter.
## Tuesday, December 8, 2009
### No Heroes In War
I want to expand on a few things I mentioned in the last blog.
First of all, just so it's clear, the primary point of the last post was this: There are no endgame rules presented in either Chainmail or Original D&D, and that goes for either fantasy mass-warfare or dominion management. It's not specifically my point that that's either a good thing or a bad thing. But clearly the impression of such is given, when it's not truly available. (Again, see OD&D Vol. 3 p. 25, as an example.) If we want to be charitable, we could call this impression a "teaser" of things that could come later. If we wanted to get really cranky about it, we could perhaps call it "deceptive advertising" or somesuch. (Of course, I prefer the former.)
Now, as a very minor corollary to that observation, we've also discovered another, somewhat more specific piece of common wisdom that was also erroneous, and that's what I'd like to further highlight here. Throughout D&D mass-warfare writings, we're given the impression that higher-level fighters and monsters can, acting alone, stand against masses of normal troops. ("These fellows are one-man armies!" as per Chainmail p. 30.) This is incorrect, and what I'm interested in here is the proactive effort that was needed to obscure this rather key fact. Here are some case studies.
Case 1 -- Chainmail Fantasy Supplement. Note again that these rules are only for man-to-man action (1:1 figure scale). There is simply no provision available to adjudicate a high-level fighter acting against large masses of troops. Of course, this fact wasn't made explicit in Chainmail, and you have to look to the Introduction of Swords & Spells years later to see it in print from Gygax.
Case 2 -- Swords & Spells. Consider this example from the Introduction (p. 1):
At the scale of these rules a single man can be represented by a single figure on the table. So if one opponent has a lone hero (4th level fighting man) facing several figures of men-at-arms (or orcs or similar 1 hit die creatures), an actual melee can take place. The hero will inflict .40 of the damage shown for a 4th level creature on the combat tables and sustain damage until sufficient hits are scored upon the figure to kill the hero.
Now, there's no need to leave you hanging here, when we can calculate in advance exactly when the hero in question will be killed. And that is (taking reasonable assumptions): In one full turn of melee against one opposing figure. Proof: Assume the hero has average hit points (4.5 x 4 = 18), and is wearing full plate & shield (AC 2). Average damage is shown in the combat table on p. 24 (or, take d6 damage and compute 4/20 x 3.5 x 10 = 7). Note that one full turn in S&S allows 3 melee rounds of attacks (p. 17) and you have 7 x 3 = 21 points of damage against the hero, killing him in one single turn. If you like, feel free to add some hit points for Con bonus, and I'll add the plural "figures" from the quote above to dispatch the hero even more quickly.
Gygax doesn't spell out exactly when the 4th level fighter gets killed (namely: immediately), nor does he explain later on the page why "the admonition regarding single creatures is important" (namely: so they don't get killed immediately). But he could have.
Case 3 -- Battlesystem 2E. (Side note: I love Doug Niles Battlesystem 2E book. It's a really beautiful work, and if it weren't for some very small but critical flaws I wish I could use it all the time.) Consider the same quote I pointed out last time (p. 106):
From a mathematical perspective, the attributes of heroes in a BATTLESYSTEM scenario are inflated beyond those of the creatures in the units surrounding them. However, the conversion is based on the assumption that there is an intangible quality to heroism that exceeds in importance the hero's worth as a fighting machine.
Well, why not spell out exactly the factor by which hero attributes have been inflated? Let's see: A standard figure represents 10 men and is given a "Hits" value 1; hence each "hit" really represents 10 HD total (or more; see p. 105 or my posts from last month, which are comparable). Meanwhile, "All monster types, and characters of the fighter class, receive 1 hit for each 2 Hit Dice or experience levels." For example, a 10th-level fighter or monster (~10HD, comparable to the total hit dice for a normal 1-hit figure), is given 5 hits by this system. In other words: Heroes in Battlesystem have been quintupled over their actual D&D-scale health values. This is to say nothing of their attack values, which are more complicated to compute, but given similar inflation factors.
Niles could have spelled this out, as well, be he chose not to. (The Battlesystem boxed set came with special lead figures for use as heroes/ commanders, and I guess it would be raining on someone's parade if Niles were to suggest that they weren't going to be at all effective standing alone in mass land combat.) Even so, heroes and lone fantastic monsters can be quickly dispatched in Battlesystem by racking up just a few hits on them.
Now, there are a few narrow exceptions to all of the foregoing. One is if the hero's AC is so good that no normal man can hit them. In OD&D this point occurs at AC -2 (plate & shield with +4 total bonus), which is pretty difficult in core OD&D (impossible with a literal reading?), but becomes more likely with Greyhawk and other supplements. (This is possibly countered if you use AD&D's combat tables, count a natural 20 as always hitting, or strictly apply rear/flanking situational bonuses, but that discussion becomes highly edition- or house-rule-specific.) Second would be if the hero/monster is only hit by attacks of a special sort, such as silver or magical weapons, et. al. In these cases a hero figure obviously really could wade through armies of men untouched, until some enemy hero decided to give him chase. But still there is no real contest to play out; the specially-protected hero could automatically massacre an unlimited number of normal men without any threat whatsoever.
Again, I want to clarify that I'm not saying that this affair is either good thing or a bad thing in itself. (In fact, I guess I'd have to say that the realization is refreshing compared to my prior thinking. Numerous new possibilities open up.) Maybe given completely free design capacities you'd choose to have the interaction of heroes with masses one way, or the other. But clearly the impression that heroes can stand against masses doesn't, in general, bear out in play for either D&D or any of the mass-combat games that were later based on it. And that's an error that could have easily been avoided.
## Saturday, December 5, 2009
### The Problem with the Endgame
These rules are as simple and straightforward as I could devise for a game system which involves "magical" and fantastic factors. The FANTASY SUPPLEMENT written for CHAINMAIL assumed a man-for-man situation. While it is fine for such actions, it soon became obvious that something for large-scale battles was needed.
- Gary Gygax, "Swords & Spells" Introduction, 1976
As old-school D&D'ers, I think that many of us share the intuition that there is an "endgame" in which high-level PCs wind up managing castles, baronies, and leading fantasy armies in battle. It seems a little frustrating that the endgame seems to have been "lost" somehow over time.
For probably 30 years I've been trying to scratch this itch and find the proper solution to the reputed endgame. Once again I've been attacking the problem recently, having had an opportunity in the last year to become familiar with Chainmail, OD&D, re-reading Swords & Spells and Battlesystem, etc. Personally, I need my mass-war system to have the same statistical expectations as if you actually played the RPG rules out man-to-man (i.e., it's no good to have X beat Y in RPG rules, but Y beat X in mass-war rules; I'm looking at you, War Machine.)
Here's my new observation: The endgame never actually existed in original D&D. It was sort of an illusion all along, which caused a lot of personal frustration.
Let me be specific: In neither Chainmail nor OD&D is there any provision for handling fantasy battles between opposing armies of hundreds of men (or monsters). It looks like there is, but there really isn't. Consider the quote at the top of this post (emphasis mine). Indeed, the Chainmail fantasy rules were in their entirety only meant to work on a 1:1 scale, not a mass scale, i.e., they're a continuation of the "Man-to-Man Combat" section that immediately precedes them. In other words, OD&D is in some sense just a somewhat revised edition of the Chainmail Fantasy Man-to-Man rules, not a totally different game.
Let's think about this a little more, because you get conflicting signals/ advertising from Chainmail itself. Conflicts would include: (1) Chainmail in general is at a 1:20 mass scale, and the Fantasy section never says explicitly that anything has changed in that regard. (2) The opening to Chainmail Fantasy says that it can be used to "refight the epic struggles related by J.R.R. Tolkien, Robert E. Howard, and other fantasy writers," which is not truly the case at 1:1 scale. (3) The language for Super Heroes asserts that, "these fellows are one-man armies!", when in fact they are only able to counter 8 normal men, not a whole army. (4) Combat chances between fantasy creatures and men refer back to the standard (mass) Combat Tables, not the Mam-to-Man Melee Table.
So, these assertions in the past led me to think that Chainmail Fantasy was at 1:20 scale, which caused all kinds of collisions with the standard D&D rules. I would think, "How can Super Heroes be worth an army in Chainmail (8x20 = 160 men), but only 8 men in D&D?" (Or, "Is every individual catapult/ giant boulder/ fireball really killing 100-300 men per shot?") Furthermore, you would have all these situations in D&D indicating the action of hundreds of men or monsters (such as [a] monster number appearing stats, [b] guards in castles, [c] clerical faithful followers, [d] crew numbers on naval ships, etc.) And let's pile it on one more time with OD&D Vol. 3, p. 25, which has a very brief (as always) reference to "Land Combat" which says this:
The basic system is that from CHAINMAIL, with one figure representing one man or creature. Melee can be conducted with the combat table given in Volume I or by the CHAINMAIL system, with losses equalling a drive back or kill equal only to a hit. Battles involving large numbers of figures can be fought at a 20:1 ratio, with single fantastic types fighting at 1:1 or otherwise against but a single 20:1 figure.
Now, we can see that the first part of this is an honest description of the Chainmail rules (man-to-man). The second part is not so honest, making it sound like Chainmail has the capacity to handle a 1:20 fantasy scale, when the truth is it really doesn't. As much as we'd all like it to, Gygax included. The last sentence with its waffle-y "or otherwise" is really more of a thought-experiment or a proposal than an actual rules reference. The fact is, we really have no pre-planned way in either Chainmail or OD&D to deal with those hundreds of wandering orcs, castle guards, faithful soldiers, or ship crews. (Nor the hundreds of guards in the barracks of Sup-II's Temple of the Frog.)
The conjoined problem is that any single hero-type will, if we honestly look at the statistics in OD&D, get chewed to pieces by dozens or hundreds of 1st-level opponents. Even a D&D Superhero in plate & shield (HD8, AC2) will get hit by normal men 20% on each strike (req. 17+ on d20). Surrounded by just 5 normal men at a time -- ignoring flank/rear bonuses -- the Superhero can be expected to take 1 hit per round and go down in 8 standard rounds (or less). Even with their fearsome number of attacks and morale effects from Chainmail, the Superhero will be dead in just a few minutes of standard D&D combat. (In OD&D it requires AC -2 to become immune to the attacks of normal men; of course, this immunity is taken away by the AD&D combat tables with their repeating 20's.)
So, I find that for the first time in 30 years I fully understand Gygax's Introduction to Swords & Spells. It's a fascinating read. He knows that there's a problem with mass Land Combat in D&D and he's trying to provide a solution. He knows that the presented ruleset is only a partial solution at best (using all expected-value hit point calculations, with no dice or randomization of combat results whatsoever.) In fact, having recently drafted a forward for my own similar work, I find that Gygax anticipated most of my initial comments 33 years ago, working on an equivalent project.
Some extra-curricular way must be added to allow our D&D heroes to survive on the battlefield, when they really shouldn't according to the stock rules of D&D. Gygax writes in Swords & Spells, "The admonition regarding single creatures is important: If they meet, or are simply near each other, they should seek combat with each other rather than inferior opponents, and this combat should be fought at 1:1 in the normal D&D manner". Yep, that's one way to keep them alive (i.e., force them to avoid masses of normal men whenever possible).
Likewise, Doug Niles writes in the Battlesystem 2E book p. 106: "From a mathematical perspective, the attributes of heroes in a BATTLESYSTEM scenario are inflated beyond those of the creatures in the units surrounding them. However, the conversion is based on the assumption that there is an intangible quality to heroism that exceeds in importance the hero's worth as a fighting machine." Yep, that's another way -- just arbitrarily boost the hero's stats on the battlefield to keep them alive.
An interesting problem, and some highly interesting reads when you lay out the entire sequence of mass-land combat in D&D. The truth is, there were no rules even intended for mass combat in fantasy D&D until Swords & Spells, and later Battlesystem, and these were only limited successes at best. OD&D hinted at an endgame that wasn't really ever there in the first place.
(Special thanks to James & Jervis for recently getting the 1972 Gygax letter on fantasy wargaming posted on Grognardia, which jogged my thought process a bit more on this subject.)
## Friday, November 27, 2009
### More Hit Dice Stats
In the prior post, I presented some numbers for the average hits required to take down different HD creatures. Of course, that was done by random simulation, so it will have some small amount of sampling error in the numbers.
I wanted to double-check these numbers with a closed-formula, direct probability calculation. Even that takes some heavy-duty processing power (with permutations, combinations, convolutions, and such). Fortunately, it turns out that the numbers do in fact check out very nicely. Exact statistics and code below if you're interested.
## Tuesday, November 24, 2009
### Are All Hit Dice Created Equal?
Here's a D&D math puzzle. Consider a 4HD creature versus a 1HD creature -- say, an OD&D Hero versus a Veteran. (For this discussion we assume that all hit dice and damage are uniformly d6's.) On average, will the 4HD creature take 4 times as many hits to kill as the 1HD creature?
You might assume so -- I know I did, and that assumption is more-or-less built into the bridge that connects Chainmail to D&D. But somewhat surprisingly, that turns out not to be the case. Consider the following table (PDF): www.superdan.net/download/CompareHD1.pdf
What you'll see is that on average, any creature takes about HD + 0.5 successful hits before being eliminated. That is, there will usually be a little bit of "wasted" damage, perhaps as the creature is reduced to 1 or 2hp, and still requires another full hit before being struck down. And what this is means is that, as a proportion of overall HD and hit points, the 1-HD creature types will be "wasting" more damage and more hits than higher-HD types.
In the second and third columns of the table, you'll see things like this: Whereas a 1HD creature takes an average 1.5 hits per HD, a 4HD creatures only takes 1.1 hits per HD. In short, a 4HD creatures actually only takes 3 times as many hits as a 1HD creature (on average). And this grows progressively more severe: an 8HD creature only takes 6 times the hits of a 1HD creature, and a 15HD creature really only takes 10 times more total hits than a 1HD creature!
This might be merely a mathematical curiousity. Or, it might be something we have to make decision about if (to pick a random example) we wish to construct a set of mass-warfare rules which replicate D&D results with high statistical fidelity. Should we honor the actual hits-to-kill-over-1HD (as in D&D above), or should we more simply use the HD as hits-to-kill (as in Chainmail)?
Follow up -- More Hit Dice Stats (checked by closed probability formulas).
## Saturday, November 7, 2009
### OED: Book of Spells
We just published a volume on Lulu, entitled Original Edition Delta: Book of Spells. It's a concise, comprehensive collection of magic spells for use with the "original edition" fantasy game rules (as published by Gygax & Arneson, 1974-1975).
Personally, I always wished that the magic spells were set aside in their own booklet (instead of filling up the basic player's book), and now we have that. It uses the OGL to extract the bare-bones original rules back out of current, freely available source material.
Myself, I plan to print one of these out for each of the wizard players in my games (or whatever subsection they need: for example, the 1st-level spells all fit on one page again, so I just hand new players that and tell them it's their entire spellbook). I also made some minor edits to particular spells after playing with them for 30 years, which may or may not outrage you personally. :-)
It's 18 pages, with interior art on about 1/4 of the text pages where it fit. Available on Lulu as a download for $3.50 or printed with extra cover art for$7. Tell us what you think!
## Tuesday, September 29, 2009
### Death Statistics in D&D: 1978
This is an excerpt from a short article by Lyle Fitzgerald in Dragon #20 (November 1978, p. 26). I find myself mentally returning to this glimpse of the past rather frequently.
Our campaign is primarily a wilderness one (as the statistics reflect), although huge dungeons do exist. The 600 deaths listed include deaths of playing characters and their advanceable hirelings, not mercenaries or other non-playing characters. We started compiling these statistics 2 to 3 years ago...
• Goblin races (61) 10.1%
• Dragons (45) 7.5%
• Giants (34) 5.7%
• General Combat (26) 4.3%
• Lycanthropes (24) 4.0%
• Execution/ torture, sacrifice (23) 3.8%
• Bandits/ pirates/etc. (20) 3.3%
• Giant insects (20) 3.3%
• Assasination/ treachery (18) 3.0%
• Giant rocs (18) 3.0%
• Fireballs/ lightning (17) 2.8%
• Trolls (16) 2.7%
• Turned to stone (14) 2.3%
• Guards, military patrols (13) 2.2%
• Evil high priests (13) 2.2%
• Man-eating vegetation (13) 2.2%
• Related dragon species (13) 2.2%
• Cursed items/ booby traps (12) 2.0%
• Giant animals (12) 2.0%
• Falls (12) 2.0%
• Gnolls (11) 1.8%
• Gargoyles (9) 1.4%
• Hell Hounds (8) 1.3%
• Demons (8) 1.3%
• Elementals (8) 1.3%
• Griffins (8) 1.3%
• Kindred races (elves/dwarves)(6) 1.0%
• Misc. spells (6) 1.0%
• War (6) 1.0%
• Misc. causes (85) 14.6%
## Sunday, September 27, 2009
### A Trite Expression
From a column this week by Jim Rossignol at Offworld.com:
Games don't necessarily have to be fun to be engaging. Indeed "fun" seems like a trite expression in the face of some contemporary projects: games can provoke more than simple enjoyment. Look at the terrifying crypts of Stalker, or the strange sadness of Shadow of the Colossus. To realise that games ride on more than fun only takes a quick glance at the bigger picture.
Sing it, brother! Of course, this is just prelude to an interesting article about other new developments, highly recommended. Plus, discussion at Slashdot.
## Saturday, September 26, 2009
A recent post on Grognardia features a player reminiscing about playing in Gygax's Lake Geneva campaign in a solo expedition to Castle Greyhawk (as a 1st-level magic-user, no less). It brought to mind what I've long considered a potentially really great untapped market -- solo adventures, specifically for Thieves.
It's always seemed like there's a bit of awkwardness around having "thieves" working with adventuring parties, when they're presented as likely to cheat and steal from the very party they're working for. On the other hand, I've had lots of occasions in my life where I it would be great to run a game with the one single player I had available. It seems like "solo thief" scenarios would be an ideal solution (ostensibly two problems cancelling each other out).
I actually had a lot of fun once running module "O1: Gem and the Staff", which is the one dedicated solo-thief module I can think of. It also offered a really nice opportunity to run a "mini-tournament" game where I ran each person in my regular gaming group through it individually, and then compared scores at the end -- among other things, this got a little "quality time" with each player to see their personal reactions and preferences.
As much as I'd like to see scenarios like that, I know I'm not the one to make them -- I think you need someone a little more steeped in noir, crime drama, Lankhmar and Thieves' World traditions than I am (which is to say, practically none whatsoever).
## Tuesday, September 8, 2009
### On Light
I played a short game of D&D this weekend with some good friends (OD&D with OED interpretations). Played very well with 2 first-time players, my beginner-level girlfriend, and an almost 30-year veteran at the table (who gave me a great compliment about his suprise at how well the game worked sans clerics).
One thing that popped up is how far a torch illuminates, which isn't specified in OD&D. I started researching this, comparing across different rulesets, and the results were interesting. Partly this will be a critique about how games evolve towards greater abstraction over time, from realistic beginnings to nonsensical endings. It seems that the inertia, the infatuation with the game system itself takes over and late-version designers wind up working in an echo chamber. (And it's not just D&D: I've seen the exact same thing happen at games I worked on at a few video game companies. Perhaps it's true for other media as well, like books, TV, and movies.)
Question: How far does a torch let you see in reality? Consider this snippet from a Scientific American Supplement: "Torches consist of a bundle of loosely twisted threads which has been immersed in a mixture formed of two parts, by weight, of beeswax, eight of resin, and one of tallow. In warm, dry weather, these torches when lighted last for two hours when at rest, and for an hour and a quarter on a march. A good light is obtained by spacing them 20 or 30 yards apart." This indicates a bare minimum radius of visible illumination of 30 feet (half of 20 yards), maybe 45 feet (half 30 yards); possibly even 60 or 90 feet (20 or 30 yards itself) depending on how liberal the above usage of "good" is taken.
Question: How far does a torch let you see in D&D? In OD&D, the issue is seemingly not addressed; without directly comparing them to torches, the light spell is given a 3" radius, and the continual light spell a 12" radius (ostensibly 30 and 120 feet). In the AD&D 1E PHB a torch is given a 40-foot radius (p. 102, quite compatible with the research above), and the light spell is described this way: "The light thus caused is equal to torch light in brightness, but its sphere is limited to 4” in diameter." (Note that the second clause highlights the fact that while brightness is torch-like, the range of the magic spell is distinctly and intentionally shorter: just a 20-foot radius.) The continual light spell is reduced to a 6" radius, yet "its brightness is very great, being nearly as illuminating as full daylight".
Let's skip ahead to 3E D&D. Clearly some designer wanted to synchronize all of these effects and make them identical, a pretty reasonable motivation. If a light spell has been compared to a torch, why not make it equivalent to a torch in all ways, for brevity's sake? Well, the problem arises when this late-era designer doesn't do any research, and takes as his basis (looking solely from inside the rules) the effect of the magic light spell, and revises the effect of the mundane torch to match it. Thus in 3E you have both normal torches and the various light spells illuminating only a 20-foot radius.
Now, not only is the 20-foot radius torch unrealistic (whereas it formerly was), it's also extremely awkward from a gameplay perspective. The torch bearer only lights up 4 spaces (3E) away; routinely you'll have the front-line party member in darkness, or the front-most enemy in direct melee unsightable, or the extent of most rooms indeterminable during routine exploration, if you adjudicate this literally. (Now in 3.5E both light sources were given a new rules category of "shadowy illumination from 20 to 40 feet", but don't even get me started about trying to adjudicate that.)
The truth is that I'd recently been looking at the 3E SRD spells listing with its 20-foot radius torch, and so made a similar ruling in my game this weekend, and did get a look of disbelief from at least one of my players at the awkwardly short range of the party's light. And, I see now, he was right (in both realism and gameplay), my being led astray by late-era D&D rules-mechanic navel-gazing. At this point I have half a mind to say that torches give "good enough" light up to 60 feet away, illuminating most rooms in their entirety, and just using a whole 12" ruler (at 1"=5 feet) if we ever need to check it in play.
A rule-of-thumb I discovered over 10 years ago at one of my game programming jobs, and refreshed at times later on (even while building miniature models not long ago): If stumped by a particular design problem, ask yourself "What solution is used in the real-life situation?" In my experience, the answer is usually immediately applicable as a solution in your game rules. I'd guess that only a fetish for over-abstraction in a game would lead one away from this principle. I'll say again that we don't want realism-for-realism sake (see DMG p. 9), but for pre-existing gameplay problems, it often provides the most elegant fix.
There's other stuff about the interaction of light in published D&D that's bugged me over time (like the effect of the darkness spell, SKR's absurd-but-successful rant "infravision and why it should be destroyed" in 3E, etc.) That may have to wait for another posting.
## Thursday, September 3, 2009
### OD&D Saving Throw Charts
Here's a series of charts I compiled, comparing the various OD&D saving throw categories (click image to expand).
These graphs chart the saves at every individual level in OD&D, and also insert linear regression lines ("trend lines", or "lines of best fit"). They incorporate all the data from levels 1 to 15. I think this highlights certain patterns which are not obvious in the tables (granted the differing ways the class levels are grouped), and may even disabuse a few common misconceptions.
One of the first things to be seen is how the class save values are grouped (Fighters in blocks of 3 levels, Clerics 4, Wizards 5). Some of us would like to smooth this out from the table values, permitting a small improvement every level or so (as suggested for fighter to-hits in the AD&D DMG, or the Lakofka/Gygax Dragon article noted here) -- so the trend lines are useful for that.
A second thing is that wizards (magic-users) tend to have a shallower trend line, improving more slowly than the other classes. This is partly because there is another higher-level category for wizards (levels 16+) which is not shown here. Ultimately wizards end up with saves as good as (or better than) the other classes, but that doesn't occur until the very high levels off these charts.
Now for some more specifics. Fighters and Clerics are extremely close in almost all their values and trends, to the extent where I'll simply regard them as effectively the same. Similarly, in the last chart, saves vs. Spells are practically identical for all the classes at all levels; at most a difference of +/-1 in the trend at any level. Saves vs. Stone are somewhat more mixed; the trend lines actually cross (Fighters start out the worst, then become the best), but are so closely packed that we may as well treat them as basically the same, as well.
Finally, some differences. Wizards (and hence Thieves, as per Greyhawk) are clearly, consistently deficient in their Death and Wands saves from levels 1-15. Also, while starting out fairly close, Fighters have a particularly steep (beneficial) trend line in Breath saving throws. Therefore on average, wizards are at -3 when compared to Fighters across all these categories (Wands, Death, and Breath; technically average -2.67, -2.67, -3.47 respectively). Even clerics are at a -2 average penalty when compared to fighters in the category of Breath saves (-2.33, to be exact).
It's interesting that if you take Fighters as the basic character class (including all monsters), the baseline saves differ, on average, by precisely 1 point per category. That is, starting with the last category of Spells, Breath is at +1, Stone at +2, Wands +3, and Death +4 (again speaking in terms of the trend line intercept parameter; you can also see it immediately in the top row of the OD&D table itself).
The trend lines move downward with an average slope of -0.6 over all saves and classes (with a range of from -0.45 for the shallow wizard lines to -0.8 for the quickly advancing fighters vs. breath saves). If we think about smoothing out the curves with a simple formula (instead of using the tables directly), we might think about giving a bonus of half-the-level as a pretty good estimate. Of course, in my OED rules editorial, I felt even that was too complicated, and simply rounded it off to d20+level (beat 20+), plus the various modifiers noted above.
## Monday, August 31, 2009
### Quick Dice Average
Quick way to find the average (expected value) of dice xDy: Take half of x, add one to y, then multiply the two. This is equivalent to what some refer to as "Gauss's formula" for summing a series. For example:
8d6 -> 4*7 = 28
12d4 -> 6*5 = 30
## Friday, August 21, 2009
### OED Reincarnate
I've been working on a compact collection of spells for Original D&D-style games. Here's one example -- actually the very last one I completed, the longest, and possibly the one that caused the most difficulty to find the proper balance.
Keep in mind that in my games, there are no clerics. That might seem to preclude raising dead characters, but recall that our wizards fortunately still have the 6th-level reincarnate spell:
Reincarnate
Range: Touch
Duration: Instantaneous
With this spell, the character brings back a dead creature in another body, provided death occurred no more than 1 week before the casting of the spell and the subject’s soul is free to return. The magic of the spell creates an entirely new young adult body for the soul to inhabit from the natural elements at hand. This process requires 1 hour to complete. When the body is ready, the subject is reincarnated.
The recipient of the spell must make a saving throw to return in the same body type as before (same race and abilities, appearance may change). If failed, the DM should instead choose a random humanoid race of the same alignment for the new body type (re-roll abilities, up to ogre size). It's quite possible for the change in the character's ability scores to make it difficult for the character to pursue his or her previous character class. The character’s level is reduced by 1.
A few comments. You'll see that the standard "stat block" elements are just range & duration; nothing else is really necessary for the majority of old-school spells. The first paragraph is just copied directly from the 3E SRD as provided by the OGL.
The second paragraph is text written by myself. Since this is our only raise-dead type spell, I wanted to have some chance that the character returned fundamentally unchanged and still playable. Is the saving throw the proper mechanic? I think so. On the other hand, I wanted to retain the chance that the character is lost in the process, and returning as a random humanoid makes it likely to be the effective case (perhaps socially speaking) -- although restricting it to humanoids makes it easier to run in play if the DM so wishes. The probability is generally similar to 1E "system shock" rolls for Constitution, but folds into the standard save mechanic. The character-level loss is retained from the SRD to make sure this doesn't become a routine (or cyclical) procedure. Other than that, there's quite a bit of flexibility for the DM to adjudicate this spell in the best fashion for his or her campaign.
The spell text above is designated open game content under the terms of the Open Game License v1.0.
## Tuesday, August 11, 2009
### OED Update (v0.5)
Minor update to the Original Edition Delta house rules (version 0.5). Mostly just editorial cleanups, I felt it was important to fit on 4 pages (i.e., one folded sheet of paper). Took out some details on spells and monsters that seemed unnecessary here. Revised exploration movement back to the core rules.
http://www.superdan.net/oed/
## Tuesday, July 28, 2009
### The Gray Zone: Convention Games
Let's consider three different contexts for playing D&D:
1. Home campaigns. Here you'll be playing with the same players & characters over an extended period of time. Characters will almost certainly be generated individually to player taste; they will advance and explore the world over time. Old-school “sandbox” style play basically requires this context.
2. Convention games. This is a one-shot adventure, possibly limited to a 4-hour time slot or something similar. Characters may be pre-generated or custom-made (consider RPGA point-buy rules or the old DMG Appendix P, which I still use). The characters won't advance in any mechanical way.
3. Tournament play. This is also a one-shot adventure, but in a competitive context. There will be multiple (possibly very many) playgroups run through the same scenario, with an eye towards scoring the best and picking a “champion”. Characters are almost certainly pre-generated (so as to give a level playing field to the competition).
Notice that I distinguish here between “convention games” in general and “tournament play” in particular (even though they have many coarse similarities, and tournaments are generally run within a convention gathering). Convention & tournament games are similar in that they both feature short one-off adventures, and they avoid any usage of the character-advancement rules. But they differ in that one is competitive and the other is not. Simple convention games, perhaps, have more of an incentive to let the players “win” (sometimes they are run as product-release promotions or trials, and have good reason to want the players to leave the table feeling like they “had fun” with the experience and the product).
Tournament games, meanwhile, have an excellent reason to be tough meat-grinders where the majority of the players “lose” (by acting as a strict filter, they make it easier to identify the one “champion” in the event that made the most progress; whereas if many people uniformly “win” it will be difficult to make that distinction). Compare to an interesting quote from recent cyberware games at West Point: the attacks designed by the NSA were made "a little too hard for the strongest undergraduate team to deal with, so that we could distinguish the strongest teams from the weaker ones." And this also explains why the earliest D&D published adventures all had a "killer DM" feel to them: they were all originally developed for competitive tournament situations.
Okay, so getting closer to my point -- Having considered the different kinds of play contexts I've seen for D&D, two of them have seemed the most compelling, and one is rather more frail for me. We might ask the question, "Why are we playing; what do we gain at the end?" Two of these situations have a meta-reward, outside the game itself, that makes the experience deeper and more compelling. In case (1) Home campaigns, the meta-reward is largely character advancement; levelling up, accessing new powers and magic items. There's also exploration of a larger campaign world over time, but let's face it -- The #1 revolutionary, addictive development that D&D brought us was the idea of persistent, advancing characters over many game sessions, and this is almost solely accessible in terms of a home campaign. In case (3) Tournament play, the meta-reward is the competition with other teams playing in parallel to yours, and seeing one team at the end awarded with honor and a trophy (or somesuch). Personally, I love playing in a tournament, and love the heads-down, high-proficiency play that I see in that context.
So that leaves case (2) Convention games, and frankly, I can't figure out what the meta-game "point" is to them anymore. When I run one, I'm left a little bit bewildered at the end about what the payoff is. It seems very awkward if there's a TPK at the end, and it seems almost equally awkward if time simply runs out after a certain number of rooms are successfully looted.
One suggestion is that there needs to be a specific "quest" in a convention game -- The players are given an explicit (or obvious) assignment at the start, and if they can succeed in the time alloted, they are declared to have "won". A few problems here: (1) It's difficult to estimate in advance a perfect set of encounters that lead to a "win" at exactly the 4-hour mark. (2) The setup manages to frustrate the classic D&D architecture of open-ended exploration, multiple paths, resource management, wandering monsters, treasure and XP rewards, etc. (3) There's still no meta-game reward from this in-game "victory".
Now, I have a good friend Paul who recently ran an exceptional convention game a few weeks back. Philosophically, we tend to disagree about many of the high-level "whys and wherefores" of D&D, but I think we almost always agree about whether a given game we just experienced was good or not (sort of an "I know it when I see it" experience). In the past we simultaneously co-DM'd a campaign, and at least once our differing styles stomped ugly all over each other (Ettin-style?). He may run a better convention game than I do; the one he ran the other weekend was one of the most fun D&D sessions I've had in a long time -- hilarious characters, great encounters, well-paced, filthy humor (which I like), great ending. I was mulling over my troubles with convention games on the ride over, and lo, my friend snaps off one of the best such games in my memory.
Anyway, Paul wrote up his notes on that adventure on his blog over here. The thing I was surprised and a bit unsettled by was that the quest, locations, and NPCs were all being invented and moved around backstage on the fly, which is how our investigations managed to lead us to saving the girl at almost exactly the 4-hour mark. Made for a great, nigh-perfect gaming session -- and it's not something I think I'd ever be able to bring myself to do, as it goes against every grain I've been trained in as a game designer, thinking more in terms of published tournament-style adventures that we'd prefer to keep fixed, replicable, and fair if multiple groups are run through the same adventure over time.
So, what to do? Should I just give up on running one-off convention games (granted that they frustrate all the meta-game rewards that are the hallmark of D&D), and leave them to better narrative DMs? Is there any way to interface the classic rewards of D&D in an isolated, one-shot experience? Troubling questions, since at this point in my life the only opportunities I have for play are the infrequent one-off convention games: the "gray zone" in the middle, if you will.
## Thursday, July 16, 2009
### What is the Best Combat Algorithm?
Throughout the history of D&D and RPG's (and more generally, any action/ wargame), there have been a host of different algorithms to determining success in combat and other feats of skill and luck. For example: to-hit-tables, THACO, compare to increasing AC score, etc.
Within some very small tolerance for error (say +/-1 difference), all of these systems have been mathematically equivalent (i.e., result in "hits" for the same rolls of the d20 die). But which is the best algorithm? That is, treating the tabletop gamer's brain as a kind of natural "computer", which is easiest/ fastest/ most efficient/ least error prone? Is it one of the aforementioned algorithms, or something different?
First, let's establish the different components of the basic D&D "to hit" (or anything else) roll. They include: (1) a d20 die roll, (2) a basic attack proficiency, by class or hit dice, (3) the armor of the defender, (4) miscellaneous modifiers (positive bonuses being good for the attacker), and (5) the "baseline" chance to succeed at hitting, irrespective of other modifiers #2-4.
Let's look at one example, say, the THACO mechanic from 1E-2E. In the form of an inequality, the basic algorithm is:
THACO ALGORITHM: d20 + mods ≥ THACO* - AC
* The THACO was itself determined (pre-game time) from tables in the core books. But in essence, for fighter and monster-types, this incorporated the "baseline" success chance (Normal Men need to roll ~20 vs. AC 0) and a +1 bonus per fighter/monster level. In other words, THACO = (~20 - level). Let's substitute and see all 5 terms plainly:
THACO ALGORITHM: d20 + mods ≥ 20 - level - AC
Now, if we proceed to search for other, variant algorithms, we can apply the basic algebraic "rebalancing" operations to make any of these terms appear on either side of the inequality that we wish. For example, we could add a "level" term to both sides (canceling it on the right and appearing as an addition to the left). Or, we could subtract the "mods" from both sides (thereby appearing as a subtraction on the right).
In fact, since there are 5 terms, and each can appear on either of the 2 sides of the inequality that we wish, there are in fact 2^5 = 32 different formats for this inequality (by the fundamental principle of counting) that we could consider. Here are just a few of those 32 possible variations:
TABLE ALGORITHM: d20 + mods ≥ (20 - level - AC)
[Encapsulated in table]
THACO ALGORITHM: d20 + mods ≥ (20 - level) - AC
[Encapsulated in THACO]
d20 SYSTEM ALGORITHM: d20 + mods + level ≥ (20 - AC)
[Defined as New AC]
"SUBTRACT ALL" ALGORITHM: d20 ≥ 20 - level - AC - mods
"ZERO BEATS ME" ALGORITHM: 0 ≥ 20 - level - AC - mods - d20
Etc...
Now, obviously, those last few were for humorous illustrations only, and I assume not many people would want to use those systems. But what criteria can we use to choose the "best" possible system? Let's consider the following as guiding principles (and we'll back each of them up with results from experiments in cognitive psychology as we proceed):
(1) Additions are easier than subtractions.
Although mathematically equivalent (and using fundamentally the same operation in digital computing systems), most people find subtraction significantly harder than addition. For example, see the paper by MacIntyre, University of Edinburgh, 2004, p. 2: "Addition tasks are clearly completed in a much more confident manner than the subtraction items, with over 80% of the study group with at most one error on the items. Subtraction items appear to have presented a much bigger challenge to the pupils, with over 50% having 3 or more of those questions wrong."
(2) Round numbers are easier to compare than odd numbers. In other words, when comparing which of two numbers is larger (the final, required step in any "to hit" algorithm) it will be easier if the second number is "20" than, say "27". This follows from the psychological finding that it's faster to compare single digits that are farther apart; see Sousa, How the Brain Learns Mathematics, p. 21: "When two digits were far apart in values, such as 2 and 9, the adults responded quickly, and almost without error. But when the digits were closer in value, such as 5 and 6, the response time increased significantly, and the error rate rose dramatically..." In our case, setting the second digit to zero would maximize the opportunity for a large (and thus easy-to-discern) difference between the numbers.
(3) Small numbers are easier to compare than large numbers. This has also been borne out by a host of psychological experiments over the last several decades. Again from Sousa, p. 22: "The speed with which we compare two numbers depends not just on the distance between them but on their size as well. It takes far longer to decide that 9 is larger than 8 than to decide that 2 is larger than 1. For numbers of equal distance apart, larger numbers are more difficult to compare than smaller ones." Again, this is true for human computers only, not digital ones (ironically, the digital processor "compare" operation is really just an application of the same "subtract" circuitry).
Okay, so let's think about applying these principles to find the cognitively-justified best tabletop resolution algorithm. Applying principle #3 means that we'd generally prefer dealing with smaller numbers rather than larger. Before considering anything else, it's clear that it will be hardest for people to mentally operate in a d% percentile system, easier in a d20-scaled system, and easier still on a d6-scaled system. We should pick the easiest of these that gives the fidelity necessary to our simulation, and the d20-scale does seem like a nice medium.
We can also apply principles #2 and #3 to discard a key change brought about to D&D in the 3rd Edition: Ascending AC numbers. While it has its proponents (and is of course mathematically equivalent to all the other 32 permutations of the core mechanic inequality), it forces us at the end of our algorithm to run a comparison against a relatively large, and frequently odd, number, such as AC15, or AC27. By using instead descending ACs, they will always be a single digit (and therefore easier to manipulate according to finding #3), and we'll also see below that we can arrange a rule such that the final comparison is always run against a fixed, round number (and therefore preferred according to principle #2 as well).
Applying principle #1 indicates that we'd prefer to have all of our operations be additions, and do away with any subtractions (as in the THACO system). Returning to our 32 different options for presenting the basic resolution inequality, this is easily accomplished: simply add back all the terms on the right-hand side of the inequality, and all those terms become simple additions on the left. Having done this, we'll see that we're left with a nice round number to compare that addition to (fortuitously complying with principle #2, as mentioned above). We'll call this the "Target 20" algorithm:
TARGET 20 ALGORITHM: d20 + level + AC + mods ≥ 20
Now, you may have guessed where I was going with this if you'd read previous blog entries of mine supporting the idea. While never presented this way in TSR/WOTC core rulebooks, I'm quite confident that this is the most mentally efficient representation of the core d20-based resolution mechanic: Add d20, your fighter level, your opponent's single-digit descending AC, and miscellaneous bonuses; a number equal to or greater than 20 then indicates a "hit". It satisfies all of our 3 psychologically-verified guiding principles: (1) additions are easier than subtractions, (2) round numbers easier to compare than odd ones, and (3) small numbers easier to manipulate than large ones (particularly in the form of single-digit, descending ACs).
Like a lot of things in our hobby, the Original D&D rule was pretty close to optimal, but not quite perfect in this sense. If I won the lottery it might be interesting to definitively prove which method is best by running a series of psychological experiments; but since the result just follows from already-proven principles, I'd also want to set up a betting pool and recoup some of my money from the WOTC chief designers of the last several years.
## Tuesday, June 30, 2009
### The Golden Rule
Comments on the last post made me once again recall what I consider to be the most important passage in all of Gygax's writings on D&D. I use the following as my "golden rule" when thinking about game design for D&D:
ADVANCED DUNGEONS & DRAGONS is first and foremost a game for the fun and enjoyment of those who seek to use imagination and creativity. This is not to say that where it does not interfere with the flow of the game that the highest degree of realism hasn't been attempted, but neither is a serious approach to play discouraged.
- DMG p. 9: "The Game: Approaches to Playing Dungeons & Dragons"
Now, in the interest of being as clear as possible, allow me to unpack the latter two clauses and clean up the double negatives. If we do so, we read this:
(1) The highest degree of realism has been attempted (so long as it does not interfere with the flow of the game).
Again, the double-negatives make the slightly hard to parse on first viewing. In fact, we do seek the highest degree of realism -- claims that D&D has "never been realistic in any way" are totally false. Purely abstract systems are not of interest to us. However, if a conflict arises, then what must take precedence? Definitely, the flow of the game. Both elegant gamesmanship and realistic modelling, working in synergy, are the zenith of game design; but if those goals come into conflict, then gamesmanship must clearly, (narrowly) win out.
(2) A serious approach to play is encouraged.
We can allow ourselves to be serious about our gaming. Claims that "you're thinking too hard about fantasy" can generally be ignored as meaningless. And at the same time, if some of our friends are most interested in the fantastical, phantasmagoric, and even comical elements of our gaming, then that should be seriously respected, as well.
## Saturday, June 27, 2009
### Games-Within-Games
Here's an important aspect of early D&D I've been meditating on lately: the propensity for it to be an ongoing construction of games-within-games. Let's consider a few exemplary examples that spring to mind, starting with D&D and some of my most-favorite computer games:
(1) Dungeons & Dragons. In some sense, OD&D can itself be thought of as the “discovery” that the CHAINMAIL rules contained an even more interesting sub-game with its fantasy combat at the man-to-man scale (not to mention its even more refined system for jousting competitions). In the initial “White Books” you had both the standard dungeon exploration, as well as separate and distinct rules for large-scale wilderness exploration, castle-building, aerial combat, and ship-to-ship naval engagements.
What do I consider some of my most memorable D&D adventures? How about module X10, with its unique strategic-level world-warfare game (in parallel with PC-based diplomacy/adventure scenarios – including possible sidetracks to other X-series modules). Or M5, with a points-based diplomacy roleplay between imperial powers at the adventure's climax. Or even module S3, with its special system for trying to manipulate high-tech artifacts (among other things).
(2) Sid Meier's Pirates! Man, did I play a lot of this game on my cousin's Commodore 64 one summer. In some sense I consider it to be the near-perfect game – and, a lot of my design efforts wind up looking like attempts at replicating this classic. One of the strengths is that it has a completely different sub-game for each skill you might perform in your career as a privateer in the Carribean. Strategic sail navigation, taking a sun-sighting, fighting by cannons, personal swordplay on the deck, invading towns, choosing crew and cargoes, puzzling over map fragments, and wooing the daughters of prominent mayors, are all simulated in distinct sub-games. And almost all of them are both flat-out wonderful, and interface perfectly with all the rest (to the extent that only at this late date can I recognize them as sub-games at all).
(3) Mechwarrior. The original Mechwarrior was another game I played and re-played a whole lot of times. It's the first game I played that had both (a) “sandbox” play, and (b) “plot” based threads. The “sandbox” allowed you to progress as a mercenary captain, taking randomly-generated combat missions, improving your team and equipment over time. The “plot” (for lack of a better word) allowed you to follow up on clues that you were the member of a deposed royal family, and potentially win back your family's home. Some great (and dare I say Gygaxian) aspects of this: (1) you could play the mercenary sandbox indefinitely, (2) it was actually fairly hard to discover that there was a “plot” based mystery to follow up on in the first place, and (3) you still had to do some random mercenary missions in order to build up the strike team you needed at the end of the plotted scenarios. The exact time and sequence of events is impossible to predict in a game of Mechwarrior.
Now, some of this should be well-known to players of current computer game “sandbox” designs (Grand Theft Auto, anyone?), but since I don't play modern consoles, I can't comment directly on those. The thing I want to emphasize is that we don't lose the willingness to allow games-within-games in our classic tabletop RPGs.
Consider a few other examples from TSR/WOTC. In the old Star Frontiers Knight Hawks space combat game (by Doug Niles, who deserves his own blog acclaim), there was a brilliant scaling rule: for 15+ ships, use the coarse, Basic rules for the game; for 5-14 ships, use the more detailed Advanced rules; for 2-4 ships, use the Advanced rules with the individual characters' piloting & gunnery skills detailed. In the more recent d20-based Star Wars game, the spaceship rules were entirely done by analogy to the stock character-to-character system – which I was rather appalled to see when I read it.
Post-2000, there's been a bit of an over-reaction by my left-brained brothers and sisters, often times feeling that all activities in a particular game need to be abstracted out into one single universal mechanic. While this might be nice in theory, in practice I consider it be an abject failure (see the Star Wars example above). Even AD&D is not immune to criticism – when it converted overland movement rates from hexes to miles-per-day (so as to be usable with any campaign map scale; compare DMG p. 58 to OD&D Vol. 3, p. 16), it should have been emphasized that each DM really needed to manipulate those numbers and turn them back into spaces-per-turn on their personal map scale. Unfortunately, it did not. Here we see how frequently the attempt at abstraction interrupts the gamesmanship that we need at any scale of action.
Hence we have a few criticisms of the current branding of D&D: Action at different scales should have different mechanics that support the distinct flavor appropriate to each. Likewise, character classes that represent very different approaches to adventuring (magic vs. martial arts) should have different mechanics supporting each. We lose a lot when the game is reduced to a single kind of action scale (6-second moves on 5-ft squares), and the willingness to include sub- and super-games is prohibited (such as castle-building, tactical mass warfare, etc.) And, we have even more reason to avoid fetishizing character development, because we have to be willing to lose those characters abruptly if we play out an encounter at a larger scale (see 3E's Tome of Battle for the mangled result of being unwilling to allow for this).
Much of the addictive beauty of the original D&D game comes specifically from its flexibility as a model of developing games-within-games, both above and below the “normal” scale of action. It's a more interesting and more challenging enterprise than writing either "story" or "sourcebook" supplements, which add nothing concrete to our gameplay. But likewise, we should avoid being dogmatic, and try to engage our expansion systems only when it makes sense to do so (perhaps taking Doug Niles' SFKH as a canonical, concise example).
## Monday, June 15, 2009
### Defending D&D?
I keep wanting to make a connection between some defensible "essential properties" of D&D, and established legal protections for food products such as Chocolate, Champagne, Gruyere cheese.
For each of these food products (see Wikipedia), there were companies that would have increased sales and made more money, if they could have labeled their products with these names (and of course, they desired to do so). However, trade groups did manage to defend certain definitions of the products and prohibit other usages.
Most of these defenses have been seen in the form of legal restrictions in the EU. However, you've got at least one case in the US in 2007 where companies wanted to replace cocoa butter with hydrogenated vegetable oil and still label the product as "chocolate", which the FDA shot down.
I like the basic idea of that, although in each case it's a legal construct, and it's hard to see where we could take that with the D&D trademark still currently held by WOTC/Hasbro.
(This post was originally a comment of mine on James Maliszewski's Grognardia blog.)
## Thursday, June 11, 2009
### OED: Falling
There's just one more thing I realized I had to add to the OED, and that's a rule for falling:
Falling: Assess falling damage at 1d6 per 10 feet fallen (linearly). This assumes a fall onto earth or wood; decrease damage for yielding surfaces (water, snow, mud), and increase damage for very hard ones (stone, metal, etc.)
Falling has an extremely weird pedigree in D&D, and I could write at length just on that (in fact, here it comes...). Consider OD&D – Where are the falling rules located? Only in the naval combat section (for being pushed off the deck of a ship; also in passing in Vol. III, p. 5 *). And what is the rule? 1d6 damage per 10' fallen – but with a saving throw, generating only a 1-in-6 chance per 10' of taking any damage at all. For example, a 20' fall has just a 2-in-6 chance to fail the save (4-in-6 success) - and thus, two-thirds of the time, will deliver no damage whatsoever!
The subject of falling is mentioned in AD&D's PHB in only the most cursory fashion: “It is probable that your referee will simply use a hit points damage computation based on 1d6 for each 10' of distance fallen to a maximum of 20d6...” (p. 105) A pair of Dragon magazine articles later assert that this damage should be assessed cumulatively (i.e., 10'=1d6, 20'=3d6, 30'=6d6, etc.) – this being a very short blurb by Gygax in issue #69 (as part of his thief-acrobat presentation, later reprinted in Unearthed Arcana), and then a full-page article by Frank Mentzer in issue #70 (asserting a Gygax claim that the original PHB language was a typographical error; however, this does not agree with other Gygax works such as module G2).
A somewhat later issue of the Dragon (#88) had what I consider to be one of the most inspired and challenging presentations for that era. That issue carried the article “Physics and falling damage” by Arn Ashleigh Parker, wherein a “proper” falling damage system was deduced from rigorous consultation to physics formulas, gravity constants, advanced algebra, wind speed, and reference to texts on skydiving (the result being quasi-similar to the original 1d6/10', with more damage assessed earlier on). And this article came as part of a debate, with reference to its own rebuttal article in the same issue by Steven Winter, “Kinetic energy is the key”, which used other physics concepts to argue precisely for the original linearly-assessed 1d6/10' rule. Imagine that happening in Dragon today!
In some circles, the 1d6-per-10' rule was commonly ridiculed (ignoring the cumulative revision) as permitting high-level fighters to leap off 100' cliffs without much fearing for their lives. There is some reason to this, in that fighter hit dice swelled up to d10, and Constitution bonus up to +4/die, while falling damage remained the same 1d6 per level over time. However, I'm convinced that the reaction to this produced one of the most atrocious disfigurements of the system in 3E: the “Massive Damage” rule, whereby a save-vs-death was called for when any damage amount hit the magic number of 50. Most players are under the impression that this was an optional variant rule, but in 3E, it was not.
For many years I was using Gygax's cumulative system for falling damage, feeling indeed that falls should be more perilous, and I also applied the idea to other environmental factors (such as heat, cold, thirst, and starvation). In general, I felt that if higher-level hit points represented less physical stamina and more “dodging/fortune-type” factors, then they should be devalued in the context of an unavoidable fall. However, two problems with that have occurred to me recently. First, there seems little justification that someone able to dodge a monstrous blow could not also be able to roll/spring/cover their head properly to avoid the worst effects of a great fall (or simply land in a lucky spot). Second, when I looked in the DMG to compute exactly what percentage of hit points were “fortune” at any level, I was dismayed to find (on p. 82) that Gygax had stipulated a system wherein the raw physical hit points grew at precisely a constant rate every level (before abruptly ending at level 7). If this were the case, then even under my former assumption, falling damage should increase only linearly through level 7 (at least).
So now, here I am back today, opting to assess damage at the old standard of 1d6 per 10' fallen. (Perhaps if I were playing 3E I would increase it to a base 1d10 per 10'.) I'm not going to use the save from OD&D because (a) it's simply an unnecessary complication, and (b) it makes too many 20' or 30' falls entirely without injury, which really is silly. It's probably a reasonable amount of damage if most hit dice are still d6 themselves (Fighters d8, maximum Con bonus +2), and surfaces like hard, jagged stone can boost this by +1 or +2 points per die. Other environmental factors will probably also be assessed linearly from now on. We shall see.
* Edit 11/23/11: I just learned of another place in OD&D that I had overlooked: in the Aerial Combat section. "Crash -- for every 1" of height a rider must throw one six-sided die for damage occurring from the crash, i.e. a crash from 12" means twelve dice must be rolled and their total scored as points of damage incurred by the creature's rider." [Vol-3, p. 27] Thanks, Grognardia!
## Monday, June 8, 2009
### OED: Spell Changes
In addition to the section on PC Generation, the OED v0.4 rules also added a section on Spell Changes, in 3 parts. In particular:
Sleep: Roll 1d6+1 for total hit dice affected (no figure over 3HD). Duration is 12 turns; slapping/shaking awakens 2-in-6 per round.
You'll see a lengthy analysis arriving at this a few posts back. I made the die roll 1d6+1 for elegance sake (it neatly results in exactly 1-3 creatures of 2HD, for example).
Missiles: For missile spells (fireball, lightning bolt), announce a range, and then roll 2d6 over/under for exact location (read lower die, ties indicate on target).
Now we're getting into the material that you might keep unmentioned until a player throws their first fireball and discovers this during play. This is originally an optional rule from Chainmail for catapults/field guns. (The fireball and lightning bolt in those rules, of course, simply reference back to the effect of such catapults/field guns.) Therefore I think it's both legitimate under the original rules, and scratches an itch of mine that the fireball/lightning combo are generally used with suspension-of-disbelief-shattering accuracy.
If there's any question about the rule, it's this: Announce a range to fire the missile-spell (say, 10”). Roll two d6 of different colors (say a red one for “over” distance that comes up 4, and a white one for “under” distance that comes up 2). Apply the result of the lower die (in this example, 2” under the declaration, placing the spell 8” from the caster in the desired direction; if the dice were tied then the shot would be exactly 10” away).
Permanence: Spells under 5th level cannot be permanent. Those without a listed duration fade after d6+6 weeks (such as charms, continual light, etc.)
Another itch of mine being scratched. There's a whole lot of mischief that can be done if you permit low-level spells to be truly, indefinitely, permanent. Charm person allows someone to gain infinite followers over time. Continual light allows those magic-street-lamp cities (yuck). Wizard lock may as well be used by cantankerous wizards on their downtime, locking every door they encounter forever, just to be nasty.
You can also think of this as a generalization of the principle introduced in Supplement I: Greyhawk, where charm spells are given additional saves to break, assessed over some weeks of time. Why not apply the idea equally to other spells where indefinite duration allows silly mischief?
This does, however, still leave the door open to higher-level permanent spells, and we should carefully consider if there are any game-breakers in the bunch. I feel that the 5th-level spells provide in-milieu limitations: wall of stone can be dispelled (so a wizard would prefer to have a real wall constructed) and animate dead has obvious drawbacks (collecting wagonloads of bodies for an undead army invites the wrath of more heroic adventurers). The 6th-level spells need more careful handling: invisible stalkers should get more and more perilous if used in great numbers; geas spells should have some drawbacks or risk if used with abandon (c.f. the works of Vance, for example).
## Thursday, June 4, 2009
### OED: PC Generation
The OED v0.4 rules added a section on PC Generation in 3 parts. The first part says this:
Convention Play: When making a party for a one-off convention-style game, the normal level limits are not good balancing factors. Human characters should be created at +1 level. Wizards are required to have a minimum Intelligence of 10+highest spell level.
I'm thankful that my last convention game threw a high-intensity spotlight on this problem – it was the most troublesome issue I confronted, and I've been grappling with it for weeks, looking for the best solution. (I was working on a 2,000-word essay to explain it, but then I thought better to scrap that.)
Here's the issue: I'm happy to accept the OD&D doctrine that low-level powers can be balanced by high-level limitations, and vice-versa. This system creates “opportunity costs”, where early decisions have continuing repercussions over a lengthy period of time. In 3E, this was thrown out as anathema, and a demand was made that all races/classes be made equal at every possible level.
While not explained in the 3E texts, I now understand why this was done. While the level-limitations (and exponential wizards) are reasonable in a long-running home game, they are useless in a one-off convention game. For example: In my last low-level OD&D convention game, I think all of the wizards chose to be elves (gaining armor & weapon usage at no cost). Meanwhile, in my last high-level AD&D convention game, all of the characters chose to be humans, except for one dwarven thief (thereby sidestepping all level limits).
So now the dilemma is this: Do we re-write the race/class balancing mechanics so they work for any convention game, at any level? (This being the choice made in 3E; fulfilling, perhaps, a failed goal of AD&D). Or do we reserve the extra balancing rules to an appendix, for the special case of convention gaming? Obviously, after much soul-searching, I chose the latter.
Generation Order: Players should take a PC card and fill in the abilities, race, class, alignment, and money/ equipment. Then, the DM takes the card and calculates modifiers, move, AC, attacks, specials, and directs the roll for hit points. Finally, add the character to the DM's summary roster.
This here is just an observation of the most efficient way to administer the from-scratch character creation process. I'm using my pre-printed index cards for PC records. I let the DM do all the secondary calculations, both so that (a) no new player is required to know the number-crunching rules prior to play, and (b) we accomplish a character audit at the same time. Hit points require the Constitution modifier before rolling. I also keep a summary roster with: Player Name, Character Name, Race/Class, Move, AC, and Hit Points (the most important aspect being the reminder to address each player by their character name).
Magic Items: Characters should be checked for magic items at 10% per level. Checks are made by class: Fighters (2 weapons, armor, shield, potion); thieves (2 weapons, armor, ring, potion), or wizards (2 potions, dagger, scroll, ring). Items are +1 or a basic type, chosen by player; for a +2 item, roll again at 1%/level (+10%/level over 10th); for a +3 item roll yet again at 1%/level (no further bonus).
Up until recently, I was using the AD&D DMG Appendix P for higher-level party generation, in particular the assessment for magic items. Unfortunately, I found it very burdensome to mentally track all the different choices and percentages-per-level involved. Instead I thought it would easier to just batch everything up to a straight 10%/level for everything, in a few broad categories (much like the MM tables for “Men”; for NPCs, I would likewise roll at 5%/level). Once again, no table lookups required; we should be able to do this entirely from rote memory.
### OED Update (v0.4)
I've updated the Original Edition Delta house-rules to version 0.4, after incorporating some changes after the last convention game. In particular, I added a “DM's Section” at the end with a few alterations that don't affect the core system. This brings the page count up from 4 to 5; I'm likely to continue printing a 4-panel brochure to give to my players. I'll also post comments from each new section in forthcoming blog entries. (Don't forget that the Open Office ODT file has additional sidebar notes to the main document.)
http://www.superdan.net/oed/
## Wednesday, May 20, 2009
### It's Not Just About "Fun"
Let's say I'm talking to some coworker or new acquaintance about one of my many different endeavors (possibly medieval wargaming, or music, or even mathematics). Maybe we're not quite making a connection about why it's important to me. How can I make them understand why I spend time on these projects? As I struggle for a conclusion, I might say something like, “So yeah, we're having fun”. And the other person will then say, “Oh, all right then”. Apparently, they'll achieve closure from that, and walk away untroubled.
And you know what? I'm trying really hard to stop doing that, because for the most part it's just total bullshit.
I know a lot of us have the same problem. Our art and our gaming are important to us. We feel it in our gut. But when it comes time to explain it, we routinely say “It's just fun”, or “As long as we have fun,” or “The only important thing is having fun”. We wave the word “fun” at the problem of explaining ourselves and assume that it suffices.
But I hereby choose to resist that temptation. Our gaming and our art are so much more important and multifaceted than that! The “fun” explanation is really just a convenient cop-out. It leaves mute the vast majority of our experiences in any of these deeply-felt projects. Like the best literature (or theater or movies or TV), they may be: sad, scary, inspiring, informative, arousing, inflammatory, tragic, dramatic, elegiac... without necessarily being "fun".
You can see one iteration of this in the “AngryMath Manifesto”, on my math blog (over here). Most mathematicians tend to describe their work as “a play of patterns... a wonderful beauty... a crystalline serenity”. But that's not an accurate representation of our actual work in math – it's trouble, it's a problem to be solved, it's a barrier seeking destruction, and it's the jolt of relief and excitement when the light-bulb clicks on.
Consider the experience when I'm playing drums with my garage-punk band Victor Bravo (blog over here). If I'm weak-minded, I might describe the experience as “fun”, but that's not really truthful. It's hard work, and it's pre-show-anxiety, and it's also the overcoming of all that in myself. It's fast and hard-hitting, it's incredibly precise, and yet it's totally chaotic at the same time, too. We're singing songs about people's failings and disgust and destruction – and the heartfelt desire for things to be better. I'm trying to hit, I'm trying to listen, I'm trying to move my wrists and fingers properly, I'm trying to track what instruments are being damaged, and I'm trying to simply breathe properly. People are jumping and dancing and shoving each other in our mosh pit. Sometimes I'm trying to dodge stuff being thrown at us, and occasionally I'm trying to track how badly I'm being damaged (for example). That's many things, but “fun” is probably the weakest, faintest of all approximations of the experience.
Now let's come back to our gaming hobby, which is all of these things all at once. Whether players or DM, when we're at the table, we're trying to: Solve problems, support our teammates emotionally, improvisationally act out our character personalities, remember rules, crunch probabilities in our heads, decide whether to use our resources now or later, gauge risk-versus-reward, and consider a simulation of near-medieval life and technology. We're trying to manage our own emotions and come back from a bad beat or a difficult situation, and find some way to fight on (!) to victory. We're listening and parsing language and narrative descriptions for meaning. It's a puzzle and math and theater and history and a sporting event all at once. I've seen players go directly from sputtering anger to cheering joy at the roll of a single die or the discovery of a puzzle's solution (and vice-versa).
Is all of that “fun”? No, I don't think so. Much of it is heartache and uncertainty and struggle against overwhelming odds while the game is in progress. And that emotional, intellectual test is so very much more than just “fun”.
So, if “fun” is so miserably incomplete, what would a fuller explanation look like? Obviously, I can't pretend to think that I have a complete answer. But let's say, just for a moment, that we consider Aristotle's famed analysis of “Tragic Poetry” (and if he's not completely right, we can at least take this as a carefully-considered initial hypothesis). He specifies six components: (1) Plot, (2) Character, (3) Thought, (4) Diction, (5) Melody, and (6) Spectacle. Now, isn't that an almost eerily prescient description of our fantasy role-playing games? (And possibly even moreso, good rock'n'roll?) The Plot is meant to contain “reversals, recognition, and suffering... arousing horror, fear, and pity”, and meant to effect catharsis of same; the Characters are expected to “change from happiness to misery because of some tragic mistake” (see here). Is that not a fair description of our games that have no “win” condition for our avatar-selves, but only, ultimately, ways to lose?
(Now, obviously, there's another volume of Aristotle's analysis lost to us over time; but I think it's rather obvious that our RPGs are more like “Drama” than they are “Comedy”.)
Now think: Where is “fun” in this analysis? Has it been overlooked? Is it fundamentally unnecessary? Is it implied by the last and least-important item of “Spectacle”? And are the other rewards from our dramas, from our own Tragic Poetry, not immensely deeper than just “fun”? Let us be courageous and assert all of these deeper aspects, together, and not be satisfied with talking about the merely consolatory notion, which is to say, “fun”.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4371567666530609, "perplexity": 3167.462211022173}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948522343.41/warc/CC-MAIN-20171213084839-20171213104839-00094.warc.gz"}
|
https://www.physicsforums.com/threads/boat-vector-physics-problem.375271/
|
# Boat vector physics problem
1. Feb 4, 2010
### Awsom Guy
1. The problem statement, all variables and given/known data
I have two questions but I need help with the second.
1. A boat can travel at 3.0ms-2 in still water and wishes to travel directly across a river with a current of 1.0ms-1. What direction upstream does the boat need to be steered.
I got the answer - the boat goes at an 18 $$\circ$$ and at an direction which I dont know.
2. If the boat in question 4 heads directly across the river which is 1.0km wide how far downstream will it reach the opposite bank?
2. Relevant equations
Just Vectors.
3. The attempt at a solution
Well I attempted it.
I had a triangle with the 1.0km across another side.
I didn't really get that far.
Thanks
2. Feb 4, 2010
### Matterwave
Re: Vectors
If the boat travels at 3 m/s and the current is 1 m/s that means for every 3 meters the boat travels in the perpendicular direction, the current carries it 1 meter in the parallel direction (relative to the shores).
Can you go from there?
3. Feb 4, 2010
### Awsom Guy
Re: Vectors
wait isn't that for the first question, i need help on the second question please. Maybe I wasn't clear on the question I have metioned it there, the first question is there because the second one belongs to the first.
Thanks.
Similar Discussions: Boat vector physics problem
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5842395424842834, "perplexity": 1265.4680099659777}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187822488.34/warc/CC-MAIN-20171017200905-20171017220905-00346.warc.gz"}
|
https://plainmath.net/11016/tell-whether-the-function-represents-exponential-growth-or-decay-equal
|
# Tell whether the function represents exponential growth or decay. z(x)=47(0.55)^x
Question
Exponential growth and decay
Tell whether the function represents exponential growth or decay. z(x)=47(0.55)^x
2021-01-07
The given function is: $$\displaystyle{z}{\left({x}\right)}={47}{\left({0.55}\right)}^{{x}}$$
Is in the form: $$\displaystyle{y}={a}{\left({b}\right)}^{{x}}$$
Since b=0.55
Then the function represents exponential decay.
### Relevant Questions
Determine whether each function represents exponential growth or exponential decay. Identify the percent rate of change.
$$\displaystyle{g{{\left({t}\right)}}}={2}{\left({\frac{{{5}}}{{{4}}}}\right)}^{{t}}$$
Tell whether the function represents exponential growth or exponential decay. Then graph the function. $$f(x)=(1.5)^{x}$$
Write an exponential growth or decay function to model each situation. Then find the value of the function after the given amount of time. A new car is worth \$25,000, and its value decreases by 15% each year; 6 years.
Determine whether each equation represents exponential growth or exponential decay. Find the rate of increase or decrease for each model. Graph each equation. $$y=5^x$$
Determine whether the function represents exponential growth or exponential decay. Identify the percent rate of change. Then graph the function. $$y=5^x$$
Tell whether the function represents exponential growth or decay. $$\displaystyle{k}{\left({x}\right)}={22}{\left({0.15}\right)}^{{x}}$$
$$\displaystyle{y}={80}{\left({0.25}\right)}^{{{x}}}$$
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8640903830528259, "perplexity": 1073.3960464167442}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487620971.25/warc/CC-MAIN-20210615084235-20210615114235-00208.warc.gz"}
|
https://it.mathworks.com/matlabcentral/answers/600382-why-is-my-code-not-working
|
# Why is my code not working?
2 views (last 30 days)
Michael Vaughan on 26 Sep 2020
Answered: Jim Riggs on 26 Sep 2020
I want the following script to calculate:
$$J_{(2,k)}(N)=\frac{1}{s-s^{-1}}{\sum_{j=\frac{-(N-1)}{2}}^{\frac{(N-1)}{2}}(s^{4j^2k+2jk}s^{4j+1}-s^{4r^2k-2jk}s^{4j-1})}$$
function f = CJ(N,k)
syms t
% compute the running sum
running_sum = 0;
for j = -(N-1)/2:(N-1)/2
running_sum = running_sum + t^(4*j^2*k+2*j*k)*t^(4*j+1)-t^(4*j^2*k-2*j*k)*t^(4*j-1);
end
% final value
f = (1/(t-t^(-1))*running_sum
end
Here is what happens when I try to run the program:
>> CJ(3,2)
Error: File: CJ.m Line: 10 Column: 34
Invalid expression. When calling a function or indexing a variable, use parentheses. Otherwise, check for mismatched
delimiters.
thanks for your help
##### 0 CommentsShowHide -1 older comments
Sign in to comment.
### Answers (1)
Jim Riggs on 26 Sep 2020
Not entirely sure if this is the reference in the error message, but this line has unballanced parentheses:
f = (1/(t-t^(-1))*running_sum
##### 0 CommentsShowHide -1 older comments
Sign in to comment.
### Community Treasure Hunt
Find the treasures in MATLAB Central and discover how the community can help you!
Start Hunting!
Translated by
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17249350249767303, "perplexity": 12252.34205758381}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104655865.86/warc/CC-MAIN-20220705235755-20220706025755-00634.warc.gz"}
|
https://www.arxiv-vanity.com/papers/hep-ph/0606133/
|
# Improved Analysis of J/ψ Decays into a Vector Meson and Two Pseudoscalars
Timo A. Lähde Helmholtz-Institut für Strahlen- und Kernphysik (HISKP), Bonn University,
Nußallee 14-16, D-53115 Bonn, Germany
Ulf-G. Meißner Helmholtz-Institut für Strahlen- und Kernphysik (HISKP), Bonn University,
Nußallee 14-16, D-53115 Bonn, Germany
Forschungszentrum Jülich, Institut für Kernphysik (Th),
D-52425 Jülich, Germany
###### Abstract
Recently, the BES collaboration has published an extensive partial wave analysis of experimental data on , , and . These new results are analyzed here, with full account of detection efficiencies, in the framework of a chiral unitary description with coupled-channel final state interactions between and pairs. The emission of a dimeson pair is described in terms of the strange and nonstrange scalar form factors of the pion and the kaon, which include the final state interaction and are constrained by unitarity and by matching to the next-to-leading-order chiral expressions. This procedure allows for a calculation of the -wave component of the dimeson spectrum including the resonance, and for an estimation of the low-energy constants of Chiral Perturbation Theory, in particular the large suppressed constants and . The decays in question are also sensitive to physics associated with OZI violation in the channel. It is found that the -wave contributions to , and given by the BES partial-wave analysis may be very well fitted up to a dimeson center-of-mass energy of GeV, for a large and positive value of and a value of compatible with zero. An accurate determination of the amount of OZI violation in the decay is achieved, and the -wave contribution to near threshold is predicted.
decays; unitarity; chiral perturbation theory; OZI violation
###### pacs:
13.20.Gd, 12.39.Fe
preprint: HISKP-TH-06/16
## I Introduction
The decays of the into a vector meson such as or , via emission of a pair of light pseudoscalar mesons, may yield insight into the dynamics of the pseudo-Goldstone bosons of QCD Au ; Isgur1 ; Speth , and in particular into the final state interaction (FSI) between and pairs, which is an essential component in a realistic description of the scalar form factors (FFs) of pions and kaons. Additionally, such decays can yield insight into violation of the Okubo-Zweig-Iizuka (OZI) rule OZI ; NC ; Isg in the scalar () sector of QCD, since the leading order contributions to such decays are OZI suppressed. Furthermore, as shown in Fig. 1, a doubly OZI-violating component may contribute to these decays, which was demonstrated already in Refs. UGM1 ; Roca although the data from the DM2 DM2 , MARK-III MK3 and BES BES0 collaborations available at that time had rather low statistics. However, since then the BES collaboration has published far superior data on and BES1 , as well as for and BES2 ; BES3 . Additionally, a comprehensive partial-wave analysis (PWA) has been performed for those data, which is particularly significant since an explicit determination of the -wave and event distributions is thus available. Thus, a much more precise analysis of the issues first touched upon in Ref. UGM1 is clearly called for.
A key ingredient in such an analysis is a realistic treatment of the final state interaction (FSI). It has been demonstrated in Ref. OO1 that the FSI in the system can be well described by a coupled-channel Bethe-Salpeter approach using the lowest order CHPT amplitudes for meson-meson scattering Wein1 ; GL1 ; GL2 . In such an approach, the lowest resonances in the sector are of dynamical origin, i.e. they arise due to the strong rescattering effects in the or system. Such dynamically generated states include the and mesons, which are prominent in the BES data on dimeson emission from the BES1 ; BES2 ; BES3 . It is useful, in view of the controversial nature of the Au ; Isgur1 ; Speth ; UGM2 ; Tqvist ; Rijken ; Jaffe , to recall Ref. OO2 , which generalizes the work of Ref. OO1 . There, explicit -wave resonance exchanges were included together with the lowest order CHPT contributions in a study of the partial wave amplitudes for the whole scalar sector with and . It was noted that the results of Ref. OO1 could be recovered when the explicit tree-level resonance contributions were dropped. The conclusion of Ref. OO2 was that the lowest nonet of scalar resonances, which includes the and states, is of dynamical origin, while a preexisting octet of scalar resonances is present at GeV. It was also noted that the physical state obtains a strong contribution of dynamical origin, and may also receive one from a preexisting singlet state.
This analysis uses the formalism introduced in Ref. UGM1 , where the expressions for the scalar FFs of the pions and kaons were obtained using the results of Ref. OO1 . This allows for a description of the scalar FFs which takes into account the FSI between pions and kaons up to GeV. At higher energies, a number of preexisting scalar resonances such as the have to be accounted for, as well as the effects of multiparticle intermediate states, most importantly the state. These scalar FFs may then be constrained by matching to the next-to-leading-order (NLO) chiral expressions. This allows for a fit of the large suppressed Low-Energy Constants (LECs) and of CHPT to the dimeson spectra of the and decays, using the Lagrangian model of Ref. UGM1 . The amount of direct OZI violation present in these decays may also be accurately estimated. Finally, it should be noted that the present treatment of the FSI has been proven successful in describing, in the same spirit, a wide variety of processes, such as the photon fusion reactions and phot , the decays and gammadec , and more recently the hadronic decays and Ddec .
This paper is organized in the following manner: In Section II the description of the decays is briefly reviewed, along with the FSI in the system, as applied to the scalar FFs of the pseudo-Goldstone bosons. Some minor corrections to the NLO CHPT expressions in Ref. UGM1 are also pointed out. Section III describes the analysis of the experimental BES data, along with a discussion of the fitted parameter values, with emphasis on the LECs of CHPT and the evidence for OZI violation. In Section IV, the results are summarized along with a concluding discussion.
## Ii Theoretical Framework
The theoretical tools required for the calculation of the scalar FFs of the pseudo-Goldstone bosons using CHPT and unitarity constraints have, as discussed in the Introduction, already been extensively treated in the existing literature, and therefore only the parts directly relevant to the present analysis are repeated here. For convenience, the NLO expressions for the scalar FFs of the pseudo-Goldstone bosons are explicitly given here. Also, the expressions for the scalar FFs of the pion in Ref. UGM1 require a minor correction111The scalar FFs of the pion in Ref. UGM1 require a minor correction in the values at . The authors thank J. Bijnens and J.A. Oller for their assistance in pinpointing this. The effect on the numerics of Ref. UGM1 is negligible., such that an updated version is called for. A derivation of the scalar FFs using unitarity and the methods of Ref. OO1 is given in Ref. UGM1 , and introductions to CHPT can e.g. be found in Ref. ChPT .
### ii.1 Amplitude for ππ and K¯K emission
This work makes use of the SU(3) and Lorentz invariant Lagrangian of Ref. UGM1 to describe the decay of a into a pair of pseudoscalar mesons and a light vector meson. This Lagrangian can be written as
L = gΨμ(⟨Vμ8Σ8⟩+νVμ1Σ1), (1)
where the and denote the lowest octet and singlet of vector meson resonances. Similarly, the and refer to the corresponding sets of scalar sources, as defined in Ref. UGM1 . In the above equation, denotes a coupling constant, the precise value of which is not required in the present analysis, while the real parameter will be shown to play the role of an OZI violation parameter in the channel. Furthermore, the angled brackets in Eq. (1) denote the trace with respect to the SU(3) indices of the matrices and . Evaluation of that trace yields
L = gΨμ(Vμ8S8+νVμ1Σ1+⋯), (2)
where only the terms relevant for the present considerations have been written out. Here denotes the state of the octet of vector meson resonances, while again refers to the corresponding operator in the matrix of scalar sources. The and fields, along with the associated scalar sources and are then introduced according to
V8=ω√3−√23ϕ,S8=Sω√3−√23Sϕ, (3) V1=√23ω+ϕ√3,Σ1=√23Sω+Sϕ√3, (4)
which corresponds to the ansatz of ideal mixing between and . The departure from this situation is reviewed, using different models, in Ref. phimix . The amount of deviation from ideal mixing in the system has been estimated UGM1 to influence, in an analysis of the present kind, the determination of the magnitude of the OZI violation at the level. This should be compared with the expected departure from unity UGM1 of the parameter in Eq. (2). In view of this, the relations given in Eqs. (3) and (4) will be considered adequate for the present analysis.
The scalar sources and may, in terms of a quark model description, be expressed as and . By means of these relations and Eqs. (3) and (4), the Lagrangian of Eq. (2) may be written in the form
L = ΨμϕμCϕ(ν)[¯ss+λϕ(ν)¯nn] (5) + ΨμωμCω(ν)[¯ss+λω(ν)¯nn],
where the coupling constant is taken, as further elaborated in Sect. II.2, to be absorbed into and . These and the in Eq. (5) are given in terms of the parameter according to
λϕ=√2(ν−1)2+ν,Cϕ=2+ν3, (6) λω=1+2ν√2(ν−1),Cω=√2(ν−1)3, (7)
which shows that the parameters for the decay operator can be expressed in terms of those which control the decay. The explicit relations are given by
Cω=λϕCϕ,λω=λϕ+√2√2λϕ. (8)
From now on, the dependence of the and on will be suppressed. The quantities to be determined from fits to the experimental dimeson spectra of Refs. BES1 ; BES2 ; BES3 are taken to be and . It is worth noting that the limit corresponds to the value , and in that case the dimeson spectra for and decays are driven entirely by the strange scalar source and the nonstrange scalar source , respectively.
From the Lagrangian in Eq. (5), the matrix elements for and decay of the are given by
Mππϕ =√23Cϕ⟨0|(¯ss+λϕ¯nn)|ππ⟩∗I=0, (9) MKKϕ =√12Cϕ⟨0|(¯ss+λϕ¯nn)|K¯K⟩∗I=0, (10)
in terms of the and states with , which are related to the physical and states by the Clebsch-Gordan (CG) coefficients in front of the above expressions. It should be noted that in Ref. UGM1 , the CG-coefficient for decay was incorrectly written as . The full transition amplitudes also contain the polarization vectors of the and mesons, which have not been included in the above definitions. They introduce an additional, weakly energy dependent factor which is given explicitly in Sect. II.2. The matrix elements for and , may be obtained by replacement of the labels in Eqs. (9) and (10) according to . The matrix elements of the scalar sources are given in terms of the scalar FFs which are discussed in Sect. II.3.
### ii.2 Decay rates and dimeson event distributions
The differential decay rate of a into a vector meson and a pair of pseudoscalar mesons is, for the case of decay, given by
dΓdWππ=Wππ|→pϕ||→pπ|4M3J/ψ(2π)3Fpol|Mππϕ|2, (11)
where . The decay rates for the other combinations of and mesons and , final states can be obtained by appropriate replacement of the indices in Eq. (11). The moduli of the and momenta in Eq. (11) are given by
|→pϕ| = √E2ϕ−m2ϕ,Eϕ=M2J/ψ−W2ππ−m2ϕ2Wππ, (12) |→pπ| = √E2π−m2π,Eπ=Wππ/2, (13)
in the rest frame of the system. The factor in Eq. (11) depends on the dipion energy and is generated by properly averaging and summing over the polarizations of the and mesons, respectively. It may be expressed as
Fpol ≡ 13∑ρ,ρ′εμ(ρ)εμ(ρ′)ε∗ν(ρ)εν∗(ρ′) (14) = 23⎡⎢ ⎢ ⎢⎣1+(M2J/ψ+M2ϕ−W2ππ)28M2J/ψM2ϕ⎤⎥ ⎥ ⎥⎦.
Again, it should be noted that the corresponding expressions for for the other decay channels considered can be obtained straightforwardly from Eq. (14) by the substitutions and .
The results for decay into a vector meson and a dimeson pair published by the BES collaboration are given in terms of event distributions as a function of the dimeson center-of-mass energy . The relation between the differential event distribution and the decay rate given by Eq. (11) is defined to be
dNdWππ ∼ η(Wππ)dΓdWππ, (15)
where the function represents the detection efficiency, shown in Fig. 3, which also takes into account the effects of the various cuts imposed on the data in order to reduce the background to an acceptable level. It should be noted that the detection efficiencies cannot be neglected, since it is evident from Fig. 3 that a sizeable difference exists between the efficiencies for all four decays considered. Furthermore, the detection efficiencies exhibit significant nonlinear behavior.
The overall constant of proportionality in Eq. (15) is not relevant to the present analysis, since it may be absorbed in the definitions of and , along with the coupling constant of Eq. (2) and a factor from the scalar FFs in Eq. (19). Thus, the quantities with and the proportionality factor absorbed are denoted by and . While and are dimensionless, has dimension [].
### ii.3 Scalar Form Factors from CHPT
The nonstrange and strange scalar FFs of the pseudo-Goldstone bosons of CHPT are defined in terms of the -wave states with ,
|ππ⟩I=0 =1√3∣∣π+π−⟩+1√6∣∣π0π0⟩, (16) |K¯K⟩I=0 =1√2∣∣K+K−⟩+1√2∣∣K0¯K0⟩, (17) |ηη⟩I=0 =1√2|ηη⟩, (18)
where denotes the symmetrized combination of and . Following the conventions of Refs. Pel1 ; Pel2 ; Pel3 , an extra factor of has been included for the states composed of members of the same isospin multiplet. This takes conveniently into account the fact that the pions behave as identical particles in the isospin basis. In terms of the above states, the scalar FFs for the , and mesons are defined as
√2B0Γn1(s) = ⟨0|¯nn|ππ⟩I=0, (19) √2B0Γn2(s) = ⟨0|¯nn|K¯K⟩I=0, √2B0Γn3(s) = ⟨0|¯nn|ηη⟩I=0,
where the notation ( = 1, = 2, = 3) has been introduced for simplicity. The expressions for the strange scalar FFs may be obtained by the substitutions and . As discussed above, the expressions given in Ref. UGM1 are updated here with minor corrections to and . With these definitions, the scalar FFs may be expressed in terms of the meson loop function GL1 and the tadpole factor , given in Eqs. (31) and (33) respectively. The expressions so obtained up to NLO in CHPT are, in agreement with Refs. GL1 ; HB1a ; HB1b ,
Γn1(s)= √32[1+μπ−μη3+16m2πf2(2Lr8−Lr5)+8(2Lr6−Lr4)2m2K+3m2πf2+8sf2Lr4+4sf2Lr5 (20) +(2s−m2π2f2)Jrππ(s)+s4f2JrKK(s)+m2π18f2Jrηη(s)], Γs1(s)= √32[16m2πf2(2Lr6−Lr4)+8sf2Lr4+s2f2JrKK(s)+29m2πf2Jrηη(s)], (21)
for the pion, and
Γn2(s)= 1√2[1+8Lr4f2(2s−m2π−6m2K)+4Lr5f2(s−4m2K)+16Lr6f2(6m2K+m2π)+32Lr8f2m2K+23μη (23) +(9s−8m2K36f2)Jrηη(s)+3s4f2JrKK(s)+3s4f2Jrππ(s)], Γs2(s)= 1+8Lr4f2(s−m2π−4m2K)+4Lr5f2(s−4m2K)+16Lr6f2(4m2K+m2π)+32Lr8f2m2K+23μη +(9s−8m2K18f2)Jrηη(s)+3s4f2JrKK(s),
for the kaon. Finally, for the one finds
Γn3(s)= 12√3[1+24Lr4f2(s+13m2π−103m2K)+4Lr5f2(s+43m2π−163m2K)+16Lr6f2(10m2K−m2π) (25) +128Lr7f2(m2π−m2K)+32Lr8f2m2π−μη3+4μK−3μπ+(16m2K−7m2π18f2)Jrηη(s) +(9s−8m2K4f2)JrKK(s)+32m2πf2Jrππ(s)], Γs3(s)= 23[1+6Lr4f2(s−23m2π−163m2K)+4Lr5f2(s+43m2π−163m2K)+8Lr6f2(8m2K+m2π) +(9s−8m2K8f2)JrKK(s)].
### ii.4 Matching of FSI to NLO CHPT
The constraints imposed by unitarity on the pion and kaon scalar FFs, the inclusion of the FSI via resummation in terms of the Bethe-Salpeter (BS) equation, the channel coupling between the and systems, and the matching of the scalar FFs to the NLO CHPT expressions have all been elaborated in great detail in Refs. UGM1 ; OO1 , and will thus be only briefly touched upon here. Within that framework, consideration of the unitarity constraints yields a scalar FF in terms of the algebraic coupled-channel equation
Γ(s) = [I+K(s)g(s)]−1R(s) = [I−K(s)g(s)]R(s)+O(p6),
where in the second line, the equation has been expanded up to NLO, the NNLO contribution defined as being of in the chiral expansion. This expansion is instructive since it allows for the integrals from the NLO scalar FF expressions to be absorbed into the above equation. Here denotes the kernel of -wave projected meson-meson scattering amplitudes from the leading order chiral Lagrangian. Using the notation defined in Sect. II.3, they are given by
K11=2s−m2π2f2,K12=K21=√3s4f2, (27) K22=3s4f2,
where the constant is taken to equal the pion decay constant , with the convention GeV. The components given above are sufficient for the two-channel formalism of Ref. UGM1 used in this paper, where only the FSI in the and channels is considered. The chiral logarithms associated with the channel can thus not be reproduced by Eq. (II.4) and are therefore removed from the chiral expressions given in Sect. II.3, while the contribution of that channel to the values of the form factors at is retained. For completeness, it should be noted that if the channel is also included, then the matrix should be augmented by the elements
K13=K31=m2π2√3f2,K33=16m2K−7m2π18f2, K23=K32=9s−8m2K12f2. (28)
The elements of the diagonal matrix in Eq. (II.4) are given by the cutoff-regularized loop integral
gi(s) = 116π2⎧⎪ ⎪ ⎪ ⎪⎨⎪ ⎪ ⎪ ⎪⎩σi(s)log⎛⎜ ⎜ ⎜ ⎜⎝σi(s)√1+m2iq2max+1σi(s)√1+m2iq2max−1⎞⎟ ⎟ ⎟ ⎟⎠ (29) − 2log⎡⎢⎣qmaxmi⎛⎜⎝1+ ⎷1+m2iq2max⎞⎟⎠⎤⎥⎦⎫⎪⎬⎪⎭,
where
σi(s)=√1−4m2is, (30)
and denotes a three-momentum cutoff, which has to be treated as an a priori unknown model parameter, which is however expected to be of the order of GeV. Since the above expressions are calculated in a cutoff-regularization scheme, it is useful to note that within the modified subtraction scheme commonly employed in CHPT, the meson loop function is given by
Jrii(s) ≡ 116π2[1−log(m2iμ2)−σi(s)log(σi(s)+1σi(s)−1)] (31) = −gi(s),
for which it has been shown in App. 2 of Ref. Pel1 that an optimal matching between the two renormalization schemes requires that
μ=2qmax√e, (32)
in which case the differences between the two forms are of . Furthermore, the expressions for the logarithms generated by the chiral tadpoles in the NLO scalar FFs are given by
μi=m2i32π2f2log(m2iμ2). (33)
As demonstrated in Ref. UGM1 , the quantity in Eq. (II.4) is a vector of functions free of any cuts or singularities, since the right-hand or unitarity cut has been removed by construction. The information provided by CHPT can then be built into the formalism by fixing to the NLO CHPT expressions for the scalar FFs. Consideration of Eq. (II.4) yields the defining relations
Γni(s)=Rni(s)−2∑j=1Kij(s)gj(s)Rnj(s)+O(p6), (34)
where it is understood that only contributions up to are to be retained in the product . The analogous expressions for the vectors associated with the strange scalar FFs can be obtained from the above relations by the substitutions and . The above procedure is equivalent to the intuitive result obtained by dropping, in the expressions for the , all occurrences of the loop integrals and , and keeping only the parts of the which do not depend on . Nevertheless, the explicit evaluation of Eq. (34) provides a useful check on the consistency of the normalization used for the NLO scalar FFs and the LO meson-meson interaction kernel . It should also be noted that the expressions for and correspond to the corrected scalar FFs, as explained in the beginning of Sect. II. The expressions for the so obtained are
Rn1(s)= √32{1+μπ−μη3+16m2πf2(2Lr8−Lr5)+8(2Lr6−Lr4)2m2K+3m2πf2+8sf2Lr4+4sf2Lr5 (35) −m2π288π2f2[1+log(m2ημ2)]}, Rs1(s)= √32{16m2πf2(2Lr6−Lr4)+8sf2Lr4−m2π72π2f2[1+log(m2ημ2)]}, (36)
for the pion, and for the kaon one finds
Rn2(s)= 1√2{1+8Lr4f2(2s−6m2K−m2π)+4Lr5f2(s−4m2K)+16Lr6f2(6m2K+m2π)+32Lr8f2m2K+23μη (38) +m2K72π2f2[1+log(m2ημ2)]}, Rs2(s)= 1+8Lr4f2(s−4m2K−m2π)+4Lr5f2(s−4m2K)+16Lr6f2(4m2K+m2π)+32Lr8f2m2K+23μη +m2K36π2f2[1+log(m2ημ2)].
The expressions for the and are valid when only the and channels are considered in the FSI. On the other hand, if the full three-channel interaction kernel of Eq. (27) is used, then the above equations should be modified such that the terms in square brackets are dropped. With respect to the omission of the channel, it was noted in Ref. OO2 that reproduction of the data on the inelastic cross section requires the addition of a preexisting contribution to the if the channel is included. On the other hand, no such contribution was found to be necessary if the channel is dropped. Furthermore, the effect of this channel is known OO2 to be very small for energies less than GeV.
It should be stressed here, with respect to the above mentioned issues, that the main concern in the present analysis is the use of a meson-meson interaction kernel which is known to give a realistic description of the phase shift close to the threshold. Since none of the adjustable model parameters have any influence on the behavior of the phase shift, any model which has the proper chiral behavior for low energies and faithfully reproduces the should therefore give similar results. In view of this, the inclusion or omission of the channel, or the question of a preexisting contribution to the resonance, are issues of secondary importance. Nevertheless, the uncertainties introduced by these issues into the determination of the are discussed in Sect. III. Finally, it should be noted that in order to minimize the dependence on , the Gell-Mann - Okubo (GO) relation has been applied throughout in the polynomial terms of the and the .
## Iii Fits to BES Data
The event distributions given by Eq. (15) can be simultaneously fitted to the dimeson spectra in the , and channels. The parameters to be determined via a fit are the LECs and which influence the scalar FFs, as well as the model parameters and . Due to the accuracy of the BES data BES1 ; BES2 ; BES3 , all of the model parameters can be well constrained, which is especially true for and , while the sensitivity of the fit to and turns out to be somewhat less. Once all the model parameters are determined by a fit to the three decay channels mentioned above, the event distribution in the remaining channel in essentially fixed. This channel is thus not included in the fit and is treated instead as a prediction. To a large extent, this also turns out to be true for the shape of the fitted distribution.
In spite of the above mentioned positive issues, the fitting of the predicted event distribution to the -wave contribution from the PWA of the BES collaboration is complicated by several issues: Firstly, the detection efficiencies, determined by BES via Monte Carlo simulation and shown in Fig. 3, are different for each decay channel, and furthermore vary appreciably over the range of dimeson energies considered. Secondly, the -wave contribution in the BES PWA cannot be considered as strict experimental information, since it is inevitably biased to some extent by the parameterizations chosen there for the pole. Thirdly, an unambiguous fit requires a highly precise description of the resonance generated by the FSI, which to a large extent cannot be adjusted in the present model. All of these issues, as well as the fitted parameter values and the associated error analysis are elaborated in detail below.
### iii.1 Definitions
In order for the fit results to be well reproducible, the various constant parameters which enter the expressions for the decay amplitudes should be accurately defined. These parameters include the masses of the light pseudoscalar mesons, for which the current experimental values PDG have been used. These are GeV, GeV and GeV, while the value GeV has been adopted for the pion decay constant. The physical masses of the charged pions and kaons have been used in order for the and thresholds to coincide with the physical ones. Also, the physical meson mass has been used rather than the one given by the GO relation. The meson mass appears in relatively few places in the expressions, and checks on the fits have indicated that replacement of the physical mass with that given by the GO relation has a minimal effect. Further parameters are the masses of the vector mesons and . The mass is used in the evolution of the , and has been taken as GeV. The other vector meson masses appear in various phase-space factors, and have been given the values GeV, GeV and GeV PDG .
It is not a priori obvious how the individual deviations, required for the fit, are to be treated for the BES data. The individual deviations for each bin , are given by the BES collaboration as the square root of the number of events . These numbers represent the statistical errors of the raw data, uncorrected for detection efficiency and background. Furthermore, what is fitted in the present analysis is not the total signal detected by the BES experiment, but rather the -wave contribution from the accompanying PWA. In view of these considerations, the deviations for each bin used in the fitting procedure have been taken to be of the form
Δi≡√Niwi, (39)
where the represent weighting factors which have been chosen in a physically motivated way. In principle, individual could be introduced in all the decay channels studied and would then represent a “quality factor” for each bin. In practice, to avoid excessive fine-tuning of the fit, a constant value for has been applied for each decay channel, according to the following principles: Since the -wave contribution in the spectrum is likely to have the largest uncertainty, a value of has been adopted for that decay channel, whereas the values of for the
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9478265047073364, "perplexity": 911.7815563787944}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991269.57/warc/CC-MAIN-20210516105746-20210516135746-00368.warc.gz"}
|
https://www.arxiv-vanity.com/papers/0708.0047/
|
arXiv Vanity renders academic papers from arXiv as responsive web pages so you don’t have to squint at a PDF. Read this paper on arXiv.org.
# High Energy Scattering In Quantum Chromodynamics111Lectures given at the Xth Hadron Physics Workshop, March 2007, Florianopolis, Brazil.
FRANCOIS GELIS Theory Division, PH-TH, Case C01600, CERN,
CH-1211 Geneva 23, Switzerland
TUOMAS LAPPI RAJU VENUGOPALAN Brookhaven National Laboratory, Physics Department
Upton, NY-11973, USA
,
###### Abstract
In this series of three lectures, we discuss several aspects of high energy scattering among hadrons in Quantum Chromodynamics. The first lecture is devoted to a description of the parton model, Bjorken scaling and the scaling violations due to the evolution of parton distributions with the transverse resolution scale. The second lecture describes parton evolution at small momentum fraction , the phenomenon of gluon saturation and the Color Glass Condensate (CGC). In the third lecture, we present the application of the CGC to the study of high energy hadronic collisions, with emphasis on nucleus-nucleus collisions. In particular, we provide the outline of a proof of high energy factorization for inclusive gluon production.
\catchline
Preprint CERN-PH-TH/2007-131
## 1 Introduction
Quantum Chromodynamics (QCD) is very successful at describing hadronic scatterings involving very large momentum transfers. A crucial element in these successes is the asymptotic freedom of QCD [1], that renders the coupling weaker as the momentum transfer scale increases, thereby making perturbation theory more and more accurate. The other important property of QCD when comparing key theoretical predictions to experimental measurements is the factorization of the short distance physics which can be computed reliably in perturbation theory from the long distance strong coupling physics related to confinement. The latter are organized into non-perturbative parton distributions, that depend on the scales of time and transverse space at which the hadron is resolved in the process under consideration. In fact, QCD not only enables one to compute the perturbative hard cross-section, but also predicts the scale dependence of the parton distributions.
A generic issue in the application of perturbative QCD to the study of hadronic scatterings is the occurrence of logarithmic corrections in higher orders of the perturbative expansion. These logarithms can be large enough to compensate the extra coupling constant they come accompanied with, thus voiding the naive, fixed order, application of perturbation theory. Consider for instance a generic gluon-gluon fusion process, as illustrated on the left of figure 1, producing a final state of momentum . The two gluons have longitudinal momentum fractions given by
x1,2=M⊥√se±Y, (1)
where ( is the invariant mass of the final state) and . On the right of figure 1 is represented a radiative correction to this process, where a gluon is emitted from one of the incoming lines. Roughly speaking, such a correction is accompanied by a factor
αs∫x1dzz∫M⊥d2k⊥k2⊥, (2)
where is the momentum fraction of the gluon before the splitting, and its transverse momentum. Such corrections produce logarithms, and , that respectively become large when is small or when is large compared to typical hadronic mass scales. These logarithms tell us that parton distributions must depend on the momentum fraction and on a transverse resolution scale , that are set by the process under consideration. In the linear regime222We use the denomination “linear” here to distinguish it from the saturation regime discussed later that is characterized by non-linear evolution equations., there are “factorization theorems” – -factorization [2] in the first case and collinear factorization [3] in the second case -- that tell us that the logarithms are universal and can be systematically absorbed in the definition of parton distributions 333The latter is currently more rigorously established than the former.. The dependence that results from resumming the logarithms of is taken into account by the BFKL equation [4]. Similarly, the dependence on the transverse resolution scale is accounted for by the DGLAP equation [5].
The application of QCD is a lot less straightforward for scattering at very large center of mass energy, and moderate momentum transfers. This kinematics in fact dominates the bulk of the cross-section at collider energies. A striking example of this kinematics is encountered in Heavy Ion Collisions (HIC), when one attempts to calculate the multiplicity of produced particles. There, despite the very large center of mass energy444At RHIC, center of mass energies range up to GeV/nucleon; the LHC will collide nuclei at TeV/nucleon., typical momentum transfers are small555For instance, in a collision at GeV between gold nuclei at RHIC, 99% of the multiplicity comes from hadrons whose is below 2 GeV., of the order of a few GeVs at most. In this kinematics, two phenomena that become dominant are
• Gluon saturation : the linear evolution equations (DGLAP or BFKL) for the parton distributions implicitly assume that the parton densities in the hadron are small and that the only important processes are splittings. However, at low values of , the gluon density may become so large that gluon recombinations are an important effect.
• Multiple scatterings : processes involving more than one parton from a given projectile become sizeable.
It is highly non trivial that this dominant regime of hadronic interactions is amenable to a controlled perturbative treatment within QCD, and the realization of this possibility is a major theoretical advance in the last decade. The goal of these three lectures is to present the framework in which such calculations can be carried out.
In the first lecture, we will review key aspects of the parton model. Our recurring example will be the Deep Inelastic Scattering (DIS) process of scattering a high energy electron at high momentum transfers off a proton. Beginning with the inclusive DIS cross-section, we will arrive at the parton model (firstly in its most naive incarnation, and then within QCD), and subsequently at the DGLAP evolution equations that control the scaling violations measured experimentally.
In the second lecture, we will address the evolution of the parton model to small values of the momentum fraction and the saturation of the gluon distribution. After illustrating the tremendous simplification of high energy scattering in the eikonal limit, we will derive the BFKL equation and its non-linear extension, the BK equation. We then discuss how these evolution equations arise in the Color Glass Condensate effective theory. We conclude the lecture with a discussion of the close analogy between the energy dependence of scattering amplitudes in QCD and the temporal evolution of reaction-diffusion processes in statistical mechanics.
The third lecture is devoted to the study of nucleus-nucleus collisions at high energy. Our main focus is the study of bulk particle production in these reactions within the CGC framework. After an exposition of the power counting rules in the saturated regime, we explain how to keep track of the infinite sets of diagrams that contribute to the inclusive gluon spectrum. Specifically, we demonstrate how these can be resummed at leading and next-to-leading order by solving classical equations of motion for the gauge fields The inclusive quark spectrum is discussed as well. We conclude the lecture with a discussion of the inclusive gluon spectrum at next-to-leading order and outline a proof of high energy factorization in this context. Understanding this factorization may hold the key to understanding early thermalization in heavy ion collisions. Some recent progress in this direction is briefly discussed.
## 2 Lecture I : Parton model, Bjorken scaling, scaling violations
In this lecture, we will begin with the simple parton model and develop the conventional Operator Product Expansion (OPE) approach and the associated DGLAP evolution equations. To keep things as simple as possible, we will use Deep Inelastic Scattering to illustrate the ideas in this lecture.
### 2.1 Kinematics of DIS
The basic idea of Deep Inelastic Scattering (DIS) is to use a well understood lepton probe (that does not involve strong interactions) to study a hadron. The interaction is via the exchange of a virtual photon666If the virtuality of the photon is small (in photo-production reactions for instance), the assertion that the photon is a “well known probe that does not involve strong interactions” is not valid anymore. Indeed, the photon may fluctuate, for instance, into a meson.. Variants of this reaction involve the exchange of a or boson which become increasingly important at large momentum transfers. The kinematics of DIS is characterized by a few Lorentz invariants (see figure 2 for the notations), traditionally defined as
ν ≡ P⋅q s ≡ (P+k)2 M2X ≡ (P+q)2=m2N+2ν+q2, (3)
where is the nucleon mass (assuming that the target is a proton) and the invariant mass of the hadronic final state. Because the exchanged photon is space-like, one usually introduces , and also . Note that since , we must have – the value being reached only in the case where the proton is scattered elastically.
The simplest cross-section one can measure in a DIS experiment is the total inclusive electron+proton cross-section, where one sums over all possible hadronic final states :
E′dσe−Nd3k′=∑states XE′dσe−N→e−Xd3k′. (4)
The partial cross-section associated to a given final state can be written as
E′dσe−N→e−Xd3k′=∫[dΦX]32π3(s−m2N)(2π)4δ(P+k−k′−PX)⟨∣∣MX∣∣2⟩spin, (5)
where denotes the invariant phase-space element for the final state and is the corresponding transition amplitude. The “spin” symbol denotes an average over all spin polarizations of the initial state and a sum over those in the final state. The transition amplitude is decomposed into an electromagnetic part and a hadronic matrix element as
MX=ieq2[¯¯¯u(k′)γμu(k)]⟨X∣∣Jμ(0)∣∣N(P)⟩. (6)
In this equation is the hadron electromagnetic current that couples to the photon, and denotes a state containing a nucleon of momentum .
Squaring this amplitude and collecting all the factors, the inclusive DIS cross-section can be expressed as
E′dσe−Nd3k′=132π3(s−m2N)e2q44πLμνWμν, (7)
where the leptonic tensor (neglecting the electron mass) is
Lμν ≡ ⟨¯¯¯u(k′)γμu(k)¯¯¯u(k)γνu(k′)⟩spin (8) = 2(kμk′ν+kνk′μ−gμνk⋅k′).
and – the hadronic tensor – is defined as
4πWμν ≡ ∑states X∫[dΦX](2π)4δ(P+q−PX) (9) ×⟨⟨N(P)∣∣J†ν(0)∣∣X⟩⟨X∣∣Jμ(0)∣∣N(P)⟩⟩spin = ∫d4yeiq⋅y⟨⟨N(P)∣∣J†ν(y)Jμ(0)∣∣N(P)⟩⟩spin.
The second equality is obtained using the complete basis of hadronic states . Thus, the hadronic tensor is the Fourier transform of the expectation value of the product of two currents in the nucleon state. An important point is that this object cannot be calculated by perturbative methods. This rank-2 tensor can be expressed simply in terms of two independent structure functions as a consequence of
• Conservation of the electromagnetic current :
• Parity and time-reversal symmetry :
• Electromagnetic currents conserve parity : the Levi-Civita tensor cannot appear777This property is not true in DIS reactions involving the exchange of a weak current; an additional structure function is needed in this case. in the tensorial decomposition of
When one works out these constraints, the most general tensor one can construct from and reads :
Wμν=−F1(gμν−qμqνq2)+F2P⋅q(Pμ−qμP⋅qq2)(Pν−qνP⋅qq2), (10)
where are the two structure functions888The structure function differs slightly from the defined in [6] : .. As scalars, they only depend on Lorentz invariants, namely, the variables and . The inclusive DIS cross-section in the rest frame of the proton can be expressed in terms of as
dσe−NdE′dΩ=α2em4mNE2sin4(θ/2)[2F1sin2θ2+m2NνF2cos2θ2], (11)
where represents the solid angle of the scattered electron and its energy.
### 2.2 Experimental facts
Two major experimental results from SLAC [7] in the late 1960’s played a crucial role in the development of the parton model.
The left plot of figure 3 shows the measured values of as a function of . Even though the data covers a significant range in , all the data points seem to line up on a single curve, indicating that depends very little on in this regime. This property is now known as Bjorken scaling [8]. In the right plot of figure 3, one sees a comparison of with the combination999, the longitudinal structure function, describes the inclusive cross-section between the proton and a longitudinally polarized proton. . Although there are few data points for , one can see that it is significantly lower than and close to zero 101010From current algebra, it was predicted that ; this relation is known as the Callan-Gross relation [9].. As we shall see shortly, these two experimental facts already tell us a lot about the internal structure of the proton.
### 2.3 Naive parton model
In order to get a first insight into the inner structure of the proton, it is interesting to compare the DIS cross-section in eq. (11) and the cross-section (also expressed in the rest frame of the muon),
dσe−μ−dE′dΩ=α2emδ(1−x)4mμE2sin4θ2[sin2θ2+m2μνcos2θ2]. (12)
Note that, since this reaction is elastic, the corresponding variable is equal to , hence the delta function in the prefactor. The comparison of this formula with eq. (11), and in particular its angular dependence, is suggestive of the proton being composed of point like fermions – named partons by Feynman – off which the virtual photon scatters. If the constituent struck by the photon carries the momentum , this comparison suggests that
2F1∼F2∼δ(1−xc)withxc≡Q22q⋅pc. (13)
Assuming that this parton carries the fraction of the momentum of the proton, i.e. , the relation between the variables and is . Therefore, we get :
2F1∼F2∼xFδ(x−xF). (14)
In other words, the kinematical variable measured from the scattering angle of the electron would be equal to the fraction of momentum carried by the struck constituent. Note that Bjorken scaling appears quite naturally in this picture.
Having gained intuition into what may constitute a proton, we shall now compute the hadronic tensor for the DIS reaction on a free fermion carrying the fraction of the proton momentum. Because we ignore interactions for the time being, this calculation (in contrast to that for a proton target) can be done in closed form. We obtain,
4πWμνi ≡ ∫d4p′(2π)42πδ(p′2)(2π)4δ(xFP+q−p′) ×⟨⟨xFP∣∣Jμ†(0)∣∣p′⟩⟨p′∣∣Jν(0)∣∣xFP⟩⟩spin = 2πxFδ(x−xF)e2i[−(gμν−qμqνq2)+2xFP⋅q(Pμ−qμP⋅qq2)(Pν−qνP⋅qq2)],
where is the electric charge of the parton under consideration. Let us now assume that in a proton there are partons of type with a momentum fraction between and , and that the photon scatters incoherently off each of them. We would thus have
Wμν=∑i∫10dxFxFfi(xF)Wμνi. (15)
(The factor in the denominator is a “flux factor”.) At this point, we can simply read the values of ,
F1=12∑ie2ifi(x),F2=2xF1. (16)
We thus see that the two experimental observations of i) Bjorken scaling and ii) the Callan-Gross relation are automatically realized in this naive picture of the proton111111In particular, in this model is intimately related to the spin structure of the scattered partons. Scalar partons, for instance, would give , at variance with experimental results..
Despite its success, this model is quite puzzling, because it assumes that partons are free inside the proton – while the rather large mass of the proton suggests a strong binding of these constituents inside the proton. Our task for the rest of this lecture is to study DIS in a quantum field theory of strong interactions, thereby turning the naive parton model into a systematic description of hadronic reactions. Before we proceed further, let us describe in qualitative terms (see [10] for instance) what a proton constituted of fermionic constituents bound by interactions involving the exchange of gauge bosons may look like.
In the left panel of figure 4 are represented the three valence partons (quarks) of the proton. These quarks interact by gluon exchanges, and can also fluctuate into states that contain additional gluons (and also quark-antiquark pairs). These fluctuations can exist at any space-time scale smaller than the proton size ( 1 fermi). (In this picture, one should think of the horizontal axis as the time axis.) When one probes the proton in a scattering experiment, the probe (e.g. the virtual photon in DIS) is characterized by certain resolutions in time and in transverse coordinate. The shaded area in the picture is meant to represent the time resolution of the probe : any fluctuation which is shorter lived than this resolution cannot be seen by the probe, because it appears and dies out too quickly.
In the right panel of figure 4, the same proton is represented after a boost, while the probe has not changed. The main difference is that all the internal time scales are Lorentz dilated. As a consequence, the interactions among the quarks now take place over times much larger than the resolution of the probe. The probe therefore sees only free constituents. Moreover, this time dilation allows more fluctuations to be resolved by the probe; thus, a high energy proton appears to contain more gluons than a proton at low energy121212Equivalently, if the energy of the proton is fixed, there are more gluons at lower values of the momentum fraction ..
### 2.4 Bjorken scaling from free field theory
We will now derive Bjorken scaling and the Callan-Gross relation from quantum field theory. We will consider a theory involving fermions (quarks) and bosons (gluons), but shall at first consider the free field theory limit by neglecting all their interactions. We will consider a kinematical regime in DIS that involves a large value of the momentum transfer and of the center of mass energy of the collision, while the value of is kept constant. This limit is known as the Bjorken limit.
To appreciate strong interaction physics in the Bjorken limit, consider a frame in which the 4-momentum of the photon can be written as
qμ=1mN(ν,0,0,√ν2+m2NQ2). (17)
From the combinations of the components of
q+≡q0+q3√2∼νmN→+∞ q−≡q0−q3√2∼mNx→constant, (18)
and because , the integration over in is dominated by
y−∼mNν→0,y+∼(mNx)−1. (19)
Therefore, the invariant separation between the points at which the two currents are evaluated is . Noting that in eq. (9) the product of the two currents can be replaced by their commutator, and recalling that expectation values of commutators vanish for space-like separations, we also see that . Thus, the Bjorken limit corresponds to a time-like separation between the two currents, with the invariant separation going to zero, as illustrated in figure 5.
It is important to note that in this limit, although the invariant goes to zero, the components of do not necessarily become small. This will have important ramifications when we apply the Operator Product Expansion to .
For our forthcoming discussion, consider the forward Compton amplitude
4πTμν≡i∫d4yeiq⋅y⟨⟨N(P)∣∣T(J†μ(y)Jν(0))∣∣N(P)⟩⟩spin. (20)
It differs from by the fact that the two currents are time-ordered, and as illustrated in figure 6, one can recover from its imaginary part,
Wμν=2ImTμν. (21)
At fixed , is analytic in the variable , except for two cuts on the real axis that start at . The cut at positive corresponds to the threshold above which the DIS reaction becomes possible, and the cut at negative can be inferred from the fact that is unchanged under the exchange . It is also possible to decompose the tensor in terms of two structure functions :
Tμν=−T1(gμν−qμqνq2)+T2P⋅q(Pμ−qμP⋅qq2)(Pν−qνP⋅qq2), (22)
and the DIS structure functions can be expressed in terms of the discontinuity of across the cuts.
We now remind the reader of some basic results about the Operator Product Expansion (OPE) [11, 12]. Consider a correlator , where and are two local operators (possibly composite) and the ’s are unspecified field operators. In the limit , this object is usually singular, because products of operators evaluated at the same point are ill-defined. The OPE states that the nature of these singularities is a property of the operators and , and is not influenced by the nature and localization of the ’s. This singular behavior can be expressed as
A(0)B(y)=yμ→0∑iCi(y)Oi(0), (23)
where the are numbers (known as the Wilson coefficients) that contain the singular dependence and the are local operators that have the same quantum numbers as the product . This expansion – known as the OPE – can then be used to obtain the limit of any correlator containing the product . If ,and are the respective mass dimensions of the operators and , a simple dimensional argument tells us that
Ci(y)∼yμ→0|y|d(Oi)−d(A)−d(B)(up\ to\ % logarithms). (24)
(Here .) From this relation, we see that the operators having the lowest dimension lead to the most singular behavior in the limit . Thus, only a small number of operators are relevant in the analysis of this limit and one can ignore the higher dimensional operators.
Things are however a bit more complicated in the case of DIS, because only the invariant goes to zero, while the components do not go to zero. The local operators that may appear in the OPE of can be classified according to the representation of the Lorentz group to which they belong. Let us denote them , where is the “spin” of the operator (the number of Lorentz indices it carries), and the index labels the various operators having the same Lorentz structure. The OPE can be written as :
∑s,iCs,iμ1⋯μs(y)Oμ1⋯μss,i(0). (25)
Because they depend only on the 4-vector , the Wilson coefficients must be of the form131313There could also be terms where one or more pairs are replaced by , but such terms are less singulars in the Bjorken limit.
Cs,iμ1⋯μs(y)≡yμ1⋯yμsCs,i(y2), (26)
where depends only on the invariant . Similarly, the expectation value of the operators in the proton state can only depend on the proton momentum , and the leading part in the Bjorken limit is141414Here also, there could be terms where a pair is replaced by , but they too lead to subleading contributions in the Bjorken limit.
⟨⟨N(P)∣∣Oμ1⋯μss,i(0)∣∣N(P)⟩⟩spin=Pμ1⋯Pμs⟨Os,i⟩, (27)
where the are some non-perturbative matrix elements.
Let us now denote by the mass dimension of the operator . Then, the dimension of is , which means that it scales like
Cs,i(y2)∼y2→0(y2)(ds,i−s−6)/2. (28)
Because the individual components of do not go to zero, it is this scaling alone that determines the behavior of the hadronic tensor in the Bjorken limit. Contrary to the standard OPE, the scaling depends on the difference between the dimension of the operator and its spin, called its twist , rather than its dimension alone. The Bjorken limit of DIS is dominated by the operators that have the lowest possible twist. As we shall see, there is an infinity of these lowest twist operators, because the dimension can be compensated by the spin of the operator. If we go back to the structure functions , we can write
Tr(x,Q2)=∑sxar−s∑i⟨Os,i⟩Dr;s,i(Q2)(r=1,2), (29)
where and . The difference by one power of (at fixed ) between and comes from their respective definitions from that differ by one power of the proton momentum . Eq. (29) gives the structure functions as a series of terms, each of which has factorized and dependences. (The functions () are related to the Fourier transform of , and thus can only depend on the invariant ). Moreover, for dimensional reasons, the functions must scale like . Therefore, it follows that Bjorken scaling arises from twist 2 operators. It is important to keep in mind that in eq. (29), the functions are in principle calculable in perturbation theory and do not depend on the nature of the target, while the ’s are non perturbative matrix elements that depend on the target. Thus, the OPE approach in our present implementation cannot provide quantitative results beyond simple scaling laws.
It is easy to check that is even in while is odd; this means that only even values of the spin can appear in the sum in eq. (29). We shall now rewrite this equation in a more compact form to see what it tells us about the structure functions . Writing
Tr=∑even str(s,Q2)xar−s=∑even str(s,Q2)(2Q2)s−arνs−ar, (30)
we get (for even)
tr(s,Q2)=12πi(Q22)s−ar∫Cdνννar−sTr(ν,Q2), (31)
where is a small circle around the origin in the complex plane (see figure 7).
This contour can then be deformed and wrapped around the cuts along the real axis, as illustrated in the figure 7. Because the structure function is the discontinuity of across the cut, we can write
tr(s,Q2)=2π∫10dxxxs−arFr(x,Q2). (32)
Therefore, we see that the OPE gives the -moments of the DIS structure functions.
In order to go further and calculate the perturbative Wilson coefficients , we must now identify the twist 2 operators that may contribute to DIS. In a theory of fermions and gauge bosons, we can construct two kinds of twist 2 operators :
Oμ1⋯μss,f≡¯¯¯¯ψfγ{μ1∂μ2⋯∂μs}ψf Oμ1⋯μss,g≡Fα{μ1∂μ2⋯∂μs−1Fμs}α, (33)
where the brackets denote a symmetrization of the indices and a subtraction of the traced terms on those indices. To compute the Wilson coefficients, the simplest method is to exploit the fact that they are independent of the target. Therefore, we can take as the “target” an elementary object, like a quark or a gluon, for which everything can be computed in closed form (including the ). Consider first a quark state as the target, of a given flavor and spin . At lowest order, one has
⟨f,σ∣∣Oμ1⋯μss,f′∣∣f,σ⟩=δff′¯¯¯uσ(P)γ{μ1uσ(P)Pμ2⋯Pμs} ⟨f,σ∣∣Oμ1⋯μss,g∣∣f,σ⟩=0. (34)
Averaging over the spin, and comparing with , we get
⟨Os,f′⟩f=δff′,⟨Os,g⟩f=0. (35)
On the other hand, we have already calculated directly the hadronic tensor for a single quark. By computing the moments of the corresponding , we get the for even :
t1(s,Q2)=1πe2f,t2(s,Q2)=2πe2f. (36)
From this, the bare Wilson coefficients for the operators involving quarks are
D1;s,f(Q2)=1πe2f,D2;s,f(Q2)=2πe2f. (37)
By repeating the same steps with a vector boson state, those involving only gluons are
D1;s,g(Q2)=D2;s,g(Q2)=0, (38)
if the vector bosons are assumed to be electrically neutral.
Going back to a nucleon target, we cannot compute the . However, we can hide momentarily our ignorance by defining functions and (respectively the quark and antiquark distributions) such that151515DIS with exchange of a photon cannot disentangle the quarks from the antiquarks. In order to do that, one could scatter a neutrino off the target, so that the interaction proceeds via a weak charged current.
∫10dxxxs[ff(x)+f¯f(x)]=⟨Os,f⟩. (39)
(The sum is known as the singlet quark distribution of flavor .) Thus, the OPE formulas for and on a nucleon in terms of these quark distributions are
(40)
We see that these formulas have the required properties: (i) Bjorken scaling and (ii) the Callan-Gross relation.
Despite the fact that the OPE in a free theory of quarks and gluons leads to a result which is embarrassingly similar to the much simpler calculation we performed in the naive parton model, this exercise has taught us several important things :
• We can derive an operator definition of the parton distributions (albeit it is not calculable perturbatively)
• Bjorken scaling can be derived from first principles in a field theory of free quarks and gluons. This was a puzzle pre-QCD because clearly these partons are constituents of a strongly bound state.
• The puzzle could be resolved if the field theory of strong interactions became a free theory in the limit , a property known as asymptotic freedom.
As shown by Gross, Politzer and Wilczek in 1973, non-Abelian gauge theories with a reasonable number of fermionic fields (e.g. QCD with 6 flavors of quarks) are asymptotically free[1] and were therefore a natural candidate for being the right theory of the strong interactions.
### 2.5 Scaling violations
Although it was interesting to see that a free quantum field theory reproduces the Bjorken scaling, this fact alone does not tell much about the detailed nature of the strong interactions at the level of quarks and gluons. Much more interesting are the violations of this scaling that arise from these interactions and it is the detailed comparison of these to experiments that played a crucial role in establishing QCD as the theory of the strong interactions.
The effect of interactions can be evaluated perturbatively in the framework of the OPE, thanks to renormalization group equations. In the previous discussion, we implicitly assumed that there is no scale dependence in the moments of the quark distribution functions. But this is not entirely true; when interactions are taken into account, they depend on a renormalization scale . The parton distributions become scale dependent as well. However, since are observable quantities that can be extracted from a cross-section, they cannot depend on any renormalization scale. Thus, there must also be a dependence in the Wilson coefficients, that exactly compensates the dependence originating from the . By dimensional analysis, the Wilson coefficients have an overall power of set by their dimension (see the discussion following eq. (29)), multiplied by a dimensionless function that can only depend on the ratio . By comparing the Callan-Symanzik equations[12] for with those for the expectation values , the renormalization group equation[12] obeyed by the Wilson coefficients is 161616We have used the fact that the electromagnetic currents are conserved and therefore have a vanishing anomalous dimension. Note also that we have exploited the fact that for twist 2 operators depends only on , so that we can replace by .
[(−Q∂Q+β(g)∂g)δij−γs,ji(g)]Dr;s,j(Q/μ,g)=0, (41)
where is the beta function, and is the matrix of anomalous dimensions for the operators of spin (it is not diagonal because operators with identical quantum numbers can mix through renormalization).
In order to solve these equations, let us first introduce the running coupling such that
ln(Q/Q0)=∫¯g(Q,g)gdg′β(g′). (42)
Note that this is equivalent to and ; in other words, is the value at the scale of the coupling whose value at the scale is . The usefulness of the running coupling stems from the fact that any function that depends on and only through the combination obeys the equation
[−Q∂Q+β(g)∂g]F(¯¯¯g(Q,g))=0. (43)
It is convenient to express the Wilson coefficients at the scale from those at the scale as
(44)
In QCD, which is asymptotically free, we can approximate the anomalous dimensions and running coupling at one loop by
γs,ij(¯¯¯g)=¯¯¯g2Aij(s),¯¯¯g2(Q,g)=8π2β0ln(Q/ΛQCD). (45)
(The are obtained from a 1-loop perturbative calculation.) In this case, the scale dependence of the Wilson coefficients can be expressed in closed form as
(46)
From this formula, we can write the moments of the structure functions,
∫10dxxxsF1(x,Q2)=∑i,fe2f2⎡⎢ ⎢⎣(ln(Q/ΛQCD)ln(Q0/ΛQCD))−8π2β0A(s)⎤⎥ ⎥⎦fi⟨Os,i⟩Q0, (47)
(and a similar formula for ). We see that we can preserve the relationship between and the quark distributions, eq. (40), provided that we let the quark distributions become scale dependent in such a way that their moments read
∫10dxxxs[ff(x,Q2)+f¯f(x,Q2)]≡∑i⎡⎢ ⎢⎣(ln(Q/ΛQCD)ln(Q0/ΛQCD))−8π2β0A(s)⎤⎥ ⎥⎦fi⟨Os,i⟩Q0. (48)
By also calculating the scale dependence of , one could verify that the Callan-Gross relation is preserved at the 1-loop order. It is crucial to note that, although we do not know how to compute the expectation values at the starting scale , QCD predicts how the quark distribution varies when one changes the scale . We also see that, in addition to a dependence on , the singlet quark distribution now depends on the expectation value of operators that involve only gluons (when the index in the previous formula).
The scale dependence of the parton distributions can also be reformulated in the more familiar form of the DGLAP equations. In order to do this, one should also introduce a gluon distribution , also defined by its moments,
∫10dxxxsfg(x,Q2)≡∑i⎡⎢ ⎢⎣(ln(Q/ΛQCD)ln(Q0/ΛQCD))−8π2β0A(s)⎤⎥ ⎥⎦gi⟨Os,i⟩Q0. (49)
Then one can check that the derivatives of the moments of the parton distributions with respect to the scale are given by
Q2∂fi(s,Q2)∂Q2=−¯¯¯g2(Q,g)2Aji(s)fj(s,Q2), (50)
where we have used the shorthands . In order to turn this equation into an equation for the parton distributions themselves, one can use
A(s)f(s)=∫10dxxxs∫1xdyyA(x/y)f(y), (51)
that relates the product of the moments of two functions to the moment of a particular convolution of these functions. Using this result, and defining splitting function from their moments,
∫10dxxxsPij(x)≡−4π2Aij(s), (52)
it is easy to derive the DGLAP equation[5],
Q2∂fi(x,Q2)∂Q2=¯¯¯g2(Q,g)8π2∫1xdyyPji(x/y)fj(y,Q2), (53)
that resums powers of . This equation for the parton distributions has a probabilistic interpretation : the splitting function can be seen as the probability that a parton splits into two partons separated by at least (so that a process with a transverse scale will see two partons), one of them being a parton that carries the fraction of the momentum of the original parton.
At 1-loop, the coefficients in the anomalous dimensions are
Agg(s)=12π2{3[112−1s(s−1)−1(s+1)(s+2)+s∑j=21j]+Nf6} Agf(s)=−14π2{1s+2+2s(s+1)(s+2)} Afg(s)=−13π2{1s+1+2s(s−1)} Aff′(s)=16π2{1−2s(s+1)+4s∑j=21j}δff′, (54)
where is the number of flavors of quarks. On can note that, since is flavor independent, the non-singlet171717Here, the word “singlet” refers to the flavor of the quarks. linear combinations ( with ) are eigenvectors of the matrix of anomalous dimensions, with an eigenvalue . These linear combinations do not mix with the remaining two operators, and , through renormalization. By examining these anomalous dimensions for , we can see that the eigenvalue for the non-singlet quark operators is vanishing : . Going back to the eq. (50), this implies that
∂∂Q2⎧⎨⎩∫10dx∑faf[ff(x,Q2)+f¯f(x,Q2)]⎫⎬⎭=0 (55)
for any linear combination such that . This relation implies for instance that the number of
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.968519926071167, "perplexity": 570.1425504116979}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738944.95/warc/CC-MAIN-20200812200445-20200812230445-00004.warc.gz"}
|
https://www.physicsforums.com/threads/radical-simplification.222671/
|
1. Mar 17, 2008
### bacon
From the book..." $$\sqrt[4]{(-4)^2}$$=$$\sqrt[4]{16}$$=2. It is incorrect to write $$\sqrt[4]{(-4)^2}$$=$$(-4)}^\frac{2}{4}$$=$$(-4)}^\frac{1}{2}$$=$$\sqrt{-4}$$ ..."
I understand the math involved but want to be sure of the exact reason why the first part is correct and the second is not. Is it because of the inner to outer priority of operations when one operation is nested inside another?
2. Mar 17, 2008
### rocomath
Work inner ... outer.
3. Mar 18, 2008
### CompuChip
Clearly, the first method is correct (it actually says $((-4)^2)^{1/4}$, so what it does is work out the brackets in the correct order.
Now if the second method were correct, you would get contradictory results. For example, consider this "proof":
$$1 = \sqrt{1} = \sqrt{(-1)^2} = ((-1)^2)^{1/2} \stackrel{?!}{=} (-1)^{2/2} = (-1)^1 = -1$$
so 1 = -1, and anything you might want to prove (whether true or false) follows
4. Mar 18, 2008
### Feldoh
My life has been a lie :(
5. Mar 18, 2008
### bacon
Actually, it is not. I could show you a proof of this, but I need to change the oil in the car. Sorry.
6. Mar 19, 2008
### CompuChip
It's not that bad... have some cake.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8109909892082214, "perplexity": 1137.0428015200962}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281069.89/warc/CC-MAIN-20170116095121-00047-ip-10-171-10-70.ec2.internal.warc.gz"}
|
http://eprint.iacr.org/2004/377
|
## Cryptology ePrint Archive: Report 2004/377
New Distributed Ring Signatures for General Families of Signing Subsets
Javier Herranz and Germán Sáez
Abstract: In a distributed ring signature scheme, a subset of users cooperate to compute a distributed anonymous signature on a message, on behalf of a family of possible signing subsets. The receiver can verify that the signature comes from a subset of the ring, but he cannot know which subset has actually signed.
In this work we use the concept of dual access structures to construct a distributed ring signature scheme which works with general families of possible signing subsets. The length of each signature is linear on the number of involved users, which is desirable for some families with many possible signing subsets. The scheme achieves the desired properties of correctness, anonymity and unforgeability. The reduction in the proof of unforgeability is tighter than the reduction in the previous proposals which work with general families.
We analyze the case in which our scheme runs in an identity-based scenario, where public keys of the users can be derived from their identities. This fact avoids the necessity of digital certificates, and therefore allows more efficient implementations of such systems. But our scheme can be extended to work in more general scenarios, where users can have different types of keys.
Category / Keywords: cryptographic protocols / distributed ring signatures, ID-based cryptography, dual access structures
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.826224148273468, "perplexity": 1141.5236138564321}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701157012.30/warc/CC-MAIN-20160205193917-00346-ip-10-236-182-209.ec2.internal.warc.gz"}
|
https://www.physicsforums.com/threads/the-speed-of-force.11252/
|
# The Speed of Force
1. Dec 18, 2003
### The Divine Zephyr
Hey I'm new here and I would like to propose a question about the speed of kinetic energy. If lets say we had a newton's cradle one light-year long, will the last ball fly up as soon as the first ball hits the second? Assume all the balls are in perfect contact with each other. Does this happen instantly or at light or sub-light speed? I do not think the speed is limited, but what do you people think?
-tdz
2. Dec 18, 2003
### chroot
Staff Emeritus
Kinetic energy does have a "speed."
Futhermore, the balls communicate force via pressure. Pressure variations propagate at the speed of sound, which is different for different media, and is always less than the speed of light.
So no, the ball at the far end won't move until the pressure wave reaches it, which will take some time.
- Warren
3. Dec 19, 2003
### suyver
I think the the Newtons cradle example, the shockwave will travel through the material at the speed of sound (about 5-10 km/s).
Don't you mean 'does not'?
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8457781076431274, "perplexity": 697.6193209237019}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864482.90/warc/CC-MAIN-20180622123642-20180622143642-00199.warc.gz"}
|
http://math.stackexchange.com/questions/34417/prove-that-operatornamegal-mathbbq-sqrt82-i-mathbbq-sqrt-2/34425
|
# Prove that $\operatorname{Gal}(\mathbb{Q}(\sqrt[8]{2}, i)/\mathbb{Q}(\sqrt{-2})) \cong Q_8$
I seem to have reached a contradiction. I am trying to prove that $\operatorname{Gal}(\mathbb{Q}(\sqrt[8]{2}, i)/\mathbb{Q}(\sqrt{-2})) \cong Q_8$.
I could not think of a clever way to do this, so I decided to just list all the automorphisms of $\mathbb{Q}(\sqrt[8]{2}, i)$ that fix $\mathbb{Q}$ and hand-pick the ones that fix $i\sqrt{2}$. By the Fundamental Theorem of Galois Theory, those automorphisms should be a subgroup of the ones that fix $\mathbb{Q}$. I proved earlier that those automorphisms are given by $\sigma: \sqrt[8]{2} \mapsto \zeta^n\sqrt[8]{2}, i \mapsto \pm i$, where $n \in [0, 7]$ and $\zeta = e^\frac{2\pi i}{8}$.
However, I am getting too many automorphisms. One automorphism that fixes $i\sqrt{2}$ is $\sigma: \sqrt[8]{2} \mapsto \zeta\sqrt[8]{2}, i \mapsto -i$. However, this means all powers of $\sigma$ fix $i\sqrt{2}$, and I know $Q_8$ does not contain a cyclic subgroup of order $8$. What am I doing wrong?
(Please do not give me the answer. I have classmates for that.)
-
$i \rightarrow i^2=-1$ cannot be an automorphism. Also, what is $\zeta^2$? – N. S. Apr 21 '11 at 23:58
My bad. I meant $i \mapsto \pm i$. I'll change that in the problem. (And $\zeta^2 = i$, but that still makes it an automorphism that fixes $i\sqrt{2}$!) – badatmath Apr 22 '11 at 0:05
Check Zev's answer, I was gonna ask you calculate $\sigma(\zeta \sqrt[8]{2})$ . – N. S. Apr 22 '11 at 0:13
Hint: The order of $\sigma$ is not 8.
Note that $\sigma(\sqrt{2})=\sigma((\sqrt[8]{2})^4)=(\sigma(\sqrt[8]{2}))^4=\zeta_8^4(\sqrt[8]{2})^4=-\sqrt{2}$.
Note that $\zeta_8=\frac{1}{\sqrt{2}}+\frac{i}{\sqrt{2}}$.
Now compute $\sigma(\zeta_8)$.
Now compute $\sigma^{2}(\sqrt[8]{2})=\sigma(\zeta_8\sqrt[8]{2})=\sigma(\zeta_8)\sigma(\sqrt[8]{2})$, and then $\sigma^4(\sqrt[8]{2})$.
-
So I got $\sigma^2(\sqrt[8]{2}) = i\sqrt[8]{2}$, and $\sigma^4(\sqrt[8]{2}) = -\sqrt[8]{2}$, and $\sigma^8(\sqrt[8]{2}) = \sqrt[8]{2}$. Why isn't the order $8$? – badatmath Apr 22 '11 at 1:20
@badatmath - no, $$\sigma(\zeta_8)=\frac{1}{(-\sqrt{2})}+\frac{(-i)}{(-\sqrt{2})}=\frac{1}{\sqrt{2}}+\frac{i}{\sqrt{2}}=\zeta_8^3,$$ so that $\sigma^2(\sqrt[8]{2})=\zeta_8^3\zeta_8\sqrt[8]{2}=-\sqrt[8]{2}$, and thus $\sigma^4(\sqrt[8]{2})=\sqrt[8]{2}$. The order of $\sigma$ is 4. – Zev Chonoles Apr 22 '11 at 1:24
Huh. It seems I get a different answer every time I calculate this. I must be bad at computation today. Thank you :) – badatmath Apr 22 '11 at 1:34
No problem, glad to help :) – Zev Chonoles Apr 22 '11 at 1:37
Would it be easier to notice that extension $\mathbb{Q}(\sqrt[8]{2},i)$ is equal to $\mathbb{Q}(\sqrt[8]{2},\zeta)$ which is a cyclotomic extension followed by Kummer extension? You can then work out which elements of its Galois group fix $\sqrt{-2}$.
-
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9868820905685425, "perplexity": 256.54287480069723}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644065330.34/warc/CC-MAIN-20150827025425-00347-ip-10-171-96-226.ec2.internal.warc.gz"}
|
https://www.deepdyve.com/lp/oxford-university-press/the-myth-of-the-credit-spread-puzzle-VzhR80QkNQ
|
### Get 20M+ Full-Text Papers For Less Than \$1.50/day. Start a 14-Day Trial for You or Your Team.
The Myth of the Credit Spread Puzzle Abstract Are standard structural models able to explain credit spreads on corporate bonds? In contrast to much of the literature, we find that the Black-Cox model matches the level of investment-grade spreads well. Model spreads for speculative-grade debt are too low, and we find that bond illiquidity contributes to this underpricing. Our analysis makes use of a new approach for calibrating the model to historical default rates that leads to more precise estimates of investment-grade default probabilities. Received October 25, 2016; editorial decision January 12, 2018 by Editor Andrew Karolyi. Authors have furnished an Internet Appendix, which is available on the Oxford University Press Web site next to the link to the final published paper online. The structural approach to credit risk, pioneered by Merton (1974) and others, represents the leading theoretical framework for studying corporate default risk and pricing corporate debt. While the models are intuitive and simple, many studies find that, once calibrated to match historical default and recovery rates and the equity premium, they fail to explain the level of actual investment-grade credit spreads, a result referred to as the “credit spread puzzle.” Papers that find a credit spread puzzle typically use Moody’s historical default rates, measured over a period of around 30 years and starting from 1970, as an estimate of the expected default rate.1 Our starting point is to show that the appearance of a credit spread puzzle strongly depends on the period over which historical default rates are measured. For example, Chen, Collin-Dufresne, and Goldstein (2009) use default rates from 1970 to 2001 and find BBB-AAA model spreads of 57–79 basis points (bps) (depending on maturity), values that are substantially lower than historical spreads of 94–102 bps. If, instead, we use Moody’s default rates for 1920–2001, model spreads are 91–112 bps, a range that is in line with historical spreads. Using simulations, we demonstrate two key points about historical default rates. The first is, over sample periods of around 30 years that are typically used in the literature, there is a large sampling error in the observed average rate. For example, if the true 10-year BBB cumulative default probability were 5.09$$\%$$,2 a 95$$\%$$ confidence band for the realized default rate measured over 31 years would be $$[1.15\%, 12.78\%]$$. Intuitively, the large sample error arises because defaults are correlated and 31 years of data only give rise to three nonoverlapping 10-year intervals. As a result of the large sampling error, when historical default rates are used as estimates of ex ante default probabilities, the difference between actual spreads and model spreads needs to be large—much larger, for example, than that found for the BBB-AAA spread mentioned above—to be interpreted as statistically significant evidence against the model. Second, and equally crucial, distributions of average historical investment-grade default rates are highly positively skewed. Most of the time we see few defaults, but, occasionally, we see many defaults, meaning that there is a high probability of observing a rate that is below the actual mean. Positive skewness is likely to lead to the conclusion that a structural model underpredicts investment-grade spreads even if the model is correct. The reason for the presence of skewness is that defaults are correlated across firms as a result of the common dependence of individual firm values on systematic (“market”) shocks. To see why correlation leads to skewness, we can think of a large number of firms with a default probability (over some period) of 5$$\%$$ and where their defaults are perfectly correlated. In this case we will observe a zero default rate 95$$\%$$ of the time (and a 100$$\%$$ default rate 5$$\%$$ of the time), and so the realized default rate will underestimate the default probability 95$$\%$$ of the time. If the average default rate is calculated over three independent periods, the realized default rate will still underestimate the default probability $$0.95^3=85.74\%$$ of the time. We propose a new approach to estimate default probabilities. Instead of using the historical default rate at a single maturity and rating as an estimate of the default probability for this same maturity and rating, we use a wide cross-section of default rates at different maturities and ratings. We use the Black and Cox (1976) model and what ties default probabilities for firms with different ratings together in the model is that we assume that they will, nonetheless, have the same default boundary. (The default boundary is the value of the firm, measured as a fraction of the face value of debt, below which the firm defaults.) This is reasonable since, if the firm were to default, there is no obvious reason the default boundary would depend on the rating the firm had held previously. We show in simulations that our approach results in much more precise and less skewed estimates of investment-grade default probabilities. For the estimated 10-year BBB default probability, for example, the standard deviation and skewness using the new approach are only 16$$\%$$ and 4$$\%$$, respectively, of those using the existing approach. The improved precision is partly the result of the fact that we combine information across 20 maturities and 7 ratings and default probability estimates from different rating/maturity pairs are imperfectly correlated. But, to a significant extent, it is the result of combining default information on investment-grade and high-yield defaults. Because defaults occur much more frequently in high-yield debt, these firms provide more information on the location of the default boundary. Since the boundary is common to investment-grade and high-yield debt, when we combine investment-grade and high-yield default data, we “import” the information on the location of the default boundary from high-yield to investment-grade debt. The reduction in skewness is also the result of including default rates that are significantly higher than those for BBB debt. While a low default rate for investment-grade debt produces a positive skew in the distribution of defaults, a default rate of 50$$\%$$ produces a symmetric distribution and, for even higher default rates, the skew is actually negative. We use our estimation approach and the Black-Cox model to investigate spreads over the period 1987–2012. Our data set consists of 256,698 corporate bond yield spreads to the swap rate of noncallable bonds issued by industrial firms and is more extensive than those previously used in the literature. Our implementation of the Black-Cox model is new to the literature in that it allows for cross-sectional and time-series variation in firm leverage and payout rate while matching historical default rates. Applying our proposed estimation approach, we estimate the default boundary such that average model-implied default probabilities match average historical default rates from 1920 to 2012. In calibrating the default boundary we use a constant Sharpe ratio and match the equity premium, but, once we have implied out the single firm-wide default boundary parameter, we compute firm- and time-specific spreads using standard “risk-neutral” pricing formulae. We first explore the difference between average spreads in the Black-Cox model and actual spreads. The average model spread across all investment-grade bonds with a maturity between 3 and 20 years is 111 bps, whereas the average actual spread is 92 bps. A confidence band for the model spread that takes into account uncertainty in default probabilities is $$[88$$ bps$$;128$$ bps$$]$$; thus there is no statistical difference between actual and model investment spreads. For speculative-grade bonds, the average model spread is 382 bps, whereas the actual spread is 544 bps, and here the difference is statistically highly significant. We also sort bonds by the actual spread and find that actual and model-implied spreads are similar, except for bonds with a spread of more than 1,000 bps. For example, for bonds with an actual spread between 100–150 bps the average actual spread is 136 bps, whereas the average model-implied spread is 121 bps. Importantly, the results are similar if we calibrate the model using default rates from 1970 to 2012 rather than from 1920 to 2012, thus resolving the problem described above that results in the earlier literature depend significantly on the historical period chosen to benchmark the model. To study the time series, we calculate average spreads on a monthly basis and find that for investment-grade bonds there is a high correlation of 93$$\%$$ between average actual spreads and model spreads. Note that the model-implied spreads are “out-of-sample” predictions in the sense that actual spreads are not used in the calibration. Furthermore, for a given firm only changes in leverage and the payout rate—calculated using accounting data and equity values—lead to changes in the firm’s credit spread. For speculative-grade bonds the correlation is only 40$$\%$$, showing that the model has a much harder time matching spreads for low-quality firms. Although average investment-grade spreads are captured well on a monthly basis, the model does less well at the individual bond level. Regressing individual investment-grade spreads on those implied by the Black-Cox model gives an $$R^2$$ of only 44$$\%$$, so at the individual bond level less than half the variation in investment-grade spreads is explained by the model. For speculative-grade spreads the corresponding $$R^2$$ is only 13$$\%$$. We also investigate the potential contribution of bond illiquidity to credit spreads. We use bond age as the liquidity measure and double sort bonds on liquidity and credit quality. For investment-grade bonds we find no relation between bond liquidity and spreads, consistent with the ability of the model to match actual spreads and the finding in Dick-Nielsen, Feldhütter, and Lando (2012) that outside the 2007–2008 financial crisis illiquidity premiums in investment-grade bonds were negligible. For speculative-grade bonds we find a strong relation between bond liquidity and yield spreads, suggesting that bond liquidity may explain much of the underpricing of speculative-grade bonds. In this paper we use the Black and Cox (1976) model as a lens through which to study the credit spread puzzle. The results in Huang and Huang (2012) show that many structural models which appear very different in fact generate similar spreads once the models are calibrated to the same default probabilities, recovery rates, and the equity premium. The models tested in Huang and Huang (2012) include features such as stochastic interest rates, endogenous default, stationary leverage ratios, strategic default, time-varying asset risk premiums, and jumps in the firm value process, yet all generate a similar level of credit spread. To the extent that different structural models produce similar investment-grade default probabilities under our estimation approach, our finding that the Black-Cox model matches average investment-grade spreads is likely to hold for a wide range of structural models. An extensive literature tests structural models. Leland (2006), Cremers, Driessen, and Maenhout (2008), Chen, Collin-Dufresne, and Goldstein (2009), Chen (2010), Huang and Huang (2012), Chen, Cui, He, and Milbradt (2017), Bai (2016), Bhamra, Kuehn, and Strebulaev (2010), and Zhang, Zhou, and Zhu (2009) use the historical default rate at a given rating and maturity to estimate the default probability at that maturity and rating. We show that this test is statistically weak. Eom, Helwege, and Huang (2004), Ericsson, Reneby, and Wang (2015), and Bao (2009) allow for heterogeneity in firms and variation in leverage ratios, but do not calibrate to historical default rates. Bhamra, Kuehn, and Strebulaev (2010) observe that default rates are noisy estimators of default probabilities, but do not propose a solution to this problem as we do. 1. A Motivating Example There is a tradition in the credit risk literature of using Moody’s average realized default rate for a given rating and maturity as a proxy for the corresponding ex ante default probability. This section provides an example showing that the apparent existence or nonexistence of a credit spread puzzle depends on the particular period over which the historical default rate is measured. Later in the paper we describe an alternative approach for extracting default probability estimates from historical default rates that not only provides much greater precision but is also less sensitive to the sample period chosen. To understand how Moody’s calculates default frequencies, consider the 10-year BBB cumulative default frequency of 5.09$$\%$$ used in Chen, Collin-Dufresne, and Goldstein (2009).3 This number is published in Moody’s (2002) and is based on default data for the period 1970–2001. For the year 1970, Moody’s identifies a cohort of BBB-rated firms and then records how many of these default over the next 10 years. The 10-year BBB default frequency for 1970 is the number of defaulted firms divided by the number in the 1970 cohort. The average default rate of 5.09$$\%$$ is calculated as the average of the twenty-two 10-year default rates for the cohorts formed at yearly intervals over the period 1970–1991. A large part of the literature has focused on the BBB-AAA spread at 4- and 10-year maturities. In our main empirical analysis (Section 3), we study a much wider range of ratings and maturities but for now, to keep our example simple, we also focus on the BBB-AAA spread. For a given sample period we use the BBB and AAA average default rates for the 4- and 10-year horizons reported by Moody’s. Following the literature (e.g., Chen et al. 2009; Huang and Huang 2012; and others) we first benchmark a model to match these default rates, one at a time. Using the benchmarked parameters we then compute risk-neutral default probabilities and, from these, credit spreads. Following Eom, Helwege, and Huang (2004), Bao (2009), Huang and Huang (2012), and others, we assume that if default occurs, investors receive (at maturity) a fraction of the originally promised face value, but now with certainty. The credit spread, $$s$$, is then calculated as \begin{eqnarray} \label{eq:BlackCoxcreditspread} s=y-r=-\frac{1}{T}\log [1-(1-R)\pi^Q(T)], \end{eqnarray} (1) where $$R$$ is the recovery rate, $$T$$ is the bond maturity, and $$\pi^Q(T)$$ is the risk-neutral default probability. Throughout our analysis we employ the Black-Cox model (Black and Cox 1976). Appendix A provides the model details. We use our average parameter values for the period 1987–2012 estimated in Section 3 and Chen, Collin-Dufresne, and Goldstein’s (2009) estimates of the Sharpe ratio and recovery rate. We estimate the default boundary by matching an observed default frequency. The default boundary is the value of the firm, measured as a fraction of the face value of debt, below which the firm defaults. Following Chen, Collin-Dufresne, and Goldstein (2009) and others, we carry this out separately for each maturity and rating such that, conditional on the other parameters, the model default probability matches the reported Moody’s default frequency. For each maturity and rating we then use the benchmarked default boundary and calculate the credit spread using Equation (1). The solid bars in Figure 1 show estimates of the actual BBB-AAA corporate bond credit spread from a number of papers. For both the 4- and 10-year maturities, the estimated BBB-AAA spread is in the range of 98–109 bps with the notable exception of Huang and Huang’s (2012) estimate of the 10-year BBB-AAA of 131 bps. (Huang and Huang use both callable and noncallable bonds in their estimate of the spread and this may explain why it is higher.) Figure 1 View largeDownload slide Actual and model-implied BBB-AAA corporate bond yield spreads when using existing approach in the literature This figure shows actual and model-implied BBB-AAA spreads based on different estimates of actual and model-implied spreads. The actual BBB-AAA yield spreads are estimates from Duffee (1998) (Duf), Huang and Huang (2012) (HH), Chen, Collin-Dufresne, and Goldstein (2009) (CDG), and Cremers, Driessen, and Maenhout (2008) (CDM). The solid lines show spreads in the Black-Cox model based on Moody’s default rates from the period 1920–2002 and 1970–2001, respectively. The dashed lines show spreads in the Merton model based on Moody’s default rates from the period 1920–2002 and 1970–2001, respectively. Figure 1 View largeDownload slide Actual and model-implied BBB-AAA corporate bond yield spreads when using existing approach in the literature This figure shows actual and model-implied BBB-AAA spreads based on different estimates of actual and model-implied spreads. The actual BBB-AAA yield spreads are estimates from Duffee (1998) (Duf), Huang and Huang (2012) (HH), Chen, Collin-Dufresne, and Goldstein (2009) (CDG), and Cremers, Driessen, and Maenhout (2008) (CDM). The solid lines show spreads in the Black-Cox model based on Moody’s default rates from the period 1920–2002 and 1970–2001, respectively. The dashed lines show spreads in the Merton model based on Moody’s default rates from the period 1920–2002 and 1970–2001, respectively. Using Moody’s average default rates from the period 1970–2001, the 4- and 10-year BBB-AAA spreads in the Black-Cox model are 52 and 72 bps, respectively. These model estimates are substantially below actual spreads, a finding that has been coined the “credit spread puzzle.” Figure 1 also shows the model-implied spreads using Moody’s average historical default rates from 1920 to 2002 (default rates from 1920 to 2001 are not available). Using default rates from this longer period, the model-implied spreads are substantially higher: the 4- and 10-year BBB-AAA spreads are 87 and 104 bps, respectively. Thus, when we use default rates from a longer time period the puzzle largely disappears. To emphasize that this conclusion is not specific to the Black-Cox model, Figure 1 also shows the four spreads computed under the Merton model (and using the parameters and method given in Chen et al. 2009). These spreads are very similar to, and just a little higher than, the Black-Cox spreads. What remains unchanged is the finding that the appearance of a credit spread puzzle depends on the sample period. In the example we compare corporate bond yields relative to AAA yields to be consistent with CDG and others. In our later analysis we use bond yields relative to swap rates. The average difference between swap rates and AAA yields is small: over our sample period 1987–2012, the average 5- and 10-year AAA-swap spreads are 4 and 6 bps, respectively. We use swap rates in our later analysis, because the term structure of swap rates is readily available on a daily basis. There are very few AAA-rated bonds in the later part of our sample period, so we would not be able to calculate a AAA yield at different maturities. In summary, realized average default rates vary substantially over time, and, as a result, when these are taken as ex ante default probabilities the historical period over which they are measured has a strong influence on whether or not there will appear to be a credit spread puzzle. In the next section we first explore the statistical uncertainty of historical default rates in more detail and then propose a different approach to estimating default probabilities that exploits the information contained in historical default rates more efficiently than has been the case in the literature so far. 2. Estimating Ex Ante Default Probabilities The existing literature on the credit spread puzzle and, more broadly, the literature on credit risk typically uses the average ex post historical default rate for a single maturity and rating as an estimate of the ex ante default probability for this same maturity and rating.4 We find that the statistical uncertainty associated with these estimates is large and propose a new approach that uses historical default rates for all maturities and ratings simultaneously to extract the ex ante default probability for any given maturity and rating. Simulations show that our approach greatly reduces statistical uncertainty. 2.1 Existing approach: Extracting the ex ante default probability from a single ex post default frequency An ex post realized default frequency may be an unreliable estimate of the ex ante default probability for two significant reasons. The first is that the low level of default frequency, particularly for investment-grade firms, leads to a sample size problem with default histories as short as those typically used in the literature when testing standard models. The second is that, even though the problem of sample size is potentially mitigated by the presence of a large number of firms in the cross-section, defaults are correlated across firms and so the benefit of a large cross-section in improving precision is greatly reduced. How severe are these statistical issues? We address this question in a simulation study and base our simulation parameters on the average 10-year BBB default rate of 5.09$$\%$$ over 1970–2001 used in Chen, Collin-Dufresne, and Goldstein (2009). In an economy in which the ex ante 10-year default probability is 5.09$$\%$$ for all firms, we simulate the ex post realized 10-year default frequency over 31 years. We assume that in year 1 we have 445 identical firms, equal to the average number of firms in Moody’s BBB cohorts over the period 1970–2001. In the Black-Cox (and Merton) model firm $$i$$’s asset value under the natural measure follows a GBM: \begin{eqnarray} \label{BlackCoxGBM2} \frac{dV_{it}}{V_{it}}=(\mu-\delta)dt+\sigma dW^P_{it}, \end{eqnarray} (2) where $$\mu$$ is the drift of firm value before payout of the dividend yield $$\delta$$ and $$\sigma$$ is the volatility of firm value. Like in Section 1, we use our average parameter values for the period 1987–2012 estimated in Section 3: $$\mu=10.05\%$$, $$\delta=4.72\%$$, and $$\sigma=24.6\%$$. We introduce systematic risk by assuming that \begin{eqnarray} \label{BaoGBMsimSystAndIdio} W^P_{it}=\sqrt{\rho}W_{st}+\sqrt{1-\rho}W_{it}, \end{eqnarray} (3) where $$W_i$$ is a Wiener process specific to firm $$i$$, $$W_s$$ is a Wiener process common to all firms, and $$\rho$$ is the pairwise correlation between percentage changes in firm value. All the Wiener processes are independent. The firm defaults the first time asset value hits a boundary equal to a fraction $$d$$ of the face value of debt $$F$$, that is, the first time $$V_{\tau}\leq dF$$. The realized 10-year default frequency in the year 1 cohort is found by simulating one systematic and 445 idiosyncratic processes in Equation (3). In year 2 we form a cohort of 445 new firms. The firms in year 2 have characteristics that are identical to those of the year 1 cohort at the time of formation. We calculate the realized 10-year default frequency of the year 2 cohort as we did for the year 1 cohort. Crucially, the common shock for years 1–9 for the year 2 cohort is the same as the common shock for years 2–10 for firms in the year 1 cohort. We repeat the same process for 22 years and calculate the overall average realized cumulative 10-year default frequency in the economy by taking an average of the default frequencies across the 22 cohorts. Finally, we repeat this entire simulation 25,000 times. To estimate the correlation parameter $$\rho$$, we calculate pairwise equity correlations for rated industrial firms in the period 1987–2012. Specifically, for each year we calculate the average pairwise correlation of daily equity returns for all industrial firms for which Standard $$\&$$ Poor’s provide a rating and then calculate the average of the 26 yearly estimates over 1987–2012. We estimate $$\rho$$ to be 20.02$$\%$$. To set the default boundary, we proceed as follows. First, without loss of generality, we assume that the initial asset value of each firm is equal to one. This means that the firm’s leverage, $$L\equiv\frac{F}{V}=F$$, and we set the default boundary $$dF(=dL)$$ such that the model-implied default probability given in Equation (A2) in the appendix matches the 10-year default rate of 5.09$$\%$$.5 Panel A of Figure 2 shows the distribution of the realized average 10-year default rate in the simulation study and the black vertical line shows the ex ante default probability of 5.09$$\%$$. The 95$$\%$$ confidence interval for the realized average default rate is wide at [1.15$$\%$$; 12.78$$\%$$]. We also see that the default frequency is significantly skewed to the right; that is, the modal value of around 3$$\%$$ is significantly below the mean of 5.09$$\%$$. This means that the default frequency most often observed—for example, the estimate from the rating agencies—is below the mean. Specifically, although the true 10-year default probability is 5.09$$\%$$, the probability that the observed average 10-year default rate over 31 years is half that level or less is 19.9$$\%$$. This skewness means that the number reported by Moody’s (5.09$$\%$$) is more likely to be below the true mean than above it and, in this case, if spreads reflect the true expected default probability, they will appear too high relative to the observed historical loss rate. Figure 2 View largeDownload slide Distribution of estimated 10-year BBB default probability when using default rates measured over 31 years The existing approach in the literature is to use an average historical default rate for a specific rating and maturity as an estimate for the default probability when testing spread predictions of structural models. One example is Chen, Collin-Dufresne, and Goldstein (2009), who use the 10-year BBB default rate of 5.09$$\%$$ realized over the per
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8898070454597473, "perplexity": 1511.5183770078736}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103034930.3/warc/CC-MAIN-20220625095705-20220625125705-00355.warc.gz"}
|
https://openforcefield.org/science/updates/propertyestimator-meeting-2019-06-27/
|
# Property Estimator Meeting Summary (Jun 27, 2019)
Summary from PropertyCalculator subgroup meeting on June 27, 2019
Property selection: Discussion about which properties and the exact amount of data to be requested from NIST continues. One of the challenges is to find the right balance between the current and future parameterization needs, which includes making decisions about specific properties to be used for the first optimization sprint and specific chemistry that will be covered in this sprint (and maybe in the next one). For the latter, finding a set of molecules (SMILES) that exercise all SMIRNOFF parameters is desirable and J. Fass’ script can be used as a starting point to address chemical coverage space and associated number of property data points for each atom type. Suggested data structure should roughly correspond to {smirks: {molecule: {minP maxP, minT, maxT, property counts } } }.
ForceBalance is limited to the following properties: density, dielectric, heat capacity, surface tension, thermal expansion, and isothermal compressibility. To compute other properties, either extensions to ForceBalance, or a functional interface between PropertyEstimator and ForceBalance is required, which is currently a work in progress. Some measure of cohesive energy is desirable, but it is not clear what is the best way to achieve this. The first optimization sprint (release-1) will likely focus on properties of neat organic liquids and mixtures. Using mixtures will require certain modifications of ForceBalance, but these tweaks are expected to be rather minor. M. Shirts suggested optimization on pure compounds first and validation against mixtures in the first sprint, using heats of vaporization or surface tensions, with more properties being added in the future, including solvation free energies. L.-P. Wang and Y. Qiu noted that surface tension requires longer sampling times than heat of vaporization. In addition, L.-P. Wang would refrain from fitting heat of vaporization to polar molecules without including polarization correction. Whether this correction should be included, remains to be decided in agreement with the other PIs not present at the meeting. L.-P. Wang also wants to avoid fitting to heat capacities due to quantum effects, although M. Shirts suggested that fitting to residual heat capacities (difference between the liquid and the gas phase) might circumvent the problem, although this should be validated first. M. Gilson noted that certain molecules don’t conformationally converge in property calculation and L.-P. Wang and M. Schauperl reported issues with carboxylic acids and esters, respectively. It is not clear whether these molecules should be included in the first optimization sprint. The following actions/decisions are required:
• Building an appropriate chemical coverage set (specific moieties vs. specific molecules);
• Choosing heat of vaporization or surface tension for fitting (and whether to use polarization correction);
• Specifiying the list of properties for the first round.
ForceBalance | PropertyEstimator interface: L.-P. Wang noted that ForceBalance calculates a Jacobian at each optimization step, which PropertyEstimator does not currently provide. It was decided that the easiest solution would be to not use gradients from PropertyEstimator and instead rely on the finite difference method, which will be done if the interface is not ready by September. Y. Qiu added that ForceBalance can also introduce gradients for some hyperparameters, not just for physical properties, which are currently not supported in PropertyEstimator. S. Boothroyd suggested that any features that are not crucial for the first optimization sprint should be discussed and added later on. S. Boothroyd and Y. Qiu will continue working together on building the interface between ForceBalance and PropertyEstimator, starting from ForceBalance example 17. S. Boothroyd will look into transformation of gradients from PropertyEstimator for use in ForceBalance.
Published by in science and tagged fitting, ForceBalance, gradients, mixtures, NIST, property selection, PropertyEstimator, release-1 and validation using 576 words.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43347135186195374, "perplexity": 2444.499869997078}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370493120.15/warc/CC-MAIN-20200328194743-20200328224743-00198.warc.gz"}
|
https://www.physicsforums.com/threads/rotational-motion-about-a-fixed-axis-need-help.223375/
|
# Homework Help: Rotational Motion About a Fixed Axis , NEED HELp :\
1. Mar 21, 2008
### keylostman
1. The problem statement, all variables and given/known data
A wheel of diameter of .68m roll without slipping. A point at the top of the wheel moves witha tangential speed of 5.4m/s. Whaat speed is the axle of the wheel moving? What is the angular speed of the wheel?
How would i approach to do this problem, what equations would i need to use ?
2. Mar 21, 2008
### keylostman
ok for speed of axle i took .5 * 5.4 to get 2.7m/s
and for angular speed of wheel i took 5.4mn/s * .34m to get 15.9 rad/s
am i correct ?
3. Mar 21, 2008
### tiny-tim
Hi keylostman!
Are you the same person as th3plan?
Why have you posted the same problem twice?
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9613032341003418, "perplexity": 1737.8415965446154}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039746386.1/warc/CC-MAIN-20181120110505-20181120132505-00375.warc.gz"}
|
https://bioconductor.riken.jp/packages/release/bioc/vignettes/cyanoFilter/inst/doc/cyanoFilter.html
|
# cyanoFilter
## Introduction
Flow cytometry is a well-known technique for identifying cell populations contained in a biological smaple. It is largely applied in biomedical and medical sciences for cell sorting, counting, biomarker detections and protein engineering. The technique also provides an energy efficient alternative to microscopy that has long been the standard technique for cell population identification. Cyanobacteria are bacteria phylum believe to contribute more than 50% of atmospheric oxygen via oxygen and are found almost everywhere. These bacteria are also one of the known oldest life forms known to obtain their energy via photosynthesis.
## Illustrations
We load the package and necessary dependencies below. We also load tidyverse for some data cleaning steps that we need to carry out.
library(dplyr)
library(magrittr)
library(tidyr)
library(purrr)
library(flowCore)
library(flowDensity)
library(cyanoFilter)
To illustrate the funtions contained in this package, we use two datafiles contained by default in the package. These are just demonstration dataset, hence are not documented in the helpfiles.
metadata <- system.file("extdata", "2019-03-25_Rstarted.csv",
package = "cyanoFilter",
mustWork = TRUE)
check.names = TRUE)
#columns containing dilution, $\mu l$ and id information
metafile <- metafile %>%
dplyr::select(Sample.Number,
Sample.ID,
Number.of.Events,
Dilution.Factor,
Original.Volume,
Cells.L)
Each row in the csv file corresponds to a measurement from two types of cyanobacteria cells carried out at one of three dilution levels. The columns contain information about the dilution level, the number of cells per micro-litre ($$cell/\mu l$$), number of particles measured and a unique identification code for each measurement. The Sample.ID column is structured in the format cyanobacteria_dilution. We extract the cyanobacteria part of this column into a new column and also rename the $$cell/\mu l$$ column with the following code:
#extract the part of the Sample.ID that corresponds to BS4 or BS5
metafile <- metafile %>% dplyr::mutate(Sample.ID2 =
stringr::str_extract(metafile$Sample.ID, "BS*[4-5]") ) #clean up the Cells.muL column names(metafile)[which(stringr::str_detect(names(metafile), "Cells."))] <- "CellspML" ### Good Measurements To determine the appropriate data file to read from a FCM datafile, the desired minimum, maximum and column containing the $$cell\mu l$$ values are supplied to the goodfcs() function. The code below demonstrates the use of this function for a situation where the desired minimum and maximum for $$cell/\mu l$$ is 50 and 1000 respectively. metafile <- metafile %>% mutate(Status = cyanoFilter::goodFcs(metafile = metafile, col_cpml = "CellspML", mxd_cellpML = 1000, mnd_cellpML = 50) ) knitr::kable(metafile) Sample.Number Sample.ID Number.of.Events Dilution.Factor Original.Volume CellspML Sample.ID2 Status 1 BS4_20000 6918 20000 10 62.02270 BS4 good 2 BS4_10000 6591 10000 10 116.76311 BS4 good 3 BS4_2000 6508 2000 10 517.90008 BS4 good 4 BS5_20000 5976 20000 10 48.31036 BS5 bad 5 BS5_10000 5844 10000 10 90.51666 BS5 good 6 BS5_2000 5829 2000 10 400.72498 BS5 good The function adds an extra column, Status, with entries good or bad to the metafile. Rows containing $$cell/\mu l$$ values outside the desired minimum and maximum are labelled bad. Note that the Status column for the fourth row is labelled bad, because it has a $$cell/\mu l$$ value outside the desired range. ### Files to Retain Although any of the files labelled good can be read from the FCM file, the retain() function can help select either the file with the highest $$cell/\mu l$$ or that with the smallest $$cell/\mu l$$ value. To do this, one supplies the function with the status column, $$cell/\mu l$$ column and the desired decision. The code below demonstrates this action for a case where we want to select the file with the maximum $$cell/\mu l$$ from the good measurements for each unique sample ID. broken <- metafile %>% group_by(Sample.ID2) %>% nest() metafile$Retained <- unlist(map(broken$data, function(.x) { retain(meta_files = .x, make_decision = "maxi", Status = "Status", CellspML = "CellspML") }) ) knitr::kable(metafile) Sample.Number Sample.ID Number.of.Events Dilution.Factor Original.Volume CellspML Sample.ID2 Status Retained 1 BS4_20000 6918 20000 10 62.02270 BS4 good No! 2 BS4_10000 6591 10000 10 116.76311 BS4 good No! 3 BS4_2000 6508 2000 10 517.90008 BS4 good Retain 4 BS5_20000 5976 20000 10 48.31036 BS5 bad No! 5 BS5_10000 5844 10000 10 90.51666 BS5 good No! 6 BS5_2000 5829 2000 10 400.72498 BS5 good Retain This function adds another column, Retained, to the metafile. The third and sixth row in the metadata are with the highest $$cell/\mu l$$ values, thus one can proceed to read the fourth and sixth file from the corresponding FCS file for BS4 and BS5 respectively. This implies that we are reading in only two FCS files rather than the six measured files. ### Flow Cytometer File Processing To read B4_18_1.fcs file into R, we use the read.FCS() function from the flowCore package. The dataset option enables the specification of the precise file to be read. Since this datafile contains one file only, we set this option to 1. If this option is set to 2, it gives an error since text.fcs contains only one datafile. flowfile_path <- system.file("extdata", "B4_18_1.fcs", package = "cyanoFilter", mustWork = TRUE) flowfile <- read.FCS(flowfile_path, alter.names = TRUE, transformation = FALSE, emptyValue = FALSE, dataset = 1) flowfile #> flowFrame object ' B4_18_1' #> with 8729 cells and 11 observables: #> name desc range minRange maxRange #>$P1 FSC.HLin Forward Scatter (FSC.. 1e+05 0.00000 99999
#> $P2 SSC.HLin Side Scatter (SSC-HL.. 1e+05 -34.47928 99999 #>$P3 GRN.B.HLin Green-B Fluorescence.. 1e+05 -21.19454 99999
#> $P4 YEL.B.HLin Yellow-B Fluorescenc.. 1e+05 -10.32744 99999 #>$P5 RED.B.HLin Red-B Fluorescence (.. 1e+05 -5.34720 99999
#> $P6 NIR.B.HLin Near IR-B Fluorescen.. 1e+05 -4.30798 99999 #>$P7 RED.R.HLin Red-R Fluorescence (.. 1e+05 -25.49018 99999
#> $P8 NIR.R.HLin Near IR-R Fluorescen.. 1e+05 -16.02002 99999 #>$P9 SSC.ALin Side Scatter Area (S.. 1e+05 0.00000 99999
#> $P10 SSC.W Side Scatter Width (.. 1e+05 -111.00000 99999 #>$P11 TIME Time 1e+05 0.00000 99999
#> 368 keywords are stored in the 'description' slot
The R object flowfile contains measurements about 8729 cells across 10 channels since the time channel does not contain any information about the properties of the measured cells.
### Transformation and visualisation
To examine the need for transformation, a visual representation of the information in the expression matrix is of great use. The ggpairsDens() function produces a panel plot of all measured channels. Each plot is also smoothed to show the cell density at every part of the plot.
flowfile_nona <- noNA(x = flowfile)
ggpairsDens(flowfile_nona, notToPlot = "TIME")
We obtain Figure above by using the ggpairsDens() function after removing all NA values from the expression matrix with the nona() function. There is a version of the function, pairs_plot() that produces standard base scatter plots also smoothed to indicate cell density.
flowfile_noneg <- noNeg(x = flowfile_nona)
flowfile_logtrans <- lnTrans(x = flowfile_noneg,
notToTransform = c("SSC.W", "TIME"))
ggpairsDens(flowfile_logtrans, notToPlot = "TIME")
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6265095472335815, "perplexity": 7457.479658039994}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300658.84/warc/CC-MAIN-20220118002226-20220118032226-00289.warc.gz"}
|
http://cds.cern.ch/collection/SPSC%20Public%20Documents?ln=pt&as=1
|
# SPSC Public Documents
2013-04-05
18:38
Neutrinos from Stored Muons (STORM): Expression of Interest / Adey, D ; Ankenbrandt, CM ; Agarwalla, SK ; Asfandiyarov, R ; Back, JJ ; Barker, G ; Baussan, E ; Bayes, R ; Bhadra, S ; Booth, C et al. The $u$STORM facility has been designed to deliver beams of $u_e$ and $u_mu$ from the decay of a stored $mu^pm$ beam with a central momentum of 3.8 GeV/c and a momentum spread of 10%. [...] CERN-SPSC-2013-015 ; SPSC-EOI-009. - 2013. Fulltext
2013-04-04
01:54
2012 Progress Report on PS215/CLOUD / Collaboration, CLOUD Progress report on PS215/CLOUD during 2012 CERN-SPSC-2013-014 ; SPSC-SR-118. - 2013. Fulltext
2013-04-02
17:09
AWAKE Design Report A Proton-Driven PlasmaWakefield Acceleration Experiment at CERN / Caldwell, A (Max Planck Institute for Physics) ; Gschwendtner, E (CERN) ; Lotov, K (Budker Institute of Nuclear Physics) ; Muggli, P (Max Planck Institute for Physics) ; Wing, M (University College London) The AWAKE Collaboration has been formed in order to demonstrate proton driven plasma wakefield acceleration for the first time. [...] CERN-SPSC-2013-013 ; SPSC-TDR-003. - 2013. Fulltext
2013-03-30
12:29
Dual-Readout Calorimetry for High-Quality Energy Measurements The RD52 Status Report 2012/2013 / Wigmans, Richard This report summarises the progress of the RD52 programme on Dual-Readout Calorimetry for High-Quality Energy Measurements and gives the plans of the collaboration for the coming years.. CERN-SPSC-2013-012 ; SPSC-SR-117. - 2013. Fulltext
2013-03-25
13:03
Agenda of the 109th Meeting of the SPSC, Tuesday and Wednesday, 9 and 10 April 2013 CERN-SPSC-2013-011 ; SPSC-A-109. - 2013.
2013-03-25
12:40
DIRAC status report 2012 / Nemenov, L DIRAC results in 2008-2012 CERN-SPSC-2013-010 ; SPSC-SR-116. - 2013. Fulltext
2013-03-21
09:59
2013 NA62 Status Report to the CERN SPSC / NA62, Collaboration NA62 aims to study the rare decay $K^+ \to \pi^+ \nu \bar{\nu}$ at the CERN SPS [...] CERN-SPSC-2013-009 ; SPSC-SR-115. - 2013. Fulltext
2013-02-18
11:51
Minutes of the 108th Meeting of the SPSC, Tuesday 15 and Wednesday 16 January 2013 CERN-SPSC-2013-008; SPSC-108.- Geneva : CERN, 2013 Minutes Fulltext: PDF;
2013-01-14
19:20
2012 Progress Report by the Antihydrogen TRAP Collaboration (ATRAP) / Gabrielse, G 2012 Progress Report by the Antihydrogen TRAP Collaboration (ATRAP) CERN-SPSC-2013-007 ; SPSC-SR-114. - 2013. Fulltext
2013-01-09
18:26
Status report for the AD6 (AEGIS) experiment for 2012 / Doser, M Status report for the AD6 (AEGIS) experiment for 2012 CERN-SPSC-2013-006 ; SPSC-SR-113. - 2013. Fulltext
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7971278429031372, "perplexity": 24049.745315316963}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696383081/warc/CC-MAIN-20130516092623-00002-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://zbmath.org/?q=an:0834.14010
|
×
## Spectral sequence of weights in Hyodo-Kato cohomology. (La suite spectrale des poids en cohomologie de Hyodo-Kato.)(French)Zbl 0834.14010
This very interesting paper treats a $$p$$-adic analogue of the following result of J. Steenbrink [Invent. Math. 31, 229-257 (1976; Zbl 0312.14007)] in complex Hodge theory. Let $$X$$ be a complex analytic Kähler manifold, $$D$$ the complex unit disk and $$f : X \to D$$ a proper morphism of analytic spaces, smooth away from 0, and such that its fibre at the origin $$Y$$ is a divisor with normal crossings whose components are smooth Kähler manifolds. The logarithm of the monodromy acts as a nilpotent endomorphism on the sheaf of limiting cycles $$R \Psi_f (\mathbb{C})$$, and therefore induces on it a finite increasing filtration called the monodromy filtration. On the other hand, the cohomology of limiting cycles $$H^i (Y,R \Psi_f (\mathbb{C}))$$ has a structure of mixed Hodge structure, whose weight filtration is the abutment filtration of the weight spectral sequence. Steenbrink proved that this spectral sequence degenerates at $$E_2$$, and that the weight filtration on $$H^i (Y,R \Psi_f (\mathbb{C}))$$ coincides with the monodromy filtration.
In the paper under review, the author constructs a Hyodo-Steenbrink complex that allows him to prove, in certain cases, an analogue of the above theorem for the cohomology of a proper and flat scheme with semistable reduction over a complete discrete valuation field. – Let $$K$$ be such a field with ring of integers $$A$$ and residue field $$\kappa$$ perfect of characteristic $$p > 0$$. Let $$W$$ denote the ring of Witt vectors with coefficients in $$\kappa$$, $$K_0$$ the field of fractions of $$W$$ and $$\sigma$$ the Frobenius automorphism of $$W$$. Let $$X$$ be a proper and flat $$A$$-scheme with semistable reduction, and let $$Y$$ be its special fibre which is supposed to be a sum of smooth divisors. To such an object O. Hyodo and K. Kato [“Semistable reduction and crystalline cohomology with logarithmic poles” in: Périodes $$p$$-adiques, Sémin. Bures sur-Yvette 1988, Exposé V, Astérisque 223, 221-268 (1994)] associated groups of “crystalline cohomology with logarithmic poles”, $$H^* (Y^\times, W^\times)$$. These are $$W$$-modules of finite type together with a $$\sigma$$-semilinear isogeny $$\Phi$$, called Frobenius, and a nilpotent linear monodromy operator $$N$$. If $$X/A$$ is smooth, $$H^* (Y^\times, W^\times)$$ is the usual crystalline cohomology and the monodromy is 0. If $$K$$ is of characteristic 0 there is a canonical isomorphism $$H^* (Y^\times, W^\times) \otimes K \simeq H^*_{\text{DR}} (X_K/K)$$, where $$X_K$$ is the generic fibre of $$X$$.
Hyodo and Kato also proved that, as is the case for the usual crystalline cohomology, this crystalline cohomology for logarithmic schemes may be calculated by means of the de Rham-Witt complex $$W^\bullet_{\omega_Y}$$. – The author modifies their construction (so that he does not need to assume that $$K$$ is of unequal characteristic) and shows, as his key technical result (section 3), that $$W^\bullet_{\omega_Y}$$, viewed as an object in the derived category $$D(Y_{\text{ét}}, W)$$, underlies an object $$(WA^\bullet, P_k)$$, which he calls the complex of Hyodo-Steenbrink, in the filtered derived category, where $$(P_k)$$ is a finite increasing filtration, called the weight filtration, and $$WA^\bullet$$ is endowed with a nilpotent endomorphism $$\nu$$ lifting the monodromy on $$W^\bullet_{\omega_Y}$$. This allows him to build a weight spectral sequence abutting to $$H^* (Y^\times, W^\times)$$, analogous to that of Steenbrink. $$H^* (Y^\times, W^\times) \otimes K_0$$ then has a weight filtration coming from this spectral sequence and also a monodromy filtration determined by $$N$$. This is the $$p$$-adic analogue of the complex situation studied by Steenbrink, so that the author naturally arrives at conjecture 3.24: If $$Y$$ is projective the weight spectral sequence degenerates at $$E_2$$ modulo torsion. Conjecture 3.27: If $$Y$$ is the reduction of a projective semistable $$X/A$$ the weight filtration agrees with the monodromy filtration.
We must now assume, as the author does, that the residual field $$\kappa$$ is finite. He then proves (theorem 3.32) that $$(H^* (Y^\times, W^\times)$$/torsion, $$\Phi)$$ is a mixed crystal in the sense of Faltings, and hence that conjecture 3.24 is true. He is also able to prove, after studying the duality pairing in this situation, that conjecture 3.27 holds for curves (theorem 5.3) and for surfaces (corollary 6.2.3).
As applications, he shows (proposition 5.9) that the conjecture of Fontaine, saying that an abelian variety $$A/K$$ has good reduction if and only if the Galois representation on its Tate module is crystalline, holds if $$\kappa$$ is finite and $$A$$ is potentially a product of Jacobians. He also gives an explicit formula, in terms of the Rham cohomology, for the local factor at a prime of semistable reduction of the Hasse-Weil zeta function of a surface.
### MSC:
14F30 $$p$$-adic cohomology, crystalline cohomology 14K15 Arithmetic ground fields for abelian varieties 55T25 Generalized cohomology and spectral sequences in algebraic topology 14F40 de Rham cohomology and algebraic geometry 14G20 Local ground fields in algebraic geometry
Zbl 0312.14007
Full Text:
### References:
[1] M. Artin and G. Winters, Degenerate fibres and stable reduction of curves , Topology 10 (1971), 373-383. · Zbl 0221.14018 [2] P. Berthelot, Cohomologie cristalline des schémas de caractéristique $$p>0$$ , Lecture Notes in Mathematics, vol. 407, Springer-Verlag, Berlin, 1974. · Zbl 0298.14012 [3] C. H. Clemens, Degeneration of Kähler manifolds , Duke Math. J. 44 (1977), no. 2, 215-290. · Zbl 0353.14005 [4] P. Deligne, Équations différentielles à points singuliers réguliers , Lecture Notes in Mathematics, vol. 163, Springer-Verlag, Berlin, 1970. · Zbl 0244.14004 [5] P. Deligne, La conjecture de Weil. II , Inst. Hautes Études Sci. Publ. Math. (1980), no. 52, 137-252. · Zbl 0456.14014 [6] P. Deligne, Théorie de Hodge. II , Inst. Hautes Études Sci. Publ. Math. (1971), no. 40, 5-57. · Zbl 0219.14007 [7] P. Deligne and L. Illusie, Relèvements modulo $$p^ 2$$ et décomposition du complexe de de Rham , Invent. Math. 89 (1987), no. 2, 247-270. · Zbl 0632.14017 [8] P. Deligne and D. Mumford, The irreducibility of the space of curves of given genus , Inst. Hautes Études Sci. Publ. Math. (1969), no. 36, 75-109. · Zbl 0181.48803 [9] F. El Zein, Théorie de Hodge des cycles évanescents , Ann. Sci. École Norm. Sup. (4) 19 (1986), no. 1, 107-184. · Zbl 0592.14005 [10] G. Faltings, $$F$$-isocrystals on open varieties: results and conjectures , The Grothendieck Festschrift, Vol. II, Progr. Math., vol. 87, Birkhäuser Boston, Boston, MA, 1990, pp. 219-248. · Zbl 0736.14004 [11] G. Faltings, Crystalline cohomology of semistable curves, and $$p$$-adic Galois-representations , J. Algebraic Geom. 1 (1992), no. 1, 61-81. · Zbl 0813.14012 [12] J.-M. Fontaine, Modules galoisiens, modules filtrés et anneaux de Barsotti-Tate , Journées de Géométrie Algébrique de Rennes. (Rennes, 1978), Vol. III, Astérisque, vol. 65, Soc. Math. France, Paris, 1979, pp. 3-80. · Zbl 0429.14016 [13] J.-M. Fontaine, Sur certains types de représentations $$p$$-adiques du groupe de Galois d’un corps local; construction d’un anneau de Barsotti-Tate , Ann. of Math. (2) 115 (1982), no. 3, 529-577. JSTOR: · Zbl 0544.14016 [14] J. M. Fontaine and L. Illusie, $$p$$-adic periods: a survey , prépublication, Université d’Orsay, 1990. · Zbl 0836.14010 [15] M. Gros, Classes de Chern et classes de cycles en cohomologie de Hodge-Witt logarithmique , Mém. Soc. Math. France (N.S.) 114 (1985), no. 21, 87. · Zbl 0615.14011 [16] M. Gros, Cohomologie de de Rham et réduction semi-stable, l’aspect rigide analytique , Mai 1990, notes de travail. [17] P. Griffiths and W. Schmid, Recent developments in Hodge theory: a discussion of techniques and results , Discrete subgroups of Lie groups and applicatons to moduli (Internat. Colloq., Bombay, 1973), Oxford Univ. Press, Bombay, 1975, pp. 31-127. · Zbl 0355.14003 [18] F. Guillén and V. Navarro Aznar, Sur le théorème local des cycles invariants , Duke Math. J. 61 (1990), no. 1, 133-155. · Zbl 0722.14002 [19] R. Hartshorne, Local cohomology , A seminar given by A. Grothendieck, Harvard University, Fall, vol. 1961, Springer-Verlag, Berlin, 1967, Lecture Notes in Mathematics, No. 41, vi+106. · Zbl 0185.49202 [20] O. Hyodo, A cohomological construction of Swan representations over the Witt ring. I , Proc. Japan Acad. Ser. A Math. Sci. 64 (1988), no. 8, 300-303. · Zbl 0699.14026 [21] O. Hyodo, On the de Rham-Witt complex attached to a semi-stable family , Compositio Math. 78 (1991), no. 3, 241-260. · Zbl 0742.14015 [22] O. Hyodo and K. Kato, Semi-stable reduction and crystalline cohomology with logarithmic poles , prépublication, 1989. · Zbl 0852.14004 [23] L. Illusie, Complexe de de Rham-Witt et cohomologie cristalline , Ann. Sci. École Norm. Sup. (4) 12 (1979), no. 4, 501-661. · Zbl 0436.14007 [24] L. Illusie, Ordinarité des intersections complètes générales , The Grothendieck Festschrift, Vol. II, Progr. Math., vol. 87, Birkhäuser Boston, Boston, MA, 1990, pp. 376-405. · Zbl 0728.14021 [25] L. Illusie, Cohomologie de de Rham et cohomologie étale $$p$$-adique (d’après G. Faltings, J.-M. Fontaine et al.) , Astérisque (1990), no. 189-190, Exp. No. 726, 325-374. · Zbl 0736.14005 [26] L. Illusie, Réalisation $$l$$-adique de l’accouplement de monodromie d’après A. Grothendieck , Astérisque (1991), no. 196-197, 7, 27-44 (1992), dans Courbes Modulaire et Courbes de Shimura. · Zbl 0781.14011 [27] L. Illusie, Finiteness, duality, and Künneth theorems in the cohomology of the de Rham-Witt complex , Algebraic geometry (Tokyo/Kyoto, 1982), Lecture Notes in Math., vol. 1016, Springer, Berlin, 1983, pp. 20-72. · Zbl 0538.14013 [28] L. Illusie and M. Raynaud, Les suites spectrales associées au complexe de de Rham-Witt , Inst. Hautes Études Sci. Publ. Math. (1983), no. 57, 73-212. · Zbl 0538.14012 [29] U. Jannsen, On the $$l$$-adic cohomology of varieties over number fields and its Galois cohomology , Galois groups over $$\mathbf Q$$ (Berkeley, CA, 1987) eds. I. Ihara, K. Ribet, and J.-P. Serre, Math. Sci. Res. Inst. Publ., vol. 16, Springer-Verlag, New York, 1989, pp. 315-360. · Zbl 0703.14010 [30] K. Kato, Logarithmic structures of Fontaine-Illusie , Algebraic analysis, geometry, and number theory (Baltimore, MD, 1988), Johns Hopkins Univ. Press, Baltimore, MD, 1989, pp. 191-224. · Zbl 0776.14004 [31] K. Kato, Semi-stable reduction and $$p$$-adic étale cohomology , prépublication. · Zbl 1038.81060 [32] K. Kato, Logarithmic degeneration and Dieudonne theory , prépublication. · Zbl 1038.81060 [33] N. Katz and W. Messing, Some consequences of the Riemann hypothesis for varieties over finite fields , Invent. Math. 23 (1974), 73-77. · Zbl 0275.14011 [34] G. Laffaille, Groupes $$p$$-divisibles et modules filtrés: le cas peu ramifié , Bull. Soc. Math. France 108 (1980), no. 2, 187-206. · Zbl 0453.14021 [35] S. Lang, Abelian varieties , Interscience Tracts in Pure and Applied Mathematics. No. 7, Interscience Publishers, Inc., New York, 1959. · Zbl 0098.13201 [36] D. Mumford, Abelian varieties , Tata Institute of Fundamental Research Studies in Mathematics, No. 5, Published for the Tata Institute of Fundamental Research, Bombay, 1970. · Zbl 0223.14022 [37] A. Ogus, papiers secrets . [38] M. Rapoport and Th. Zink, Über die lokale Zetafunktion von Shimuravarietäten. Monodromiefiltration und verschwindende Zyklen in ungleicher Charakteristik , Invent. Math. 68 (1982), no. 1, 21-101. · Zbl 0498.14010 [39] M. Rapoport, On the bad reduction of Shimura varieties , Automorphic forms, Shimura varieties, and $$L$$-functions, Vol. II (Ann Arbor, MI, 1988) eds. L. Clozel and J. S. Milne, Perspect. Math., vol. 11, Academic Press, Boston, MA, 1990, pp. 253-321. · Zbl 0716.14010 [40] M. Raynaud, Réalisation de de Rham des $$1$$-motifs , en préparation. · Zbl 0207.51501 [41] J. Steenbrink, Limits of Hodge structures , Invent. Math. 31 (1975/76), no. 3, 229-257. · Zbl 0303.14002 [42] M. Saito, Modules de Hodge polarisables , Publ. Res. Inst. Math. Sci. 24 (1988), no. 6, 849-995 (1989). · Zbl 0691.14007 [43] M. Saito and S. Zucker, The kernel spectral sequence of vanishing cycles , Duke Math. J. 61 (1990), no. 2, 329-339. · Zbl 0756.32020 [44] J.-P. Serre and J. Tate, Good reduction of abelian varieties , Ann. of Math. (2) 88 (1968), 492-517. JSTOR: · Zbl 0172.46101 [45] A. Grothendieck, Modèle de Néron et monodromie , dans Groupes de monodromie en géometrie algébrique, Lecture Notes in Math, vol. 288, Springer-Verlag, Berlin, 1972, pp. 313-523. · Zbl 0248.14006
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7976832985877991, "perplexity": 788.2633306050825}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500154.33/warc/CC-MAIN-20230204205328-20230204235328-00418.warc.gz"}
|
http://math.stackexchange.com/questions/129705/orthogonal-latin-squares
|
# Orthogonal Latin Squares
I'm not quite sure how to even start this problem. I'm really just looking for direction on how to begin.
The $t$ mutually orthogonal Latin squares $A_1, A_2, ... , A_t$ of side $n$ have mutually orthogonal subsquares $S_1, S_2, ... S_t$ occupying their upper left $s$x$s$ corners. Prove that $n$ is greater than or equal to $(t+1)s$.
I know that $t$ must be less than $n$, but I can't find any other information to help me.
-
Consider the cells in row $s+1$ to the right of column $s$. These $n-s$ cells must contain each of the $s$ symbols of the subsquares, since none of those symbols appear in the $s$ cells in the left part of the row. Furthermore, this must be true for the cells in corresponding positions in each of the $t$ MOLS, and no single position in two distinct orthogonal squares can contain multiple symbols from the subsquares, since this would create a pairing that already exists within the orthogonal subsquares. Thus $t$ copies of $s$ symbols must be distributed among only $n-s$ positions, yielding the desired inequality.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9329515099525452, "perplexity": 92.9036814909931}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394011085177/warc/CC-MAIN-20140305091805-00016-ip-10-183-142-35.ec2.internal.warc.gz"}
|
https://imetricablog.com/tag/signal-extraction/page/2/
|
Dynamic Adaptive Filtering and Signal Extraction
This slideshow requires JavaScript.
Introduction
Dynamic adaptive filtering is the method of updating a signal extraction process in real-time using newly provided information. This newly provided information is the next sequence of observed data, such as minute, hourly, or daily log-returns in a portfolio of financial assets, or a new set of weekly/monthly observations in a set of economic indicators. The goal is to improve the properties of the extracted signal with respect to a target (symmetric) filter and the output of past (old) signal values that are not performing as they should be (perhaps due to overfitting). In the multivariate direct filtering approach framework, it is an easily workable task to update the signal while only using the most recent information given. As a recently proposed idea by Marc Wildi last month, in this dynamic form of adaptive filtering we seek to update and improve a signal for a given multivariate time series information flow by computing a new set of filter coefficients to only a small window of the time series that features the latest observations. Instead of recomputing an entire new set of filter coefficients in-sample on the entire data set, we use a much smaller data set, say the latest $\tilde{N}$ observations on which the older filter was applied out-of-sample which is much less than the total number of observations in the time series.
The new filter coefficients computed on this small window of new observations uses as input the filtered series from original ‘old’ filter. These new updated coefficients are then applied to the output of the old filter, leading to completely re-optimized filter coefficients and thus an optimized signal, eliminating any nasty effects due overfitting or signal ‘overshooting’ in the older filter, while at the same time utilizing new information. This approach is akin to, in a way, filtering within filtering: the idea of ‘smart’-filtering on previously filtered data for optimized control of the new signal being computed. It could also be thought of as filtering filtered data, a convolution of filters, updating the real-time signal, or, more generally, adaptive filtering. However you wish to think of it, the idea is that a new filter provides the necessary updating by correcting the signal output of the old filter, applied to data out-of-sample. A rather smart idea as we will see. With the coefficients of the old filter are kept fixed, we enter into the frequency world of the output of the ‘old’ filter to gain information on optimizing the new filter. Only the coefficients of the new updated filter are optimized, and can be optimized anytime new data becomes available. This adaptive process is dynamic in the sense that we require new information to stream in in order to update the new signal by constructing a new filter. Once the new filter is constructed, the newly adapted signal is built by first applying the old filter to the data to produce the initial (non-updated) signal from the new data, then the newly constructed filter optimized from this output is then applied to the ‘old’ signal producing the smarter updated signal. Below is an outline of this algorithm for dynamic adaptive filtering stripped of much of the mathematical details. A more in-depth look at the mathematical details of MDFA and this newly proposed adaptive filtering method can be found in section 10.1 of the Elements paper by Wildi.
Basic Algorithm
We begin with a target time series $Y_t$, $t=1,\ldots, N$ from which we wish to extract a signal, and along with it a set of $M$ explanatory time series $Y_{j,t}$, $t=1,\ldots,N$, $j=1,\ldots,M$ that may help in describing the dynamics of our target time series $Y_t$. Note that in many applications, such as financial trading, we normally set $Y_{1,t} = Y_t$ so that our target time series is included in the explanatory time series set, which makes sense since it is the only known time series to perfectly describe itself (however, not in every signal extraction applications is this a good idea. See for example the GDP filtering work of Wildi here)To extract the initial signal in the given data set (in-sample), we define a target filter $\Gamma(\omega)$, that lives on the frequency domain $\omega \in [0,\pi]$. We define the architecture of the filter metric space for the initial signal extraction by the set of parameters $\Theta_0 := (L, \Gamma, \alpha, \lambda, i1, i2, \lambda_{s}, \lambda_{d}, \lambda{c})$, where $L$ is the desired length of the filter, $\alpha$ and $\lambda$ are the smoothness and timeliness customization controls, and $\lambda_{s}, \lambda_{d}, \lambda{c}$ are the regularization parameters for smooth, decay, and cross, respectively. Once the filter is computed, we obtain a collection of filter coefficients $b^j_l$, $l=0,\ldots,L-1$ for each explanatory time series $j=1,\ldots,M$. The in-sample real-time signal $X_t$, $t = L-1,\ldots,N$ is then produced by applying the filter coefficients on each respective explanatory series.
Now suppose we have new information flowing. With each new observation in our explanatory series $Y_{j,t}$, $t=N+1,\ldots$, we can apply the filter coefficients $b^j_l$ to obtain the extracted signal $X_t$ for the real-time estimate of the desired signal at each new observation $t=N+1,\ldots$. This is, of course, out-of-sample signal extraction. With the new information available from say $t=N+1$ to $t=N+\tilde{N}$, we wish to update our signal to include this new information. Instead of recomputing the entire filter for the $N+\tilde{N}$, a smarter idea recently proposed last month by Wildi in his MDFA blog is to use the output produced by applying each individual filter coefficient set $b^j_l$ on their respective explanatory series as input into building the newly updated filter $X_{j,t} = \sum_{l=0}^L b^j_l Y_{j,t-l}$. We thus create a new set of $M$ time series $X_{j,t}$, $t=N+1,\ldots,\tilde{N}$ and thus the filtered explanatory data series become the input to the MDFA solver, where we now solve for a new set of filter coefficients $b^j_{l,new}$ to be applied on the output of the old filter of the new incoming data. In this new filter construction, we build a new architecture for the signal extraction, where a whole new set of parameters can be used $\Theta_1 := (L_1, \Gamma, \tilde{\alpha}, \tilde{\lambda}, i1, i2, \tilde{\lambda}_{s}, \tilde{\lambda}_{d}, \tilde{\lambda}_{c})$. This is the main idea behind this dynamic adaptive filtering process: we are building a signal extraction architecture within another signal extraction architecture since we are basing this new update design on previous signal extraction performance. Furthermore, since a much shorter span of observations, namely $\tilde{N} << N$, is being used to construct the new filters, one of the advantages of this filter updating is that it is extremely fast, as well as being effective. As we will show in the next section of this article, all aspects of this dynamic adaptive filtering can be easily controlled, tested, and applied in the MDFA module of iMetrica using a new adaptive filtering control panel. One can control all aspects, from filter length to all the filter parameters in the new updated filter design, and then apply the results to out-of-sample data to compare performance.
Dynamic Adaptive Filtering Interface in iMetrica
The adaptive filtering capabilities in iMetrica are controlled by an interface that allows for adjusting all aspects of the adaptive filter, including number of observations, filter length $L$, customization controls for timeliness and smoothness, and controls for regularization. The process for controlling and applying dynamic adaptive filtering in iMetrica is accomplished as follows. Firstly, the following two things are required in order to perform dynamic adaptive filtering.
1. Data. A target time series and (optional) $M$ explanatory series that describe the target series all available on $N$ observations for in-sample filter computation along with a stream of future information flow (i.e. an additional set of, say $\tilde{N}$, future observations for each of the $M + 1$ series.
2. An initial set of optimized filter coefficients $b^j_l$ for the signal of the data in-sample.
With these two prerequisites, we are now ready to test different dynamic adaptive filtering strategies. Figure 1 shows the MDFA module interface with time series data of a target series (shown in red) and four explanatory series (not plotted). Using the parameter configuration shown in Figure 1, an initial filter for computing the signal (green plot) that has been optimized in-sample on 300 observations of data and then applied to 30 out-of-sample observations (shown in the blue shaded region). As these final 30 observations of the signal have been produced using 30 out-of-sample observations, we can take note of its out-of-sample performance. Here, the performance of the signal has much room to improve. In this example, we use simulated data (conditionally heteroskedastic data generating process to emulate log-return type data) so that we are able to compare the computed updated signals with a high-order approximation of the target symmetric “perfect” signal (shown in gray in Figure 1).
Figure 1. The original signal (green) built using 300 observations in-sample, and then applied to 30 out-of-sample observations. A high-order approximation to the target symmetric filter is plotted in gray. The blue shaded region is the region in which we wish to apply dynamic filter updating.
Now suppose we wish to improve performance of the signal in future out-of-sample observations by updating the filter coefficients to produce better smoothness, timeliness, and regularization properties. The first step is to ensure that the “Recompute Filter” option is not on (the checkbox in the Real-Time Filter Design panel. This should have been done already to produce the out-of-sample signal). Then go to the MDFA menu at the top of the software and click on “Adaptive Update”. This will pop open the Adaptive Filtering control panel from which we control everything in the new filter updating coefficients (see Figure 2).
Figure 2. The panel interface for controlling every aspect of updating a filter in real-time.
The controls on the Adaptive Filtering panel are explained as follows:
• Obs. Sets the number of the latest observations used in the filter update. This is normally set to however many new observations out-of-sample have been streamed into the time series since the last filter computation. Although one can certainly include observations from the original in-sample period as well by simply setting Obs to a number higher than the number of recent out-of-sample observations. The minimum amount of observations is 10 and the maximum is the total length of the time series.
• L. Sets the length of the updating filter. Minimum is 5 and maximum is the number of observations minus 5.
• $\lambda$ and $\alpha$. The customization of timeliness and smoothness parameters for the filter construction. These controls are strictly for the updating filter and independent of the ‘old’ filter.
• Adaptive Update. Once content with the settings of the update filter, press this button to compute the new filter and apply to the data. The results of the effects of the new filter will automatically appear in the main plotting canvas, specifically in the region of interest (shaded by blue, see blow).
• Auto Update. A check box that, if turned on, will automatically compute the new filter for any changes in the filter parameters and automatically plots the effects of the new filter in the main plotting canvas. This is a nice option to use when visually testing the output of the new filter as one can automatically see effects from any small changes to the parameter setting of the filter. This option also renders the “Adaptive Update” button obsolete.
• Shade Region. This check box, when activated, will shade the windowing region at the end of time series in which the updating is taking place. Provides a convenient way to pinpoint the exact region of interest for signal updating. The shaded region will appear in a dark blue shade (as shown in Figures 1, 4,6, and 7).
• Plot Updates. Clicking this checkbox on and off will plot the newly updated signal (on position) or the older signal (off position). This is a convenient feature as one is able to easily visually compare the new updated signal with the old signal to test for its effectiveness. If adding out-of-sample data and this feature is turned on, it will also apply the new updated filter coefficients to the new data as it comes in. If in the off position, it will only apply the ‘old’ filter coefficients.
• Regularization. All the regularization controls for the updating filter.
To update a signal in real-time, first select the number of observations $\tilde{N}$ and the length of the filter from the Obs and L sliding scrollbars, respectively. This will be the total number of observations used in the adaptive updating. For example, when new dynamics appear in the time series out-of-sample that the original old filter was not able to capture, the filter updating should include this new information. Click the checkbox marked Shade Region to highlight in a dark shade of blue the region in which the updated signal will be computed (this is shown in Figure 1). When the number of observations or length of filter changes, the shaded region reflects these changes and adjusts accordingly. After the region of interest is selected, customization and regularization of the signal can then be applied using the sliding scrollbars. Click the “Auto Update” checkbox to the ‘on’ position to see the effects of the parameterization on the signal computed in the highlighted region automatically. Once content with the filter parameterization, visually comparing the new updated signal with the old signal can be achieved simply by toggling the Plot Updates checkbox. To apply this new filter configuration to out-of-sample data, simply add more out-of-sample data by clicking the out-of-sample slider scrollbar control on the Real-Time Direct Filter control panel (provided that more out-of-sample data is available). This will automatically apply the ‘old’ original filter along with the updated filter on the new incoming out-of-sample data. If not content with the updated signal, simply remove the new out-of-sample data by clicking ‘back’ in the out-of-sample scrollbar, and adjust the parameters to your liking and try again. To continuously update the signal, simply reapply the above process as new out-of-sample data is added. As long as the “Plot Updates” is turned on, the newly adapted signal will always be plotted in the windowed region of interest. See Figures 4-7 to see this process in action.
In this example, as previously mentioned, we computed the original signal in-sample using 300 observations and then applied the filter coefficients to 30 out-of-sample observations (this was produced by checking “Recompute Filter” off). This is plotted in Figure 4, with the blue shaded region highlighting the 30 latest observations, our region of interest. Notice a significant mangling of timeliness and signal amplification in the pass-band of the filter. This is due to bad properties of the filter coefficients. Not enough regularization was applied. Surely enough, the amplitude of the frequency response function in the original filter shows the overshooting in the pass-band (see Figure 5). To improve this signal, we apply an adaptive update by launching the Adaptive Update menu and configuring the new filter. Figure 6 shows the updated filter in the windowed region, where we chose a combination of timeliness and light regularization. There is a significant improvement in the timeliness of the signal. Any changes in the parameterization of the filter space is automatically computed and plotted on the canvas, a huge convenience as we can easily test different parameter configurations to easily identify the signal that satisfies the priorities of the user. In the final plot, Figure 7, we have chosen a configuration with a high amount of regularization to prevent overfitting. Compared with the previous two signals in the region of interest (Figures 4 and 6), we see an even greater mollification of the unwanted amplitude overshooting in the signal, without compromising with a lack of timeliness and smoothness properties. A high-order approximation to the targeted symmetric filter is also plotted in this example for comparison convenience (since the data is simulated, we know the future data, and hence the symmetric filter).
Tune in later this week for an example of Dynamic Adaptive Filtering applied to financial trading.
Figure 4. Plot of the signal out-of-sample before applying an update to the signal by allocating the 30 most recent out-of-sample observations and computing a new filter of length 10. The blue shaded region shows the updating region. Here the original old filter constructed in-sample has been applied to the 30 out-of-sample observations and we notice significant mangling of timeliness and signal amplification in the pass-band of the filter. This is due to bad properties of the filter coefficients. Not enough regularization was applied.
Figure 5. The overshooting in the pass-band of the frequency response function multivariate filter. The spikes above one in the pass-band indicate this and will most-likely produce overshooting in the signal out-of-sample.
Figure 6 After filter updating in the final 30 observations. We chose the filter settings in the adaptive filter settings to improve timeliness with a small amount of smoothing. Furthermore, regularization (smooth, decay) was applied to ensure no overfitting. Notice how the properties of the signal are vastly improved (namely timeliness and little to no overshooting).
Figure 7. Not satisfied with the results of our filter update, we can easily adjust the parameters more to find a satisfying configuration. In this example, since the data is simulated, I’ve computed the symmetric filter to compare my results with the theoretically “perfect” filter. After further adjusting regularization parameters, I end up with this signal shown in the plot. Here, the gray signal is a high-order approximation to the target symmetric “perfect” signal. The result is a very close fit to the target signal with no overfitting.
Figure 1: A trading signal produced in iMetrica for the daily price index of GOOG (Google) using the log-returns of GOOG and AAPL (Apple) as the explanatory data, The blue-pink line represents the account wealth over time, with a 89 percent return on investment in 16 months time (GOOG recorded a 23 percent return during this time). The green line represents the trading signal built using the MDFA module using the hierarchy of parameters described in this article. The gray line is the log price of GOOG from June 6 2011 to November 16 2012.
In this article, we give an in-depth look at the hierarchy of financial trading parameters involved in building financial trading signals using the powerful and versatile real-time multivariate direct filtering approach (MDFA, Wildi 2006,2008,2012), the principle method used in the financial trading interface of iMetrica. Our aim is to clearly identify the characteristics of each parameter involved in constructing trading signals using the MDFA module in iMetrica as well as what effects (if any) the parameter will have on building trading signals and their performance.
With the many different parameters at one’s disposal for computing a signal for virtually any type of financial data and using any financial priority profile, naturally there exists a hierarchy associated with these parameters that all have well-defined mathematical definitions and properties. We propose a categorization of these parameters into three levels according to the clarity on their effect in building robust trading signals. Below are the four main control panels used in the MDFA module for the Financial Trading Interface (shown in Figure 1). They will be referenced throughout the remainder of this article.
Figure 2: The interface for controlling many of the parameters involved in MDFA. Adjusting any of these parameters will automatically compute the new filter and signal output with the new set of parameters and plot the results on the MDFA module plotting canvases.
Figure 3: The main interface for building the target symmetric filter that is used for computing the real-time (nonsymmetric) filter and output signal. Many of the desired risk/reward properties are controlled in this interface. One can control every aspect of the target filter as well as spectral densities used to compute the optimal filter in the frequency domain.
Figure 4: The main interface for constructing Zero-Pole Combination filters, the original paradigm for real-time direct filtering. Here, one can control all the parameters involved in ZPC filtering, visualize the frequency domain characteristics of the filter, and inject the filter into the I-MDFA filter to create “hybrid” filters.
Figure 5: The basic trading regulation parameters currently offered in the Financial Trading Interface. This panel is accessed by using the Financial Trading menu at the top of the software. Here, we have direct control over setting the trading frequency, the trading costs per transaction, and the risk-free rate for computing the Sharpe Ration, all controlled by simply sliding the bars to the desired level. One can also set the option to short sell during the trading period (provided that one is able to do so with the type of financial asset being traded).
The Primary Parameters:
• Timeliness of signal. The timeliness of the signal controls the quality of the phase characteristics in the real-time filter that computes the trading signal. Namely, it can control how well turning points (momentum changes) are detected in the financial data while minimizing the phase error in the filter. Bad timeliness properties will lead to a large delay in detecting up/downswings in momentum. Good timeliness properties lead to anticipated detection of momentum in real-time. However, the timeliness must be controlled by smoothness, as too much timeliness leads to the addition of unwanted noise in the trading signal, leading to unnecessary unwanted trades. The timeliness of the filter is governed by the $\lambda$ parameter that controls the phase error in the MDFA optimization. This is done by using the sliding scrollbar marked $\lambda$ in the Real-Time Filter Design in Figure 2. One can also control the timeliness property for ZPC filters using the $\lambda$ scrollbar in the ZPC Filter Design panel (Figure 4).
• Smoothness of signal. The smoothness of the signal is related to how well the filter has suppressed the unwanted frequency information in the financial data, resulting in a smoother trading signal that corresponds more directly to the targeted signal and trading frequency. A signal that has been submitted to too much smoothing however will lose any important timeliness advantages, resulting in delayed or no trades at all. The smoothness of the filter can be adjusted through using the $\alpha$ parameter that controls the error in the stop-band between the targeted filter and the computed concurrent filter. The smoothness parameter is found on the Real-Time Filter Design interface in the sliding scrollbar marked $W(\omega)$ (see Figure 2) and in the sliding scrollbar marked $\alpha$ in the ZPC Filter Design panel (see Figure 4).
• Quantization of information. In this sense, the quantization of information relates to how much past information is used to construct the trading signal. In MDFA, it is controlled by the length of the filter $L$ and is found on the Real-Time Filter Design interface (see Figure 2). In theory, as the filter length $L$ gets larger. the more past information from the financial time series is used resulting in a better approximation of the targeted filter. However, as the saying goes, there’s no such thing as a free lunch: increasing the filter length adds more degrees of freedom, which then leads to the age-old problem of over-fitting. The result: increased nonsense at the most concurrent observation of the signal and chaos out-of-sample. Fortunately, we can relieve the problem of over-fitting by using regularization (see Secondary Parameters). The length of the filter is controlled in the sliding scrollbar marked Order-$L$ in the Real-Time Filter Design panel (Figure 2).
As you might have suspected, there exists a so-called “uncertainty principle” regarding the timeliness and smoothness of the signal. Namely, one cannot achieve a perfectly timely signal (zero phase error in the filter) while at the same time remaining certain that the timely signal estimate is free of unwanted “noise” (perfectly filtered data in the stop-band of the filter). The greater the timeliness (better phase error), the lesser the smoothness (suppression of unwanted high-frequency noise). A happy combination of these two parameters is always desired, and thankfully there exists in iMetrica an interface to optimize these two parameters to achieve a perfect balance given one’s financial trading priorities. There has been much to say on this real-time direct filter “uncertainty” principle, and the interested reader can seek the gory mathematical details in an original paper by the inventor and good friend and colleague Professor Marc Wildi here.
The Secondary Parameters
Regularization of filters is the act of projecting the filter space into a lower dimensional space,reducing the effective number of degrees of freedom. Recently introduced by Wildi in 2012 (see the Elements paper), regularization has three different members to adjust according to the preferences of the signal extraction problem at hand and the data. The regularization parameters are classified as secondary parameters and are found in the Additional Filter Ingredients section in the lower portion of the Real-Time Filter Design interface (Figure 2). The regularization parameters are described as follows.
• Regularization: smoothness. Not to be confused with the smoothness parameter found in the primary list of parameters, this regularization technique serves to project the filter coefficients of the trading signal into an approximation space satisfying a smoothness requirement, namely that the finite differences of the coefficients up to a certain order defined by the smoothness parameter are kept relatively small. This ultimately has the effect that the parameters appear smoother as the smooth parameter increases. Furthermore, as the approximation space becomes more “regularized” according to the requirement that solutions have “smoother” solutions, the effective degrees of freedom decrease and chances of over-fitting will decrease as well. The direct consequences of applying this type of regularization on the signal output are typically quite subtle, and depends clearly on how much smoothness is being applied to the coefficients. Personally, I usually begin with this parameter for my regularization needs to decrease the number of effective degrees of freedom and improve out-of-sample performance.
• Regularization: decay. Employing the decay parameter ensures that the coefficients of the filter decay to zero at a certain rate as the lag of the filter increases. In effect, it is another form of information quantization as the trading signal will tend to lessen the importance of past information as the decay increases. This rate is governed by two decay parameter and higher the value, the faster the values decrease to zero. The first decay parameter adjusts the strength of the decay. The second parameter adjusts for how fast the coefficients decay to zero. Usually, just a slight touch on the strength of the decay and then adjusting for the speed of the decay is the order in which to proceed for these parameters. As with the smoothing regularization, the number of effective degrees of freedom will (in most cases) decreases as the decay parameter decreases, which is a good thing (in most cases).
• Regularization: cross correlation. Used for building trading signals with multivariate data only, this regularization effect groups the latitudinal structure of the multivariate time series more closely, resulting in more weighted estimate of the target filter using the target data frequency information. As the cross regularization parameter increases, the filter coefficients for each time series tend to converge towards each other. It should typically be used in a last effort to control for over-fitting and should only be used if the financial time series data is on the same scale and all highly correlated.
The Tertiary Parameters
• Phase-delay customization. The phase-delay of the filter at frequency zero, defined by the instantaneous rate of change of a filter’s phase at frequency zero, characterizes important information related to the timeliness of the filter. One can directly ensure that the phase delay of the filter at frequency zero is zero by adding constraints to the filter coefficients at computation time. This is done by setting the clicking the $i2$ option in the Real-Time Filter Design interface. To go further, one can even set the phase delay to an fixed value other than zero using the $i2$ scrollbar in the Additional Filter Ingredients box. Setting this value to a certain value (between -20 and 20 in the scrollbar) ensures that the phase delay at zero of the filter reacts as anticipated. It’s use and benefit is still under investigation. In any case, one can seamlessly test how this constraint affects the trading signal output in their own trading strategies directly by visualizing its performance in-sample using the Financial Trading canvas.
• Differencing weight. This option, found in the Real-Time Filter Design interface as the checkbox labeled “d” (Figure 2), multiplies the frequency information (periodogram or discrete Fourier transform (DFT)) of the financial data by the weighting function $f(\omega) = 1/(1 - \exp(i \omega)), \omega \in (0,\pi)$, which is the reciprocal of the differencing operator in the frequency domain. Since the Financial Trading platform in iMetrica strictly uses log-return financial time series to build trading signals, the use of this weighting function is in a sense a frequency-based “de-differencing” of the differenced data. In many cases, using the differencing weight provides better timeliness properties for the filter and thus the trading signal.
In addition to these three levels of parameters used in building real-time trading signals, there is a collection of more exotic “parameterization” strategies that exist in the iMetica MDFA module for fine tuning and constructing boosting trading performance. However, these strategies require more time to develop, a bit of experimentation, and a keen eye for filtering. We will develop more information and tutorials about these advanced filtering techniques for constructing effective trading signals in iMetrica in future articles on this blog coming soon. For now, we just summarize their main ideas.
• Forecasting and Smoothing signals. Smoothing signals in time series, as its name implies, involves obtaining a smoother estimate of certain signal in the past. Since the real-time estimate of a signal value in the past involves using more recent values, the signal estimation becomes more symmetrical as past and future values at a point in the past are used to estimate the value of the signal. For example, if today is after market hours on Friday, we can obtain a better estimate of the targeted signal for Wednesday since we have information from Thursday and Friday. In the opposite manner, forecasting involves projecting a signal into the future. However, since the estimate becomes even more “anti-symmetric”, the estimate becomes more polluted with noise. How these smoothed and forecasted signals can be used for constructing buy/sell trading signals in real-time is still purely experimental. With iMetrica, building and testing strategies that improve trading performance using either smoothed and forecasted signals (or both), is available.To produce either a smoothed or forecasted signal, there is a lag scrollbar available in the Real-Time Filter Design interface under Additional Filter Ingredients that enables one to compute either a smooth or forecasted signal. Setting the lag value $k$ in the scrollbar to any integer between -10 and 10 and the signal with the set lag applied is automatically computed. For negative lag values $k$, the method produces a $k$ step-ahead forecast estimate of the signal. For positive values, the method produces a smoothed signal with a delay of $k$ observations.
• Customized spectral weighting functions. In the spirit of customizing a trading signal to fit one’s priorities in financial trading, one also has the option of customizing the spectral density estimate of the data generating process to any design one wishes. In the computation of the real-time filter, the periodogram (or DFTs in multivariate case) is used as the default estimate of the spectral density weighting function. This spectral density weighting function in theory is supposed to serve as the spectrum of the underlying data generating process (DGP). However, since we have no possible idea about the underlying DGP of the price movement of publicly traded financial assets (other than it’s supposed to be pretty darn close to a random walk according to the Efficient Market Hypothesis), the periodogram is the best thing to an unbiased estimate a mortal human can get and is the default option in the MDFA module of iMetrica. However, customization of this weighting function is certainly possible through the use of the Target Filter Design interface. Not only can one design their target filter for the approximation of the concurrent filter, but the spectral density weighting function of the DGP can also be customized using some of the available options readily available in the interface. We will discuss these features in a soon-to-come discussion and tutorial on advanced real-time filtering methods.
• Adaptive filtering. As perhaps the most advanced feature of the MDFA module, adaptive filtering is an elegant way to achieve building smarter filters based on previous filter realizations. With the goal of adaptive filtering being to improve certain properties of the output signal at each iteration without compensating with over-fitting, the adaptive process is of course highly nonlinear. In short, adaptive MDFA filtering is an iterative process in which a one begins with a desired filter, computes the output signal, and then uses the output signal as explanatory data in the next filtering round. At each iteration step, one has the freedom to change any properties of the filter that they desire, whether it be customization, regularization, adding negative lags, adding filter coefficient constraints, applying a ZPC filter, or even changing the pass-band in the target filter. The hope is to improve on certain properties of filter at each stage of the iterative process. An in-depth look at adaptive filtering and how to easily produce an adaptive filter using iMetrica is soon to come later this week.
iMetrica and Hybridometrics: Introduction
The high-frequency Financial Trading interface of iMetrica. Easily construct in-sample trading strategies with an array of optimizers unique to iMetrica and then employ the strategies out-of-sample to test and fine-tune the trading performance.
This blog serves as an introduction and tutorial to Hybridometrics using iMetrica. Hybridometrics is a term used to express the analysis, modeling, signal extraction, and forecasting of univariate and multivariate financial and economic time series data using a combination of model-based and non-model-based methodologies. Ideal combinations of computational paradigms and methodologies used in hybridometrics include, but are not limited to, traditional stochastic models such as (S)ARIMA models, GARCH models, and multivariate stochastic volatiluty models combined with empirical mode decomposition techniques and the multivariate direct filter approach (MDFA). The goal of hybridometric modeling is to obtain signal extractions and forecasts, for official use or government use, all the way to building high-frequency financial trading strategies, that perform better than using only model or non-model based methods alone. In other words, hybridometrics seeks to extract the advantages of different paradigms combined to outperform traditional approaches to time series modeling. The iMetrica software package offers the most versatile and computationally efficient portal to this newly proposed time series modeling paradigm, all while remaining surprisingly easy to use.
The iMetrica software package is a unique system of econometric and financial trading tools that focuses on speed, user interaction, visualization tools, and point-and-click simplicity in building models for time series data of all types. Written entirely in GNU C and Fortran with a rich interactive interface written in Java, the iMetrica software offers an abundance of econometric tools for signal extraction and forecasting in multivariate time series that are both easily accessible with the click of a mouse button and fast with results computed and plotted instantaneously without the need for creating output data files or calling exterior plotting devices.
One powerful feature that is unique to the iMetrica software is the innate capability of easily combining both model-based and non-model based methodologies for designing data forecasts, signal extraction filters, or high-frequency financial trading strategies. Furthermore, the strategies can be computed and tested both in-sample and out-of-sample using an easy to use built-in data partitioner that effectively partitions the data into an in-sample storage where models and filters are computed and then an out-of-sample storage where new data is applied to the in-sample strategy to test for robustness, over-fitting, and many other desired properties. This gives the user complete liberty in creating a fast and efficient test-bed for implementing signal extractions, forecasting regimes, or financial trading strategies.
The iMetrica software environment includes five interacting time series analysis modules for building hybrid forecasts, signal extractions, and trading strategies.
• uSimX13 – A computational environment for univariate seasonal auto-regressive integrated moving-average (SARIMA) modeling and simulation using X-13ARIMA-SEATS. Features an interactive approach to modeling seasonal economic time series with SARIMA models and automatic outlier detection, trading day, and holiday regressor effects. Also includes a suite of model comparison tools using both modern and goodness-of-fit signal extraction diagnostics.
• BayesCronos – An interactive time series module for signal extraction and forecasting of multivariate economic and financial time series focusing on Bayesian computation and simulation. This module includes a multitude of models including ARIMA, GARCH, EGARCH, Stochastic Volatility, Multivariate Factor Stochastic Volatility, Dynamic Factor, and Multivariate High-Frequency-Based Volatility (HEAVY), with more models continuously being added. For most of the models featured, one can compute the Bayesian and/or the Quasi-Maximum-Likelihood estimated model fits using either a Metropolis-Hastings Monte Carlo Markov Chain approach (Bayesian) or a QMLE formulation for computing the model parameters estimates. Using a convenient model selection panel interface, complete access to model-type, model parameter dimensions, prior distribution parameters is seamlessly available. In the case of Bayesian estimation, one has complete control over the prior distributions of the model parameters and offers interactive visualization of the Monte Carlo Markov Chain parameter samples. For each model, up to 10 sample 36-steps ahead forecasts can be produced and visualized instantaneously along with other important model features such as model residuals, computed volatility, forecasted volatility, factor models, and more. The results can then be easily exported to other modules in iMetrica for additional filtering and/or modeling.
• MDFA – An interactive interface to the most comprehensive multivariate real-time direct filter analysis and computation environment in the world. Build real-time filters using both I-MDFA and Zero-Pole Combination (ZPC) filter constructions. The module includes interactive access to timeliness, smoothing, and accuracy controls for filter customization along with parameters for filter regularization to control overfitting. More advanced features include an interface for building adaptive filters, and many controls for filter optimization, customization, data forecasting, and target filter construction.
• State Space Modeling – A module for building observed component ARIMA and regression models for univariate economic time series. Similar to the uSimX13 module, the State Space Modeling environment focuses on modeling and forecasting economic time series data, but with much more generality than SARIMA models. An aggregation of observed stochastic components in the form of ARIMA models are stipulated for the time series data (for example trend + seasonal + irregular) and then regression components to model outliers, holiday, and trading day effects are added to the stochastic components giving ultimate flexibility in model building. The module uses regCMPNT, a suite of Fortran code written at the US Census Bureau, for the maximum likelihood and Kalman filter computational routines.
• EMD – The EMD module offers a time-frequency decomposition environment for the time-frequency analysis of time series data. The module offers both the original empirical mode decomposition technique of Huan et al. using cubic splines, along with an adaptive approach using reproducing kernels and direct-filtering. This empirical decomposition technique decomposes nonlinear and nonstationary time series into amplitude modulated and frequency modulated (AM-FM) components and then computes the intrinsic phase and instantaneous frequency components from the FM components. All plots of the components as well as the time-frequency heat maps are generated instantaneously.
Along with these modules, there is also a data control module that handles all aspects of time series data input and export. Within this main data control hub, one can import multivariate time series data from a multitude of file formats, as well as download financial time series data directly from Yahoo! finance or another source such as Reuters for higher-frequency financial data. Once the data is loaded, the data can be normalized, scaled, demeaned, and/or log-transformed with a simple slider and button controls, with the effects being plotted on the graphic canvas instantaneously.
Another great feature of the iMetrica software is the ability to learn more about time series modeling through the using of data simulators. The data control module includes an array of data simulating panels for simulating data from a multitude of both univariate and multivariate time series models. With access to control the number of observations, the random seed for the innovation process, the innovation process distribution, and the model parameters, simulated data can be constructed for any type of economic or financial time time series imaginable. The different types of models include (S)ARIMA models, GARCH models, correlated cycle models, trend models, multivariate factor stochastic volatility models, and HEAVY models. From simulating data and toggling the parameters, one can visualize instantly the effects of the each parameter on the simulated data. The data can then be exported to any of the modules for practicing and honing one’s skills in hybrid modeling, signal extraction, and forecasting.
Keep visiting this blog frequently for continuous updates, tutorials, and proposals in the field of econometrics, signal extraction, forecasting, and high-frequency financial trading. using hybridometrics and iMetrica.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 61, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44913122057914734, "perplexity": 1057.4713373880434}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573104.24/warc/CC-MAIN-20220817183340-20220817213340-00259.warc.gz"}
|
http://mathematicsi.com/implicit-differentiation/
|
# Implicit differentiation
This chapter explores implicit differentiation. The chapter covers differentiating a function defined implicitly; Fin...
This chapter explores implicit differentiation. The chapter covers differentiating a function defined implicitly; Finding equations of tangents and normals to curves defined implicitly. Before attempting this chapter you must have prior knowledge of basic differentiation, and tangent and normals. The functions we’ve explored so far are in the form of; Functions can also be expressed implicitly. These are functions which cannot be rearranged into form; An example is that of a circle shown below. …this can be rearranged into the form y=f(x) but it would look too complicated. Let’s try to do that; It is better if we leave it as; Below are other examples; You may have noticed that the functions are a mixture of x’s and y’s.
## Differentiating implicit functions
Below is a function that has been defined implicitly; Suppose we wanted to find dy/dx. Notice this is different from the usual y=f(x). It simply means we find the differential of the function as in; …for the function in question; …it means we differentiate each term one at a time so we have; …we know that; Now we have to find the differential of xy; …we’re trying to find; …we can use the product rule so we get; …so now we have; …we can rearrange it to make dy/dx the subject; Above we have managed to find dy/dx Below are more examples;
### Example
…we have; …we can use the chain rule to differentiate y2;
### Example
…we first use the product rule; …we then use the chain rule to differentiate y3 according to y;
### Example
…we use the product rule first; we use the chain rule for; …thus we get;
### Example
…we have to use the chain rule on sin y;
### Example
[ …we’re trying to find; …we differentiate each part separately; We do each part using the chain rule;
## Finding the tangent
We can also find tangents and normals to curves defined implicitly, for example; Finding the equation of the tangent to the curve at the point (1, 4) To find the gradient we differentiate, that is; …we get; [] …we substitute in the point (1, 4) to get; Now we can use; …we substitute in to get; The answer for the equation of the tangent is;
## Exam question
The equation of a circle is given as;
• What is the centre and radius of the circle?
• Find the coordinates where the circle crosses the line x=4
• Find the equations of the normals to the circle at these points
• Where do the normals interest?
The centre of the circle is (1, -2) and the radius is √25 = 5 Below is the circle; Next we have to find the coordinates at x=4. To do that we substitute x with 4 into the function. This is shown on the circle below; …we have; we continue to simplify; …remember we also have the negative square root. The coordinates have been shown on the circle below; [IMAGE] So we have; [IMAGE] and [IMAGE] We can conclude that the coordinates are; [IMAGE] The next question to solve is; Find the equation of the normals to the circle at these points. Below are the normals that we need to find; [IMAGE] The normals intersect at the centre; You must know that all normals to circles cross at the centre as a radius is always at 90° to the tangent to the circle. We shall find the equations of the normals below. First we differentiate to find the gradients of the tangents, we have; [IMAGE] …differentiating we get; [IMAGE] …at (4, 2) we have; [IMAGE] [IMAGE] …and at (4, -6) [IMAGE] [IMAGE] So now we know that at (4, 2) the gradient of the normal is 4/3. To find the normal we use; [IMAGE] So we have… [IMAGE] [IMAGE] We also know that the gradient of the normal at (4, -6) is; [IMAGE] Now we can form the equation of the normal; [IMAGE] [IMAGE] [IMAGE] So the two normals are; [IMAGE] and [IMAGE] Next we have to find where the normals intersect. We solve the normals equations simultaneously to find where the normals intersect. So we have; [IMAGE] [IMAGE] [IMAGE] Now we can substitute 1 for x into; [IMAGE] [IMAGE] [IMAGE] The normals intersect at (1, -2) which of course is the centre of the circle as we saw above.
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9046218991279602, "perplexity": 814.4745110291357}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860111324.43/warc/CC-MAIN-20160428161511-00072-ip-10-239-7-51.ec2.internal.warc.gz"}
|
https://www.authorea.com/users/6245/articles/90118-power-estimation-for-non-standardized-multisite-studies/_show_article
|
# Power Estimation for Non-Standardized Multisite Studies
07/01/2016
Abstract
A concern for researchers planning multisite studies is that scanner and T1-weighted sequence-related biases on regional volumes could overshadow true effects, especially for studies with a heterogeneous set of scanners and sequences. Current approaches attempt to harmonize data by standardizing hardware, pulse sequences, and protocols, or by calibrating across sites using phantom-based corrections to ensure the same raw image intensities. We propose to avoid harmonization and phantom-based correction entirely. We hypothesized that the bias of estimated regional volumes is scaled between sites due to the contrast and gradient distortion differences between scanners and sequences. Given this assumption, we provide a new statistical framework and derive a power equation to define inclusion criteria for a set of sites based on the variability of their scaling factors. We estimated the scaling factors of 20 scanners with heterogeneous hardware and sequence parameters by scanning a single set of 12 subjects at sites across the United States and Europe. Regional volumes and their scaling factors were estimated for each site using Freesurfer’s segmentation algorithm and ordinary least squares, respectively. The scaling factors were validated by comparing the theoretical and simulated power curves, performing a leave-one-out calibration of regional volumes, and evaluating the absolute agreement of all regional volumes between sites before and after calibration. Using our derived power equation, we were able to define the conditions under which harmonization is not necessary to achieve 80% power. This approach can inform choice of processing pipelines and outcome metrics for multisite studies based on scaling factor variability across sites, enabling collaboration between clinical and research institutions.
## Introduction
The pooled or meta-analysis of regional brain volumes derived from T1-weighted MRI data across multiple sites is reliable when data is acquired with similar acquisition parameters (Cannon 2014, Ewers 2006, Jovicich 2006). The inherent scanner- and sequence-related noise of MRI volumetrics under heterogeneous acquisition parameters has prompted many groups to standardize protocols across imaging sites (Cannon 2014, Boccardi 2013, Weiner 2012). However, standardization across multiple sites can be prohibitively expensive and requires a significant effort to implement and maintain. At the other end of the spectrum, multisite studies without standardization can also be successful, albeit with extremely large sample sizes. The ENIGMA consortium, for example, combined scans of over 10,000 subjects from 25 sites with varying field strengths, scanner makes, acquisition protocols, and processing pipelines. The unusually large sample size enabled this consortium to provide robust phenotypic traits despite the variability of non-standardized MRI volumetrics and the power required to run a genome wide association study (GWAS) to identify modest effect sizes (Thompson 2014). These studies raise the following question: Is there a middle ground between fully standardizing a set of MRI scanners and recruiting thousands of subjects across a large number of sites? Eliminating the harmonization requirement for a multisite study would facilitate inclusion of retrospectively acquired data and data from sites with ongoing longitudinal studies that would not want to adjust their acquisition parameters.
Towards this goal, there is a large body of literature addressing the correction of geometric distortions that result from gradient non-linearities. These corrections fall into two main categories: phantom-based deformation field estimation and direct magnetic field gradient measurement-based deformation estimation, the latter of which requires extra hardware and spherical harmonic information from the manufacturer (Fonov 2010). Calibration phantoms, such as the Alzheimer’s Disease Neuroimaging Initiative (ADNI) (Gunter 2009) and LEGO® phantoms (Caramanos 2010), have been used by large multisite studies to correct for these geometric distortions, which also affect regional volume measurements. These studies have outlined various correction methods that significantly improve deformation field similarity between scanners. However, the relationship between the severity of gradient distortion and its effect on regional volumes, in particular, remains unclear. In the case of heterogeneous acquisitions, correction becomes especially difficult due to additional noise sources. Gradient hardware differences across sites are compounded with contrast variation due to sequence parameter changes. In order to properly evaluate the reproducibility of brain segmentation algorithms, these phantoms should resemble the humain brain in size, shape, and tissue distribution. Droby and colleagues evaluated the stability of a post-mortem brain phantom and found similar reproducibility of volumetric measurements to that of a healthy control (Droby 2015). In this study, we propose to measure between-site bias through direct calibration of regional volumes by imaging 12 common healthy controls at each site. Quantifying regional bias allows us to overcome between-site variability by increasing sample size to an optimal amount, rather than employing a phantom-based voxel-wise calibration scheme that corrects both contrast differences and geometric distortions.
We hypothesized that all differences in regional contrast and geometric distortion result in regional volumes that are consistently and linearly scaled from their true value. For a given region of interest (ROI), two mechanisms simultaneously impact the final boundary definition: (1) gradient nonlinearities cause distortion and (2) hardware (including scanner, field strength, and coils) and acquisition parameters modulate tissue contrast. Based on the results of Tardiff and colleagues, who found that contrast-to-noise ratio and contrast inhomogeneity from various pulse sequences and scanner strengths cause regional biases in VBM(Tardif 2010, Tardif 2009), we hypothesized that each ROI will scale differently at each site. Evidence for this scaling property can also be seen in the overall increase of gray matter volume and decrease of white matter volume of the ADNI-2 compared to the ADNI-1 protocols despite attempts to maintain compatibility between these protocols (Brunton 2013). It was also observed that hippocampal volumes were 1.17% larger on 3T scanners compared to the 1.5T scanners in the ADNI study (Wolz 2014). By imaging 12 subjects in 20 different scanners using varying acquisition schemes, we were able to estimate the scaling factor for each regional volume at each site. We also defined a framework for calculating the power of a multisite study as a function of the scaling factor variability between sites. This enables us to power a cross-sectional study, and to outline the conditions under which harmonization could be replaced by sample size optimization. This framework can also indicate which regional volumes are sufficiently reliable to investigate using a multisite approach.
Regional brain volumes are of interest in most neurological conditions, including healthy aging, and typically indicates the degree of neuronal degeneration. In this study, we investigate a number of well-defined regional brain volumetrics related to multiple sclerosis disease progression. Even though focal white matter lesions seen on MRI largely characterize multiple sclerosis (MS), lesion volumes are not strongly correlated with clinical disability (Filippi 1995, Furby 2010, Kappos 1999). Instead, global gray matter atrophy correlates better with clinical disability (for a review, see (Horakova 2012)), along with white matter volume, to a lesser extent (Sanfilipo 2006). In addition, regional gray matter atrophy measurements, such as thalamus (Cifelli 2002, Zivadinov 2013, Houtchens 2007, Wylezinska 2003) and caudate (Bermel 2003, Tao 2009) volumes, appear to be better predictors of disability (Fisher 2008, Fisniku 2008, Dalton 2004, Giorgio 2008).
## Theory
Linear mixed models are common in modeling data from multisite studies because metrics derived from scanner, protocol, and population heterogeneity may not have uncorrelated error terms when modeled in a general linear model (GLM), which violates a key assumption (Garson 2013). In fact, Fennema-Notestine and colleagues found that a mixed model, with scanner as a random effect, outperformed pooling data via GLM(Fennema-Notestine 2007) on a study on hippocampal volumes and aging. Since we are only interested in the effect of scanner-related heterogeneity, we assume that the relationship between the volumetrics and clinical factors of interest are the same at each site. This causes error terms to cluster by scanner and sequence type due to variation in field strengths, acquisition parameters, scanner makes, head coil configurations, and field inhomogeneities, to name a few (Cannon 2014). Linear mixed models, which include random effects and hierarchical effects, appropriately integrate observation-level data based on their clustering characteristics (Garson 2013). The model we propose in this study is similar to a mixed model, with a multiplicative effect instead of an additive effect. Our goal is to incorporate an MRI bias-related term in our model in order to optimize sample sizes.
We first defined the true, unobserved model for subject $$i$$ at site $$j$$ as:
$$U_{ij}=\beta_{00}+\beta_{10}X_{i,j}+\beta_{20}Z_{i,j}+\epsilon_{i,j}\\$$
Where $$U_{i,j}$$ is the unobserved value of the regional brain volume of interest (without any effects from the scanner), and $$\beta_{00},\beta_{10}$$ and $$\beta_{20}$$ are the true, unobserved, effect sizes. The covariates are $$Z_{i,j}$$, residuals are $$\epsilon_{i,j}$$, and the contrast vector, $$X_{i,j}$$, is given the weights $$X_{high},X_{low}=0.5,-0.5$$ so that $$\beta_{10}$$ is computed as the average difference between the high and low groups. For this derivation we assume an equal number of subjects observed at each site in the high and low groups with balanced covariates. $$\epsilon$$ is normally distributed with mean 0 and standard deviation $$\sigma_{0}$$.
We defined a site-level model using the notation of (Raudenbush 2000), to express the relationship between a brain metric that is scaled by $$a_{j}$$ as $$Y_{i,j}=a_{j}*U_{ij}$$ and high or low disease group $$X_{i,j}$$ for subject $$i=1,\ldots,n$$ at site $$j$$ as
$$Y_{i,j}=b_{0j}+b_{1,j}X_{i,j}+b_{2,j}Z_{i,j}+r_{i,j}\\$$
The site mean, disease effect, and covariate effect randomly vary between sites so the intercept and slope coefficients become dependent variables (Raudenbush 2000) and we assume:
$$\label{alphascale} \label{alphascale}b_{k,j}=a_{j}*\beta_{k,0}\\$$
where the true underlying coefficient, $$\beta_{k,0}$$ for $$k=0,1,2$$ is scaled randomly by each site. The major contributors to brain structure region of interest (ROI) boundary variability are contrast differences and gradient distortions, both of which adjust the boundary of the whole ROI rather than add a constant term. To reflect this property, we modeled the systematic error from each MRI sequence as a multiplicative ($$Y_{i,j}=a_{j}*Y_{i}$$) rather than additive ($$Y_{ij}=Y_{i}+a_{j}$$) error term. Similarly, the residual term is also scaled by site, $$r_{i,j}\sim N(0,a_{j}^{2}\sigma_{0}^{2})$$, and the scaling factor, $$a_{j}$$, is sampled from a normal distribution with mean $$\mu_{a}$$ and variance $$\sigma_{a}^{2}$$.
$$a_{j}\sim N(\mu_{a},\sigma_{a}^{2})\\$$
For identifiability, let $$\mu_{a}=1$$. The mean disease effect estimate, $$\beta_{1,j}$$ is defined as the mean brain metric volume difference in the high and low groups.
$$\label{meanbeta} \label{meanbeta}D_{Y,j}=\overline{Y_{H_{j}}}-\overline{Y_{L_{j}}}\\$$
The unconditional variance of the disease effect estimate at site $$j$$ is can be written in terms of the unobserved difference between groups before scaling, $$D_{U,j}=D_{Y,j}/a_{j}$$:
$$var[D_{Y,j}]=var[D_{U,j}a_{j}]=var[D_{U,j}]var[a_{j}]+var[D_{U,j}]E[a_{j}]^{2}+var[a_{j}]E[D_{U,j}]^{2}\\$$
Where we are assuming that $$D_{U,j}$$ and $$a_{j}$$ are independent, meaning that MRI-related biases are independent of the biological effects being studied. For the derivation of this formula, see the Appendix. Given the distribution of scaling factors and the variance of the true disease effect, $$var[D_{U,j}]=4\sigma_{0}^{2}/n$$, the equation simplifies to
$$\label{var_unobserved} \label{var_unobserved}var[D_{Y,j}]=\frac{4\sigma_{0}^{2}}{n}\mu_{a}^{2}+\frac{4\sigma_{0}^{2}}{n}\sigma_{a}^{2}+\sigma_{a}^{2}\beta_{10}^{2}\\$$
We standardize the equation by defining the coefficient of variability for the scaling factors as $$CV_{a}^{2}=({\frac{\sigma_{a}}{\mu_{a}}})^{2}$$, and the standardized true effect size as $$\delta=\frac{\beta_{10}}{\sigma_{0}}$$.
$$var[D_{Y,j}]=\mu_{a}^{2}\sigma_{0}^{2}\Big{(}\frac{4}{n}+CV_{a}^{2}\Big{(}\frac{4}{n}+\delta^{2}\Big{)}\Big{)}\\$$
Finally, the coefficients are averaged over $$J$$ sites to produce the overall estimate $$\hat{\beta_{10}}=\frac{1}{J}\sum_{j=1}^{J}D_{Y,j}$$, and
$$\label{avgest} \label{avgest}E[\hat{\beta_{10}}]=\frac{1}{J}\sum\limits_{j=1}^{J}{E[D_{Y,j}]}=\frac{\beta_{10}}{J}\sum\limits_{j=1}^{J}{E[a_{j}]}=\beta_{10}\mu_{a}\\$$
Note that this estimator is asympototically normally distributed when the number of centers, $$J$$, is fixed, because it is the average of asymptotically normal estimators. When the number of subjects per site is not equal, the maximum likelihood estimator is the average of the site-level estimates weighted by the standard error, and this is shown in the Appendix. The variance of the overall estimate can be expressed as
$$\label{avgvar} \label{avgvar}var[\hat{\beta_{10}}]=\frac{1}{J^{2}}\sum\limits_{j=1}^{J}{var[D_{Y,j}]}=\frac{\sigma_{0}^{2}\mu_{a}^{2}\Big{(}\frac{4}{n}+CV_{a}^{2}(\frac{4}{n}+\delta^{2})\Big{)}}{J}\\$$
In order to test the average disease effect under the null hypothesis that $$\beta_{1}=0$$, the non-central F distribution, $$F(1,J-1;\lambda)$$ (Raudenbush 2000) is applied, with the non-centrality parameter defined as
$$\label{eq:lambda} \label{eq:lambda}\lambda=\frac{E[\hat{\beta_{10}}]^{2}}{{var}[\hat{\beta_{10}}]}=\frac{J\delta^{2}}{\frac{4}{n}+CV_{a}^{2}(\frac{4}{n}+\delta^{2})}\\$$
Figure \ref{fig:power} shows power curves for small to medium effect sizes ($$\delta=0.2,0.3$$, defined in (Raudenbush 2000)), and a false positive rate of $$\alpha=0.002$$, which allows for 25 comparisons under Bonferroni correction, where the corrected $$\alpha=0.05$$. Power increases for larger $$\lambda$$ and maximizes at $$\lambda=\frac{Jn\delta^{2}}{4}$$ as $$CV_{a}$$ approaches 0. In this case, the power equation is dominated by the total number of subjects, as is the case for the GLM. However, even as the number of subjects per site, $$n$$, approaches infinity and for non-negligible $$CV_{a}$$, $$\lambda$$ is still bounded by $$\frac{J}{CV_{a}^{2}}$$. At this extreme, the power equation is largely influenced by the number of sites. This highlights the importance of the site-level sample size ($$J$$) in addition to the subject-level sample size ($$n$$) for power analyses, especially when there is larger variability between sites for metrics of interest. In the methods section, the acquisition protocols and the standard processing pipelines that were used to calculate $$CV_{a}$$ values of relevant regional brain volumes for MS are described, though this framework could be applied to any MRI derived metric.
We emphasize that the use of phantom subjects does not directly contribute to the power equation in Figure 1, as it does not account for any sort of calibration or scaling. However, it requires an estimate for $$CV_{a}$$, which is the variability of scaling biases between sites. The goal of this study is to provide researchers with estimates of $$CV_{a}$$ from our set of calibration phantoms and our set of non-standardized MRI acquisitions. For a standardized set of scanners, the values of $$CV_{a}$$ may be considered an upper bound.
## Acquisition
T1-weighted 3D-MPRAGE images were acquired from 12 healthy subjects (3 Male, 9 Female, ages 24-57) in 20 scanners across Europe and the United States. Institutional approval was acquired and signed consent was obtained for each subject at each site. These scanners varied in make and model, including all three major manufacturers: Siemens, GE, Philips. Two scans were acquired from each subject, where the subject got out of the scanner between scans for a couple minutes, and was repositioned and rescanned by the scanning technician of that particular site. Previously, Jovicich and colleagues showed that reproducible head positioning along the $$z$$ axis significantly reduced image intensity variability across sessions (Jovicich 2006). By repositioning in our study, a realistic measure of test-retest variability, which includes the repositioning consistency of each site’s scanning procedure, was captured. Because gradient distortion effects correspond to differences in z-positioning (Caramanos 2010), the average translation in the Z-direction between the two runs of each subject at each site was estimated with a rigid body registration.
Tables 1 through \ref{tab:acquisition4} show the acquisition parameters for all 20 scanners. Note that the definitions of repetition time (TR), inversion time (TI) and echo time (TE) vary by scanner make. For example, the TR in a Siemens scanner is the time between preparation pulses, while for Philips and GE, the TR is the time between excitation pulses. We decided to report the parameters according to the scanner make definition, rather than trying to make them uniform, because slightly different pulse programming rationales would make a fair comparison difficult. In addition, a 3D-FLASH sequence (TR=20ms, TE=4.92ms, flip angle=25 degrees, resolution=1mm isotropic) was acquired on healthy controls and MS patients at site 12, in order to compare differences in scaling factor estimates between patients and healthy controls.
## Processing
A neuroradiologist reviewed all images to screen for major artifacts and pathology. The standard Freesurfer (Fischl 2002) version 5.3.0 cross-sectional pipeline (recon-all) was run on each site’s native T1-weighted protocol, using the RedHat 7 operating system on IEEE 754 compliant hardware. Both 1.5T and 3T scans were run with the same parameters (without using the -3T flag), meaning that the non-uniformity correction parameters were kept at the default values. All Freesurfer results were quality controlled by evaluating the cortical gray matter segmentation and checking the linear transform to MNI305 space which is used to compute the estimated total intracranial volume (Buckner 2004). Scans were excluded from the study if the cortical gray matter segmentation misclassified parts of the cortex, or if the registration to MNI305 space was grossly innaccurate. Three scans were excluded for misregistration. Two exclusions were because of data transfer errors. Because of time constraints, some subjects were not able to be scanned. One of the 12 subjects could not travel to all the sites, and that subject was replaced by another of the same age and gender. The details of this are provided in the supplemental materials and the total number of scans is shown in tables 1 - \ref{tab:acquisition4}. 46 Freesurfer ROIs, including the left and right subcortical ROIs, from the aparc.stats tables, were studied. In this study we report on the ROIs relevant to the disease progression of MS, which include the gray matter volume (GMV), subcortical gray matter volume (scGMV), cortex volume (cVol), cortical white matter volume (cWMV), and the volumes of the lateral ventricle (LV), amygdala (amyg), thalamus (thal), hippocampus (hipp), caudate (caud). The remaining ROIs are reported in the supplemental materials.
Test-retest reliability, defined as ICC(1,1) (Friedman 2008), was computed across each site and protocol for the selected metrics using the ”psych” package in R (Revelle 2015). The between-site ICC(2,1) values were computed following the procedure from previous studies on multisite reliability (Friedman 2008, Cannon 2014). Variance components were calculated for a fully crossed random effects model for subject, site, and run using the ”lme4” package in R. Using the variance components, between site ICC was defined as
$$ICC_{BW}=\frac{\sigma_{subject}^{2}}{\sigma_{subject}^{2}+\sigma_{site}^{2}+\sigma_{run}^{2}+\sigma_{subject\times site}^{2}+\sigma_{unexplained}^{2}}\\$$
and an overall within-site ICC was defined as
$$ICC_{WI}=\frac{\sigma_{subject}^{2}+\sigma_{site}^{2}+\sigma_{subject\times site}^{2}}{\sigma_{subject}^{2}+\sigma_{site}^{2}+\sigma_{run}^{2}+\sigma_{subject\times site}^{2}+\sigma_{unexplained}^{2}}\\$$
Scaling factors between sites were estimated using ordinary least squares from the average of the scan-rescan volumes, referenced to average scan-rescan volumes from the UCSF site. The OLS was run with the intercept fixed at 0. $$CV_{a}$$ for each metric was calculated from the sampling distribution of scaling factor estimates $$\hat{a}$$ as follows:
$$\label{eq:cvadef} \label{eq:cvadef}CV_{a}=\frac{std(\hat{a})}{mean(\hat{a})}\\$$
## Scaling Factor Validation
Scaling factor estimates were validated under the assumption of scaled, systematic error, in 2 ways: first, by simulating power curves that take into account the uncertainty of the scaling factor estimate, and second, by a leave-one-out calibration. For the simulation, we generate data for each of the 20 sites included in this study. Subcortical gray matter volumes (scGMV) for each site were generated for two subject groups based on a small standardized effect size (Cohen’s d) of 0.2, which reflects the effect sizes seen in genomics studies. Age and gender were generated as matched covariates, where age was sampled from a normal distribution with mean and standard deviation set at 41 and 10 years, respectively. Gender was sampled from a binomial distribution with a probability of 60% female to match typical multiple sclerosis cohorts.
Coefficients were set on the intercept as 63.135 $$cm^{3}$$, $$\beta_{10}$$ as -.95 $$cm^{3}$$, covariates $$Z_{Age}$$ as -.25 $$cm^{3}$$/year and $$Z_{Gender}$$ as 4.6 $$cm^{3}$$. scGM volumes were generated in a linear model using these coefficients and additional noise was added from the residuals, which were sampled from a normal distribution with zero mean and standard deviation 5.03 $$cm^{3}$$. Next, the scGM volumes were scaled by each site’s calculated scaling factor and gaussian noise from the residuals of the scaling factor fit of that particular site were added.
$$scGMV_{site_{j}}=scGMV_{true_{j}}*a_{j}+N(0,\sigma_{fit_{j}}^{2})\\$$
The simulated dataset of each individual site was modeled via OLS, and an F score on $$X_{Group}$$ was calculated following our proposed statistical model:
$$F_{X_{Group}}=\frac{(\frac{1}{J}\sum_{j=1}^{20}\hat{\beta_{j}})^{2}}{\frac{1}{J^{2}}\sum_{j=1}^{20}\sigma^{2}_{j}}\\$$
A power curve was constructed by running the simulation 5000 times, where power for a particular p-value was defined as the average number of F values greater than the critical F for a set of false positive rates ranging from $$1e^{-4}-1e^{-2}$$. The critical F was calculated with degrees of freedom of the numerator and denominator as 1 and 19 respectively. The simulated power curve was compared to the derived theoretical power curve to evaluate how scaling factor uncertainty influences power estimates. If the scaling factors of each site, which were calculated from the 12 subjects, were not accurate, then the added residual noise from the scaling factor estimate would result in the simulated power curve deviating largely from the theoretical curve.
The scaling factors were also validated by calibrating the regional volumes of each site in a leave-one-out cross-validation. The calibrated volume for a particular subject $$i$$ and site $$j$$ was scaled by the scaling factor estimated from all subjects excluding subject $$i$$. Within- and between-site ICC’s were calculated for the calibrated volumes. If the scaling factor estimates were inaccurate, the between-site ICCs of calibrated regional volumes would be worse than the between-site ICCs of the original regional volumes. Additionally, the between-site ICC’s after calibration should be similar to those found for harmonized studies, such as (Cannon 2014).
Finally, to address the concern about whether these scaling factors could apply to a disease population, we calculated scaling factors from 12 healthy controls and 14 MS patients between 2 different sequences (3D-MPRAGE versus 3D-FLASH) at the UCSF scanner (site 12). The patients had a mean age of 51 years with standard deviation of 11 years, mean disease duration of 15 years with a standard deviation of 12 years, and mean Kurtzke Expanded Disability Status Scale (EDSS) (Kurtzke 1983) score of 2.8 with a standard deviation of 2.2.
The accuracy of our scaling factor estimates depends on the accuracy of tissue segmentation, but the lesions in MS specifically impact white matter and pial surface segmentations. Because of the effect of lesions on Freesurfer’s tissue classification, all images were manually corrected for lesions on the T1-weighted images by a neurologist after editing by Freesurfer’s quality assurance procedure, which included extensive topological white matter corrections, white matter mask edits, and pial edits on images that were not lesion filled. These manual edits altered the white matter surface so that white matter lesions were not misclassified as gray matter or non-brain tissue. The errors in white matter segmentations most typically occurred at the border of white matter and gray matter and around the ventricles. The errors in pial surface segmentations most typically occurred near the eyes (orbitofrontal) and the superior frontal or medial frontal lobes. Images that were still misclassified after thorough edits were removed from the analysis, because segmentations were not accurate enough to produce realistic scaling factor estimates.
## Results
Scan-rescan reliability for the 20 scanners is shown in tables 1 through \ref{tab:acquisition4}. The majority of scan-rescan reliabilities were greater than 80% for the selected Freesurfer-derived volumes, which included gray matter volume (GMV), cortical white matter volume (cWMV), cortex volume (cVol), lateral ventricle (LV), thalamus (thal), amygdala (amyg), caudate (caud), hippocampus (hipp), and estimated total intracranial volume (eTIV). However, the thalamus at sites 3 and 16 had low scan-rescan reproducibility, below 70%. The left hippocampus and amygdala at site 5 were also below 70%, and the left amygdala at site 16 was also low, at 55%. In addition, the average translation in the Z-direction across all sites was $$3.5mm\pm 3.7mm$$, which falls within the accuracy range reported by (Caramanos 2010). The repositioning Z-translation measurements for each site separately is reported in the supplemental materials.
Between- and within-site ICC’s are plotted with the calibrated ICC’s in Figure \ref{fig:calib}. The between-site ICC’s of the 46 ROIs improved, with the exception of the right lateral ventricle, which did not change after calibration, and the fifth ventricle, which had very low scan-rescan reliability, and is shown in the supplemental materials. The within-site ICC’s of the thalamus, hippocampus, and amygdala decreased slightly after calibration. Both calibrated and uncalibrated within-site ICC’s were greater than 90% for the MS related ROIs listed in this paper. For the full set of within- and between- site ICC’s of the Freesurfer aseg regions, see the Supplemental Materials.
Simulation results are shown in Figure \ref{fig:powersim}. The simulated and theoretical curves align closely when power is equal to 80%, but the simulated curve is slightly lower than the theoretical curve for power below 80%. This is probably due to the uncertainty in our scaling factor estimates.
Table \ref{tab:cva} shows the scaling factor variability ($$CV_{a}$$) for the selected ROIs, which range from 2 to 8 %. The full distribution of $$CV_{a}$$ for all the Freesurfer ROIs is shown in Figure \ref{fig:cv_j}. To derive the maximum acceptable $$CV_{a}$$ for 80% power, the theoretical power equation was solved at various subject and site sample sizes with the standardized effect size we detected in our local single center cohort (0.2). The distribution of $$CV_{a}$$ across all ROIs was plotted adjacent to the power curves (Figure \ref{fig:cv_j}) to understand how many ROIs would need to be calibrated for each case. Finally, figures \ref{fig:hcms_scGMV}, \ref{fig:hcms_GMV}, and \ref{fig:hcms_WMV} show the scaling factors from the calibration between two scanners with different sequences at UCSF. Scaling factors derived from the healthy controls (HC) and MS subjects were identical for subcortical gray matter volume (1.05) and very similar for cortical gray matter volume (1, 1.002 for HC, MS) and white matter volume (.967, .975 for HC, MS).
## Discussion
In this study we proposed a statistical model based on on the physics of MRI volumetric biases using the key assumption that biases between sites are scaled linearly. Variation in scaling factors could explain why a study may estimate different effect sizes based on the pulse sequence used. For example, (Streitbürger 2014) found significant effects of RF head coils, pulse sequences, and resolution on VBM results. The estimation of scaling factors in our model depends on good scan-rescan reliability. In our study, scan-rescan reliabilities for each scanner were generally $$>0.8$$ for Freesurfer-derived regional volumes. Volumes of cortex, cortical gray, subcortical gray, and cortical white matter parcellation had greater than 90% reliability for all 20 sites. The subcortical regions and estimated total intracranial volume had an average reliability of over 89%, however, some sites had much lower scan-rescan reliability. For example, the thalamus at sites 3 and 16 had test-retest reliabilities between 41 and 63 %. This could be explained by the visual quality control process of the segmented images, which focused on the cortical gray matter segmentation and the initial standard space registration only, due to time restrictions. Visually evaluating all regional segmentations may be unrealistic for a large multisite study. On the other hand, Jovicich and colleagues (Jovicich 2013) reported a low within-site ICC of the thalamus across sessions ($$0.765\pm 0.183$$) using the same freesurfer cross-sectional pipeline as this study. The poor between-site reliability (61%) of the thalamus is consistent with findings from (Schnack 2010), in which a multisite VBM analysis showed poor consistency in that region. Other segmentation algorithms may be more robust for subcortical regions in particular. Using FSL’s FIRST segmentation algorithm, Cannon and colleagues (Cannon 2014) report a between-site ICC of the thalamus of 0.95, compared to our calibrated between-site ICC of 0.78. FSL’s FIRST algorithm (Patenaude 2011) uses a Bayesian model of shape and intensity features to produce a more precise segmentation. Nugent and colleagues reported the reliability of the FIRST algorithm across 3 platforms. Their study of subcortical ROIs found a good scan-rescan reliability of 83%, but lower between-site ICCs ranging from 57% to 93% (Nugent 2013). The LEAP algorithm proposed by Wolz and colleagues (Wolz 2010) was shown to be extremely reliable with strong ICCs $$>0.97$$ for hippocampal segmentations (Wolz 2014). Another factor not accounted for in our segmentation results was the effect of partial voluming, which adds uncertainty to tissue volume estimates. In (Roche 2014), researchers developed a method to more accurately estimate partial volume effects using only T1-weighed images from the ADNI dataset. This approach resulted in higher classification accuracy between Alzheimer’s disease (AD) patients and mild cognitively impaired (MCI) patients from normal controls (NL). Designing optimized pipelines that are robust for each site, scanner make, and metric, is outside the scope of this paper. However, Kim and colleagues have developed a robust technique for tissue classification of heterogeneously acquired data that incorporates iterative bias field correction, registration, and classification (Kim 2013). Wang and colleagues developed a method to reduce systematic errors of segmentation algorithms relative to manual segmentations by training a wrapper method that learns spatial patterns of systematic errors (Wang 2011). Methods such as those employed by Wang and colleagues may be preferred over standard segmentation pipelines when data acquisition is not standardized. Due to its wide range of acquisition parameters and size of the dataset, our approach could be used to evaluate such generalized pipelines in the future.
The above derivation of power for a multisite study defines hard thresholds for the amount of acceptable scaling factor variability ($$CV_{a}$$) using scaled, systematic error from MRI. Many factors contribute to the $$CV_{a}$$ cut-off, such as the total number of subjects, total number of sites, effect size, and false positive rate. In Figure \ref{fig:cv_j}, we show the distribution of experimental $$CV_{a}$$ values across all Freesurfer aseg ROIs to reference while comparing power curves of various sample sizes. The maximum $$CV_{a}$$ value is 9% which, with enough subjects and sites, falls well below the maximum acceptable $$CV_{a}$$ value. However, with the minimum number of subjects and sites, the power curves of figure \ref{fig:cv_j} show that the maximum acceptable $$CV_{\alpha}$$ must be below 5% for 80% power. If we minimize the total number of subjects to 2260 for the 20 sites in our study, the $$CV_{a}$$ of the amygdala does not meet this requirement (see table \ref{tab:cva}). One option to address this is to harmonize protocols, which may reduce $$CV_{a}$$ values below those estimated from our sites such that they satisfy the maximum $$CV_{a}$$ requirement. The other option is to recruit more subjects per site. The number of additional subjects needed to overcome a large $$CV_{a}$$ can be estimated using our power equation. In the case of the parameters defined in figure \ref{fig:cv_j} (a small effect size of 0.2, false positive rate of 0.002), 40 additional subjects beyond the initial 2260 are needed to adequately power the study. This is easily visualized in figure \ref{fig:cv_j}: the point on the curve for the initial 2260 subjects over 20 sites lies below the harmonization zone, while that of 2300 total subjects lies above. The number of additional subjects needed to achieve an adequately powered multisite study depends on effect sizes, false positive rates, power requirements, and site-level sample size.
We have validated our scaling factors by demonstrating that a leave-one-out calibration resulted in increased absolute agreement between sites compared to the original, uncalibrated values for 44 out of 46 ROIs studied. Tables \ref{comparetocannon} and \ref{comparetojov} compare these calibrated and original values to the ICC findings of other harmonization efforts. Table \ref{comparetocannon} compares our between-site ICCs before and after scaling factor calibration to those of (Cannon 2014). (Cannon 2014) used a cortical pattern matching segmentation algorithm (Thompson 2001) for the cortical ROIs and FSL’s FIRST algorithm for the subcortical ROIs. The between-site ICC for gray matter volume (GMV) for our study was 0.78 while (Cannon 2014) reported an ICC of 0.85. This difference could be explained by the harmonization of scanners in (Cannon 2014). After using the scaling factors to calibrate GMV, the between-site ICC increased to 0.96, indicating that the estimated $$CV_{a}$$ of GMV (4%) is an accurate representation of the true between-site bias variability. Scaling calibration of the hippocampus also outperformed the between-site ICC of (Cannon 2014) (0.84 versus 0.79), validating the $$CV_{a}$$ estimate of 3% for both hemispheres. For the amygdala and caudate volumes, scaling calibration showed improvement to nearly the same value as (Cannon 2014). The amygdala increased from 0.54 to 0.74 (versus 0.76 in the (Cannon 2014)), and the ICC of the caudate increased from 0.82 to 0.91 (versus 0.92 in the (Cannon 2014)). The $$CV_{a}$$ of the left and right amygala were the highest in our study, at 7 and 9 percent, respectively. The most extreme asymmetry in the scaling factors was between the left and right caudate (2% and 7%, respectively), which demonstrates regional contrast to noise variation. Even after scaling factor calibration, the between-site ICC produced by our approach varied widely from that of (Cannon 2014) in two ROIs. The between-site ICC of white matter volume (WMV) was very high (0.96 versus 0.774) and that of thalamus volume was very low (.61 versus .95), compared to (Cannon 2014). This could be due to differences algorithm differences (FIRST vs. Freesurfer). It should also be noted that the scan-rescan reliability of the thalamus was particularly low in some sites, which propagated errors to scaling factor estimates. Therefore, the 5% $$CV_{a}$$ estimate for the thalamus in both hemispheres may not be reproducible and would need to be recalculated using a different algorithm.
Table \ref{comparetojov} shows comparisons of our within-site ICCs to the average within-site ICCs reported by (Jovicich 2013). Similar to our study, scanners were not strictly standardized and the freesurfer cross-sectional algorithm was run. All within site ICCs (both before and after scaling factor calibration) fall within the range described by (Jovicich 2013), including the thalamus. Our last attempt to validate this statistical model and accompanying scaling factor estimates was to simulate multisite data using scaling factor estimates and their residual error from the estimate. We found that the power curves align closely, and match when power is at least 80%. We believe that the small deviations from the theoretical model result from scaling factor estimation error and a non-normal scaling factor distribution due to a relatively small sampling of scaling factors (J = 20 sites).
The data acquisition of our study is similar to that of (Schnack 2004), in which the researchers acquired T1-weighted images from 8 consistent human phantoms across 5 sites with non-standardized protocols. These scanners were all 1.5T except for one 1T scanner. (Schnack 2004) calibrated the intensity histograms of the images before segmentation with a calibration factor estimated based on the absolute agreement of volumes to the reference site (ICC). After applying their calibration method, the ICC of the lateral ventricle was $$\geq 0.96$$, which is similar to our pre- and post- calibrated result of $$0.97$$. The ICC for the intensity calibrated gray matter volume in (Schnack 2004) was $$\geq 0.84$$, compared to our calibrated between-site ICC of $$0.78$$ (uncalibrated), and $$0.96$$ (calibrated). Our between-site ICCs for white matter volume ($$0.96$$ and $$0.98$$ for the pre- and post- calibrated volumes, respectively) were much higher than those of the intensity calibrated white matter volume in (Schnack 2004) ($$\geq.78$$). This could be explained by the fact that our cohort of sites is a consortium studying multiple sclerosis, which is a white matter disease, so there may be a bias toward optimizing scan parameters for white matter. Most importantly, the calibration method of (Schnack 2004) requires re-acquisition of a human phantom cohort at each site for each multisite study. Alternatively, multisite studies employing our approach can use the results of our direct-volume calibration (the estimates of $$CV_{a}$$ for each ROI) to estimate sample sizes based on our proposed power equation and bias measurements without acquiring their own human phantom dataset to use in calibration.
To our knowledge, this is the first study measuring scaling factors between sites with non-standardized protocols using a single set of subjects, and deriving an equation for power that takes this scaling into account via mixed modeling. This study builds on the work of (Fennema-Notestine 2007), which investigated the feasibility of pooling retrospective data from three different sites with non-standardized sequences using standard pooling, mixed effects modeling, and fixed effects modeling. (Fennema-Notestine 2007) found that mixed effects and fixed effects modeling outperformed standard pooling. Our statistical model specifies how MRI bias between sites affects the cross-sectional mixed effects model, so it is limited to powering cross-sectional study designs. Jones and colleagues have derived sample size calculations for longitudinal studies acquired under heterogeneous conditions without the use of calibration subjects (Jones 2013). This can be useful for studies measuring longitudinal atrophy over long time periods, during which scanners and protocols may change. For the cross-sectional case, the use of random effects modeling enables us to generalize our results to any protocol with acquisition parameters similar to those described here (primarily MPRAGE). If protocols change drastically compared to our sample of 3D MPRAGE-type protocols, a small set of healthy controls should be scanned before and after any major software, hardware, or protocol change so that the resulting scaling factors can be compared to the distribution of scaling factors ($$CV_{a}$$) reported in this study. A large $$CV_{a}$$ can severely impact the power of a multisite study, so it is important not to generalize the results in this study to non-MPRAGE sequences without validation. Potentially, new 3D-printed brain-shaped phantoms with similar regional contrast to noise ratios as human brains may become an excellent option for estimating $$CV_{a}$$.
A limitation of our model is the assumption of independence between the unobserved effect ($$D_{U,j}$$) at a particular site , $$j$$, with the scaling factor of that site ($$a_{j}$$). This assumption does not hold if patients with more severe disease have tissue with different properties that, when scanned, shows different regional contrast than that of healthy controls. As shown in the Appendix, the calculation of the unconditional variance of the observed estimate (equation \ref{var_unobserved}) can get quite complicated. We addressed this issue for multiple sclerosis patients by showing that the scaling factors from healthy controls are very similar to those derived from an MS population. The largest difference in scaling factors between healthy controls and multiple sclerosis patients was in white matter volume, where $$a_{MS}=0.967$$ and $$a_{HC}=0.975$$. A two-sample T test between the scaling factors produced a p-value of $$0.88$$, showing that we could not detect a significant difference between scaling factors of HC and MS. This part of the study was limited in that we only scanned MS patients at two scanners, while the healthy controls were scanned at 20, so we could not estimate a patient-derived $$CV_{a}$$ (the direct input to the power equation). However, the similarity between scaling factors for the subcortical gray matter, cortical gray matter, and white matter volumes between the MS and HC populations suggests that, given careful editing of volumes in the disease population, the independence assumption holds for MS. We recommend that researchers studying other diseases validate our approach by scanning healthy controls and patients before and after an upgrade or sequence change to test the validity of the independence assumption.
Even though we did not standardize the protocols and scanners within this study, the consortium is unbalanced in that there are 16 3T scanners, 11 of which are Siemens. Of the Siemens 3T scanners, there is little variability in TR, TE, and TI, however, there is more variance in the use of parallel imaging, the number of channels in the head coil (12, 20 or 32), and the field of view. Similar to the findings of (Jovicich 2009), we could not detect differences in scan-rescan reliability between field strengths. Wolz and colleagues could not detect differences in scan-rescan reliabilities of the hippocampus volumes estimated by the LEAP algorithm, but they detected a small bias between field strengths. They found that the hippocampus volumes measured from the 3T ADNI scanners were 1.17 % larger than those measured from the 1.5T (Wolz 2014). A two-sample T-test with unequal variances was run between the scaling factors of the 1.5T versus 3T scanners. This test could not detect differences in any ROI except for the left- and right- amydgala. We found that the scaling factors were lower for the 1.5T scanners than for the 3T scanners (0.9 versus 1.02), suggesting that the amygdala volume estimates from the 1.5T were larger than those of the 3T. It should be noted that this interpretation is limited due to the small sample size of 1.5T scanners in this consortium.
Another limitation of this study is that we were under-powered to accurately estimate both the scaling and intercept for a linear model between two sites, and that we did not take the intercept into account when deriving power. We excluded the intercept from our analysis for two reasons: (1) we believe that the nature of systematic error from MRI segmentation is not additive, meaning that offsets in metrics between sites for different subjects is scaled with ROI size instead of a constant additive factor and (2) the model becomes more complicated if site-level effects are both multiplicative and additive. The other limitation of this study is that we assumed that subjects across all sites will come from the same population, and that stratification occurs solely from systematic errors within each site. In reality, sites may recruit from different populations and the true disease effect will vary even more. For example, in a comparison study between the matched ADNI cohort and a matched Mayo Clinic Study of Aging cohort, researchers found different rates of hippocampal atrophy even though no differences in hippocampal volume was detected (Whitwell 2012). This could be attributed to sampling from two different populations. This added site-level variability requires a larger site-level sample size, for an example of modeling this, see (Han 2011).
In this study, we reported reliability using both between-site ICC and $$CV_{a}$$ because these two metrics have complementary advantages. ICC depends on the true subject-level variability studied. Since we scanned healthy controls, our variance component estimates of subject variability may be lower than that of our target population (patients with multiple sclerosis related atrophy). As a result, ICCs may be lower than expected in MS based on the results of healthy controls. We tried to address this issue by scanning subjects in a large age range, capturing the variability in gray and white matter volume due to atrophy from aging. On the other hand, $$CV_{a}$$ is invariant to true subject variability, but is limited by the accuracy of between-site scaling estimates. Both between-site ICC and $$CV_{a}$$ should be reported when evaluating multisite reliability datasets to understand a given algorithm’s ability to differentiate between subjects (via the ICC) and the magnitude of systemic error between sites (via the $$CV_{a}$$), which could be corrected using harmonization.
## Conclusion
When planning a multisite study, there is an emphasis on acquiring data from more sites because the estimated effect sizes from each site are sampled from a distribution and averaged. Understanding how much of the variance in the distribution is due to scanner noise as opposed to population heterogeneity is an important part of powering a study. For the purposes of this study, we estimated the effect size variability of Freesurfer-derived regional volumes, but this framework could be generalized to any T1-weighted segmentation algorithm, and any modality for which systematic errors are scaled. Scaling factor calibration of metrics resulted in higher absolute agreement of metrics between sites, which showed that the scaling factor variabilities for the ROIs in this study were accurate. The equation for power we outlined in this study along with our measurements of variability between sites should help researchers undestand the trade-off between protocol harmonization and sample size optimization, along with the choice of outcome metrics. Our statistical model and bias measurements enables collaboration between research institutions and hospitals when hardware and software adaptation are not feasible. We provide a comprehensive framework for assessing and making informed quantitative decisions for MRI facility inclusion, pipeline and metric optimization, and study power.
## Acknowledgements
We thank the study participants, MR technicians, and acknowledge Stephane Lehéricy, Eric Bardinet, Frédéric Humbert and Antoine Burgos from the ICM IRM facility (CENIR) and the CIC Pitié-Salpêtrière for their expertise. Funding was provided by R01 NS049477. Additional support was provided by ICM F-75013 Paris, INSERM and IHU-A-ICM (ANR-10-IAIHU-06). BD is a Clinical Investigator of the Research Foundation Flanders (FWO-Vlaanderen). AG and BD are supported by the Research Fund KU Leuven (OT/11/087) and the Research Foundation Flanders (G0A1313N). MRI acquisitions in Hospital Clinic of Barcelona were funded by a ”Proyecto de Investigación en Salut” grant (PI15/00587. PIs Albert Saiz and Sara Llufriu) from the Instituto de Salud Carlos III.
## Tables
1 2 3 4 TR (ms) 8.18 7.10 2130 2080 TE (ms) 3.86 3.20 2.94 3.10 Strength (T) 1.50 1.50 1.50 1.50 TI (ms) 300 862.90 1100 1100 Flip Angle ($${}^{\circ}$$) 20 8 15 15 Make GE Ph Si Si Voxel Size (mm) .94x.94x1.2 1x1x1 1x1x1 .97x.97x1 Distortion Correction N N N Y Parallel Imaging - 2 2 - FOV (mm) 240x240x200 256x256x160 256x256x176 234x250x160 Read Out Direction HF AP HF HF Head coil # channels 2* 8 20 12 Model Signa LX Achieva Avanto Avanto OS 11x 2.50 VD13B B17A Acq. Time (min) 06:24 05:34 04:58 08:56 orientation sag sag sag sag # scans 24/24 24/24 24/24 18/18 Amyg (L) .93 .89 .61 .96 Amyg (R) .93 .90 .83 .88 Caud (L) .96 .96 .98 .99 Caud (R) .96 .97 .90 .96 GMV .96 .99 .98 .99 Hipp (L) .94 .95 .89 .93 Hipp (R) .93 .91 .94 .95 Thal (L) .77 .93 .59 .82 Thal (R) .91 .90 .76 .82 cVol .95 .99 .97 .99 cWMV .99 1 .99 .99 eTIV 1 1 1 1 scGMV .98 .97 .98 .93
TR (ms) TE (ms) Strength (T) TI (ms) 5 6 7 8 9 8.21 7.80 9 8.21 6.99 3.22 2.90 4.00 3.81 3.16 3 3 3 3 3 450 450 1000 1016.30 900 12 12 8 8 9 GE GE Ph Ph Ph .94x.94x1 1x1x1.2 1x1x1 1x1x1 1x1x1 N Y Y Y Y 2 2 3 2 - 240x240x172 256x256x166 240x240x170 240x240x160 256x256x204 HF FH AP FH FH 8 8 16 32 8 MR750 Signa HDxt Achieva Achieva TX Intera DV24 HD23.0_V01_1210a 3.2.3.2 5.1.7 3.2.3 5:02 7:11 05:55 05:38 08:30:00 sag sag sag sag sag 24/24 24/24 24/24 24/24 21/22 .67 .89 .66 .85 0.97 .88 .79 .91 .94 0.94 .96 .98 .98 .97 0.98 .95 .96 .98 .93 0.96 1 .99 .99 .98 0.99 .51 .97 .83 .90 0.99 .95 .96 .93 .96 0.99 .97 .81 .94 .80 0.88 .70 .87 .96 .96 0.97 .99 .99 .98 .98 0.99 1 .99 1 1 1.00 1 1 1 .92 0.99 .98 .99 .96 .98 0.99
TR (ms) TE (ms) Strength (T) TI (ms) Flip Angle ($${}^{\circ}$$) 10 11 12 13 14 15 2300 2300 2300 2300 2300 2000 2.96 2.98 2.98 2.96 2.96 3.22 3 3 3 3 3 3 900 900 900 900 900 900 9 9 9 9 9 8 Si Si Si Si Si Si 1x1x1 1x1x1.1 1x1x1 1x1x1 1x1x1 1x1x1 Y N Y Y Y N 2 - 2 2 2 2 256x256x176 240x256x176 240x256x176 240x276x156 256x256x176 256x208x160 HF RL HF HF HF RL 20 32 20 20 20 32 Prisma Prisma fit Skyra Skyra Skyra Skyra D13D VD13D VD13 VD13 VD13C VD13 05:09 07:46 05:12 05:12 05:09 04:56 sag sag sag sag sag ax 22/22 24/24 25/25 23/24 23/24 22/22 .83 .89 .80 .85 .98 .89 .94 .92 .93 .85 .93 .84 .99 .99 .98 .99 .98 .98 .99 .96 .95 .95 .98 .97 .99 .98 .99 1 .99 .97 .94 .98 .99 .95 .97 .98 .91 .94 .97 .98 .95 .96 .92 .87 .87 .76 .91 .89 .74 .93 .80 .91 .93 .89 .99 .98 .98 1 .99 .96 1 1 1 1 1 .97 1 1 1 1 1 .97 .98 .99 .98 .98 .99 .99
\label{tab:acquisition3}Top: Acquisition parameters for the 3T Siemens (Si) Skyra and Prisma scanners. Bottom: Test-retest reliabilities for selected ROIs, processed by Freesurfer. The ROIs are gray matter volume (GMV), subcortical gray matter volume (scGMV), cortex volume (cVol), cortical white matter volume (cWMV), and the volumes of the lateral ventricle (LV), amygdala (Amyg), thalamus (Thal), hippocampus (Hipp), caudate (Caud), and finally the estimated total intracranial volume (eTIV). Test-retest reliability is computed as within-site ICC(1,1)
16 17 18 19 20 TR (ms) 2300 2150 1900 1900 1800 TE (ms) 2.98 3.40 3.03 2.52 3.01 Strength (T) 3 3 3 3 3 TI (ms) 900 1100 900 900 900 Flip Angle ($${}^{\circ}$$) 9 8 9 9 9 Make Si Si Si Si Si Voxel Size 1x1x1 1x1x1 1x1x1 1x1x1 .86x.86x.86 Distortion Correction N N N N N Parallel Imaging 2 2 2 2 2 FOV (mm) 256x256x176 256x256x192 256x256x176 256x256x192 220x220x179 Read Out Direction HF RL AP FH FH Head coil # channels 12 12 12 32 32 Model Trio Trio Trio Trio Trio OS MRB17 VB17 VB17A VB17 MRB19 Acq. Time (min) 05:03 04:59 04:26 05:26 06:25 orientation sag ax sag sag sag # scans 24/24 23/24 23/24 24/24 24/24 Amyg (L) .55 .88 .77 .88 .91 Amyg (R) .85 .93 .81 .94 .93 Caud (L) .99 .95 .97 .97 .97 Caud (R) .97 .92 .98 .91 .95 GMV .99 .99 .98 .99 1 Hipp (L) .71 .96 .94 .93 .96 Hipp (R) .94 .94 .92 .83 .96 Thal (L) .45 .85 .80 .80 .88 Thal (R) .61 .95 .85 .96 .79 cVol .99 .98 .96 .99 1 cWMV 1 .99 .99 1 1 eTIV .97 1 1 1 1 scGMV .98 .98 .98 .98 .98
\label{tab:cva}Coefficient of variability ($$CV_{a}$$) values for selected ROIs. $$CV_{a}$$ was defined in equation \ref{eq:cvadef}. The ROIs are gray matter volume (GMV), subcortical gray matter volume (scGMV), cortex volume (cVol), cortical white matter volume (cWMV, which does not include cerebellar white matter), and the volumes of the lateral ventricle (LV), amygdala (Amyg), thalamus (Thal), hippocampus (Hipp), caudate (Caud), and finally the estimated total intracranial volume (eTIV)
CVa
variable
LV (L) 0.03
LV (R) 0.03
cWMV 0.02
cVol 0.04
scGMV 0.02
GMV 0.04
Caud (L) 0.02
Caud (R) 0.07
Amyg (R) 0.09
Amyg (L) 0.07
Hipp (L) 0.03
Hipp (R) 0.03
Thal (L) 0.05
Thal (R) 0.05
\label{comparetocannon}Between-site ICC comparison to the study by —(Cannon 2014)—, where MRI sequences were standardized and subcortical segmentation was performed using FIRST, and cortical segmentation using cortical pattern matching. ICC BW and ICC BW Cal were calculated using our multisite healthy control data, where ICC BW Cal was calculated as the between site ICC of volumes after applying the scaling factor from a leave-one-out calibration. Other than the thalamus (Thal), we found that the between-site ICCs were comparable to —(Cannon 2014)— for the amygdala (Amyg), caudate (Caud), and even higher for the hippocampus (Hipp), gray matter volume (GMV) and white matter volume (WMV)
ROI ICC BW ICC BW Cal (Cannon 2014) ICC BW
GMV .78 .96 .854
WMV .96 .98 .774
Thal .61 .73 .95
Hipp .75 .84 .79
Amyg .56 .74 .76
Caud .82 .91 .92
\label{comparetojov}Comparing the within-site ICC before and after leave-one-out scaling factor calibration with the cross-sectional freesurfer results of —(Jovicich 2013)—, where scanners were standardized, and the average within-site ICC is shown. The within-site ICCs of our study fall within the range of —(Jovicich 2013)—, which shows the that sites in this study are as reliable as those in —(Jovicich 2013)—.
ROI ICC WI ICC WI Cal (Jovicich 2013) ICC WI Average
LV 1 1 $$.998\pm 0.002$$
Thal .86 .84 $$0.765\pm.183$$
Hipp .93 .93 $$0.878\pm.132$$
Amyg .89 .86 $$0.761\pm.134$$
Caud .97 .97 $$0.909\pm 0.092$$
## Figures
\label{fig:power}A. Power contours for total number of subjects ($$nJ$$) over various effect sizes (d), p= 0.002, $$CV_{a}$$ = 5%. B. # of sites required for effect sizes and # subjects per site (n). C effect of $$CV_{a}$$ on # sites for various effect sizes, where $$n$$ = 200 subjects per site
\label{fig:calib}Leave-one-out calibration improvement on within- (WI) and between- (BW) site ICCs for gray matter volume (GMV), subcortical gray matter volume (scGMV), cortex volume (cVol), cortical white matter volume (cWMV), lateral ventricle (LV), Thalamus (Thal), Hippocampus (Hipp), Amygdala (Amyg), Caudate (Caud)
\label{fig:powersim}Theoretical power vs. simulated power with scaling factor uncertainty
\label{fig:hcms_scGMV}Sub-cortical gray matter volume (scGMV) calibration between 2 scanners/sequences at UCSF. The trendline fit shows the slopes (scaling factors) are identical for the healthy control and MS populations
\label{fig:hcms_GMV}Cortex gray matter volume (cVol) calibration between 2 scanners/sequences at UCSF. The trendline fit shows the slopes (scaling factors) are very close for the healthy control and MS populations
\label{fig:hcms_WMV}White matter volume (WMV) calibration between 2 scanners/sequences at UCSF. The trendline fit shows the slopes (scaling factors) are very close for the healthy control and MS populations
\label{fig:cv_j}Shows power curves for 80% power for 2260 - 3000 total subjects, where the false positive rate is 0.002, and the effect size is 0.2. The lowest point of each curve shows the minimum number of sites required for a given number of subjects on the x-axis and the y-axis corresponds to the maximum acceptable coefficient of variability ($$CV_{a}$$, defined in \ref{eq:cvadef}) for that case. The right-hand side of the chart shows the distribution of $$CV_{a}$$ values across all sites and all Freesurfer ROIs. When minimizing the total number of sites for a set number of subjects, the maximum allowable $$CV_{a}$$ is around 5%, meaning that if the $$CV_{a}$$ is higher than 5% for a particular ROI, the power of the model will fall below 80%. The shaded section on the bottom of the chart called the ”Harmonization Zone” which indicates the regions of the graph where the maximum acceptable $$CV_{a}$$ is below the largest $$CV_{a}$$ across all freesurfer ROIs (which is the right amygdala at 9%). If site- and subject- level sample sizes fall within the harmonization zone, efforts to harmonize between sites is required to guarantee power for all ROIs.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 16, "x-ck12": 0, "texerror": 0, "math_score": 0.6027504205703735, "perplexity": 2486.4200966291546}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000414.26/warc/CC-MAIN-20190626174622-20190626200622-00315.warc.gz"}
|
https://worldwidescience.org/topicpages/t/theoretical+model+proposed.html
|
#### Sample records for theoretical model proposed
1. [Social determinants of odontalgia in epidemiological studies: theoretical review and proposed conceptual model].
Science.gov (United States)
Bastos, João Luiz Dornelles; Gigante, Denise Petrucci; Peres, Karen Glazer; Nedel, Fúlvio Borges
2007-01-01
The epidemiological literature has been limited by the absence of a theoretical framework reflecting the complexity of causal mechanisms for the occurrence of health phenomena / disease conditions. In the field of oral epidemiology, such lack of theory also prevails, since dental caries the leading topic in oral research has been often studied through a biological and reductionist viewpoint. One of the most important consequences of dental caries is dental pain (odontalgia), which has received little attention in studies with sophisticated theoretical models and powerful designs to establish causal relationships. The purpose of this study is to review the scientific literature on the determinants of odontalgia and to discuss theories proposed for the explanation of the phenomenon. Conceptual models and emerging theories on the social determinants of oral health are revised, in an attempt to build up links with the bio-psychosocial pain model, proposing a more elaborate causal model for odontalgia. The framework suggests causal pathways between social structure and oral health through material, psychosocial and behavioral pathways. Aspects of the social structure are highlighted in order to relate them to odontalgia, stressing their importance in discussions of causal relationships in oral health research.
2. Parameters and error of a theoretical model
International Nuclear Information System (INIS)
Moeller, P.; Nix, J.R.; Swiatecki, W.
1986-09-01
We propose a definition for the error of a theoretical model of the type whose parameters are determined from adjustment to experimental data. By applying a standard statistical method, the maximum-likelihoodlmethod, we derive expressions for both the parameters of the theoretical model and its error. We investigate the derived equations by solving them for simulated experimental and theoretical quantities generated by use of random number generators. 2 refs., 4 tabs
3. A Set Theoretical Approach to Maturity Models
DEFF Research Database (Denmark)
Lasrado, Lester; Vatrapu, Ravi; Andersen, Kim Normann
2016-01-01
characterized by equifinality, multiple conjunctural causation, and case diversity. We prescribe methodological guidelines consisting of a six-step procedure to systematically apply set theoretic methods to conceptualize, develop, and empirically derive maturity models and provide a demonstration......Maturity Model research in IS has been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. To address these criticisms, this paper proposes a novel set-theoretical approach to maturity models...
4. Assessing a Theoretical Model on EFL College Students
Science.gov (United States)
Chang, Yu-Ping
2011-01-01
This study aimed to (1) integrate relevant language learning models and theories, (2) construct a theoretical model of college students' English learning performance, and (3) assess the model fit between empirically observed data and the theoretical model proposed by the researchers of this study. Subjects of this study were 1,129 Taiwanese EFL…
5. A Proposed Conceptual Model of Military Medical Readiness
National Research Council Canada - National Science Library
Van Hall, Brian M
2007-01-01
.... The basis for the proposed conceptual model builds on common and accepted latent variable and theoretical modeling techniques proposed by healthcare scholars, organizational theorists, mathematical...
6. XML-based formulation of field theoretical models. A proposal for a future standard and data base for model storage, exchange and cross-checking of results
International Nuclear Information System (INIS)
Demichev, A.; Kryukov, A.; Rodionov, A.
2002-01-01
We propose an XML-based standard for formulation of field theoretical models. The goal of creation of such a standard is to provide a way for an unambiguous exchange and cross-checking of results of computer calculations in high energy physics. At the moment, the suggested standard implies that models under consideration are of the SM or MSSM type (i.e., they are just SM or MSSM, their submodels, smooth modifications or straightforward generalizations). (author)
7. Hybrid quantum teleportation: A theoretical model
Energy Technology Data Exchange (ETDEWEB)
Takeda, Shuntaro; Mizuta, Takahiro; Fuwa, Maria; Yoshikawa, Jun-ichi; Yonezawa, Hidehiro; Furusawa, Akira [Department of Applied Physics, School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan)
2014-12-04
Hybrid quantum teleportation – continuous-variable teleportation of qubits – is a promising approach for deterministically teleporting photonic qubits. We propose how to implement it with current technology. Our theoretical model shows that faithful qubit transfer can be achieved for this teleportation by choosing an optimal gain for the teleporter’s classical channel.
8. A theoretical model for the control of an enforcement system on emissions of pollutants
International Nuclear Information System (INIS)
Villegas, Clara Ines
2005-01-01
A theoretical proposal for the development of an enforcement strategy is presented on this paper. The proposal guaranties full compliance of an emission charge system with self-report presence. The proposed models are static, and mostly based on those proposed by Strandlund and Chavez (2000) for a transferable permits system with self -report presence. Theoretical models were developed for three possible violations: self-report violation, maximum emission limits violation and payment violation. Based in theoretical results, a simulation was implemented with hypothetical data: 20 regulated firms with different marginal abatement cost functions. The variation in charge amount, Monitory costs, abatement cost, self-report value and total cost are analyzed, with each of the theoretical models under different scenarios. Our results show that the behavior of the different variables remains unchanged under the three static models, and that the only variations occur inside the scenarios. Our results can serve as a tool for the formulation and design of taxing systems
9. A theoretical model on surface electronic behavior: Strain effect
International Nuclear Information System (INIS)
Qin, W.G.; Shaw, D.
2009-01-01
Deformation from mechanical loading can affect surface electronic behavior. Surface deformation and electronic behavior can be quantitatively expressed using strain and work function, respectively, and their experimental relationship can be readily determined using the Kelvin probing technique. However, the theoretical correlation between work function and strain has been unclear. This study reports our theoretical exploration, for the first time, of the effect of strain on work function. We propose a simple electrostatic action model by considering the effect of a dislocation on work function of a one-dimensional lattice and further extend this model to the complex conditions for the effect of dislocation density. Based on this model, we established successfully a theoretical correlation between work function and strain.
10. Proposal of a theoretical model for the practical nurse
Directory of Open Access Journals (Sweden)
Dolores Abril Sabater
2010-01-01
Full Text Available AIM: To determine which model of nursing is proposed by care professionals and the reason for their choice. METHOD: cross-sectional, descriptive study design. The main variable: Nursing Models and Theories. As secondary variables were collected: age, gender, years of work experience, nursing model of basic training, and course/s related. We used a self-elaborated, anonymous questionnaire, passed between April - May, 2006. Not random sample.RESULTS: 546 nurses were invited, answered 205. 38 % response rate. Virginia Henderson was the more selected model (33%, however, 42% left the question blank, 12% indicated that they wanted to work under the guidance of a model. They selected a specifically model: Knowledge of the model to their training, standardization in other centers, the characteristics of the model itself and identification with its philosophy. They are not decided by a model by ignorance, lack of time and usefulness. CONCLUSIONS: The model chosen mostly for their daily work was Virginia Henderson model, so that knowledge of a model is the main reason for their election. Professionals who choose not to use the model in their practice realize offers and calling for resources, besides to explain the lack of knowledge on this topic. To advance the nursing profession is necessary that nurse is thought over widely on the abstract concepts of the theory in our context.
11. Proposal of a social alliance success model from a relationship marketing perspective: A meta-analytical study of the theoretical foundations
Directory of Open Access Journals (Sweden)
María Jesús Barroso-Méndez
2015-07-01
Full Text Available Partnerships between businesses and non-governmental organizations (NGOs have become widely adopted mechanisms for collaboration in addressing complex social issues, the aim being to take advantage of the two types of organizational rationale to generate mutual value. Many such alliances have proved to be unsuccessful, however. To assist managers improve the likelihood of success of their collaborative relationships, the authors propose a success model of business-NGO partnering processes based on Relationship Marketing Theory. They also analyse the theoretical bases of the model's hypotheses through a meta-analytical study of the existing literature.
12. Information Theoretic Tools for Parameter Fitting in Coarse Grained Models
KAUST Repository
Kalligiannaki, Evangelia; Harmandaris, Vagelis; Katsoulakis, Markos A.; Plechac, Petr
2015-01-01
We study the application of information theoretic tools for model reduction in the case of systems driven by stochastic dynamics out of equilibrium. The model/dimension reduction is considered by proposing parametrized coarse grained dynamics
13. Episodic Laryngeal Breathing Disorders: Literature Review and Proposal of Preliminary Theoretical Framework.
Science.gov (United States)
Shembel, Adrianna C; Sandage, Mary J; Verdolini Abbott, Katherine
2017-01-01
The purposes of this literature review were (1) to identify and assess frameworks for clinical characterization of episodic laryngeal breathing disorders (ELBD) and their subtypes, (2) to integrate concepts from these frameworks into a novel theoretical paradigm, and (3) to provide a preliminary algorithm to classify clinical features of ELBD for future study of its clinical manifestations and underlying pathophysiological mechanisms. This is a literature review. Peer-reviewed literature from 1983 to 2015 pertaining to models for ELBD was searched using Pubmed, Ovid, Proquest, Cochrane Database of Systematic Reviews, and Google Scholar. Theoretical models for ELBD were identified, evaluated, and integrated into a novel comprehensive framework. Consensus across three salient models provided a working definition and inclusionary criteria for ELBD within the new framework. Inconsistencies and discrepancies within the models provided an analytic platform for future research. Comparison among three conceptual models-(1) Irritable larynx syndrome, (2) Dichotomous triggers, and (3) Periodic occurrence of laryngeal obstruction-showed that the models uniformly consider ELBD to involve episodic laryngeal obstruction causing dyspnea. The models differed in their description of source of dyspnea, in their inclusion of corollary behaviors, in their inclusion of other laryngeal-based behaviors (eg, cough), and types of triggers. The proposed integrated theoretical framework for ELBD provides a preliminary systematic platform for the identification of key clinical feature patterns indicative of ELBD and associated clinical subgroups. This algorithmic paradigm should evolve with better understanding of this spectrum of disorders and its underlying pathophysiological mechanisms. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
14. Theoretical Proposal for Pragmatic-Rhetorical Analysis of Argument in the Tourist Guide
Directory of Open Access Journals (Sweden)
MSc. Iliana Rosabal-Pérez
2015-10-01
Full Text Available The purpose of the article is to present a useful theoretical proposal for the analysis of argumentation within the guidebook genre. The study perspective is supported on the rhetorical-pragmatic perspective of argumentation provided by some authors as well as the theoretical models applied to the study of persuasion in guidebooks suggested by Adam/Bonhomme (1997, Hernández-Santaolalla and Cobo-Durán (2010. The analysis of argumentation in this kind of text must consider a tactical and strategic view of the rhetorical actions; that is to say, not to abstain from the elocution traditional examination since argumentation is an essential devise obtainable in the whole test. Keywords: rhetorical, argumentation, guidebook, rhetorical operations, topical.
15. Using Graph and Vertex Entropy to Compare Empirical Graphs with Theoretical Graph Models
Directory of Open Access Journals (Sweden)
Tomasz Kajdanowicz
2016-09-01
Full Text Available Over the years, several theoretical graph generation models have been proposed. Among the most prominent are: the Erdős–Renyi random graph model, Watts–Strogatz small world model, Albert–Barabási preferential attachment model, Price citation model, and many more. Often, researchers working with real-world data are interested in understanding the generative phenomena underlying their empirical graphs. They want to know which of the theoretical graph generation models would most probably generate a particular empirical graph. In other words, they expect some similarity assessment between the empirical graph and graphs artificially created from theoretical graph generation models. Usually, in order to assess the similarity of two graphs, centrality measure distributions are compared. For a theoretical graph model this means comparing the empirical graph to a single realization of a theoretical graph model, where the realization is generated from the given model using an arbitrary set of parameters. The similarity between centrality measure distributions can be measured using standard statistical tests, e.g., the Kolmogorov–Smirnov test of distances between cumulative distributions. However, this approach is both error-prone and leads to incorrect conclusions, as we show in our experiments. Therefore, we propose a new method for graph comparison and type classification by comparing the entropies of centrality measure distributions (degree centrality, betweenness centrality, closeness centrality. We demonstrate that our approach can help assign the empirical graph to the most similar theoretical model using a simple unsupervised learning method.
16. A Proposed Conceptual Model of Military Medical Readiness
National Research Council Canada - National Science Library
Van Hall, Brian M
2007-01-01
.... The purpose of this research is to consolidate existing literature on the latent variable of medical readiness, and to propose a composite theoretical model of medical readiness that may provide...
17. A Game Theoretic Model of Thermonuclear Cyberwar
Energy Technology Data Exchange (ETDEWEB)
Soper, Braden C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2017-08-23
In this paper we propose a formal game theoretic model of thermonuclear cyberwar based on ideas found in [1] and [2]. Our intention is that such a game will act as a first step toward building more complete formal models of Cross-Domain Deterrence (CDD). We believe the proposed thermonuclear cyberwar game is an ideal place to start on such an endeavor because the game can be fashioned in a way that is closely related to the classical models of nuclear deterrence [4–6], but with obvious modifications that will help to elucidate the complexities introduced by a second domain. We start with the classical bimatrix nuclear deterrence game based on the game of chicken, but introduce uncertainty via a left-of-launch cyber capability that one or both players may possess.
18. Expanding Panjabi's stability model to express movement: a theoretical model.
Science.gov (United States)
Hoffman, J; Gabel, P
2013-06-01
Novel theoretical models of movement have historically inspired the creation of new methods for the application of human movement. The landmark theoretical model of spinal stability by Panjabi in 1992 led to the creation of an exercise approach to spinal stability. This approach however was later challenged, most significantly due to a lack of favourable clinical effect. The concepts explored in this paper address and consider the deficiencies of Panjabi's model then propose an evolution and expansion from a special model of stability to a general one of movement. It is proposed that two body-wide symbiotic elements are present within all movement systems, stability and mobility. The justification for this is derived from the observable clinical environment. It is clinically recognised that these two elements are present and identifiable throughout the body in different joints and muscles, and the neural conduction system. In order to generalise the Panjabi model of stability to include and illustrate movement, a matching parallel mobility system with the same subsystems was conceptually created. In this expanded theoretical model, the new mobility system is placed beside the existing stability system and subsystems. The ability of both stability and mobility systems to work in harmony will subsequently determine the quality of movement. Conversely, malfunction of either system, or their subsystems, will deleteriously affect all other subsystems and consequently overall movement quality. For this reason, in the rehabilitation exercise environment, focus should be placed on the simultaneous involvement of both the stability and mobility systems. It is suggested that the individual's relevant functional harmonious movements should be challenged at the highest possible level without pain or discomfort. It is anticipated that this conceptual expansion of the theoretical model of stability to one with the symbiotic inclusion of mobility, will provide new understandings
19. A theoretical model of job retention for home health care nurses.
Science.gov (United States)
Ellenbecker, Carol Hall
2004-08-01
Predicted severe nursing shortages and an increasing demand for home health care services have made the retention of experienced, qualified nursing staff a priority for health care organizations. The purpose of this paper is to describe a theoretical model of job retention for home health care nurses. The theoretical model is an integration of the findings of empirical research related to intent to stay and retention, components of Neal's theory of home health care nursing practice and findings from earlier work to develop an instrument to measure home health care nurses' job satisfaction. The theoretical model identifies antecedents to job satisfaction of home health care nurses. The antecedents are intrinsic and extrinsic job characteristics. The model also proposes that job satisfaction is directly related to retention and indirectly related to retention though intent to stay. Individual nurse characteristics are indirectly related to retention through intent to stay. The individual characteristic of tenure is indirectly related to retention through autonomy, as an intrinsic characteristic of job satisfaction, and intent to stay. The proposed model can be used to guide research that explores gaps in knowledge about intent to stay and retention among home health care nurses.
20. Deferred Action: Theoretical model of process architecture design for emergent business processes
Directory of Open Access Journals (Sweden)
Patel, N.V.
2007-01-01
Full Text Available E-Business modelling and ebusiness systems development assumes fixed company resources, structures, and business processes. Empirical and theoretical evidence suggests that company resources and structures are emergent rather than fixed. Planning business activity in emergent contexts requires flexible ebusiness models based on better management theories and models . This paper builds and proposes a theoretical model of ebusiness systems capable of catering for emergent factors that affect business processes. Drawing on development of theories of the ‘action and design’class the Theory of Deferred Action is invoked as the base theory for the theoretical model. A theoretical model of flexible process architecture is presented by identifying its core components and their relationships, and then illustrated with exemplar flexible process architectures capable of responding to emergent factors. Managerial implications of the model are considered and the model’s generic applicability is discussed.
1. A theoretical model to describe progressions and regressions for exercise rehabilitation.
Science.gov (United States)
Blanchard, Sam; Glasgow, Phil
2014-08-01
This article aims to describe a new theoretical model to simplify and aid visualisation of the clinical reasoning process involved in progressing a single exercise. Exercise prescription is a core skill for physiotherapists but is an area that is lacking in theoretical models to assist clinicians when designing exercise programs to aid rehabilitation from injury. Historical models of periodization and motor learning theories lack any visual aids to assist clinicians. The concept of the proposed model is that new stimuli can be added or exchanged with other stimuli, either intrinsic or extrinsic to the participant, in order to gradually progress an exercise whilst remaining safe and effective. The proposed model maintains the core skills of physiotherapists by assisting clinical reasoning skills, exercise prescription and goal setting. It is not limited to any one pathology or rehabilitation setting and can adapted by any level of skilled clinician. Copyright © 2014 Elsevier Ltd. All rights reserved.
2. Theoretical Model for the Performance of Liquid Ring Pump Based on the Actual Operating Cycle
Directory of Open Access Journals (Sweden)
Si Huang
2017-01-01
Full Text Available Liquid ring pump is widely applied in many industry fields due to the advantages of isothermal compression process, simple structure, and liquid-sealing. Based on the actual operating cycle of “suction-compression-discharge-expansion,” a universal theoretical model for performance of liquid ring pump was established in this study, to solve the problem that the theoretical models deviated from the actual performance in operating cycle. With the major geometric parameters and operating conditions of a liquid ring pump, the performance parameters such as the actual capacity for suction and discharge, shaft power, and global efficiency can be conveniently predicted by the proposed theoretical model, without the limitation of empiric range, performance data, or the detailed 3D geometry of pumps. The proposed theoretical model was verified by experimental performances of liquid ring pumps and could provide a feasible tool for the application of liquid ring pump.
3. Tesla Coil Theoretical Model and its Experimental Verification
Directory of Open Access Journals (Sweden)
Voitkans Janis
2014-12-01
Full Text Available In this paper a theoretical model of Tesla coil operation is proposed. Tesla coil is described as a long line with distributed parameters in a single-wire form, where the line voltage is measured across electrically neutral space. By applying the principle of equivalence of single-wire and two-wire schemes an equivalent two-wire scheme can be found for a single-wire scheme and the already known long line theory can be applied to the Tesla coil. A new method of multiple reflections is developed to characterize a signal in a long line. Formulas for calculation of voltage in Tesla coil by coordinate and calculation of resonance frequencies are proposed. The theoretical calculations are verified experimentally. Resonance frequencies of Tesla coil are measured and voltage standing wave characteristics are obtained for different output capacities in the single-wire mode. Wave resistance and phase coefficient of Tesla coil is obtained. Experimental measurements show good compliance with the proposed theory. The formulas obtained in this paper are also usable for a regular two-wire long line with distributed parameters.
4. A Game-Theoretic Model of Grounding for Referential Communication Tasks
Science.gov (United States)
Thompson, William
2009-01-01
Conversational grounding theory proposes that language use is a form of rational joint action, by which dialog participants systematically and collaboratively add to their common ground of shared knowledge and beliefs. Following recent work applying "game theory" to pragmatics, this thesis develops a game-theoretic model of grounding that…
5. Tesla coil theoretical model and experimental verification
OpenAIRE
Voitkans, Janis; Voitkans, Arnis
2014-01-01
Abstract – In this paper a theoretical model of a Tesla coil operation is proposed. Tesla coil is described as a long line with distributed parameters in a single-wired format, where the line voltage is measured against electrically neutral space. It is shown that equivalent two-wired scheme can be found for a single-wired scheme and already known long line theory can be applied to a Tesla coil. Formulas for calculation of voltage in a Tesla coil by coordinate and calculation of resonance fre...
6. Achievement Goals and Discrete Achievement Emotions: A Theoretical Model and Prospective Test
Science.gov (United States)
Pekrun, Reinhard; Elliot, Andrew J.; Maier, Markus A.
2006-01-01
A theoretical model linking achievement goals to discrete achievement emotions is proposed. The model posits relations between the goals of the trichotomous achievement goal framework and 8 commonly experienced achievement emotions organized in a 2 (activity/outcome focus) x 2 (positive/negative valence) taxonomy. Two prospective studies tested…
7. Category-theoretic models of algebraic computer systems
Science.gov (United States)
Kovalyov, S. P.
2016-01-01
A computer system is said to be algebraic if it contains nodes that implement unconventional computation paradigms based on universal algebra. A category-based approach to modeling such systems that provides a theoretical basis for mapping tasks to these systems' architecture is proposed. The construction of algebraic models of general-purpose computations involving conditional statements and overflow control is formally described by a reflector in an appropriate category of algebras. It is proved that this reflector takes the modulo ring whose operations are implemented in the conventional arithmetic processors to the Łukasiewicz logic matrix. Enrichments of the set of ring operations that form bases in the Łukasiewicz logic matrix are found.
8. A novel game theoretic approach for modeling competitive information diffusion in social networks with heterogeneous nodes
Science.gov (United States)
2017-01-01
Influence maximization deals with identification of the most influential nodes in a social network given an influence model. In this paper, a game theoretic framework is developed that models a competitive influence maximization problem. A novel competitive influence model is additionally proposed that incorporates user heterogeneity, message content, and network structure. The proposed game-theoretic model is solved using Nash Equilibrium in a real-world dataset. It is shown that none of the well-known strategies are stable and at least one player has the incentive to deviate from the proposed strategy. Moreover, violation of Nash equilibrium strategy by each player leads to their reduced payoff. Contrary to previous works, our results demonstrate that graph topology, as well as the nodes' sociability and initial tendency measures have an effect on the determination of the influential node in the network.
9. Franchise Business Model: Theoretical Insights
OpenAIRE
Levickaitė, Rasa; Reimeris, Ramojus
2010-01-01
The article is based on literature review, theoretical insights, and deals with the topic of franchise business model. The objective of the paper is to analyse peculiarities of franchise business model and its developing conditions in Lithuania. The aim of the paper is to make an overview on franchise business model and its environment in Lithuanian business context. The overview is based on international and local theoretical insights. In terms of practical meaning, this article should be re...
10. Theoretical model for the mechanical behavior of prestressed beams under torsion
Directory of Open Access Journals (Sweden)
Sérgio M.R. Lopes
2014-12-01
Full Text Available In this article, a global theoretical model previously developed and validated by the authors for reinforced concrete beams under torsion is reviewed and corrected in order to predict the global behavior of beams under torsion with uniform longitudinal prestress. These corrections are based on the introduction of prestress factors and on the modification of the equilibrium equations in order to incorporate the contribution of the prestressing reinforcement. The theoretical results obtained with the new model are compared with some available results of prestressed concrete (PC beams under torsion found in the literature. The results obtained in this study validate the proposed computing procedure to predict the overall behavior of PC beams under torsion.
11. Set-Theoretic Approach to Maturity Models
DEFF Research Database (Denmark)
Despite being widely accepted and applied, maturity models in Information Systems (IS) have been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. This PhD thesis focuses on addressing...... these criticisms by incorporating recent developments in configuration theory, in particular application of set-theoretic approaches. The aim is to show the potential of employing a set-theoretic approach for maturity model research and empirically demonstrating equifinal paths to maturity. Specifically...... methodological guidelines consisting of detailed procedures to systematically apply set theoretic approaches for maturity model research and provides demonstrations of it application on three datasets. The thesis is a collection of six research papers that are written in a sequential manner. The first paper...
12. A Theoretical Bayesian Game Model for the Vendor-Retailer Relation
Directory of Open Access Journals (Sweden)
Emil CRIŞAN
2012-06-01
Full Text Available We consider an equilibrated supply chain with two equal partners, a vendor and a retailer (also called newsboy type products supply chain. The actions of each partner are driven by profit. Given the fact that at supply chain level are specific external influences which affect the costs and concordant the profit, we use a game theoretic model for the situation, considering costs and demand. At theoretical level, symmetric and asymmetric information patterns are considered for this situation. There are at every supply chain’s level situations when external factors (such as inflation, raw-material rate influence the situation of each partner even if the information is well shared within the chain. The model we propose considers both the external factors and asymmetric information within a supply chain.
13. Tesla Coil Theoretical Model and its Experimental Verification
OpenAIRE
Voitkans Janis; Voitkans Arnis
2014-01-01
In this paper a theoretical model of Tesla coil operation is proposed. Tesla coil is described as a long line with distributed parameters in a single-wire form, where the line voltage is measured across electrically neutral space. By applying the principle of equivalence of single-wire and two-wire schemes an equivalent two-wire scheme can be found for a single-wire scheme and the already known long line theory can be applied to the Tesla coil. A new method of multiple re...
14. A theoretical model of speed-dependent steering torque for rolling tyres
Science.gov (United States)
Wei, Yintao; Oertel, Christian; Liu, Yahui; Li, Xuebing
2016-04-01
It is well known that the tyre steering torque is highly dependent on the tyre rolling speed. In limited cases, i.e. parking manoeuvre, the steering torque approaches the maximum. With the increasing tyre speed, the steering torque decreased rapidly. Accurate modelling of the speed-dependent behaviour for the tyre steering torque is a key factor to calibrate the electric power steering (EPS) system and tune the handling performance of vehicles. However, no satisfactory theoretical model can be found in the existing literature to explain this phenomenon. This paper proposes a new theoretical framework to model this important tyre behaviour, which includes three key factors: (1) tyre three-dimensional transient rolling kinematics with turn-slip; (2) dynamical force and moment generation; and (3) the mixed Lagrange-Euler method for contact deformation solving. A nonlinear finite-element code has been developed to implement the proposed approach. It can be found that the main mechanism for the speed-dependent steering torque is due to turn-slip-related kinematics. This paper provides a theory to explain the complex mechanism of the tyre steering torque generation, which helps to understand the speed-dependent tyre steering torque, tyre road feeling and EPS calibration.
15. A Theoretically Consistent Framework for Modelling Lagrangian Particle Deposition in Plant Canopies
Science.gov (United States)
Bailey, Brian N.; Stoll, Rob; Pardyjak, Eric R.
2018-06-01
We present a theoretically consistent framework for modelling Lagrangian particle deposition in plant canopies. The primary focus is on describing the probability of particles encountering canopy elements (i.e., potential deposition), and provides a consistent means for including the effects of imperfect deposition through any appropriate sub-model for deposition efficiency. Some aspects of the framework draw upon an analogy to radiation propagation through a turbid medium with which to develop model theory. The present method is compared against one of the most commonly used heuristic Lagrangian frameworks, namely that originally developed by Legg and Powell (Agricultural Meteorology, 1979, Vol. 20, 47-67), which is shown to be theoretically inconsistent. A recommendation is made to discontinue the use of this heuristic approach in favour of the theoretically consistent framework developed herein, which is no more difficult to apply under equivalent assumptions. The proposed framework has the additional advantage that it can be applied to arbitrary canopy geometries given readily measurable parameters describing vegetation structure.
16. Pragmatic impact of workplace ostracism: toward a theoretical model
Directory of Open Access Journals (Sweden)
Amer Ali Al-Atwi
2017-07-01
Full Text Available Purpose - The purpose of this paper is to extend the ostracism literature by exploring the pragmatic impact of ostracism on performance. Design/methodology/approach - Ostracism workplace, social relations and empowerment structures are discussed. The paper then develops a theoretical framework that explains why and under what conditions workplace ostracism undermines employees’ performance. The author proposes that empowerment structures mediate the link between ostracism and in-role and extra-role performance. In addition, it was proposed that relational links buffer the negative relationship between ostracism and empowerment structures on performance and weaken the negative indirect effect of ostracism on performance. Findings - The theoretical arguments provide support for the model showing that empowerment structures mediate the relationship between ostracism and performance, and the mediation effect only occurred when external links were high but not when external links were low. Originality/value - The author has expanded the extant literature by answering recent calls for research exploring the pragmatic impact of workplace ostracism where past research has typically focused solely on the psychological impacts such as psychological needs.
17. Information Theoretic Tools for Parameter Fitting in Coarse Grained Models
KAUST Repository
Kalligiannaki, Evangelia
2015-01-07
We study the application of information theoretic tools for model reduction in the case of systems driven by stochastic dynamics out of equilibrium. The model/dimension reduction is considered by proposing parametrized coarse grained dynamics and finding the optimal parameter set for which the relative entropy rate with respect to the atomistic dynamics is minimized. The minimization problem leads to a generalization of the force matching methods to non equilibrium systems. A multiplicative noise example reveals the importance of the diffusion coefficient in the optimization problem.
18. Gamification in online education: proposal for a participatory learning model
Directory of Open Access Journals (Sweden)
Fabiana Bigão Silva
2017-09-01
Full Text Available Empirical studies have suggested limitations on the form of application of gamification mechanics in the context of online education. These mechanics have been applied without reference to a theoretical model dedicated to this type of education. The objective of the paper is to propose a model for a gamified platform for online education that contributes to a more participatory learning, taking into account the different student profiles. Based on literature review about approaches to gamification systems design, a set of steps was followed in order to develop a generic model for a framework dedicated to online education. The model proposed is based on the Educational Gamification Design Principles proposed by Dicheva et al. (2015. The model may contribute to the promotion of participatory learning, taking into account the different student profiles. The results of such evaluation will be published in the future.
19. Epistemology of tourism: theoretical schools and critical proposal
Directory of Open Access Journals (Sweden)
Alexandre Panosso Netto
2014-04-01
Full Text Available The aim of this paper is present, discuss and problematize the issue of epistemology applied to tourism. The paper discusses the problems in the construction of knowledge of tourism, and theoretical schools, and a proposed critical epistemology. Progress in the scientific production of knowledge in tourism, referring to epistemology, despite its growth in recent years, remains an issue that can be considered neglected or very complex due to the need for a strong philosophical reflection, with little application in practical life, by the claim of achieving scientific rigor of analytic epistemology. The procedure is a critical review of the term epistemology to discriminate the scientistic position of the term and the other schools recognize influencing this trend (analytical and those opposed to it (such as historical, to no effect misrepresent its meaning in the humanities and social sciences, but particularly in tourism. The proposal relates to the development of reflective critical foundations, based on critical theory, as a build option to transform the reality and knowledge of tourism.
20. Towards a theoretical model on medicines as a health need.
Science.gov (United States)
Vargas-Peláez, Claudia Marcela; Soares, Luciano; Rover, Marina Raijche Mattozo; Blatt, Carine Raquel; Mantel-Teeuwisse, Aukje; Rossi Buenaventura, Francisco Augusto; Restrepo, Luis Guillermo; Latorre, María Cristina; López, José Julián; Bürgin, María Teresa; Silva, Consuelo; Leite, Silvana Nair; Mareni Rocha, Farias
2017-04-01
1. Theoretical Models and Operational Frameworks in Public Health Ethics
Science.gov (United States)
Petrini, Carlo
2010-01-01
The article is divided into three sections: (i) an overview of the main ethical models in public health (theoretical foundations); (ii) a summary of several published frameworks for public health ethics (practical frameworks); and (iii) a few general remarks. Rather than maintaining the superiority of one position over the others, the main aim of the article is to summarize the basic approaches proposed thus far concerning the development of public health ethics by describing and comparing the various ideas in the literature. With this in mind, an extensive list of references is provided. PMID:20195441
2. Group theoretical construction of two-dimensional models with infinite sets of conservation laws
International Nuclear Information System (INIS)
D'Auria, R.; Regge, T.; Sciuto, S.
1980-01-01
We explicitly construct some classes of field theoretical 2-dimensional models associated with symmetric spaces G/H according to a general scheme proposed in an earlier paper. We treat the SO(n + 1)/SO(n) and SU(n + 1)/U(n) case, giving their relationship with the O(n) sigma-models and the CP(n) models. Moreover, we present a new class of models associated to the SU(n)/SO(n) case. All these models are shown to possess an infinite set of local conservation laws. (orig.)
3. A theoretical intellectual capital model applied to cities
Directory of Open Access Journals (Sweden)
José Luis Alfaro Navarro
2013-06-01
Full Text Available New Management Information Systems (MIS are necessary at local level as the main source of wealth creation. Therefore, tools and approaches that provide a full future vision of any organization should be a strategic priority for economic development. In this line, cities are “centers of knowledge and sources of growth and innovation” and integrated urban development policies are necessary. These policies support communication networks and optimize location structures as strategies that provide opportunities for social and democratic participation for the citizens. This paper proposes a theoretical model to measure and evaluate the cities intellectual capital that allows determine what we must take into account to make cities a source of wealth, prosperity, welfare and future growth. Furthermore, local intellectual capital provides a long run vision. Thus, in this paper we develop and explain how to implement a model to estimate intellectual capital in cities. In this sense, our proposal is to provide a model for measuring and managing intellectual capital using socio-economic indicators for cities. These indicators offer a long term picture supported by a comprehensive strategy for those who occupy the local space, infrastructure for implementation and management of the environment for its development.
4. Theoretical model of polar cap auroral arcs
International Nuclear Information System (INIS)
Kan, J.R.; Burke, W.J.; USAF, Bedford, MA)
1985-01-01
A theory of the polar cap auroral arcs is proposed under the assumption that the magnetic field reconnection occurs in the cusp region on tail field lines during northward interplanetary magnetic field (IMF) conditions. Requirements of a convection model during northward IMF are enumerated based on observations and fundamental theoretical considerations. The theta aurora can be expected to occur on the closed field lines convecting sunward in the central polar cap, while the less intense regular polar cap arcs can occur either on closed or open field lines. The dynamo region for the polar cap arcs is required to be on closed field lines convecting tailward in the plasma sheet which is magnetically connected to the sunward convection in the central polar cap. 43 references
5. Modeling goals and functions of control and safety systems - theoretical foundations and extensions of MFM
International Nuclear Information System (INIS)
Lind, M.
2005-10-01
Multilevel Flow Modeling (MFM) has proven to be an effective modeling tool for reasoning about plant failure and control strategies and is currently exploited for operator support in diagnosis and on-line alarm analysis. Previous MFM research was focussed on representing goals and functions of process plants which generate, transform and distribute mass and energy. However, only a limited consideration has been given to the problems of modeling the control systems. Control functions are indispensable for operating any industrial plant. But modeling of control system functions has proven to be a more challenging problem than modeling functions of energy and mass processes. The problems were discussed by Lind and tentative solutions has been proposed but have not been investigated in depth until recently, partly due to the lack of an appropriate theoretical foundation. The purposes of the present report are to show that such a theoretical foundation for modeling goals and functions of control systems can be built from concepts and theories of action developed by Von Wright and to show how the theoretical foundation can be used to extend MFM with concepts for modeling control systems. The theoretical foundations has been presented in detail elsewhere by the present author without the particular focus on modeling control actions and MFM adopted here. (au)
6. Modeling goals and functions of control and safety systems -theoretical foundations and extensions of MFM
Energy Technology Data Exchange (ETDEWEB)
Lind, M. [Oersted - DTU, Kgs. Lyngby (Denmark)
2005-10-01
Multilevel Flow Modeling (MFM) has proven to be an effective modeling tool for reasoning about plant failure and control strategies and is currently exploited for operator support in diagnosis and on-line alarm analysis. Previous MFM research was focussed on representing goals and functions of process plants which generate, transform and distribute mass and energy. However, only a limited consideration has been given to the problems of modeling the control systems. Control functions are indispensable for operating any industrial plant. But modeling of control system functions has proven to be a more challenging problem than modeling functions of energy and mass processes. The problems were discussed by Lind and tentative solutions has been proposed but have not been investigated in depth until recently, partly due to the lack of an appropriate theoretical foundation. The purposes of the present report are to show that such a theoretical foundation for modeling goals and functions of control systems can be built from concepts and theories of action developed by Von Wright and to show how the theoretical foundation can be used to extend MFM with concepts for modeling control systems. The theoretical foundations has been presented in detail elsewhere by the present author without the particular focus on modeling control actions and MFM adopted here. (au)
7. An Analytical Analysis of Hydraulic Jump in Triangular Channel: A Proposed Model
Science.gov (United States)
Khan, S. A.
2013-05-01
The paper presents the theoretical study of hydraulic jump in triangular channel section. Presuming the jump as one dimensional free shear layer with hydrostatic pressure distribution across it and using momentum equation, specific force equation is obtained. Using the specific force equation and eddy viscosity equation, analytical models for sequent depth, dimensionless profile, turbulent shear stress distribution and energy loss for various initial Froude numbers have been obtained. The proposed models for sequent depth and energy loss are also compared with the other developed models. The proposed energy loss model also provides the energy loss at any point along the jump, while this provision is not available in the models of other investigators. Newton-Raphson and Runge-Kutta methods are used for the solution of the proposed model. The outcome of this study can be used in the design of stilling basin floor and side walls on permeable foundations.
8. Theoretical Assessment of the Impact of Climatic Factors in a Vibrio Cholerae Model.
Science.gov (United States)
Kolaye, G; Damakoa, I; Bowong, S; Houe, R; Békollè, D
2018-05-04
A mathematical model for Vibrio Cholerae (V. Cholerae) in a closed environment is considered, with the aim of investigating the impact of climatic factors which exerts a direct influence on the bacterial metabolism and on the bacterial reservoir capacity. We first propose a V. Cholerae mathematical model in a closed environment. A sensitivity analysis using the eFast method was performed to show the most important parameters of the model. After, we extend this V. cholerae model by taking account climatic factors that influence the bacterial reservoir capacity. We present the theoretical analysis of the model. More precisely, we compute equilibria and study their stabilities. The stability of equilibria was investigated using the theory of periodic cooperative systems with a concave nonlinearity. Theoretical results are supported by numerical simulations which further suggest the necessity to implement sanitation campaigns of aquatic environments by using suitable products against the bacteria during the periods of growth of aquatic reservoirs.
9. The Roy Adaptation Model: A Theoretical Framework for Nurses Providing Care to Individuals With Anorexia Nervosa.
Science.gov (United States)
Jennings, Karen M
Using a nursing theoretical framework to understand, elucidate, and propose nursing research is fundamental to knowledge development. This article presents the Roy Adaptation Model as a theoretical framework to better understand individuals with anorexia nervosa during acute treatment, and the role of nursing assessments and interventions in the promotion of weight restoration. Nursing assessments and interventions situated within the Roy Adaptation Model take into consideration how weight restoration does not occur in isolation but rather reflects an adaptive process within external and internal environments, and has the potential for more holistic care.
10. A proposed general model of information behaviour.
Directory of Open Access Journals (Sweden)
2003-01-01
Full Text Available Presents a critical description of Wilson's (1996 global model of information behaviour and proposes major modification on the basis of research into information behaviour of managers, conducted in Poland. The theoretical analysis and research results suggest that Wilson's model has certain imperfections, both in its conceptual content, and in graphical presentation. The model, for example, cannot be used to describe managers' information behaviour, since managers basically are not the end users of external from organization or computerized information services, and they acquire information mainly through various intermediaries. Therefore, the model cannot be considered as a general model, applicable to every category of information users. The proposed new model encompasses the main concepts of Wilson's model, such as: person-in-context, three categories of intervening variables (individual, social and environmental, activating mechanisms, cyclic character of information behaviours, and the adoption of a multidisciplinary approach to explain them. However, the new model introduces several changes. They include: 1. identification of 'context' with the intervening variables; 2. immersion of the chain of information behaviour in the 'context', to indicate that the context variables influence behaviour at all stages of the process (identification of needs, looking for information, processing and using it; 3. stress is put on the fact that the activating mechanisms also can occur at all stages of the information acquisition process; 4. introduction of two basic strategies of looking for information: personally and/or using various intermediaries.
11. Theoretical Models and Operational Frameworks in Public Health Ethics
Directory of Open Access Journals (Sweden)
Carlo Petrini
2010-01-01
Full Text Available The article is divided into three sections: (i an overview of the main ethical models in public health (theoretical foundations; (ii a summary of several published frameworks for public health ethics (practical frameworks; and (iii a few general remarks. Rather than maintaining the superiority of one position over the others, the main aim of the article is to summarize the basic approaches proposed thus far concerning the development of public health ethics by describing and comparing the various ideas in the literature. With this in mind, an extensive list of references is provided.
12. Towards a Theoretical Construct for Modelling Smallholders’ Forestland-Use Decisions: What Can We Learn from Agriculture and Forest Economics?
Directory of Open Access Journals (Sweden)
Kahlil Baker
2017-09-01
Full Text Available Academic research on smallholders’ forestland-use decisions is regularly addressed in different streams of literature using different theoretical constructs that are independently incomplete. In this article, we propose a theoretical construct for modelling smallholders’ forestland-use decisions intended to serve in the guidance and operationalization of future models for quantitative analysis. Our construct is inspired by the sub-disciplines of forestry and agricultural economics with a crosscutting theme of how transaction costs drive separability between consumption and production decisions. Our results help explain why exogenous variables proposed in the existing literature are insufficient at explaining smallholders’ forestland-use decisions, and provide theoretical context for endogenizing characteristics of the household, farm and landscape. Smallholders’ forestland-use decisions are best understood in an agricultural context of competing uses for household assets and interdependent consumption and production decisions. Forest production strategies range from natural regeneration to intensive management of the forest resource to co-jointly produce market and non-market values. Due to transaction costs, decision prices are best represented by their shadow as opposed to market prices. Shadow prices are shaped by endogenous smallholder-specific preferences for leisure, non-market values, time, risk, and uncertainty. Our proposed construct is intended to provide a theoretical basis to assist modellers in the selection of variables for quantitative analysis.
13. Theoretical Background for the Decision-Making Process Modelling under Controlled Intervention Conditions
Directory of Open Access Journals (Sweden)
Bakanauskienė Irena
2017-12-01
Full Text Available This article is intended to theoretically justify the decision-making process model for the cases, when active participation of investing entities in controlling the activities of an organisation and their results is noticeable. Based on scientific literature analysis, a concept of controlled conditions is formulated, and using a rational approach to the decision-making process, a model of the 11-steps decision-making process under controlled intervention is presented. Also, there have been unified conditions, describing the case of controlled interventions thus providing preconditions to ensure the adequacy of the proposed decision-making process model.
14. The interrogation decision-making model: A general theoretical framework for confessions.
Science.gov (United States)
Yang, Yueran; Guyll, Max; Madon, Stephanie
2017-02-01
This article presents a new model of confessions referred to as the interrogation decision-making model . This model provides a theoretical umbrella with which to understand and analyze suspects' decisions to deny or confess guilt in the context of a custodial interrogation. The model draws upon expected utility theory to propose a mathematical account of the psychological mechanisms that not only underlie suspects' decisions to deny or confess guilt at any specific point during an interrogation, but also how confession decisions can change over time. Findings from the extant literature pertaining to confessions are considered to demonstrate how the model offers a comprehensive and integrative framework for organizing a range of effects within a limited set of model parameters. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
15. Information-theoretic model selection for optimal prediction of stochastic dynamical systems from data
Science.gov (United States)
Darmon, David
2018-03-01
In the absence of mechanistic or phenomenological models of real-world systems, data-driven models become necessary. The discovery of various embedding theorems in the 1980s and 1990s motivated a powerful set of tools for analyzing deterministic dynamical systems via delay-coordinate embeddings of observations of their component states. However, in many branches of science, the condition of operational determinism is not satisfied, and stochastic models must be brought to bear. For such stochastic models, the tool set developed for delay-coordinate embedding is no longer appropriate, and a new toolkit must be developed. We present an information-theoretic criterion, the negative log-predictive likelihood, for selecting the embedding dimension for a predictively optimal data-driven model of a stochastic dynamical system. We develop a nonparametric estimator for the negative log-predictive likelihood and compare its performance to a recently proposed criterion based on active information storage. Finally, we show how the output of the model selection procedure can be used to compare candidate predictors for a stochastic system to an information-theoretic lower bound.
16. Theoretical study of evaporation heat transfer in horizontal microfin tubes: stratified flow model
Energy Technology Data Exchange (ETDEWEB)
Honda, H; Wang, Y S [Kyushu Univ., Inst. for Materials Chemistry and Engineering, Kasuga, Fukuoka (Japan)
2004-08-01
The stratified flow model of evaporation heat transfer in helically grooved, horizontal microfin tubes has been developed. The profile of stratified liquid was determined by a theoretical model previously developed for condensation in horizontal microfin tubes. For the region above the stratified liquid, the meniscus profile in the groove between adjacent fins was determined by a force balance between the gravity and surface tension forces. The thin film evaporation model was applied to predict heat transfer in the thin film region of the meniscus. Heat transfer through the stratified liquid was estimated by using an empirical correlation proposed by Mori et al. The theoretical predictions of the circumferential average heat transfer coefficient were compared with available experimental data for four tubes and three refrigerants. A good agreement was obtained for the region of Fr{sub 0}<2.5 as long as partial dry out of tube surface did not occur. (Author)
17. Algebraic Specifications, Higher-order Types and Set-theoretic Models
DEFF Research Database (Denmark)
Kirchner, Hélène; Mosses, Peter David
2001-01-01
, and power-sets. This paper presents a simple framework for algebraic specifications with higher-order types and set-theoretic models. It may be regarded as the basis for a Horn-clause approximation to the Z framework, and has the advantage of being amenable to prototyping and automated reasoning. Standard......In most algebraic specification frameworks, the type system is restricted to sorts, subsorts, and first-order function types. This is in marked contrast to the so-called model-oriented frameworks, which provide higer-order types, interpreted set-theoretically as Cartesian products, function spaces...... set-theoretic models are considered, and conditions are given for the existence of initial reduct's of such models. Algebraic specifications for various set-theoretic concepts are considered....
18. A theoretical model of rain–wind–induced in-plane galloping on overhead transmission tower-lines system
Directory of Open Access Journals (Sweden)
Chao Zhou
2015-09-01
Full Text Available Rain–wind–induced galloping phenomenon often occurs on overhead transmission tower-lines system, just as icing galloping and vortex-excited vibration; this kind of instability oscillation can cause power-line breakage or tower failure. However, the existing theoretical models of rain–wind–induced galloping are mainly based on the hypothesis of the overhead power-lines with fixed ends, which is inconsistent with the actual operation situation. Therefore, this article thus presents a preliminary theoretical study and proposes a new theoretical model taking into account the effect of tower excitations on the in-plane galloping of the overhead power-line and on the motion of the upper rain-line. The theoretical model is solved by Galerkin method and verified by the comparison with the test data obtained in the available literature involved with the overhead power-lines with fixed towers or moving towers. It turns out that the tower excitations may intensify the in-plane galloping amplitude of the overhead power-line within a certain range of frequency ratio and enable better comprehension of rain–wind–induced galloping mechanism.
19. A Simple theoretical model for 63Ni betavoltaic battery
International Nuclear Information System (INIS)
ZUO, Guoping; ZHOU, Jianliang; KE, Guotu
2013-01-01
A numerical simulation of the energy deposition distribution in semiconductors is performed for 63 Ni beta particles. Results show that the energy deposition distribution exhibits an approximate exponential decay law. A simple theoretical model is developed for 63 Ni betavoltaic battery based on the distribution characteristics. The correctness of the model is validated by two literature experiments. Results show that the theoretical short-circuit current agrees well with the experimental results, and the open-circuit voltage deviates from the experimental results in terms of the influence of the PN junction defects and the simplification of the source. The theoretical model can be applied to 63 Ni and 147 Pm betavoltaic batteries. - Highlights: • The energy deposition distribution is found following an approximate exponential decay law when beta particles emitted from 63 Ni pass through a semiconductor. • A simple theoretical model for 63 Ni betavoltaic battery is constructed based on the exponential decay law. • Theoretical model can be applied to the betavoltaic batteries which radioactive source has a similar energy spectrum with 63 Ni, such as 147 Pm
20. THEORETICAL PROPOSAL FOR EXPANSION OF ROE WITH NEW SUB-RATIOS
Directory of Open Access Journals (Sweden)
Danilo Dorović
2017-09-01
Full Text Available ROE is the ratio of profitability which can be separated into three ratios in Du Pont model. The question is - can it be even more comprehensive with more than three Du Pont ratios; that is can it also include liquidity, market share, break-even point, plan vs. actual, structure of assets and liabilities, structure of fixed costs, etc.? If these can be included in calculation, the financial, management accounting and strategic analysis could be more integrated into one more rounded system. Financial ratio analysis would also integrate into one ratio the usually different areas of analysis, like structure of assets, structure of liabili-ties, liquidity, turnover, financial leverage, etc. Strategic management and management accounting ratios, developed in the literature and used in business practice, are represented. The article in front of you presents a theoretical proposal through deduction method of how mentioned measures can potentially be included in ROE, resulting in potential benefits in planning and controlling. Integrated different areas of financial ratio analysis, manage- ment accounting and strategic analysis each represented with its ratios in profitability measure ratio, provides potentially better view of conditions, profit multiplicators and risk the profitability is achieved by. Integration inside profitability measure gives a special qual-itative advantage, having in mind that achieved profit is the main goal for owners of the companys equity.
1. Theoretical models of neutron emission in fission
International Nuclear Information System (INIS)
1992-01-01
A brief survey of theoretical representations of two of the observables in neutron emission in fission is given, namely, the prompt fission neutron spectrum N(E) and the average prompt neutron multiplicity bar v p . Early representations of the two observables are presented and their deficiencies are discussed. This is followed by summaries and examples of recent theoretical models for the calculation of these quantities. Emphasis is placed upon the predictability and accuracy of the new models. In particular, the dependencies of N(E) and bar v p upon the fissioning nucleus and its excitation energy are treated. Recent work in the calculation of the prompt fission neutron spectrum matrix N(E,E n ), where E n is the energy of the neutron inducing fission, is then discussed. Concluding remarks address the current status of our ability to calculate these observables with confidence, the direction of future theoretical efforts, and limititations to current and future calculations. Finally, recommendations are presented as to which model should be used currently and which model should be pursued in future efforts
2. [Impact of small-area context on health: proposing a conceptual model].
Science.gov (United States)
Voigtländer, S; Mielck, A; Razum, O
2012-11-01
Recent empirical studies stress the impact of features related to the small-area context on individual health. However, so far there exists no standard explanatory model that integrates the different kinds of such features and that conceptualises their relation to individual characteristics of social inequality. A review of theoretical publications on the relationship between social position and health as well as existing conceptual models for the impact of features related to the small-area context on health was undertaken. In the present article we propose a conceptual model for the health impact of the small-area context. This model conceptualises the location of residence as one dimension of social inequality that affects health through the resources as well as stressors which are inherent in the small-area context. The proposed conceptual model offers an orientation for future empirical studies and can serve as a basis for further discussions concerning the health relevance of the small-area context. © Georg Thieme Verlag KG Stuttgart · New York.
3. Hybrid rocket engine, theoretical model and experiment
Science.gov (United States)
Chelaru, Teodor-Viorel; Mingireanu, Florin
2011-06-01
The purpose of this paper is to build a theoretical model for the hybrid rocket engine/motor and to validate it using experimental results. The work approaches the main problems of the hybrid motor: the scalability, the stability/controllability of the operating parameters and the increasing of the solid fuel regression rate. At first, we focus on theoretical models for hybrid rocket motor and compare the results with already available experimental data from various research groups. A primary computation model is presented together with results from a numerical algorithm based on a computational model. We present theoretical predictions for several commercial hybrid rocket motors, having different scales and compare them with experimental measurements of those hybrid rocket motors. Next the paper focuses on tribrid rocket motor concept, which by supplementary liquid fuel injection can improve the thrust controllability. A complementary computation model is also presented to estimate regression rate increase of solid fuel doped with oxidizer. Finally, the stability of the hybrid rocket motor is investigated using Liapunov theory. Stability coefficients obtained are dependent on burning parameters while the stability and command matrixes are identified. The paper presents thoroughly the input data of the model, which ensures the reproducibility of the numerical results by independent researchers.
4. How prayer heals: a theoretical model.
Science.gov (United States)
Levin, J S
1996-01-01
This article presents a theoretical model that outlines various possible explanations for the healing effects of prayer. Four classes of mechanisms are defined on the basis of whether healing has naturalistic or supernatural origins and whether it operates locally or nonlocally. Through this framework, most of the currently proposed hypotheses for understanding absent healing and other related phenomena-hypotheses that invoke such concepts as subtle energy, psi, consciousness, morphic fields, and extended mind-are shown to be no less naturalistic than the Newtonian, mechanistic forces of allopathic biomedicine so often derided for their materialism. In proposing that prayer may heal through nonlocal means according to mechanisms and theories proposed by the new physics, Dossey is almost alone among medical scholars in suggesting the possible limitations and inadequacies of hypotheses based on energies, forces, and fields. Yet even such nonlocal effects can be conceived of as naturalistic; that is, they are explained by physical laws that may be unbelievable or unfamiliar to most physicians but that are nonetheless becoming recognized as operant laws of the natural universe. The concept of the supernatural, however, is something altogether different, and is, by definition, outside of or beyond nature. Herein may reside an either wholly or partly transcendent Creator-God who is believed by many to heal through means that transcend the laws of the created universe, both its local and nonlocal elements, and that are thus inherently inaccessible to and unknowable by science. Such an explanation for the effects of prayer merits consideration and, despite its unprovability by medical science, should not be dismissed out of hand.
5. THEORETICAL AND EXPERIMENTAL CONTRIBUTIONS CONCERNING THE PROPOSED MODEL FOR THE DISC-TYPED ROTARY ULTRASONIC MOTOR
Directory of Open Access Journals (Sweden)
Oana CHIVU
2010-10-01
Full Text Available In this work the proposed model for type-disk, ultrasonic motor rotating, elliptic movement to surface beam. A sinusoidal vibration of the vertical displacement in the z-direction, Assume that the vertical displacement of the neutral plane, equals the product of the slope of the neutral plane and half of the beam height, the tangential velocity vs at the upper surface is given.
6. Theoretical models for recombination in expanding gas
International Nuclear Information System (INIS)
Avron, Y.; Kahane, S.
1978-09-01
In laser isotope separation of atomic uranium, one is confronted with the theoretical problem of estimating the concentration of thermally ionized uranium atoms. To investigate this problem theoretical models for recombination in an expanding gas and in the absence of local thermal equilibrium have been constructed. The expansion of the gas is described by soluble models of the hydrodynamic equation, and the recombination by rate equations. General results for the freezing effect for the suitable ranges of the gas parameters are obtained. The impossibility of thermal equilibrium in expanding two-component systems is proven
7. A P-value model for theoretical power analysis and its applications in multiple testing procedures
Directory of Open Access Journals (Sweden)
Fengqing Zhang
2016-10-01
Full Text Available Abstract Background Power analysis is a critical aspect of the design of experiments to detect an effect of a given size. When multiple hypotheses are tested simultaneously, multiplicity adjustments to p-values should be taken into account in power analysis. There are a limited number of studies on power analysis in multiple testing procedures. For some methods, the theoretical analysis is difficult and extensive numerical simulations are often needed, while other methods oversimplify the information under the alternative hypothesis. To this end, this paper aims to develop a new statistical model for power analysis in multiple testing procedures. Methods We propose a step-function-based p-value model under the alternative hypothesis, which is simple enough to perform power analysis without simulations, but not too simple to lose the information from the alternative hypothesis. The first step is to transform distributions of different test statistics (e.g., t, chi-square or F to distributions of corresponding p-values. We then use a step function to approximate each of the p-value’s distributions by matching the mean and variance. Lastly, the step-function-based p-value model can be used for theoretical power analysis. Results The proposed model is applied to problems in multiple testing procedures. We first show how the most powerful critical constants can be chosen using the step-function-based p-value model. Our model is then applied to the field of multiple testing procedures to explain the assumption of monotonicity of the critical constants. Lastly, we apply our model to a behavioral weight loss and maintenance study to select the optimal critical constants. Conclusions The proposed model is easy to implement and preserves the information from the alternative hypothesis.
8. Theoretical Framework and Model Design for Beautiful Countryside Construction in China
Directory of Open Access Journals (Sweden)
ZHENG Xiang-qun
2015-04-01
Full Text Available In the context of China today, the process of beautiful countryside construction mainly imitates the patterns of‘urbanization’construction. However, this approach leads to the loss of countryside characteristics and the separation of agricultural culture. Therefore, it's urgent to carry out research of the theoretical framework and model design for beautiful countryside construction. In this paper, based on the analysis of the beautiful countryside construction connotation, the basic theory of beautiful countryside construction was summarized in three aspects: rural complex ecosystem model, multi-functionality of rural model and sustainable development evaluation model. The basic idea of the beautiful countryside construction mode was studied. The design method of beautiful countryside construction mode was proposed in three levels: planning, scheming and evaluating. The research results might offer scientific reference for improving the scientific and operational nature of beautiful countryside construction.
9. Droplet size in flow: Theoretical model and application to polymer blends
Science.gov (United States)
Fortelný, Ivan; Jůza, Josef
2017-05-01
The paper is focused on prediction of the average droplet radius, R, in flowing polymer blends where the droplet size is determined by dynamic equilibrium between the droplet breakup and coalescence. Expressions for the droplet breakup frequency in systems with low and high contents of the dispersed phase are derived using available theoretical and experimental results for model blends. Dependences of the coalescence probability, Pc, on system parameters, following from recent theories, is considered and approximate equation for Pc in a system with a low polydispersity in the droplet size is proposed. Equations for R in systems with low and high contents of the dispersed phase are derived. Combination of these equations predicts realistic dependence of R on the volume fraction of dispersed droplets, φ. Theoretical prediction of the ratio of R to the critical droplet radius at breakup agrees fairly well with experimental values for steadily mixed polymer blends.
10. A Production Model for Construction: A Theoretical Framework
Directory of Open Access Journals (Sweden)
Ricardo Antunes
2015-03-01
Full Text Available The building construction industry faces challenges, such as increasing project complexity and scope requirements, but shorter deadlines. Additionally, economic uncertainty and rising business competition with a subsequent decrease in profit margins for the industry demands the development of new approaches to construction management. However, the building construction sector relies on practices based on intuition and experience, overlooking the dynamics of its production system. Furthermore, researchers maintain that the construction industry has no history of the application of mathematical approaches to model and manage production. Much work has been carried out on how manufacturing practices apply to construction projects, mostly lean principles. Nevertheless, there has been little research to understand the fundamental mechanisms of production in construction. This study develops an in-depth literature review to examine the existing knowledge about production models and their characteristics in order to establish a foundation for dynamic production systems management in construction. As a result, a theoretical framework is proposed, which will be instrumental in the future development of mathematical production models aimed at predicting the performance and behaviour of dynamic project-based systems in construction.
11. STRUCTURAL AND METHODICAL MODEL OF INCREASING THE LEVEL OF THEORETICAL TRAINING OF CADETS USING INFORMATION AND COMMUNICATION TECHNOLOGIES
Directory of Open Access Journals (Sweden)
2018-03-01
Full Text Available Features of training in higher educational institutions of system of EMERCOM of Russia demand introduction of the new educational techniques and the technical means directed on intensification of educational process, providing an opportunity of preparation of cadets at any time in the independent mode and improving quality of their theoretical knowledge. The authors have developed a structural and methodological model of increasing the level of theoretical training of cadets using information and communication technologies. The proposed structural and methodological model that includes elements to stimulate and enhance cognitive activity, allows you to generate the trajectory of theoretical training of cadets for the entire period of study at the University, to organize a systematic independent work, objective, current and final control of theoretical knowledge. The structural and methodological model for improving the level of theoretical training consists of three main elements: the base of theoretical questions, functional modules "teacher" and "cadet". The basis of the structural and methodological model of increasing the level of theoretical training of cadets is the base of theoretical issues, developed in all disciplines specialty 20.05.01 – fire safety. The functional module "teacher" allows you to create theoretical questions of various kinds, edit questions and delete them from the database if necessary, as well as create tests and monitor their implementation. The functional module "cadet" provides ample opportunities for theoretical training through independent work, testing for current and final control, the implementation of the game form of training in the form of a duel, as well as for the formation of the results of the cadets in the form of statistics and rankings. Structural and methodical model of increasing the level of theoretical training of cadets is implemented in practice in the form of a multi-level automated system
12. Development of a theoretical model for measuring the perceived value of social responsibility of IPEN
International Nuclear Information System (INIS)
Mutarelli, Rita de Cassia; Lima, Ana Cecilia de Souza; Sabundjian, Gaiane
2015-01-01
Social responsibility has been one of the great discussions in institutional management, and that is an important variable in the strategy and performance of the institutions. The Instituto de Pesquisas Energeticas e Nucleares (IPEN) has worked for the development of environmental and social issues, converging mainly to the benefit of the population. The theory that guides the social responsibility practices is always difficult to measure for several reasons. One reason for this difficulty is that social responsibility involves a variety of issues that are converted in rights, obligations and expectations of different audiences that could be internal and external to the organization. In addition, the different understanding of the institutions about social and environmental issues is another source of complexity. Based on the study context including: the topic being researched, the chosen institute and the questions resulting from the research, the aim of this paper is to propose a theoretical model to describe and analyze the social responsibility of IPEN. The main contribution of this study is to develop a model that integrates the dimensions of social responsibility. These dimensions - also called constructs - are composed of indexes and indicators that were previously used in various contexts of empirical research, combined with the theoretical and conceptual review of social responsibility. The construction of the proposed theoretical model was based on the research of various methodologies and various indicators for measuring social responsibility. This model was statistically tested, analyzed, adjusted, and the end result is a consistent model to measure the perceived value of social responsibility of IPEN. This work could also be applied to other institutions. Moreover, it may be improved and become a tool that will serve as a thermometer to measure social and environmental issues, and will support decision making in various management processes. (author)
13. Development of a theoretical model for measuring the perceived value of social responsibility of IPEN
Energy Technology Data Exchange (ETDEWEB)
Mutarelli, Rita de Cassia; Lima, Ana Cecilia de Souza; Sabundjian, Gaiane, E-mail: [email protected], E-mail: [email protected], E-mail: [email protected] [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)
2015-07-01
Social responsibility has been one of the great discussions in institutional management, and that is an important variable in the strategy and performance of the institutions. The Instituto de Pesquisas Energeticas e Nucleares (IPEN) has worked for the development of environmental and social issues, converging mainly to the benefit of the population. The theory that guides the social responsibility practices is always difficult to measure for several reasons. One reason for this difficulty is that social responsibility involves a variety of issues that are converted in rights, obligations and expectations of different audiences that could be internal and external to the organization. In addition, the different understanding of the institutions about social and environmental issues is another source of complexity. Based on the study context including: the topic being researched, the chosen institute and the questions resulting from the research, the aim of this paper is to propose a theoretical model to describe and analyze the social responsibility of IPEN. The main contribution of this study is to develop a model that integrates the dimensions of social responsibility. These dimensions - also called constructs - are composed of indexes and indicators that were previously used in various contexts of empirical research, combined with the theoretical and conceptual review of social responsibility. The construction of the proposed theoretical model was based on the research of various methodologies and various indicators for measuring social responsibility. This model was statistically tested, analyzed, adjusted, and the end result is a consistent model to measure the perceived value of social responsibility of IPEN. This work could also be applied to other institutions. Moreover, it may be improved and become a tool that will serve as a thermometer to measure social and environmental issues, and will support decision making in various management processes. (author)
14. Theoretical Models, Assessment Frameworks and Test Construction.
Science.gov (United States)
Chalhoub-Deville, Micheline
1997-01-01
Reviews the usefulness of proficiency models influencing second language testing. Findings indicate that several factors contribute to the lack of congruence between models and test construction and make a case for distinguishing between theoretical models. Underscores the significance of an empirical, contextualized and structured approach to the…
15. Modeling business processes: theoretical and practical aspects
Directory of Open Access Journals (Sweden)
V.V. Dubininа
2015-06-01
Full Text Available The essence of process-oriented enterprise management has been examined in the article. The content and types of information technology have been analyzed in the article, due to the complexity and differentiation of existing methods, as well as the specificity of language, terminology of the enterprise business processes modeling. The theoretical aspects of business processes modeling have been reviewed and the modern traditional modeling techniques received practical application in the visualization model of retailers activity have been studied in the article. In the process of theoretical analysis of the modeling methods found that UFO-toolkit method that has been developed by Ukrainian scientists due to it systemology integrated opportunities, is the most suitable for structural and object analysis of retailers business processes. It was designed visualized simulation model of the business process "sales" as is" of retailers using a combination UFO-elements with the aim of the further practical formalization and optimization of a given business process.
16. Hartree-Fock-Bogoliubov model: a theoretical and numerical perspective
International Nuclear Information System (INIS)
Paul, S.
2012-01-01
This work is devoted to the theoretical and numerical study of Hartree-Fock-Bogoliubov (HFB) theory for attractive quantum systems, which is one of the main methods in nuclear physics. We first present the model and its main properties, and then explain how to get numerical solutions. We prove some convergence results, in particular for the simple fixed point algorithm (sometimes called Roothaan). We show that it converges, or oscillates between two states, none of them being a solution. This generalizes to the HFB case previous results of Cances and Le Bris for the simpler Hartree-Fock model in the repulsive case. Following these authors, we also propose a relaxed constraint algorithm for which convergence is guaranteed. In the last part of the thesis, we illustrate the behavior of these algorithms by some numerical experiments. We first consider a system where the particles only interact through the Newton potential. Our numerical results show that the pairing matrix never vanishes, a fact that has not yet been proved rigorously. We then study a very simplified model for protons and neutrons in a nucleus. (author)
17. The (neurocognitive mechanisms behind attention bias modification in anxiety: Proposals based on theoretical accounts of attentional bias
Directory of Open Access Journals (Sweden)
Alexandre eHeeren
2013-04-01
Full Text Available Recently, researchers have investigated the causal nature of attentional bias for threat (AB in the maintenance of anxiety disorders by experimentally manipulating it. They found that training anxious individuals to attend to nonthreat stimuli reduces AB, which, in turn, reduces anxiety. This effect supports the hypothesis that AB can causally impact the maintenance of anxiety. At a fundamental level, however, uncertainty still abounds regarding the nature of the processes that mediate this effect. In the present paper, we propose that two contrasting approaches may be derived from theoretical accounts of AB. According to a first class of models, called the valence-specific bias models, modifying AB requires the modification of valence-specific attentional selectivity. According to a second class of models, called the attention control models, modifying AB requires the modification of attention control, driven by the recruitment of the dorsolateral prefrontal cortex. We formulate a series of specific predictions, to provide suggestions to trial these two approaches one against the other. This knowledge is critical for understanding the mechanisms of AB in anxiety disorders, which bares important clinical implications.
18. Accelerator simulation and theoretical modelling of radiation effects (SMoRE)
CERN Document Server
2018-01-01
This publication summarizes the findings and conclusions of the IAEA coordinated research project (CRP) on accelerator simulation and theoretical modelling of radiation effects, aimed at supporting Member States in the development of advanced radiation-resistant structural materials for implementation in innovative nuclear systems. This aim can be achieved through enhancement of both experimental neutron-emulation capabilities of ion accelerators and improvement of the predictive efficiency of theoretical models and computer codes. This dual approach is challenging but necessary, because outputs of accelerator simulation experiments need adequate theoretical interpretation, and theoretical models and codes need high dose experimental data for their verification. Both ion irradiation investigations and computer modelling have been the specific subjects of the CRP, and the results of these studies are presented in this publication which also includes state-ofthe- art reviews of four major aspects of the project...
19. Examining Asymmetrical Relationships of Organizational Learning Antecedents: A Theoretical Model
Directory of Open Access Journals (Sweden)
Ery Tri Djatmika
2016-02-01
Full Text Available Global era is characterized by highly competitive advantage market demand. Responding to the challenge of rapid environmental changes, organizational learning is becoming a strategic way and solution to empower people themselves within the organization in order to create a novelty as valuable positioning source. For research purposes, determining the influential antecedents that affect organizational learning is vital to understand research-based solutions given for practical implications. Accordingly, identification of variables examined by asymmetrical relationships is critical to establish. Possible antecedent variables come from organizational and personal point of views. It is also possible to include a moderating one. A proposed theoretical model of asymmetrical effects of organizational learning and its antecedents is discussed in this article.
20. Proposal of a Model for effective Management and Development of virtual Teams
Directory of Open Access Journals (Sweden)
Petr Skyrik
2010-10-01
Full Text Available The main aim of this paper is to present a pilot proposal of a model of “Virtual Development Management System” (ViDeMaS which will facilitate more effective management and development of virtual teams. Management and development of virtual teams is not a simple concept. It comprises a body of knowledge from a number of fields and scientific disciplines. The complexity of the concept may not be simplified as it is absolutely essential for full understanding of its nature. In order to gain better orientation in the concept, different perspectives will be used in the description of the model, which will enable us to achieve the goal of the work and to present the main results of the work (creation of a model for Virtual Development Management System. The present paper thus describes from different perspectives the proposal of a sufficiently detailed and complex model that may be utilized both on theoretical and application level.
1. Empathy and child neglect: a theoretical model.
Science.gov (United States)
De Paul, Joaquín; Guibert, María
2008-11-01
To present an explanatory theory-based model of child neglect. This model does not address neglectful behaviors of parents with mental retardation, alcohol or drug abuse, or severe mental health problems. In this model parental behavior aimed to satisfy a child's need is considered a helping behavior and, as a consequence, child neglect is considered as a specific type of non-helping behavior. The central hypothesis of the theoretical model presented here suggests that neglectful parents cannot develop the helping response set to care for their children because the observation of a child's signal of need does not lead to the experience of emotions that motivate helping or because the parents experience these emotions, but specific cognitions modify the motivation to help. The present theoretical model suggests that different typologies of neglectful parents could be developed based on different reasons that parents might not to experience emotions that motivate helping behaviors. The model can be helpful to promote new empirical studies about the etiology of different groups of neglectful families.
2. Modeling Multibody Systems with Uncertainties. Part I: Theoretical and Computational Aspects
International Nuclear Information System (INIS)
2006-01-01
This study explores the use of generalized polynomial chaos theory for modeling complex nonlinear multibody dynamic systems in the presence of parametric and external uncertainty. The polynomial chaos framework has been chosen because it offers an efficient computational approach for the large, nonlinear multibody models of engineering systems of interest, where the number of uncertain parameters is relatively small, while the magnitude of uncertainties can be very large (e.g., vehicle-soil interaction). The proposed methodology allows the quantification of uncertainty distributions in both time and frequency domains, and enables the simulations of multibody systems to produce results with 'error bars'. The first part of this study presents the theoretical and computational aspects of the polynomial chaos methodology. Both unconstrained and constrained formulations of multibody dynamics are considered. Direct stochastic collocation is proposed as less expensive alternative to the traditional Galerkin approach. It is established that stochastic collocation is equivalent to a stochastic response surface approach. We show that multi-dimensional basis functions are constructed as tensor products of one-dimensional basis functions and discuss the treatment of polynomial and trigonometric nonlinearities. Parametric uncertainties are modeled by finite-support probability densities. Stochastic forcings are discretized using truncated Karhunen-Loeve expansions. The companion paper 'Modeling Multibody Dynamic Systems With Uncertainties. Part II: Numerical Applications' illustrates the use of the proposed methodology on a selected set of test problems. The overall conclusion is that despite its limitations, polynomial chaos is a powerful approach for the simulation of multibody systems with uncertainties
3. Studying Economic Space: Synthesis of Balance and Game-Theoretic Methods of Modelling
Directory of Open Access Journals (Sweden)
2015-12-01
Full Text Available The article introduces questions about development of models used to study economic space. The author proposes the model that combines balance and game-theoretic methods for estimating system effects of economic agents’ interactions in multi-level economic space. The model is applied to research interactions between economic agents that are spatially heterogeneous within the Russian Far East. In the model the economic space of region is considered in a territorial dimension (the first level of decomposing space and also in territorial and product dimensions (the second level of decomposing space. The paper shows the mechanism of system effects formation that exists in the economic space of region. The author estimates system effects, analyses the real allocation of these effects between economic agents and identifies three types of local industrial markets: with zero, positive and negative system effects
4. Cigarette smoking and depression comorbidity: systematic review and proposed theoretical model.
Science.gov (United States)
Mathew, Amanda R; Hogarth, Lee; Leventhal, Adam M; Cook, Jessica W; Hitsman, Brian
2017-03-01
Despite decades of research on co-occurring smoking and depression, cessation rates remain consistently lower for depressed smokers than for smokers in the general population, highlighting the need for theory-driven models of smoking and depression. This paper provides a systematic review with a particular focus upon psychological states that disproportionately motivate smoking in depression, and frame an incentive learning theory account of smoking-depression co-occurrence. We searched PubMed, Scopus, PsychINFO and CINAHL to December 2014, which yielded 852 papers. Using pre-established eligibility criteria, we identified papers focused on clinical issues and motivational mechanisms underlying smoking in established, adult smokers (i.e. maintenance, quit attempts and cessation/relapse) with elevated symptoms of depression. Two reviewers determined independently whether papers met review criteria. We included 297 papers in qualitative synthesis. Our review identified three primary mechanisms that underlie persistent smoking among depressed smokers: low positive affect, high negative affect and cognitive impairment. We propose a novel application of incentive learning theory which posits that depressed smokers experience greater increases in the expected value of smoking in the face of these three motivational states, which promotes goal-directed choice of smoking behavior over alternative actions. The incentive learning theory accounts for current evidence on how depression primes smoking behavior and provides a unique framework for conceptualizing psychological mechanisms of smoking maintenance among depressed smokers. Treatment should focus upon correcting adverse internal states and beliefs about the high value of smoking in those states to improve cessation outcomes for depressed smokers. © 2016 Society for the Study of Addiction.
5. Theoretical Biology and Medical Modelling: ensuring continued growth and future leadership.
Science.gov (United States)
Nishiura, Hiroshi; Rietman, Edward A; Wu, Rongling
2013-07-11
Theoretical biology encompasses a broad range of biological disciplines ranging from mathematical biology and biomathematics to philosophy of biology. Adopting a broad definition of "biology", Theoretical Biology and Medical Modelling, an open access journal, considers original research studies that focus on theoretical ideas and models associated with developments in biology and medicine.
6. A New Approach for Modeling Darrieus-Type Vertical Axis Wind Turbine Rotors Using Electrical Equivalent Circuit Analogy: Basis of Theoretical Formulations and Model Development
Directory of Open Access Journals (Sweden)
Pierre Tchakoua
2015-09-01
Full Text Available Models are crucial in the engineering design process because they can be used for both the optimization of design parameters and the prediction of performance. Thus, models can significantly reduce design, development and optimization costs. This paper proposes a novel equivalent electrical model for Darrieus-type vertical axis wind turbines (DTVAWTs. The proposed model was built from the mechanical description given by the Paraschivoiu double-multiple streamtube model and is based on the analogy between mechanical and electrical circuits. This work addresses the physical concepts and theoretical formulations underpinning the development of the model. After highlighting the working principle of the DTVAWT, the step-by-step development of the model is presented. For assessment purposes, simulations of aerodynamic characteristics and those of corresponding electrical components are performed and compared.
7. The Associative Basis of Scientific Creativity: A Model Proposal
Directory of Open Access Journals (Sweden)
Esra Kanli
2014-06-01
Full Text Available Creativity is accepted as an important part of scientific skills. Scientific creativity proceeds from a need or urge to solve a problem, and in-volves the production of original and useful ideas or products. Existing scientific creativity theories and tests do not feature the very im-portant thinking processes, such as analogical and associative thinking, which can be consid-ered crucial in creative scientific problem solv-ing. Current study’s aim is to provide an alter-native model and explicate the associative basis of scientific creativity. Emerging from the re-viewed theoretical framework, Scientific Asso-ciations Model is proposed. This model claims that, similarity and mediation constitutes the basis of creativity and focuses on three compo-nents namely; associative thinking, analogical thinking (analogical reasoning & analogical problem solving and insight which are consid-ered to be main elements of scientific associa-tive thinking.
8. A set-theoretic model reference adaptive control architecture for disturbance rejection and uncertainty suppression with strict performance guarantees
Science.gov (United States)
Arabi, Ehsan; Gruenwald, Benjamin C.; Yucelen, Tansel; Nguyen, Nhan T.
2018-05-01
Research in adaptive control algorithms for safety-critical applications is primarily motivated by the fact that these algorithms have the capability to suppress the effects of adverse conditions resulting from exogenous disturbances, imperfect dynamical system modelling, degraded modes of operation, and changes in system dynamics. Although government and industry agree on the potential of these algorithms in providing safety and reducing vehicle development costs, a major issue is the inability to achieve a-priori, user-defined performance guarantees with adaptive control algorithms. In this paper, a new model reference adaptive control architecture for uncertain dynamical systems is presented to address disturbance rejection and uncertainty suppression. The proposed framework is predicated on a set-theoretic adaptive controller construction using generalised restricted potential functions.The key feature of this framework allows the system error bound between the state of an uncertain dynamical system and the state of a reference model, which captures a desired closed-loop system performance, to be less than a-priori, user-defined worst-case performance bound, and hence, it has the capability to enforce strict performance guarantees. Examples are provided to demonstrate the efficacy of the proposed set-theoretic model reference adaptive control architecture.
9. Game Theoretic Modeling of Water Resources Allocation Under Hydro-Climatic Uncertainty
Science.gov (United States)
Brown, C.; Lall, U.; Siegfried, T.
2005-12-01
Typical hydrologic and economic modeling approaches rely on assumptions of climate stationarity and economic conditions of ideal markets and rational decision-makers. In this study, we incorporate hydroclimatic variability with a game theoretic approach to simulate and evaluate common water allocation paradigms. Game Theory may be particularly appropriate for modeling water allocation decisions. First, a game theoretic approach allows economic analysis in situations where price theory doesn't apply, which is typically the case in water resources where markets are thin, players are few, and rules of exchange are highly constrained by legal or cultural traditions. Previous studies confirm that game theory is applicable to water resources decision problems, yet applications and modeling based on these principles is only rarely observed in the literature. Second, there are numerous existing theoretical and empirical studies of specific games and human behavior that may be applied in the development of predictive water allocation models. With this framework, one can evaluate alternative orderings and rules regarding the fraction of available water that one is allowed to appropriate. Specific attributes of the players involved in water resources management complicate the determination of solutions to game theory models. While an analytical approach will be useful for providing general insights, the variety of preference structures of individual players in a realistic water scenario will likely require a simulation approach. We propose a simulation approach incorporating the rationality, self-interest and equilibrium concepts of game theory with an agent-based modeling framework that allows the distinct properties of each player to be expressed and allows the performance of the system to manifest the integrative effect of these factors. Underlying this framework, we apply a realistic representation of spatio-temporal hydrologic variability and incorporate the impact of
10. Some New Theoretical Issues in Systems Thinking Relevant for Modelling Corporate Learning
Science.gov (United States)
Minati, Gianfranco
2007-01-01
Purpose: The purpose of this paper is to describe fundamental concepts and theoretical challenges with regard to systems, and to build on these in proposing new theoretical frameworks relevant to learning, for example in so-called learning organizations. Design/methodology/approach: The paper focuses on some crucial fundamental aspects introduced…
11. Theoretical models for the muon spectrum at sea level
International Nuclear Information System (INIS)
Abdel-Monem, M.S.; Benbrook, J.R.; Osborne, A.R.; Sheldon, W.R.
1975-01-01
The absolute vertical cosmic ray muon spectrum is investigated theoretically. Models of high energy interactions (namely, Maeda-Cantrell (MC), Constant Energy (CE), Cocconi-Koester-Perkins (CKP) and Scaling Models) are used to calculate the spectrum of cosmic ray muons at sea level. A comparison is made between the measured spectrum and that predicted from each of the four theoretical models. It is concluded that the recently available measured muon differential intensities agree with the scaling model for energies less than 100 GeV and with the CKP model for energies greater than 200 GeV. The measured differential intensities (Abdel-Monem et al.) agree with scaling. (orig.) [de
12. Toward a Theoretical Framework for Information Science
Directory of Open Access Journals (Sweden)
Amanda Spink
2000-01-01
Full Text Available Information Science is beginning to develop a theoretical framework for the modeling of users interactions with information retrieval (IR technologies within the more holistic context of human information behavior (Spink, 1998b. This paper addresses the following questions: (1 What is the nature of Information Science? and (2 What theoretical framework and model is most appropriate for Information Science? This paper proposes a theoretical framework for Information Science based on an explication of the processes of human information coordinating behavior and information feedback that facilitate the relationship between human information behavior and human interaction with information retrieval (IR technologies (Web, digital libraries, etc..
13. An improved theoretical electrochemical-thermal modelling of lithium-ion battery packs in electric vehicles
Science.gov (United States)
Amiribavandpour, Parisa; Shen, Weixiang; Mu, Daobin; Kapoor, Ajay
2015-06-01
A theoretical electrochemical thermal model combined with a thermal resistive network is proposed to investigate thermal behaviours of a battery pack. The combined model is used to study heat generation and heat dissipation as well as their influences on the temperatures of the battery pack with and without a fan under constant current discharge and variable current discharge based on electric vehicle (EV) driving cycles. The comparison results indicate that the proposed model improves the accuracy in the temperature predication of the battery pack by 2.6 times. Furthermore, a large battery pack with four of the investigated battery packs in series is simulated in the presence of different ambient temperatures. The simulation results show that the temperature of the large battery pack at the end of EV driving cycles can reach to 50 °C or 60 °C in high ambient temperatures. Therefore, thermal management system in EVs is required to maintain the battery pack within the safe temperature range.
14. Modeling theoretical uncertainties in phenomenological analyses for particle physics
Energy Technology Data Exchange (ETDEWEB)
Charles, Jerome [CNRS, Aix-Marseille Univ, Universite de Toulon, CPT UMR 7332, Marseille Cedex 9 (France); Descotes-Genon, Sebastien [CNRS, Univ. Paris-Sud, Universite Paris-Saclay, Laboratoire de Physique Theorique (UMR 8627), Orsay Cedex (France); Niess, Valentin [CNRS/IN2P3, UMR 6533, Laboratoire de Physique Corpusculaire, Aubiere Cedex (France); Silva, Luiz Vale [CNRS, Univ. Paris-Sud, Universite Paris-Saclay, Laboratoire de Physique Theorique (UMR 8627), Orsay Cedex (France); Univ. Paris-Sud, CNRS/IN2P3, Universite Paris-Saclay, Groupe de Physique Theorique, Institut de Physique Nucleaire, Orsay Cedex (France); J. Stefan Institute, Jamova 39, P. O. Box 3000, Ljubljana (Slovenia)
2017-04-15
The determination of the fundamental parameters of the Standard Model (and its extensions) is often limited by the presence of statistical and theoretical uncertainties. We present several models for the latter uncertainties (random, nuisance, external) in the frequentist framework, and we derive the corresponding p values. In the case of the nuisance approach where theoretical uncertainties are modeled as biases, we highlight the important, but arbitrary, issue of the range of variation chosen for the bias parameters. We introduce the concept of adaptive p value, which is obtained by adjusting the range of variation for the bias according to the significance considered, and which allows us to tackle metrology and exclusion tests with a single and well-defined unified tool, which exhibits interesting frequentist properties. We discuss how the determination of fundamental parameters is impacted by the model chosen for theoretical uncertainties, illustrating several issues with examples from quark flavor physics. (orig.)
15. Theoretical aspects of spatial-temporal modeling
CERN Document Server
Matsui, Tomoko
2015-01-01
This book provides a modern introductory tutorial on specialized theoretical aspects of spatial and temporal modeling. The areas covered involve a range of topics which reflect the diversity of this domain of research across a number of quantitative disciplines. For instance, the first chapter provides up-to-date coverage of particle association measures that underpin the theoretical properties of recently developed random set methods in space and time otherwise known as the class of probability hypothesis density framework (PHD filters). The second chapter gives an overview of recent advances in Monte Carlo methods for Bayesian filtering in high-dimensional spaces. In particular, the chapter explains how one may extend classical sequential Monte Carlo methods for filtering and static inference problems to high dimensions and big-data applications. The third chapter presents an overview of generalized families of processes that extend the class of Gaussian process models to heavy-tailed families known as alph...
16. Modelling in Accounting. Theoretical and Practical Dimensions
Directory of Open Access Journals (Sweden)
Teresa Szot-Gabryś
2010-10-01
Full Text Available Accounting in the theoretical approach is a scientific discipline based on specific paradigms. In the practical aspect, accounting manifests itself through the introduction of a system for measurement of economic quantities which operates in a particular business entity. A characteristic of accounting is its flexibility and ability of adaptation to information needs of information recipients. One of the main currents in the development of accounting theory and practice is to cover by economic measurements areas which have not been hitherto covered by any accounting system (it applies, for example, to small businesses, agricultural farms, human capital, which requires the development of an appropriate theoretical and practical model. The article illustrates the issue of modelling in accounting based on the example of an accounting model developed for small businesses, i.e. economic entities which are not obliged by law to keep accounting records.
17. A unified theoretical framework for mapping models for the multi-state Hamiltonian.
Science.gov (United States)
Liu, Jian
2016-11-28
We propose a new unified theoretical framework to construct equivalent representations of the multi-state Hamiltonian operator and present several approaches for the mapping onto the Cartesian phase space. After mapping an F-dimensional Hamiltonian onto an F+1 dimensional space, creation and annihilation operators are defined such that the F+1 dimensional space is complete for any combined excitation. Commutation and anti-commutation relations are then naturally derived, which show that the underlying degrees of freedom are neither bosons nor fermions. This sets the scene for developing equivalent expressions of the Hamiltonian operator in quantum mechanics and their classical/semiclassical counterparts. Six mapping models are presented as examples. The framework also offers a novel way to derive such as the well-known Meyer-Miller model.
18. Patent portfolio management: literature review and a proposed model.
Science.gov (United States)
Conegundes De Jesus, Camila Kiyomi; Salerno, Mario Sergio
2018-05-09
Patents and patent portfolios are gaining attention in the last decades, from the called 'pro-patent era' to the recent billionaire transactions involving patent portfolios. The field is growing in importance, both theoretically and practically and despite having substantial literature on new product development portfolio management, we have not found an article relating this theory to patent portfolios. Areas covered: The paper develops a systematic literature review on patent portfolio management to organize the evolution and tendencies of patent portfolio management, highlighting distinctive features of patent portfolio management. Interview with IP manager of three life sciences companies, including a leading multinational group provided relevant information about patent portfolio management. Expert opinion: Based on the systematic literature review on portfolio management, more specifically, on new product development portfolio theory, and interview the paper proposes the paper proposes a reference model to manage patent portfolios. The model comprises four stages aligned with the three goals of the NPD portfolio management: 1 - Linking strategy of the Company's NPD Portfolio to Patent Portfolio; 2 - Balancing the portfolio in buckets; 3 - Patent Valuation (maximizing valuation); 4 - Regularly reviewing the patent portfolio.
19. Box-Cox Test: the theoretical justification and US-China empirical study
Directory of Open Access Journals (Sweden)
Tam Bang Vu
2011-01-01
Full Text Available In econometrics, the derivation of a theoretical model leads sometimes to two econometric models, which can be considered justified based on their respective approximation approaches. Hence, the decision of choosing one between the two hinges on applied econometric tools. In this paper, the authors develop a theoretical econometrics consumer maximization model to measure the flow of durables’ expenditures where depreciation is added to former classical econometrics model. The proposed model was formulated in both linear and logarithmic forms. Box-Cox tests were used to choose the most appropriate one among them. The proposed model was then applied to the historical data from the U.S. and China for a comparative study and the results discussed.
20. Theoretical aspects of the optical model
International Nuclear Information System (INIS)
Mahaux, C.
1980-01-01
We first recall the definition of the optical-model potential for nucleons and the physical interpretation of the main related quantities. We then survey the recent theoretical progress towards a reliable calculation of this potential. The present limitations of the theory and some prospects for future developments are outlined. (author)
1. Dynamics in Higher Education Politics: A Theoretical Model
Science.gov (United States)
Kauko, Jaakko
2013-01-01
This article presents a model for analysing dynamics in higher education politics (DHEP). Theoretically the model draws on the conceptual history of political contingency, agenda-setting theories and previous research on higher education dynamics. According to the model, socio-historical complexity can best be analysed along two dimensions: the…
2. Expectancy-Violation and Information-Theoretic Models of Melodic Complexity
Directory of Open Access Journals (Sweden)
Tuomas Eerola
2016-07-01
Full Text Available The present study assesses two types of models for melodic complexity: one based on expectancy violations and the other one related to an information-theoretic account of redundancy in music. Seven different datasets spanning artificial sequences, folk and pop songs were used to refine and assess the models. The refinement eliminated unnecessary components from both types of models. The final analysis pitted three variants of the two model types against each other and could explain from 46-74% of the variance in the ratings across the datasets. The most parsimonious models were identified with an information-theoretic criterion. This suggested that the simplified expectancy-violation models were the most efficient for these sets of data. However, the differences between all optimized models were subtle in terms both of performance and simplicity.
3. Food addiction spectrum: a theoretical model from normality to eating and overeating disorders.
Science.gov (United States)
Piccinni, Armando; Marazziti, Donatella; Vanelli, Federica; Franceschini, Caterina; Baroni, Stefano; Costanzo, Davide; Cremone, Ivan Mirko; Veltri, Antonello; Dell'Osso, Liliana
2015-01-01
The authors comment on the recently proposed food addiction spectrum that represents a theoretical model to understand the continuum between several conditions ranging from normality to pathological states, including eating disorders and obesity, as well as why some individuals show a peculiar attachment to food that can become an addiction. Further, they review the possible neurobiological underpinnings of these conditions that include dopaminergic neurotransmission and circuits that have long been implicated in drug addiction. The aim of this article is also that at stimulating a debate regarding the possible model of a food (or eating) addiction spectrum that may be helpful towards the search of novel therapeutic approaches to different pathological states related to disturbed feeding or overeating.
4. K. Sridhar Moorthy's Theoretical Modelling in Marketing - A Review ...
African Journals Online (AJOL)
K. Sridhar Moorthy's Theoretical Modelling in Marketing - A Review. ... Modelling has become a visible tool in many disciplines including marketing and several marketing models have ... EMAIL FREE FULL TEXT EMAIL FREE FULL TEXT
5. A proposal of parameter determination method in the residual strength degradation model for the prediction of fatigue life (I)
International Nuclear Information System (INIS)
Kim, Sang Tae; Jang, Seong Soo
2001-01-01
The static and fatigue tests have been carried out to verify the validity of a generalized residual strength degradation model. And a new method of parameter determination in the model is verified experimentally to account for the effect of tension-compression fatigue loading of spheroidal graphite cast iron. It is shown that the correlation between the experimental results and the theoretical prediction on the statistical distribution of fatigue life by using the proposed method is very reasonable. Furthermore, it is found that the correlation between the theoretical prediction and the experimental results of fatigue life in case of tension-tension fatigue data in composite material appears to be reasonable. Therefore, the proposed method is more adjustable in the determination of the parameter than maximum likelihood method and minimization technique
6. A new theoretical model for scattering of electrons by molecules. 1
International Nuclear Information System (INIS)
Peixoto, E.M.A.; Mu-tao, L.; Nogueira, J.C.
1975-01-01
A new theoretical model for electron-molecule scattering is suggested. The e-H 2 scattering is studied and the superiority of the new model over the commonly used Independent Atom Model (IAM) is demonstrated. Comparing theoretical and experimental data for 40keV electrons scattered by H 2 utilizing the new model, its validity is proved, while Partial Wave and First Born calculations, employing the Independent Atom Model, strongly deviated from the experiment [pt
7. Allostatic load: A theoretical model for understanding the relationship between maternal posttraumatic stress disorder and adverse birth outcomes.
Science.gov (United States)
Li, Yang; Rosemberg, Marie-Anne Sanon; Seng, Julia S
2018-07-01
8. The Intense Slow Positron Source concept: A theoretical perspective on a proposed INEL Facility
International Nuclear Information System (INIS)
Makowitz, H.; Abrashoff, J.D.; Landman, W.H.; Albano, R.K.; Tajima, T.
1992-01-01
An analysis has been performed of the INEL Intense Slow Positron Source (ISPS) concept. The results of the theoretical study are encouraging. A full-scale device with a monoenergetic 5 KeV positron beam of ≥10 12 e + /s on a ≤0.03-cmdiameter target appears feasible and can be obtained within the existing infrastructure of INEL reactor facilities. A 30.0-cm-diameter, large area source dish, moderated at first with thin crystalline W films and later by solid Ne, is proposed as the initial device in order to explore problems with a facility scale system. A demonstration scale beam at ≥10 10 slow e + /s is proposed using a 58 Co source plated on a 6-cm-diameter source dish insert, placed in a 30- cm adapter
9. N-barN interaction theoretical models
International Nuclear Information System (INIS)
Loiseau, B.
1991-12-01
In the framework of antinucleon-nucleon interaction theoretical models, our present understanding on the N-barN interaction is discussed, either from quark- or/and meson- and baryon-degrees of freedom, by considering the N-barN annihilation into mesons and the N-barN elastic and charge-exchange scattering. (author) 52 refs., 11 figs., 2 tabs
10. Developing a theoretical framework for complex community-based interventions.
Science.gov (United States)
Angeles, Ricardo N; Dolovich, Lisa; Kaczorowski, Janusz; Thabane, Lehana
2014-01-01
Applying existing theories to research, in the form of a theoretical framework, is necessary to advance knowledge from what is already known toward the next steps to be taken. This article proposes a guide on how to develop a theoretical framework for complex community-based interventions using the Cardiovascular Health Awareness Program as an example. Developing a theoretical framework starts with identifying the intervention's essential elements. Subsequent steps include the following: (a) identifying and defining the different variables (independent, dependent, mediating/intervening, moderating, and control); (b) postulating mechanisms how the independent variables will lead to the dependent variables; (c) identifying existing theoretical models supporting the theoretical framework under development; (d) scripting the theoretical framework into a figure or sets of statements as a series of hypotheses, if/then logic statements, or a visual model; (e) content and face validation of the theoretical framework; and (f) revising the theoretical framework. In our example, we combined the "diffusion of innovation theory" and the "health belief model" to develop our framework. Using the Cardiovascular Health Awareness Program as the model, we demonstrated a stepwise process of developing a theoretical framework. The challenges encountered are described, and an overview of the strategies employed to overcome these challenges is presented.
11. Theoretical study on loss of coolant accident of a research reactor
International Nuclear Information System (INIS)
Lee, Kwon-Yeong; Kim, Wan-Soo
2016-01-01
Highlights: • A theoretical model of siphon breaking phenomena was developed. • A general formula using Chisholm coefficient B was proposed. • The safety requirements regarding a loss of coolant accident of research reactors could be found out. - Abstract: Under the design conditions of a research reactor, the siphon phenomenon induced by pipe rupture can cause continuous efflux of water. In order to prevent water efflux, an additional facility is necessary. A siphon breaker is a type of safety facility that can resist the loss of coolant effectively. However, analysis of siphon breaking is complex since it comprises two-phase flow and there are many inputs to be considered. For this reason, we analyzed the experimental results to develop a theoretical model of siphon breaking phenomena. Developed model is based on fluid mechanics and Chisholm model. From Bernoulli’s equation, the velocity and quantity as well as undershooting height, water level, pressure, friction coefficient, and factors related to the two-phase flow could be calculated. The Chisholm model, which is able to analyze the two-phase flow, can predict the results in a manner similar to those obtained from a real-scale experiment, and a general formula using Chisholm coefficient B was proposed in this study. Also, we verified the theoretical model and concluded that it is possible to analyze the siphon breaking. Moreover, the design conditions that can satisfy the safety requirements regarding a loss of coolant accident of research reactors could be found out by using the theoretical model. In conclusion, we propose the theoretical model which can analyze the siphon breaking as real, and it is helpful not only to analyze but also to design the siphon breaker.
12. Theoretical models for development competence of health protection and promotion
Directory of Open Access Journals (Sweden)
Cesnaviciene J.
2014-01-01
Full Text Available The competence of health protection and promotion are mentioned in various legislative documents that regulate areas of education and health policy. The researches on health conditions of Lithuania Country's population disclosed the deteriorating health status of the society, even of the children. It has also been found that the focus on health education is not adequate. The number of National and International health programmes have been realized and educational methodological tools prepared in Lithuania, however the insufficient attention to the health promotion models is been noticed. The objectiveof this article is to discuss the theoretical models used in health education field. The questions to be answered: what theoretical models are used in order to development competence of health protection and promotion? Who does employ particular models? What are the advantages of various models? What conceptions unite and characterize theoretical models? The analysis of scientific literature revealed the number of diverse health promotion model; however none of them is dominant. Some of the models focus on intrapersonal, others on interpersonal or community level but in general they can be distinguished as cognitive – behavioural models which are characterized by three main conceptions: 1 the healthy living is determined by the perceived health related knowledge: what is known and understood would influence the behaviour; 2 the knowledge in healthy living field is essential but insufficient condition for behaviour change; 3 the great influence to healthy living life style is done by perception, motivation, skills and habits as well as social environment. These are the components that are typical to all theoretical models and that reflect the hole of the conditions influencing healthy living.
13. Theoretical vibro-acoustic modeling of acoustic noise transmission through aircraft windows
Science.gov (United States)
Aloufi, Badr; Behdinan, Kamran; Zu, Jean
2016-06-01
In this paper, a fully vibro-acoustic model for sound transmission across a multi-pane aircraft window is developed. The proposed model is efficiently applied for a set of window models to perform extensive theoretical parametric studies. The studied window configurations generally simulate the passenger window designs of modern aircraft classes which have an exterior multi-Plexiglas pane, an interior single acrylic glass pane and a dimmable glass ("smart" glass), all separated by thin air cavities. The sound transmission loss (STL) characteristics of three different models, triple-, quadruple- and quintuple-paned windows identical in size and surface density, are analyzed for improving the acoustic insulation performances. Typical results describing the influence of several system parameters, such as the thicknesses, number and spacing of the window panes, on the transmission loss are then investigated. In addition, a comparison study is carried out to evaluate the acoustic reduction capability of each window model. The STL results show that the higher frequencies sound transmission loss performance can be improved by increasing the number of window panels, however, the low frequency performance is decreased, particularly at the mass-spring resonances.
14. Occupational health purchasing behaviour by SMEs--a new theoretical model.
Science.gov (United States)
Harrison, J; Woods, A; Dickson, K
2013-10-01
Factors influencing corporate decisions to purchase occupational health (OH) are unknown. To assist the marketing of OH services to small- and medium-sized enterprises (SMEs) by characterizing purchasing behaviour. We developed a 2×2 model, based on published studies, to describe OH purchasing behaviour by SMEs. We tested the model by analysis of responses to a cross-sectional market research survey carried out in November 2007. The companies surveyed were SMEs employing 30-250 employees, within the localities of five UK National Health Service OH services: West London, Buckinghamshire, Cambridge, Portsmouth and York. We chose a sample representative of all SMEs for each location. The survey explored knowledge of OH and the perceived importance of a variety of services. We obtained responses from 387 companies (19%); 81% indicated that they knew about OH and 24% had purchased OH services. OH was rated 'very important' by 35%, and 65% rated it as 'quite' or 'very important'. Sickness absence and its business impact were monitored by 89%. Enterprises claiming OH understanding were significantly more likely to purchase OH services (odds ratio [OR] 3.5, 95% confidence interval [CI] 1.6-8.0). Companies employing fewer than 90 employees were significantly less likely to purchase such services than larger ones (OR 0.17, 95% CI 0.09-0.3). OH knowledge and company size are key determinants of SME purchasing behaviour. Our findings support our proposed theoretical model. However, more research could explore claimed knowledge of OH with respect to the proposed purchaser types and business benefits.
15. A theoretical model of multielectrode DBR lasers
DEFF Research Database (Denmark)
Pan, Xing; Olesen, Henning; Tromborg, Bjarne
1988-01-01
A theoretical model for two- and three-section tunable distributed Bragg reflector (DBR) lasers is presented. The static tuning properties are studied in terms of threshold current, linewidth, oscillation frequency, and output power. Regions of continuous tuning for three-section DBR lasers...
16. Affectionate Touch to Promote Relational, Psychological, and Physical Well-Being in Adulthood: A Theoretical Model and Review of the Research.
Science.gov (United States)
Jakubiak, Brittany K; Feeney, Brooke C
2017-08-01
Throughout the life span, individuals engage in affectionate touch with close others. Touch receipt promotes well-being in infancy, but the impacts of touch in adult close relationships have been largely unexplored. In this article, we propose that affectionate touch receipt promotes relational, psychological, and physical well-being in adulthood, and we present a theoretical mechanistic model to explain why affectionate touch may promote these outcomes. The model includes pathways through which touch could affect well-being by reducing stress and by promoting well-being independent of stress. Specifically, two immediate outcomes of affectionate touch receipt-relational-cognitive changes and neurobiological changes-are described as important mechanisms underlying the effects of affectionate touch on well-being. We also review and evaluate the existing research linking affectionate touch to well-being in adulthood and propose an agenda to advance research in this area. This theoretical perspective provides a foundation for future work on touch in adult close relationships.
17. Theoretical model of an optothermal microactuator directly driven by laser beams
International Nuclear Information System (INIS)
Han, Xu; Zhang, Haijun; Xu, Rui; Wang, Shuying; Qin, Chun
2015-01-01
This paper proposes a novel method of optothermal microactuation based on single and dual laser beams (spots). The theoretical model of the optothermal temperature distribution of an expansion arm is established and simulated, indicating that the maximum temperature of the arm irradiated by dual laser spots, at the same laser power level, is much lower than that irradiated by one single spot, and thus the risk of burning out and damaging the optothermal microactuator (OTMA) can be effectively avoided. To verify the presented method, a 750 μm long OTMA with a 100 μm wide expansion arm is designed and microfabricated, and single/dual laser beams with a wavelength of 650 nm are adopted to carry out experiments. The experimental results showed that the optothermal deflection of the OTMA under the irradiation of dual laser spots is larger than that under the irradiation of a single spot with the same power, which is in accordance with theoretical prediction. This method of optothermal microactuation may expand the practical applications of microactuators, which serve as critical units in micromechanical devices and micro-opto-electro-mechanical systems (MOEMS). (paper)
18. Accidental naturalism: criticism of a theoretical model of socio-ecological legitimacy
Directory of Open Access Journals (Sweden)
2017-11-01
Full Text Available This article proposes the need for a theoretical review on the current epistemological assumption that establishes the dichotomy nature-society as a cornerstone of a broad worldview for western contexts. We will discuss the anthropological perspectives that assume that in these spaces, generically without nuances, social practice and ideas are not constructed in such a close relationship to the environment, falling under a belief that nature exists outside the human will. We will debate the naive ethnological essentialism that position naturalism as a central model of a socio-european worldview, characterized by dualistic patterns that have enabled monistic paradigms of socio-ecological relationships to be established at the same time, and in contrast to this, in other parts of the world.
19. A theoretical model of semi-elliptic surface crack growth
Directory of Open Access Journals (Sweden)
Shi Kaikai
2014-06-01
Full Text Available A theoretical model of semi-elliptic surface crack growth based on the low cycle strain damage accumulation near the crack tip along the cracking direction and the Newman–Raju formula is developed. The crack is regarded as a sharp notch with a small curvature radius and the process zone is assumed to be the size of cyclic plastic zone. The modified Hutchinson, Rice and Rosengren (HRR formulations are used in the presented study. Assuming that the shape of surface crack front is controlled by two critical points: the deepest point and the surface point. The theoretical model is applied to semi-elliptic surface cracked Al 7075-T6 alloy plate under cyclic loading, and five different initial crack shapes are discussed in present study. Good agreement between experimental and theoretical results is obtained.
20. Knowledge Management Implementation at the Women’s Branch of the Institute of Public Administration in Saudi Arabia: A Proposed Model
Directory of Open Access Journals (Sweden)
Eiman Saud Abokhodiar
2014-05-01
Full Text Available This article aims to introduce a proposed model of knowledge management implementation at the Women’s Branch of the Institute of Public Administration (WIPA. The model was built after a review and analysis of literature related to knowledge management implementation in higher education institutions. The research includes three sections. The first section deals with a theoretical framework of knowledge management, including a knowledge management definition, components of knowledge management systems, an academic knowledge framework, an organizational knowledge framework, and processes of knowledge management. The second section reviews and discusses the proposed model of knowledge management implementation at WIPA. Finally, the article concludes with a discussion of the success factors and expected barriers to the proposed model.
1. A review of game-theoretic models of road user behaviour.
Science.gov (United States)
Elvik, Rune
2014-01-01
2. Experimental Investigation and Theoretical Modeling of Nanosilica Activity in Concrete
Directory of Open Access Journals (Sweden)
Han-Seung Lee
2014-01-01
Full Text Available This paper presents experimental investigations and theoretical modeling of the hydration reaction of nanosilica blended concrete with different water-to-binder ratios and different nanosilica replacement ratios. The developments of chemically bound water contents, calcium hydroxide contents, and compressive strength of Portland cement control specimens and nanosilica blended specimens were measured at different ages: 1 day, 3 days, 7 days, 14 days, and 28 days. Due to the pozzolanic reaction of nanosilica, the contents of calcium hydroxide in nanosilica blended pastes are considerably lower than those in the control specimens. Compared with the control specimens, the extent of compressive strength enhancement in the nanosilica blended specimens is much higher at early ages. Additionally, a blended cement hydration model that considers both the hydration reaction of cement and the pozzolanic reaction of nanosilica is proposed. The properties of nanosilica blended concrete during hardening were evaluated using the degree of hydration of cement and the reaction degree of nanosilica. The calculated chemically bound water contents, calcium hydroxide contents, and compressive strength were generally consistent with the experimental results.
3. Structural modeling and analysis of an effluent treatment process for electroplating--a graph theoretic approach.
Science.gov (United States)
Kumar, Abhishek; Clement, Shibu; Agrawal, V P
2010-07-15
An attempt is made to address a few ecological and environment issues by developing different structural models for effluent treatment system for electroplating. The effluent treatment system is defined with the help of different subsystems contributing to waste minimization. Hierarchical tree and block diagram showing all possible interactions among subsystems are proposed. These non-mathematical diagrams are converted into mathematical models for design improvement, analysis, comparison, storage retrieval and commercially off-the-shelf purchases of different subsystems. This is achieved by developing graph theoretic model, matrix models and variable permanent function model. Analysis is carried out by permanent function, hierarchical tree and block diagram methods. Storage and retrieval is done using matrix models. The methodology is illustrated with the help of an example. Benefits to the electroplaters/end user are identified. 2010 Elsevier B.V. All rights reserved.
4. A theoretical adaptive model of thermal comfort - Adaptive Predicted Mean Vote (aPMV)
Energy Technology Data Exchange (ETDEWEB)
Yao, Runming [School of Construction Management and Engineering, The University of Reading (United Kingdom); Faculty of Urban Construction and Environmental Engineering, Chongqing University (China); Li, Baizhan [Key Laboratory of the Three Gorges Reservoir Region' s Eco-Environment (Ministry of Education), Chongqing University (China); Faculty of Urban Construction and Environmental Engineering, Chongqing University (China); Liu, Jing [School of Construction Management and Engineering, The University of Reading (United Kingdom)
2009-10-15
This paper presents in detail a theoretical adaptive model of thermal comfort based on the ''Black Box'' theory, taking into account factors such as culture, climate, social, psychological and behavioural adaptations, which have an impact on the senses used to detect thermal comfort. The model is called the Adaptive Predicted Mean Vote (aPMV) model. The aPMV model explains, by applying the cybernetics concept, the phenomena that the Predicted Mean Vote (PMV) is greater than the Actual Mean Vote (AMV) in free-running buildings, which has been revealed by many researchers in field studies. An Adaptive coefficient ({lambda}) representing the adaptive factors that affect the sense of thermal comfort has been proposed. The empirical coefficients in warm and cool conditions for the Chongqing area in China have been derived by applying the least square method to the monitored onsite environmental data and the thermal comfort survey results. (author)
5. Role of Travel Motivations, Perceived Risks and Travel Constraints on Destination Image and Visit Intention in Medical Tourism: Theoretical model.
Science.gov (United States)
Khan, Mohammad J; Chelliah, Shankar; Haron, Mahmod S; Ahmed, Sahrish
2017-02-01
Travel motivations, perceived risks and travel constraints, along with the attributes and characteristics of medical tourism destinations, are important issues in medical tourism. Although the importance of these factors is already known, a comprehensive theoretical model of the decision-making process of medical tourists has yet to be established, analysing the intricate relationships between the different variables involved. This article examines a large body of literature on both medical and conventional tourism in order to propose a comprehensive theoretical framework of medical tourism decision-making. Many facets of this complex phenomenon require further empirical investigation.
6. Optical gain in InAs/InGaAs quantum-dot structures: Experiments and theoretical model
International Nuclear Information System (INIS)
Eliseev, P G; Li, H; Liu, G T; Stintz, A; Newell, T C; Lester, L E; Malloy, K J
2000-01-01
The dependence of the mode optical gain on current in InAs/InGaAs quantum-dot structures grown by the method of molecular-beam epitaxy is obtained from the experimental study of ultra-low-threshold laser diodes. The record lowest inversion threshold at room temperature was about 13 A cm -2 . A theoretical model is proposed that relates the optical gain to the ground-state transitions in quantum dots. The effective gain cross section is estimated to be ∼7 x 10 -15 cm -2 . (lasers)
7. Kinetics of heterogeneous chemical reactions: a theoretical model for the accumulation of pesticides in soil.
Science.gov (United States)
Lin, S H; Sahai, R; Eyring, H
1971-04-01
A theoretical model for the accumulation of pesticides in soil has been proposed and discussed from the viewpoint of heterogeneous reaction kinetics with a basic aim to understand the complex nature of soil processes relating to the environmental pollution. In the bulk of soil, the pesticide disappears by diffusion and a chemical reaction; the rate processes considered on the surface of soil are diffusion, chemical reaction, vaporization, and regular pesticide application. The differential equations involved have been solved analytically by the Laplace-transform method.
8. K. Sridhar Moorthy's Theoretical Modelling in Marketing - A Review
African Journals Online (AJOL)
Toshiba
experimental design for theoretical modelling of sales force compensation is vivid and ... different from the concept of a model in decision support systems and behavioural .... ―refers to the fact that people may not optimize.‖ This, of course, is.
9. A thermodynamic and theoretical view for enzyme regulation.
Science.gov (United States)
Zhao, Qinyi
2015-01-01
Precise regulation is fundamental to the proper functioning of enzymes in a cell. Current opinions about this, such as allosteric regulation and dynamic contribution to enzyme regulation, are experimental models and substantially empirical. Here we proposed a theoretical and thermodynamic model of enzyme regulation. The main idea is that enzyme regulation is processed via the regulation of abundance of active conformation in the reaction buffer. The theoretical foundation, experimental evidence, and experimental criteria to test our model are discussed and reviewed. We conclude that basic principles of enzyme regulation are laws of protein thermodynamics and it can be analyzed using the concept of distribution curve of active conformations of enzymes.
10. Information-Theoretic Performance Analysis of Sensor Networks via Markov Modeling of Time Series Data.
Science.gov (United States)
Li, Yue; Jha, Devesh K; Ray, Asok; Wettergren, Thomas A; Yue Li; Jha, Devesh K; Ray, Asok; Wettergren, Thomas A; Wettergren, Thomas A; Li, Yue; Ray, Asok; Jha, Devesh K
2018-06-01
This paper presents information-theoretic performance analysis of passive sensor networks for detection of moving targets. The proposed method falls largely under the category of data-level information fusion in sensor networks. To this end, a measure of information contribution for sensors is formulated in a symbolic dynamics framework. The network information state is approximately represented as the largest principal component of the time series collected across the network. To quantify each sensor's contribution for generation of the information content, Markov machine models as well as x-Markov (pronounced as cross-Markov) machine models, conditioned on the network information state, are constructed; the difference between the conditional entropies of these machines is then treated as an approximate measure of information contribution by the respective sensors. The x-Markov models represent the conditional temporal statistics given the network information state. The proposed method has been validated on experimental data collected from a local area network of passive sensors for target detection, where the statistical characteristics of environmental disturbances are similar to those of the target signal in the sense of time scale and texture. A distinctive feature of the proposed algorithm is that the network decisions are independent of the behavior and identity of the individual sensors, which is desirable from computational perspectives. Results are presented to demonstrate the proposed method's efficacy to correctly identify the presence of a target with very low false-alarm rates. The performance of the underlying algorithm is compared with that of a recent data-driven, feature-level information fusion algorithm. It is shown that the proposed algorithm outperforms the other algorithm.
11. Computational and Game-Theoretic Approaches for Modeling Bounded Rationality
NARCIS (Netherlands)
L. Waltman (Ludo)
2011-01-01
textabstractThis thesis studies various computational and game-theoretic approaches to economic modeling. Unlike traditional approaches to economic modeling, the approaches studied in this thesis do not rely on the assumption that economic agents behave in a fully rational way. Instead, economic
12. Some Model Theoretic Remarks on Bass Modules
Directory of Open Access Journals (Sweden)
E. Momtahan
2011-09-01
Full Text Available We study Bass modules, Bass rings, and related concepts from a model theoretic point of view. We observe that the class of Bass modules (over a fixed ring is not stable under elementary equivalence. We observe that under which conditions the class of Bass rings are stable under elementary equivalence.
13. Testing a theoretical model of clinical nurses' intent to stay.
Science.gov (United States)
Cowden, Tracy L; Cummings, Greta G
2015-01-01
Published theoretical models of nurses' intent to stay (ITS) report inconsistent outcomes, and not all hypothesized models have been adequately tested. Research has focused on cognitive rather than emotional determinants of nurses' ITS. The aim of this study was to empirically verify a complex theoretical model of nurses' ITS that includes both affective and cognitive determinants and to explore the influence of relational leadership on staff nurses' ITS. The study was a correlational, mixed-method, nonexperimental design. A subsample of the Quality Work Environment Study survey data 2009 (n = 415 nurses) was used to test our theoretical model of clinical nurses' ITS as a structural equation model. The model explained 63% of variance in ITS. Organizational commitment, empowerment, and desire to stay were the model concepts with the strongest effects on nurses' ITS. Leadership practices indirectly influenced ITS. How nurses evaluate and respond to their work environment is both an emotional and rational process. Health care organizations need to be cognizant of the influence that nurses' feelings and views of their work setting have on their intention decisions and integrate that knowledge into the development of retention strategies. Leadership practices play an important role in staff nurses' perceptions of the workplace. Identifying the mechanisms by which leadership influences staff nurses' intentions to stay presents additional focus areas for developing retention strategies.
14. A field theoretic model for static friction
OpenAIRE
Mahyaeh, I.; Rouhani, S.
2013-01-01
We present a field theoretic model for friction, where the friction coefficient between two surfaces may be calculated based on elastic properties of the surfaces. We assume that the geometry of contact surface is not unusual. We verify Amonton's laws to hold that friction force is proportional to the normal load.This model gives the opportunity to calculate the static coefficient of friction for a few cases, and show that it is in agreement with observed values. Furthermore we show that the ...
15. Gender and Autonomy-Supportive Contexts: Theoretical Perspectives of Self-Determination and Goal Setting
Science.gov (United States)
Lin, Shinyi; Chen, Yu-Chuan
2013-01-01
In integrating theoretical perspectives of self-determination and goal-setting, this study proposes a conceptual model with moderating and mediating effects exploring gender issue in autonomy-supportive learning in higher education as research context. In the proposed model, goal-setting attributes, i.e., individual determinants, social…
16. Theoretical modelling and experimental study of air thermal conditioning process of a heat pump assisted solid desiccant cooling system
DEFF Research Database (Denmark)
Nie, Jinzhe; Li, Zan; Hu, Wenju
2017-01-01
purification aimed at improving indoor air quality and reducing building energy consumption. The heat and moisture transfer in adsorption desiccant rotor was theoretical modelled with one-dimensional partial differential equations. The theoretical model was validated with experimental measurements...... system, the energy performance of HP-SDC was more efficient mainly due to high efficient air purification capacity, reduction of cooling load and raised evaporation temperature. The energy performance of HP-SDC was sensitive to outdoor humidity ratio. Further improvements of HP-SDC energy efficiency......Taking the integrated gaseous contaminants and moisture adsorption potential of desiccant material, a new heat pump assisted solid desiccant cooling system (HP-SDC) was proposed based on the combination of desiccant rotor with heat pump. The HP-SDC was designed for dehumidification, cooling and air...
17. Theoretical proposal for a magnetic resonance study of charge transport in organic semiconductors
Science.gov (United States)
Mkhitaryan, Vagharsh
Charge transport in disordered organic semiconductors occurs via carrier incoherent hops in a band of localized states. In the framework of continuous-time random walk the carrier on-site waiting time distribution (WTD) is one of the basic characteristics of diffusion. Besides, WTD is fundamentally related to the density of states (DOS) of localized states, which is a key feature of a material determining the optoelectric properties. However, reliable first-principle calculations of DOS in organic materials are not yet available and experimental characterization of DOS and WTD is desirable. We theoretically study the spin dynamics of hopping carriers and propose measurement schemes directly probing WTD, based on the zero-field spin relaxation and the primary (Hahn) spin echo. The proposed schemes are possible because, as we demonstrate, the long-time behavior of the zero-field relaxation and the primary echo is determined by WTD, both for the hyperfine coupling dominated and the spin-orbit coupling dominated spin dynamics. We also examine the dispersive charge transport, which is a non-Markovian sub-diffusive process characterized by non-stationarity. We show that the proposed schemes unambiguously capture the effects of non-stationarity, e.g., the aging behavior of random walks. This work was supported by the Department of Energy-Basic Energy Sciences under Contract No. DE-AC02-07CH11358.
18. A non-traditional fluid problem: transition between theoretical models from Stokes’ to turbulent flow
Science.gov (United States)
Salomone, Horacio D.; Olivieri, Néstor A.; Véliz, Maximiliano E.; Raviola, Lisandro A.
2018-05-01
In the context of fluid mechanics courses, it is customary to consider the problem of a sphere falling under the action of gravity inside a viscous fluid. Under suitable assumptions, this phenomenon can be modelled using Stokes’ law and is routinely reproduced in teaching laboratories to determine terminal velocities and fluid viscosities. In many cases, however, the measured physical quantities show important deviations with respect to the predictions deduced from the simple Stokes’ model, and the causes of these apparent ‘anomalies’ (for example, whether the flow is laminar or turbulent) are seldom discussed in the classroom. On the other hand, there are various variable-mass problems that students tackle during elementary mechanics courses and which are discussed in many textbooks. In this work, we combine both kinds of problems and analyse—both theoretically and experimentally—the evolution of a system composed of a sphere pulled by a chain of variable length inside a tube filled with water. We investigate the effects of different forces acting on the system such as weight, buoyancy, viscous friction and drag force. By means of a sequence of mathematical models of increasing complexity, we obtain a progressive fit that accounts for the experimental data. The contrast between the various models exposes the strengths and weaknessess of each one. The proposed experience can be useful for integrating concepts of elementary mechanics and fluids, and is suitable as laboratory practice, stressing the importance of the experimental validation of theoretical models and showing the model-building processes in a didactic framework.
19. A Game-Theoretic Model of Marketing Skin Whiteners.
Science.gov (United States)
Mendoza, Roger Lee
2015-01-01
Empirical studies consistently find that people in less developed countries tend to regard light or "white" skin, particularly among women, as more desirable or superior. This is a study about the marketing of skin whiteners in these countries, where over 80 percent of users are typically women. It proceeds from the following premises: a) Purely market or policy-oriented approaches toward the risks and harms of skin whitening are cost-inefficient; b) Psychosocial and informational factors breed uninformed and risky consumer choices that favor toxic skin whiteners; and c) Proliferation of toxic whiteners in a competitive buyer's market raises critical supplier accountability issues. Is intentional tort a rational outcome of uncooperative game equilibria? Can voluntary cooperation nonetheless evolve between buyers and sellers of skin whiteners? These twin questions are key to addressing the central paradox in this study: A robust and expanding buyer's market, where cheap whitening products abound at a high risk to personal and societal health and safety. Game-theoretic modeling of two-player and n-player strategic interactions is proposed in this study for both its explanatory and predictive value. Therein also lie its practical contributions to the economic literature on skin whitening.
20. Theoretical proposals in bullying research: a review
OpenAIRE
Postigo, Silvia; González, Remedios; Montoya, Inmaculada; Ordoñez, Ana
2013-01-01
Four decades of research into peer bullying have produced an extensive body of knowledge. This work attempts to provide an integrative theoretical framework, which includes the specific theories and observations. The main aim is to organize the available knowledge in order to guide the development of effective interventions. To that end, several psychological theories are described that have been used and/or adapted with the aim of understanding peer bullying. All of them, at different ecolog...
1. Theoretical Relevance of Neuropsychological Data for Connectionist Modelling
Directory of Open Access Journals (Sweden)
Mauricio Iza
2011-05-01
Full Text Available The symbolic information-processing paradigm in cognitive psychology has met a growing challenge from neural network models over the past two decades. While neuropsychological
evidence has been of great utility to theories concerned with information processing, the real question is, whether the less rigid connectionist models provide valid, or enough, information
concerning complex cognitive structures. In this work, we will discuss the theoretical implications that neuropsychological data posits for modelling cognitive systems.
2. Chemical and morphological gradient scaffolds to mimic hierarchically complex tissues: From theoretical modeling to their fabrication.
Science.gov (United States)
Marrella, Alessandra; Aiello, Maurizio; Quarto, Rodolfo; Scaglione, Silvia
2016-10-01
Porous multiphase scaffolds have been proposed in different tissue engineering applications because of their potential to artificially recreate the heterogeneous structure of hierarchically complex tissues. Recently, graded scaffolds have been also realized, offering a continuum at the interface among different phases for an enhanced structural stability of the scaffold. However, their internal architecture is often obtained empirically and the architectural parameters rarely predetermined. The aim of this work is to offer a theoretical model as tool for the design and fabrication of functional and structural complex graded scaffolds with predicted morphological and chemical features, to overcome the time-consuming trial and error experimental method. This developed mathematical model uses laws of motions, Stokes equations, and viscosity laws to describe the dependence between centrifugation speed and fiber/particles sedimentation velocity over time, which finally affects the fiber packing, and thus the total porosity of the 3D scaffolds. The efficacy of the theoretical model was tested by realizing engineered graded grafts for osteochondral tissue engineering applications. The procedure, based on combined centrifugation and freeze-drying technique, was applied on both polycaprolactone (PCL) and collagen-type-I (COL) to test the versatility of the entire process. A functional gradient was combined to the morphological one by adding hydroxyapatite (HA) powders, to mimic the bone mineral phase. Results show that 3D bioactive morphologically and chemically graded grafts can be properly designed and realized in agreement with the theoretical model. Biotechnol. Bioeng. 2016;113: 2286-2297. © 2016 Wiley Periodicals, Inc. © 2016 Wiley Periodicals, Inc.
3. Model selection and inference a practical information-theoretic approach
CERN Document Server
Burnham, Kenneth P
1998-01-01
This book is unique in that it covers the philosophy of model-based data analysis and an omnibus strategy for the analysis of empirical data The book introduces information theoretic approaches and focuses critical attention on a priori modeling and the selection of a good approximating model that best represents the inference supported by the data Kullback-Leibler information represents a fundamental quantity in science and is Hirotugu Akaike's basis for model selection The maximized log-likelihood function can be bias-corrected to provide an estimate of expected, relative Kullback-Leibler information This leads to Akaike's Information Criterion (AIC) and various extensions and these are relatively simple and easy to use in practice, but little taught in statistics classes and far less understood in the applied sciences than should be the case The information theoretic approaches provide a unified and rigorous theory, an extension of likelihood theory, an important application of information theory, and are ...
4. Role of Travel Motivations, Perceived Risks and Travel Constraints on Destination Image and Visit Intention in Medical Tourism; Theoretical model
Directory of Open Access Journals (Sweden)
2017-03-01
Full Text Available Travel motivations, perceived risks and travel constraints, along with the attributes and characteristics of medical tourism destinations, are important issues in medical tourism. Although the importance of these factors is already known, a comprehensive theoretical model of the decision-making process of medical tourists has yet to be established, analysing the intricate relationships between the different variables involved. This article examines a large body of literature on both medical and conventional tourism in order to propose a comprehensive theoretical framework of medical tourism decision-making. Many facets of this complex phenomenon require further empirical investigation.
5. Theoretical Models of Protostellar Binary and Multiple Systems with AMR Simulations
Science.gov (United States)
Matsumoto, Tomoaki; Tokuda, Kazuki; Onishi, Toshikazu; Inutsuka, Shu-ichiro; Saigo, Kazuya; Takakuwa, Shigehisa
2017-05-01
We present theoretical models for protostellar binary and multiple systems based on the high-resolution numerical simulation with an adaptive mesh refinement (AMR) code, SFUMATO. The recent ALMA observations have revealed early phases of the binary and multiple star formation with high spatial resolutions. These observations should be compared with theoretical models with high spatial resolutions. We present two theoretical models for (1) a high density molecular cloud core, MC27/L1521F, and (2) a protobinary system, L1551 NE. For the model for MC27, we performed numerical simulations for gravitational collapse of a turbulent cloud core. The cloud core exhibits fragmentation during the collapse, and dynamical interaction between the fragments produces an arc-like structure, which is one of the prominent structures observed by ALMA. For the model for L1551 NE, we performed numerical simulations of gas accretion onto protobinary. The simulations exhibit asymmetry of a circumbinary disk. Such asymmetry has been also observed by ALMA in the circumbinary disk of L1551 NE.
6. Studies in theoretical particle physics
International Nuclear Information System (INIS)
Kaplan, D.B.
1991-01-01
This proposal focuses on research on three distinct areas of particle physics: (1) Nonperturbative QCD. I tend to continue work on analytic modelling of nonperturbative effects in the strong interactions. I have been investigating the theoretical connection between the nonrelativistic quark model and QCD. The primary motivation has been to understand the experimental observation of nonzero matrix elements involving current strange quarks in ordinary matter -- which in the quark model has no strange quark component. This has led to my present work on understanding constituent (quark model) quarks as collective excitations of QCD degrees of freedom. (2) Weak Scale Baryogenesis. A continuation of work on baryogenesis in the early universe from weak interactions. In particular, an investigation of baryogenesis occurring during the weak phase transition through anomalous baryon violating processes in the standard model of weak interactions. (3) Flavor and Compositeness. Further investigation of a new mechanism that I recently discovered for dynamical mass generation for fermions, which naturally leads to a family hierarchy structure. A discussion of recent past work is found in the next section, followed by an outline of the proposed research. A recent publication from each of these three areas is attached to this proposal
7. Cognitive models of executive functions development: methodological limitations and theoretical challenges
Directory of Open Access Journals (Sweden)
Florencia Stelzer
2014-01-01
Full Text Available Executive functions (EF have been defined as a series of higher-order cognitive processes which allow the control of thought, behavior and affection according to the achievement of a goal. Such processes present a lengthy postnatal development which matures completely by the end of adolescence. In this article we make a review of some of the main models of EF development during childhood. The aim of this work is to describe the state of the art related to the topic, identifying the main theoretical difficulties and methodological limitations associated with the different proposed paradigms. Finally, some suggestions are given to cope with such difficulties, emphasizing that the development of an ontology of EF could be a viable alternative to counter them. We believe that futture researches should guide their efforts toward the development of that ontology.
8. Merging Theoretical Models and Therapy Approaches in the Context of Internet Gaming Disorder: A Personal Perspective
Science.gov (United States)
Young, Kimberly S.; Brand, Matthias
2017-01-01
Although, it is not yet officially recognized as a clinical entity which is diagnosable, Internet Gaming Disorder (IGD) has been included in section III for further study in the DSM-5 by the American Psychiatric Association (APA, 2013). This is important because there is increasing evidence that people of all ages, in particular teens and young adults, are facing very real and sometimes very severe consequences in daily life resulting from an addictive use of online games. This article summarizes general aspects of IGD including diagnostic criteria and arguments for the classification as an addictive disorder including evidence from neurobiological studies. Based on previous theoretical considerations and empirical findings, this paper examines the use of one recently proposed model, the Interaction of Person-Affect-Cognition-Execution (I-PACE) model, for inspiring future research and for developing new treatment protocols for IGD. The I-PACE model is a theoretical framework that explains symptoms of Internet addiction by looking at interactions between predisposing factors, moderators, and mediators in combination with reduced executive functioning and diminished decision making. Finally, the paper discusses how current treatment protocols focusing on Cognitive-Behavioral Therapy for Internet addiction (CBT-IA) fit with the processes hypothesized in the I-PACE model. PMID:29104555
9. Merging Theoretical Models and Therapy Approaches in the Context of Internet Gaming Disorder: A Personal Perspective.
Science.gov (United States)
Young, Kimberly S; Brand, Matthias
2017-01-01
Although, it is not yet officially recognized as a clinical entity which is diagnosable, Internet Gaming Disorder (IGD) has been included in section III for further study in the DSM-5 by the American Psychiatric Association (APA, 2013). This is important because there is increasing evidence that people of all ages, in particular teens and young adults, are facing very real and sometimes very severe consequences in daily life resulting from an addictive use of online games. This article summarizes general aspects of IGD including diagnostic criteria and arguments for the classification as an addictive disorder including evidence from neurobiological studies. Based on previous theoretical considerations and empirical findings, this paper examines the use of one recently proposed model, the Interaction of Person-Affect-Cognition-Execution (I-PACE) model, for inspiring future research and for developing new treatment protocols for IGD. The I-PACE model is a theoretical framework that explains symptoms of Internet addiction by looking at interactions between predisposing factors, moderators, and mediators in combination with reduced executive functioning and diminished decision making. Finally, the paper discusses how current treatment protocols focusing on Cognitive-Behavioral Therapy for Internet addiction (CBT-IA) fit with the processes hypothesized in the I-PACE model.
10. Merging Theoretical Models and Therapy Approaches in the Context of Internet Gaming Disorder: A Personal Perspective
Directory of Open Access Journals (Sweden)
Kimberly S. Young
2017-10-01
Full Text Available Although, it is not yet officially recognized as a clinical entity which is diagnosable, Internet Gaming Disorder (IGD has been included in section III for further study in the DSM-5 by the American Psychiatric Association (APA, 2013. This is important because there is increasing evidence that people of all ages, in particular teens and young adults, are facing very real and sometimes very severe consequences in daily life resulting from an addictive use of online games. This article summarizes general aspects of IGD including diagnostic criteria and arguments for the classification as an addictive disorder including evidence from neurobiological studies. Based on previous theoretical considerations and empirical findings, this paper examines the use of one recently proposed model, the Interaction of Person-Affect-Cognition-Execution (I-PACE model, for inspiring future research and for developing new treatment protocols for IGD. The I-PACE model is a theoretical framework that explains symptoms of Internet addiction by looking at interactions between predisposing factors, moderators, and mediators in combination with reduced executive functioning and diminished decision making. Finally, the paper discusses how current treatment protocols focusing on Cognitive-Behavioral Therapy for Internet addiction (CBT-IA fit with the processes hypothesized in the I-PACE model.
11. Theoretical model simulations for the global Thermospheric Mapping Study (TMS) periods
Science.gov (United States)
Rees, D.; Fuller-Rowell, T. J.
Theoretical and semiempirical models of the solar UV/EUV and of the geomagnetic driving forces affecting the terrestrial mesosphere and thermosphere have been used to generate a series of representative numerical time-dependent and global models of the thermosphere, for the range of solar and geoamgnetic activity levels which occurred during the three Thermospheric Mapping Study periods. The simulations obtained from these numerical models are compared with observations, and with the results of semiempirical models of the thermosphere. The theoretical models provide a record of the magnitude of the major driving forces which affected the thermosphere during the study periods, and a baseline against which the actual observed structure and dynamics can be compared.
12. Apparel shopping behaviour – Part 2: Conceptual theoretical model, market segments, profiles and implications
Directory of Open Access Journals (Sweden)
R. Du Preez
2003-10-01
Full Text Available This article is based on the conceptual theoretical model developed in Part 1 of this series of articles. The objective of this research is to identify female apparel consumer market segments on the basis of differentiating lifestyles, shopping orientation, cultural consciousness, store patronage and demographics. These profiles are discussed in full and the implications thereof for retailers, marketers and researchers are highlighted. A new conceptual model is proposed and recommendations are made for further research. Opsomming Hierdie artikel word gebaseer op die konseptuele teoretiese model wat reeds in Deel 1 van hierdie artikelreeks ontwikkel is. Die doel van hierdie navorsing is om marksegmente van vroue klere-kopers te identifiseer na aanleiding van hulle lewenstyle, kooporiëntasie, kulturele bewustheid, winkelvoorkeurgedrag en demografie. Hierdie profiele word volledig beskryf en die implikasies van die verskillende profiele vir kleinhandelaars, bemarkers en navorsers word uitgelig. ’n Nuwe konseptuele model word voorgestel en aanbevelings vir verdere navorsing word gemaak.
13. Pathways from Trauma to Psychotic Experiences: A Theoretically Informed Model of Posttraumatic Stress in Psychosis
Directory of Open Access Journals (Sweden)
Amy Hardy
2017-05-01
Full Text Available In recent years, empirical data and theoretical accounts relating to the relationship between childhood victimization and psychotic experiences have accumulated. Much of this work has focused on co-occurring Posttraumatic Stress Disorder or putative causal mechanisms in isolation from each other. The complexity of posttraumatic stress reactions experienced in psychosis remains poorly understood. This paper therefore attempts to synthesize the current evidence base into a theoretically informed, multifactorial model of posttraumatic stress in psychosis. Three trauma-related vulnerability factors are proposed to give rise to intrusions and to affect how people appraise and cope with them. First, understandable attempts to survive trauma become habitual ways of regulating emotion, manifesting in cognitive-affective, behavioral and interpersonal responses. Second, event memories, consisting of perceptual and episodic representations, are impacted by emotion experienced during trauma. Third, personal semantic memory, specifically appraisals of the self and others, are shaped by event memories. It is proposed these vulnerability factors have the potential to lead to two types of intrusions. The first type is anomalous experiences arising from emotion regulation and/or the generation of novel images derived from trauma memory. The second type is trauma memory intrusions reflecting, to varying degrees, the retrieval of perceptual, episodic and personal semantic representations. It is speculated trauma memory intrusions may be experienced on a continuum from contextualized to fragmented, depending on memory encoding and retrieval. Personal semantic memory will then impact on how intrusions are appraised, with habitual emotion regulation strategies influencing people’s coping responses to these. Three vignettes are outlined to illustrate how the model accounts for different pathways between victimization and psychosis, and implications for therapy are
14. A theoretical cost optimization model of reused flowback distribution network of regional shale gas development
International Nuclear Information System (INIS)
Li, Huajiao; An, Haizhong; Fang, Wei; Jiang, Meng
2017-01-01
The logistical issues surrounding the timing and transport of flowback generated by each shale gas well to the next is a big challenge. Due to more and more flowback being stored temporarily near the shale gas well and reused in the shale gas development, both transportation cost and storage cost are the heavy burden for the developers. This research proposed a theoretical cost optimization model to get the optimal flowback distribution solution for regional multi shale gas wells in a holistic perspective. Then, we used some empirical data of Marcellus Shale to do the empirical study. In addition, we compared the optimal flowback distribution solution by considering both the transportation cost and storage cost with the flowback distribution solution which only minimized the transportation cost or only minimized the storage cost. - Highlights: • A theoretical cost optimization model to get optimal flowback distribution solution. • An empirical study using the shale gas data in Bradford County of Marcellus Shale. • Visualization of optimal flowback distribution solutions under different scenarios. • Transportation cost is a more important factor for reducing the cost. • Help the developers to cut the storage and transportation cost of reusing flowback.
15. Desublimation process: verification and applications of a theoretical model
International Nuclear Information System (INIS)
Eby, R.S.
1979-01-01
A theoretical model simulating the simultaneous heat and mass transfer which takes place during the desublimation of a gas to a solid is presented. Desublimer column loading profiles to experimentally verify the model were obtained using a gamma scintillation technique. The data indicate that, if the physical parameters of the desublimed frost material are known, the model can accurately predict the desublimation phenomenon. The usefulness of the model in different engineering applications is also addressed
16. Theoretical modeling of steam condensation in the presence of a noncondensable gas in horizontal tubes
International Nuclear Information System (INIS)
Lee, Kwon-Yeong; Kim, Moo Hwan; Kim, Moo Hwan
2008-01-01
A theoretical model was developed to investigate a steam condensation with a noncondensable gas in a horizontal tube. The heat transfer through the vapor/noncondensable gas mixture boundary layer consists of the sensible heat transfer and the latent heat transfer given up by the condensing vapor, and it must equal that from the condensate film to the tube wall. Therefore, the total heat transfer coefficient is given by the film, condensation and sensible heat transfer coefficients. The film heat transfer coefficients of the upper and lower portions of the tube were calculated separately from Rosson and Meyers (1965) correlation. The heat and mass transfer analogy was used to analyze the steam/noncondensable gas mixture boundary layer. Here, the Nusselt and Sherwood numbers in the gas phase were modified to incorporate the effects of condensate film roughness, suction, and developing flow. The predictions of the theoretical model for the experimental heat transfer coefficients at the top and bottom of the tube were reasonable. The calculated heat transfer coefficients at the top of the tube were higher than those at the bottom of it, as experimental results. As the temperature potential at the top of tube was lower than that at the bottom of it, the heat fluxes at the upper and lower portions of the tube were similar to each other. Generally speaking, however, the model predictions showed a good agreement with experimental data. The new empirical correlation proposed by Lee and Kim (2008) for the vertical tube was applied to the condensation of steam/noncondensable mixture in a horizontal tube. Nusselt theory and Chato correlation were used to calculate the heat transfer coefficients at top and bottom of the horizontal tube, respectively. The predictions of the new empirical correlation were good and very similar with the theoretical model. (author)
17. Theoretical framework of community education improvement
Directory of Open Access Journals (Sweden)
Zaúl Brizuela Castillo
2015-05-01
Full Text Available The paper explains the connection between the approach selected for the analysis and development of community education and the contradictions manifested in its theoretical and practical comprehension. As a result, a comprehensive model for community education, describing the theoretical and methodological framework to improve community education, is devised. This framework is based on a conscious organizing of educative influences applied to the regular task of the community under the coordinate action of social institutions and organization that promote the transformational action of the neighborhood assuming a protagonist role in the improvement of the quality of live and morals related to the socialism updating process. The comprehensive model was proved experimentally at District 59 of San Miguel town; the transformation of the community was scientifically registered together with the information gather by means of observation and interviewing. The findings proved the pertinence and feasibility of the proposed model.
18. Simple theoretical models for composite rotor blades
Science.gov (United States)
Valisetty, R. R.; Rehfield, L. W.
1984-01-01
The development of theoretical rotor blade structural models for designs based upon composite construction is discussed. Care was exercised to include a member of nonclassical effects that previous experience indicated would be potentially important to account for. A model, representative of the size of a main rotor blade, is analyzed in order to assess the importance of various influences. The findings of this model study suggest that for the slenderness and closed cell construction considered, the refinements are of little importance and a classical type theory is adequate. The potential of elastic tailoring is dramatically demonstrated, so the generality of arbitrary ply layup in the cell wall is needed to exploit this opportunity.
19. Surface physics theoretical models and experimental methods
CERN Document Server
Mamonova, Marina V; Prudnikova, I A
2016-01-01
The demands of production, such as thin films in microelectronics, rely on consideration of factors influencing the interaction of dissimilar materials that make contact with their surfaces. Bond formation between surface layers of dissimilar condensed solids-termed adhesion-depends on the nature of the contacting bodies. Thus, it is necessary to determine the characteristics of adhesion interaction of different materials from both applied and fundamental perspectives of surface phenomena. Given the difficulty in obtaining reliable experimental values of the adhesion strength of coatings, the theoretical approach to determining adhesion characteristics becomes more important. Surface Physics: Theoretical Models and Experimental Methods presents straightforward and efficient approaches and methods developed by the authors that enable the calculation of surface and adhesion characteristics for a wide range of materials: metals, alloys, semiconductors, and complex compounds. The authors compare results from the ...
20. Proposal for an ecoradiological centre model
International Nuclear Information System (INIS)
Perovic, S.M.; Zunic, Z.; Demajo, M.; Konjevic, N.
1998-01-01
The problem of establishing an optimal Ecoradiological Centre Model is studied in some detail for the town of Kotor which is under the protection of the World Cultural and Natural Heritage. The proposed structure of the Centre is analyzed from the view of Engineering, Education and Scientific parameters. This Model is suitable for implementation as a network Centre Model for the state of Montenegro. Further, the modelling strategy of the ecoradiological condition control of natural, construction, bio and technological systems is elaborated. The proposal includes the ecoradiological monitoring, radioactive and electromagnetic radiation processing and protection for different natural zones as well as their different geostructures, aerial and hydrogeological conditions. The programme also includes all housing objects (hotels, flats, houses, office premises etc.). Here will also be presented the radiation protection and recommendations for the implementation of Title VII of the European Basic Safety Standards Directive (BSS), concerning significant increase in exposure due to natural radiation sources. Also, the proposal of Local Radiation Protection for the town of Kotor is presented. Our proposal for an Ecoradiological Centre Model presented here is in a form of a pilot programme, applicable also for other towns and states. (author)
1. Theoretical modeling of critical temperature increase in metamaterial superconductors
Science.gov (United States)
Smolyaninov, Igor; Smolyaninova, Vera
Recent experiments have demonstrated that the metamaterial approach is capable of drastic increase of the critical temperature Tc of epsilon near zero (ENZ) metamaterial superconductors. For example, tripling of the critical temperature has been observed in Al-Al2O3 ENZ core-shell metamaterials. Here, we perform theoretical modelling of Tc increase in metamaterial superconductors based on the Maxwell-Garnett approximation of their dielectric response function. Good agreement is demonstrated between theoretical modelling and experimental results in both aluminum and tin-based metamaterials. Taking advantage of the demonstrated success of this model, the critical temperature of hypothetic niobium, MgB2 and H2S-based metamaterial superconductors is evaluated. The MgB2-based metamaterial superconductors are projected to reach the liquid nitrogen temperature range. In the case of an H2S-based metamaterial Tc appears to reach 250 K. This work was supported in part by NSF Grant DMR-1104676 and the School of Emerging Technologies at Towson University.
2. Theoretical study of the dependence of single impurity Anderson model on various parameters within distributional exact diagonalization method
Science.gov (United States)
Syaina, L. P.; Majidi, M. A.
2018-04-01
Single impurity Anderson model describes a system consisting of non-interacting conduction electrons coupled with a localized orbital having strongly interacting electrons at a particular site. This model has been proven successful to explain the phenomenon of metal-insulator transition through Anderson localization. Despite the well-understood behaviors of the model, little has been explored theoretically on how the model properties gradually evolve as functions of hybridization parameter, interaction energy, impurity concentration, and temperature. Here, we propose to do a theoretical study on those aspects of a single impurity Anderson model using the distributional exact diagonalization method. We solve the model Hamiltonian by randomly generating sampling distribution of some conducting electron energy levels with various number of occupying electrons. The resulting eigenvalues and eigenstates are then used to define the local single-particle Green function for each sampled electron energy distribution using Lehmann representation. Later, we extract the corresponding self-energy of each distribution, then average over all the distributions and construct the local Green function of the system to calculate the density of states. We repeat this procedure for various values of those controllable parameters, and discuss our results in connection with the criteria of the occurrence of metal-insulator transition in this system.
3. Theoretical Model of Development of Information Competence among Students Enrolled in Elective Courses
Science.gov (United States)
Zhumasheva, Anara; Zhumabaeva, Zaida; Sakenov, Janat; Vedilina, Yelena; Zhaxylykova, Nuriya; Sekenova, Balkumis
2016-01-01
The current study focuses on the research topic of creating a theoretical model of development of information competence among students enrolled in elective courses. In order to examine specific features of the theoretical model of development of information competence among students enrolled in elective courses, we performed an analysis of…
4. Modified economic order quantity (EOQ model for items with imperfect quality: Game-theoretical approaches
Directory of Open Access Journals (Sweden)
2014-04-01
Full Text Available In the recent decade, studying the economic order quantity (EOQ models with imperfect quality has appealed to many researchers. Only few papers are published discussing EOQ models with imperfect items in a supply chain. In this paper, a two-echelon decentralized supply chain consisting of a manufacture and a supplier that both face just in time (JIT inventory problem is considered. It is sought to find the optimal number of the shipments and the quantity of each shipment in a way that minimizes the both manufacturer’s and the supplier’s cost functions. To the authors’ best knowledge, this is the first paper that deals with imperfect items in a decentralized supply chain. Thereby, three different game theoretical solution approaches consisting of two non-cooperative games and a cooperative game are proposed. Comparing the results of three different scenarios with those of the centralized model, the conclusions are drawn to obtain the best approach.
5. A beginner's guide to writing the nursing conceptual model-based theoretical rationale.
Science.gov (United States)
Gigliotti, Eileen; Manister, Nancy N
2012-10-01
Writing the theoretical rationale for a study can be a daunting prospect for novice researchers. Nursing's conceptual models provide excellent frameworks for placement of study variables, but moving from the very abstract concepts of the nursing model to the less abstract concepts of the study variables is difficult. Similar to the five-paragraph essay used by writing teachers to assist beginning writers to construct a logical thesis, the authors of this column present guidelines that beginners can follow to construct their theoretical rationale. This guide can be used with any nursing conceptual model but Neuman's model was chosen here as the exemplar.
6. Proposing an Environmental Excellence Self-Assessment Model
DEFF Research Database (Denmark)
Meulengracht Jensen, Peter; Johansen, John; Wæhrens, Brian Vejrum
2013-01-01
that the EEA model can be used in global organizations to differentiate environmental efforts depending on the maturity stage of the individual sites. Furthermore, the model can be used to support the decision-making process regarding when organizations should embark on more complex environmental efforts......This paper presents an Environmental Excellence Self-Assessment (EEA) model based on the structure of the European Foundation of Quality Management Business Excellence Framework. Four theoretical scenarios for deploying the model are presented as well as managerial implications, suggesting...
7. Tourism Cluster Competitiveness and Sustainability: Proposal for a Systemic Model to Measure the Impact of Tourism on Local Development
Directory of Open Access Journals (Sweden)
Sieglinde Kindl da Cunha
2005-07-01
Full Text Available This article proposes a model to measure tourism cluster impact on local development with a view to assessing tourism cluster interaction, competitiveness and sustainability impacts on the economy, society and the environment. The theoretical basis for this model is founded on cluster concept and typology adapting and integrating the systemic competitiveness and sustainability concepts within economic, social, cultural, environmental and political dimensions. The proposed model shows a holistic, multidisciplinary and multi-sector view of local development brought back through a systemic approach to the concepts of competitiveness, social equity and sustainability. Its results make possible strategic guidance to agents responsible for public sector tourism policies, as well as the strategies for competitiveness, competition, cooperation and sustainability in private companies and institutions.
8. 2nd International Conference on Proof-Theoretic Semantics
CERN Document Server
Schroeder-Heister, Peter
2016-01-01
This volume is the first ever collection devoted to the field of proof-theoretic semantics. Contributions address topics including the systematics of introduction and elimination rules and proofs of normalization, the categorial characterization of deductions, the relation between Heyting's and Gentzen's approaches to meaning, knowability paradoxes, proof-theoretic foundations of set theory, Dummett's justification of logical laws, Kreisel's theory of constructions, paradoxical reasoning, and the defence of model theory. The field of proof-theoretic semantics has existed for almost 50 years, but the term itself was proposed by Schroeder-Heister in the 1980s. Proof-theoretic semantics explains the meaning of linguistic expressions in general and of logical constants in particular in terms of the notion of proof. This volume emerges from presentations at the Second International Conference on Proof-Theoretic Semantics in Tübingen in 2013, where contributing authors were asked to provide a self-contained descri...
9. Healing from Childhood Sexual Abuse: A Theoretical Model
Science.gov (United States)
Draucker, Claire Burke; Martsolf, Donna S.; Roller, Cynthia; Knapik, Gregory; Ross, Ratchneewan; Stidham, Andrea Warner
2011-01-01
Childhood sexual abuse is a prevalent social and health care problem. The processes by which individuals heal from childhood sexual abuse are not clearly understood. The purpose of this study was to develop a theoretical model to describe how adults heal from childhood sexual abuse. Community recruitment for an ongoing broader project on sexual…
10. Theoretical Hill-type muscle and stability: numerical model and application.
Science.gov (United States)
Schmitt, S; Günther, M; Rupp, T; Bayer, A; Häufle, D
2013-01-01
The construction of artificial muscles is one of the most challenging developments in today's biomedical science. The application of artificial muscles is focused both on the construction of orthotics and prosthetics for rehabilitation and prevention purposes and on building humanoid walking machines for robotics research. Research in biomechanics tries to explain the functioning and design of real biological muscles and therefore lays the fundament for the development of functional artificial muscles. Recently, the hyperbolic Hill-type force-velocity relation was derived from simple mechanical components. In this contribution, this theoretical yet biomechanical model is transferred to a numerical model and applied for presenting a proof-of-concept of a functional artificial muscle. Additionally, this validated theoretical model is used to determine force-velocity relations of different animal species that are based on the literature data from biological experiments. Moreover, it is shown that an antagonistic muscle actuator can help in stabilising a single inverted pendulum model in favour of a control approach using a linear torque generator.
11. Organizational Resilience: The Theoretical Model and Research Implication
Directory of Open Access Journals (Sweden)
Xiao Lei
2017-01-01
Full Text Available Organizations are all subject to a diverse and ever changing and uncertain environment. Under this situation organizations should develop a capability which can resist the emergency and recover from the disruption. Base on lot of literature, the paper provides the main concept of organizational resilience; construct the primary theoretical model and some implications for management.
12. Proposed Reliability/Cost Model
Science.gov (United States)
Delionback, L. M.
1982-01-01
New technique estimates cost of improvement in reliability for complex system. Model format/approach is dependent upon use of subsystem cost-estimating relationships (CER's) in devising cost-effective policy. Proposed methodology should have application in broad range of engineering management decisions.
13. A theoretical model for predicting neutron fluxes for cyclic Neutron ...
African Journals Online (AJOL)
A theoretical model has been developed for prediction of thermal neutron fluxes required for cyclic irradiations of a sample to obtain the same activity previously used for the detection of any radionuclide of interest. The model is suitable for radiotracer production or for long-lived neutron activation products where the ...
14. Using a fuzzy comprehensive evaluation method to determine product usability: A proposed theoretical framework.
Science.gov (United States)
Zhou, Ronggang; Chan, Alan H S
2017-01-01
In order to compare existing usability data to ideal goals or to that for other products, usability practitioners have tried to develop a framework for deriving an integrated metric. However, most current usability methods with this aim rely heavily on human judgment about the various attributes of a product, but often fail to take into account of the inherent uncertainties in these judgments in the evaluation process. This paper presents a universal method of usability evaluation by combining the analytic hierarchical process (AHP) and the fuzzy evaluation method. By integrating multiple sources of uncertain information during product usability evaluation, the method proposed here aims to derive an index that is structured hierarchically in terms of the three usability components of effectiveness, efficiency, and user satisfaction of a product. With consideration of the theoretical basis of fuzzy evaluation, a two-layer comprehensive evaluation index was first constructed. After the membership functions were determined by an expert panel, the evaluation appraisals were computed by using the fuzzy comprehensive evaluation technique model to characterize fuzzy human judgments. Then with the use of AHP, the weights of usability components were elicited from these experts. Compared to traditional usability evaluation methods, the major strength of the fuzzy method is that it captures the fuzziness and uncertainties in human judgments and provides an integrated framework that combines the vague judgments from multiple stages of a product evaluation process.
15. The self-schema model: a theoretical approach to the self-concept in eating disorders.
Science.gov (United States)
Stein, K F
1996-04-01
Over the last several decades, the self-concept has been implicated as a important determinant of eating disorders (ED). Although considerable progress has been made, questions remain unanswered about the properties of self-concept that distinguish women with an ED from other populations, and mechanisms that link the self-concept to the disordered behaviors. Markus's self-schema model is presented as a theoretical approach to explore the role of the self-concept in ED. To show how the schema model can be integrated with existing work on the self-concept in ED, a framework is proposed that addresses the number, content, and accessibility of the self-schemas. More specifically, it is posited that a limited collection of positive self-schemas available in memory, in combination with a chronically and inflexibly accessible body-weight self-schema, lead to the disordered behaviors associated with anorexia nervosa and bulimia nervosa.
16. A game-theoretical model for selecting a site of non-preferred waste facilities
International Nuclear Information System (INIS)
Kim, Seong Ho; Kim, Tae Woon
2006-01-01
In the present work, a game-theoretic model (GTM) as a tool of conflict analysis is proposed for multiplayer multicriteria decision-making problems in a conflict situation. The developed GTM is used for obtaining the most possible resolutions in the conflict among multiple decision makers. The GTM is based on directed graph structure and solution concepts. To demonstrate the performance of the GTM, using a numerical example, the GTM is applied to an environmental conflict problem, especially a non-preferred waste disposal siting conflict available in the literature. It is found that with GTM the states in equilibrium can be recognized. The conflict under consideration is to select a site of non-preferred waste facilities. The government is to choose a site of installation for users of a toxic waste disposal facility. A certain time-point of interest is a period of time to select one of candidate sites that completely meet regular criteria of governmental body in charge of permitting a facility site. The facility siting conflict among multiple players (i.e., decision-makers, DMs) of concern is viewed as a multiple player-multiple criteria (MPMC) domain. For instance, three possible sites (i.e., site A, site B, and site C) to be selected by multiple players are characterized by the building cost, accessibility, and proximity to the residential area. Concerning the site A, the installation of a facility is not expensive, the accessible to a facility is easy, and the site A is located very near a residential area. Concerning site B, the facility is expensive to build, the facility is easily accessible, and the site is located near the residential area. Concerning site C, the installation cost is expensive, the accessibility is difficult, and the location of site is far from the residential area. In simple models, three main groups of players could be considered to be the government, users, and local residents. The government is to play a role as one of proponents or
17. CONTENT ANALYSIS, DISCOURSE ANALYSIS, AND CONVERSATION ANALYSIS: PRELIMINARY STUDY ON CONCEPTUAL AND THEORETICAL METHODOLOGICAL DIFFERENCES
Directory of Open Access Journals (Sweden)
Anderson Tiago Peixoto Gonçalves
2016-08-01
Full Text Available This theoretical essay aims to reflect on three models of text interpretation used in qualitative research, which is often confused in its concepts and methodologies (Content Analysis, Discourse Analysis, and Conversation Analysis. After the presentation of the concepts, the essay proposes a preliminary discussion on conceptual and theoretical methodological differences perceived between them. A review of the literature was performed to support the conceptual and theoretical methodological discussion. It could be verified that the models have differences related to the type of strategy used in the treatment of texts, the type of approach, and the appropriate theoretical position.
18. Theoretical models for supercritical fluid extraction.
Science.gov (United States)
Huang, Zhen; Shi, Xiao-Han; Jiang, Wei-Juan
2012-08-10
For the proper design of supercritical fluid extraction processes, it is essential to have a sound knowledge of the mass transfer mechanism of the extraction process and the appropriate mathematical representation. In this paper, the advances and applications of kinetic models for describing supercritical fluid extraction from various solid matrices have been presented. The theoretical models overviewed here include the hot ball diffusion, broken and intact cell, shrinking core and some relatively simple models. Mathematical representations of these models have been in detail interpreted as well as their assumptions, parameter identifications and application examples. Extraction process of the analyte solute from the solid matrix by means of supercritical fluid includes the dissolution of the analyte from the solid, the analyte diffusion in the matrix and its transport to the bulk supercritical fluid. Mechanisms involved in a mass transfer model are discussed in terms of external mass transfer resistance, internal mass transfer resistance, solute-solid interactions and axial dispersion. The correlations of the external mass transfer coefficient and axial dispersion coefficient with certain dimensionless numbers are also discussed. Among these models, the broken and intact cell model seems to be the most relevant mathematical model as it is able to provide realistic description of the plant material structure for better understanding the mass-transfer kinetics and thus it has been widely employed for modeling supercritical fluid extraction of natural matters. Copyright © 2012 Elsevier B.V. All rights reserved.
19. Validation of theoretical models through measured pavement response
DEFF Research Database (Denmark)
Ullidtz, Per
1999-01-01
mechanics was quite different from the measured stress, the peak theoretical value being only half of the measured value.On an instrumented pavement structure in the Danish Road Testing Machine, deflections were measured at the surface of the pavement under FWD loading. Different analytical models were...... then used to derive the elastic parameters of the pavement layeres, that would produce deflections matching the measured deflections. Stresses and strains were then calculated at the position of the gauges and compared to the measured values. It was found that all analytical models would predict the tensile...
20. A theoretical model of water and trade
Science.gov (United States)
Dang, Qian; Konar, Megan; Reimer, Jeffrey J.; Di Baldassarre, Giuliano; Lin, Xiaowen; Zeng, Ruijie
2016-03-01
Water is an essential input for agricultural production. Agriculture, in turn, is globalized through the trade of agricultural commodities. In this paper, we develop a theoretical model that emphasizes four tradeoffs involving water-use decision-making that are important yet not always considered in a consistent framework. One tradeoff focuses on competition for water among different economic sectors. A second tradeoff examines the possibility that certain types of agricultural investments can offset water use. A third tradeoff explores the possibility that the rest of the world can be a source of supply or demand for a country's water-using commodities. The fourth tradeoff concerns how variability in water supplies influences farmer decision-making. We show conditions under which trade liberalization affect water use. Two policy scenarios to reduce water use are evaluated. First, we derive a target tax that reduces water use without offsetting the gains from trade liberalization, although important tradeoffs exist between economic performance and resource use. Second, we show how subsidization of water-saving technologies can allow producers to use less water without reducing agricultural production, making such subsidization an indirect means of influencing water use decision-making. Finally, we outline conditions under which riskiness of water availability affects water use. These theoretical model results generate hypotheses that can be tested empirically in future work.
1. Forming a Perceived Franchise Value: Theoretical Insights
OpenAIRE
Levickaitė, Rasa; Reimeris, Ramojus
2011-01-01
The article is based on literature review, theoretical insights and deals with the topic of perceived franchise value. The objective of the paper is – what elements form the franchisee's perceived value in service business (comparing with alternative of own business model). The aim of the paper is to propose systematic value elements in the process of forming a value of a franchise business model perceived by the franchisee. In terms of practical meaning, this article should be relevant to en...
2. A theoretical-electron-density databank using a model of real and virtual spherical atoms.
Science.gov (United States)
Nassour, Ayoub; Domagala, Slawomir; Guillot, Benoit; Leduc, Theo; Lecomte, Claude; Jelsch, Christian
2017-08-01
A database describing the electron density of common chemical groups using combinations of real and virtual spherical atoms is proposed, as an alternative to the multipolar atom modelling of the molecular charge density. Theoretical structure factors were computed from periodic density functional theory calculations on 38 crystal structures of small molecules and the charge density was subsequently refined using a density model based on real spherical atoms and additional dummy charges on the covalent bonds and on electron lone-pair sites. The electron-density parameters of real and dummy atoms present in a similar chemical environment were averaged on all the molecules studied to build a database of transferable spherical atoms. Compared with the now-popular databases of transferable multipolar parameters, the spherical charge modelling needs fewer parameters to describe the molecular electron density and can be more easily incorporated in molecular modelling software for the computation of electrostatic properties. The construction method of the database is described. In order to analyse to what extent this modelling method can be used to derive meaningful molecular properties, it has been applied to the urea molecule and to biotin/streptavidin, a protein/ligand complex.
3. Research in theoretical nuclear physics. Progress report and research proposal, 1980-1981
International Nuclear Information System (INIS)
Bayman, B.F.; Ellis, P.J.; Tang, Y.C.
1980-01-01
Research performed during 1980 (and proposed for 1981) is summarized briefly in this administrative report. The main theme of the research is the mechanisms of light- and heavy-ion nuclear reactions and the relation between microscopic theories and phenomenological models. A publication list and budget are included
4. Theoretical modeling and experimental study on fatigue initiation life of 16MnR notched components
International Nuclear Information System (INIS)
Wang Xiaogui; Gao Zengliang; Qiu Baoxiang; Jiang Yanrao
2010-01-01
In order to investigate the effects of notch geometry and loading conditions on the fatigue initiation life and fatigue fracture life of 16MnR material, fatigue experiments were conducted for both smooth rod specimens and notched rod specimens. The detailed elastic-plastic stress and strain responses were computed by the finite element software (ABAQUS) incorporating a robust cyclic plasticity model via a user subroutine UMAT. The obtained stresses and strains were applied to the multiaxial fatigue damage criterion to compute the fatigue damage induced by a loading cycle on the critical material plane. The fatigue initiation life was then obtained by the proposed theoretical model. The well agreement between the predicted results and the experiment data indicated that the fatigue initiation of notched components in the multiaxial stress state related to all the nonzero stress and strain quantities. (authors)
5. Theoretical Hill-Type Muscle and Stability: Numerical Model and Application
Directory of Open Access Journals (Sweden)
S. Schmitt
2013-01-01
Full Text Available The construction of artificial muscles is one of the most challenging developments in today’s biomedical science. The application of artificial muscles is focused both on the construction of orthotics and prosthetics for rehabilitation and prevention purposes and on building humanoid walking machines for robotics research. Research in biomechanics tries to explain the functioning and design of real biological muscles and therefore lays the fundament for the development of functional artificial muscles. Recently, the hyperbolic Hill-type force-velocity relation was derived from simple mechanical components. In this contribution, this theoretical yet biomechanical model is transferred to a numerical model and applied for presenting a proof-of-concept of a functional artificial muscle. Additionally, this validated theoretical model is used to determine force-velocity relations of different animal species that are based on the literature data from biological experiments. Moreover, it is shown that an antagonistic muscle actuator can help in stabilising a single inverted pendulum model in favour of a control approach using a linear torque generator.
6. Theoretical and numerical analysis of auxiliary heating for cryogenic target fabrication
International Nuclear Information System (INIS)
Yang Xiaohu; Tian Chenglin; Yin Yan; Xu Han; Zhuo Hongbin
2008-01-01
In order to compensate for the nonspherical-symmetric heat flux in the hohlraum, auxiliary heating is usually applied to the outside wall of the hohlraum during the cooling process. A one-dimensional heat exchange theoretical model has been proposed in the indirect-drive target, to analyze the required auxiliary heat flux. With a two dimensional axisymmetric model, the auxiliary heating mechanism has been simulated by FLUENT code. The optimum heat flux which is 635 W/m 2 has been obtained as the heaters around the outside of the hohlraum about 1.3 mm above and below the mid-plane. The result is in good agreement with the theoretical model. (authors)
7. Theoretical methods and models for mechanical properties of soft biomaterials
Directory of Open Access Journals (Sweden)
Zhonggang Feng
2017-06-01
Full Text Available We review the most commonly used theoretical methods and models for the mechanical properties of soft biomaterials, which include phenomenological hyperelastic and viscoelastic models, structural biphasic and network models, and the structural alteration theory. We emphasize basic concepts and recent developments. In consideration of the current progress and needs of mechanobiology, we introduce methods and models for tackling micromechanical problems and their applications to cell biology. Finally, the challenges and perspectives in this field are discussed.
8. Transport simulations TFTR: Theoretically-based transport models and current scaling
International Nuclear Information System (INIS)
Redi, M.H.; Cummings, J.C.; Bush, C.E.; Fredrickson, E.; Grek, B.; Hahm, T.S.; Hill, K.W.; Johnson, D.W.; Mansfield, D.K.; Park, H.; Scott, S.D.; Stratton, B.C.; Synakowski, E.J.; Tang, W.M.; Taylor, G.
1991-12-01
In order to study the microscopic physics underlying observed L-mode current scaling, 1-1/2-d BALDUR has been used to simulate density and temperature profiles for high and low current, neutral beam heated discharges on TFTR with several semi-empirical, theoretically-based models previously compared for TFTR, including several versions of trapped electron drift wave driven transport. Experiments at TFTR, JET and D3-D show that I p scaling of τ E does not arise from edge modes as previously thought, and is most likely to arise from nonlocal processes or from the I p -dependence of local plasma core transport. Consistent with this, it is found that strong current scaling does not arise from any of several edge models of resistive ballooning. Simulations with the profile consistent drift wave model and with a new model for toroidal collisionless trapped electron mode core transport in a multimode formalism, lead to strong current scaling of τ E for the L-mode cases on TFTR. None of the theoretically-based models succeeded in simulating the measured temperature and density profiles for both high and low current experiments
9. Theoretical Modeling of Rock Breakage by Hydraulic and Mechanical Tool
Directory of Open Access Journals (Sweden)
Hongxiang Jiang
2014-01-01
Full Text Available Rock breakage by coupled mechanical and hydraulic action has been developed over the past several decades, but theoretical study on rock fragmentation by mechanical tool with water pressure assistance was still lacking. The theoretical model of rock breakage by mechanical tool was developed based on the rock fracture mechanics and the solution of Boussinesq’s problem, and it could explain the process of rock fragmentation as well as predicating the peak reacting force. The theoretical model of rock breakage by coupled mechanical and hydraulic action was developed according to the superposition principle of intensity factors at the crack tip, and the reacting force of mechanical tool assisted by hydraulic action could be reduced obviously if the crack with a critical length could be produced by mechanical or hydraulic impact. The experimental results indicated that the peak reacting force could be reduced about 15% assisted by medium water pressure, and quick reduction of reacting force after peak value decreased the specific energy consumption of rock fragmentation by mechanical tool. The crack formation by mechanical or hydraulic impact was the prerequisite to improvement of the ability of combined breakage.
10. Wettability of graphitic-carbon and silicon surfaces: MD modeling and theoretical analysis
International Nuclear Information System (INIS)
2015-01-01
The wettability of graphitic carbon and silicon surfaces was numerically and theoretically investigated. A multi-response method has been developed for the analysis of conventional molecular dynamics (MD) simulations of droplets wettability. The contact angle and indicators of the quality of the computations are tracked as a function of the data sets analyzed over time. This method of analysis allows accurate calculations of the contact angle obtained from the MD simulations. Analytical models were also developed for the calculation of the work of adhesion using the mean-field theory, accounting for the interfacial entropy changes. A calibration method is proposed to provide better predictions of the respective contact angles under different solid-liquid interaction potentials. Estimations of the binding energy between a water monomer and graphite match those previously reported. In addition, a breakdown in the relationship between the binding energy and the contact angle was observed. The macroscopic contact angles obtained from the MD simulations were found to match those predicted by the mean-field model for graphite under different wettability conditions, as well as the contact angles of Si(100) and Si(111) surfaces. Finally, an assessment of the effect of the Lennard-Jones cutoff radius was conducted to provide guidelines for future comparisons between numerical simulations and analytical models of wettability
11. Theoretical and Empirical Review of Asset Pricing Models: A Structural Synthesis
Directory of Open Access Journals (Sweden)
Saban Celik
2012-01-01
Full Text Available The purpose of this paper is to give a comprehensive theoretical review devoted to asset pricing models by emphasizing static and dynamic versions in the line with their empirical investigations. A considerable amount of financial economics literature devoted to the concept of asset pricing and their implications. The main task of asset pricing model can be seen as the way to evaluate the present value of the pay offs or cash flows discounted for risk and time lags. The difficulty coming from discounting process is that the relevant factors that affect the pay offs vary through the time whereas the theoretical framework is still useful to incorporate the changing factors into an asset pricing models. This paper fills the gap in literature by giving a comprehensive review of the models and evaluating the historical stream of empirical investigations in the form of structural empirical review.
12. A theoretical model for investigating the effect of vacuum fluctuations on the electromechanical stability of nanotweezers
Science.gov (United States)
2015-06-01
In this paper, the impact of the Casimir attraction on the electromechanical stability of nanowire-fabricated nanotweezers is investigated using a theoretical continuum mechanics model. The Dirichlet mode is considered and an asymptotic solution, based on path integral approach, is applied to consider the effect of vacuum fluctuations in the model. The Euler-Bernoulli beam theory is employed to derive the nonlinear governing equation of the nanotweezers. The governing equations are solved by three different approaches, i.e. the modified variation iteration method, generalized differential quadrature method and using a lumped parameter model. Various perspectives of the problem, including the comparison with the van der Waals force regime, the variation of instability parameters and effects of geometry are addressed in present paper. The proposed approach is beneficial for the precise determination of the electrostatic response of the nanotweezers in the presence of Casimir force.
13. Decision support models for solid waste management: Review and game-theoretic approaches
International Nuclear Information System (INIS)
Karmperis, Athanasios C.; Aravossis, Konstantinos; Tatsiopoulos, Ilias P.; Sotirchos, Anastasios
2013-01-01
Highlights: ► The mainly used decision support frameworks for solid waste management are reviewed. ► The LCA, CBA and MCDM models are presented and their strengths, weaknesses, similarities and possible combinations are analyzed. ► The game-theoretic approach in a solid waste management context is presented. ► The waste management bargaining game is introduced as a specific decision support framework. ► Cooperative and non-cooperative game-theoretic approaches to decision support for solid waste management are discussed. - Abstract: This paper surveys decision support models that are commonly used in the solid waste management area. Most models are mainly developed within three decision support frameworks, which are the life-cycle assessment, the cost–benefit analysis and the multi-criteria decision-making. These frameworks are reviewed and their strengths and weaknesses as well as their critical issues are analyzed, while their possible combinations and extensions are also discussed. Furthermore, the paper presents how cooperative and non-cooperative game-theoretic approaches can be used for the purpose of modeling and analyzing decision-making in situations with multiple stakeholders. Specifically, since a waste management model is sustainable when considering not only environmental and economic but also social aspects, the waste management bargaining game is introduced as a specific decision support framework in which future models can be developed
14. Decision support models for solid waste management: Review and game-theoretic approaches
Energy Technology Data Exchange (ETDEWEB)
Karmperis, Athanasios C., E-mail: [email protected] [Sector of Industrial Management and Operational Research, School of Mechanical Engineering, National Technical University of Athens, Iroon Polytechniou 9, 15780 Athens (Greece); Army Corps of Engineers, Hellenic Army General Staff, Ministry of Defence (Greece); Aravossis, Konstantinos; Tatsiopoulos, Ilias P.; Sotirchos, Anastasios [Sector of Industrial Management and Operational Research, School of Mechanical Engineering, National Technical University of Athens, Iroon Polytechniou 9, 15780 Athens (Greece)
2013-05-15
Highlights: ► The mainly used decision support frameworks for solid waste management are reviewed. ► The LCA, CBA and MCDM models are presented and their strengths, weaknesses, similarities and possible combinations are analyzed. ► The game-theoretic approach in a solid waste management context is presented. ► The waste management bargaining game is introduced as a specific decision support framework. ► Cooperative and non-cooperative game-theoretic approaches to decision support for solid waste management are discussed. - Abstract: This paper surveys decision support models that are commonly used in the solid waste management area. Most models are mainly developed within three decision support frameworks, which are the life-cycle assessment, the cost–benefit analysis and the multi-criteria decision-making. These frameworks are reviewed and their strengths and weaknesses as well as their critical issues are analyzed, while their possible combinations and extensions are also discussed. Furthermore, the paper presents how cooperative and non-cooperative game-theoretic approaches can be used for the purpose of modeling and analyzing decision-making in situations with multiple stakeholders. Specifically, since a waste management model is sustainable when considering not only environmental and economic but also social aspects, the waste management bargaining game is introduced as a specific decision support framework in which future models can be developed.
15. Toward a Theoretical Model of Employee Turnover: A Human Resource Development Perspective
Science.gov (United States)
Peterson, Shari L.
2004-01-01
This article sets forth the Organizational Model of Employee Persistence, influenced by traditional turnover models and a student attrition model. The model was developed to clarify the impact of organizational practices on employee turnover from a human resource development (HRD) perspective and provide a theoretical foundation for research on…
16. A Comparative Study of Theoretical Graph Models for Characterizing Structural Networks of Human Brain
Directory of Open Access Journals (Sweden)
Xiaojin Li
2013-01-01
Full Text Available Previous studies have investigated both structural and functional brain networks via graph-theoretical methods. However, there is an important issue that has not been adequately discussed before: what is the optimal theoretical graph model for describing the structural networks of human brain? In this paper, we perform a comparative study to address this problem. Firstly, large-scale cortical regions of interest (ROIs are localized by recently developed and validated brain reference system named Dense Individualized Common Connectivity-based Cortical Landmarks (DICCCOL to address the limitations in the identification of the brain network ROIs in previous studies. Then, we construct structural brain networks based on diffusion tensor imaging (DTI data. Afterwards, the global and local graph properties of the constructed structural brain networks are measured using the state-of-the-art graph analysis algorithms and tools and are further compared with seven popular theoretical graph models. In addition, we compare the topological properties between two graph models, namely, stickiness-index-based model (STICKY and scale-free gene duplication model (SF-GD, that have higher similarity with the real structural brain networks in terms of global and local graph properties. Our experimental results suggest that among the seven theoretical graph models compared in this study, STICKY and SF-GD models have better performances in characterizing the structural human brain network.
17. Assembling three-dimensional nanostructures on metal surfaces with a reversible vertical single-atom manipulation: A theoretical modeling
International Nuclear Information System (INIS)
Yang Tianxing; Ye Xiang; Huang Lei; Xie Yiqun; Ke Sanhuang
2012-01-01
Highlights: ► We simulate the reversible vertical single-atom manipulations on several metal surfaces. ► We propose a method to predict whether a reversible vertical single-atom manipulation can be successful on several metal surfaces. ► A 3-dimensional Ni nanocluster is assembled on the Ni(1 1 1) surface using a Ni trimer-apex tip. - Abstract: We propose a theoretical model to show that pulling up an adatom from an atomic step requires a weaker force than from the flat surfaces of Al(0 0 1), Ni(1 1 1), Pt(1 1 0) and Au(1 1 0). Single adatom in the atomic step can be extracted vertically by a trimer-apex tip while can be released to the flat surface. This reversible vertical manipulation can then be used to fabricate a supported three-dimensional (3D) nanostructure on the Ni(1 1 1) surface. The present modeling can be used to predict whether the reversible vertical single-atom manipulation and thus the assembling of 3D nanostructures can be achieved on a metal surface.
18. Measuring and Managing Value Co-Creation Process: Overview of Existing Theoretical Models
Directory of Open Access Journals (Sweden)
Monika Skaržauskaitė
2013-08-01
Full Text Available Purpose — the article is to provide a holistic view on concept of value co-creation and existing models for measuring and managing it by conducting theoretical analysis of scientific literature sources targeting the integration of various approaches. Most important and relevant results of the literature study are presented with a focus on changed roles of organizations and consumers. This article aims at contributing theoretically to the research stream of measuring co-creation of value in order to gain knowledge for improvement of organizational performance and enabling new and innovative means of value creation. Design/methodology/approach. The nature of this research is exploratory – theoretical analysis and synthesis of scientific literature sources targeting the integration of various approaches was performed. This approach was chosen due to the absence of established theory on models of co-creation, possible uses in organizations and systematic overview of tools measuring/suggesting how to measure co-creation. Findings. While the principles of managing and measuring co-creation in regards of consumer motivation and involvement are widely researched, little attempt has been made to identify critical factors and create models dealing with organizational capabilities and managerial implications of value co-creation. Systematic analysis of literature revealed a gap not only in empirical research concerning organization’s role in co-creation process, but in theoretical and conceptual levels, too. Research limitations/implications. The limitations of this work as a literature review lies in its nature – the complete reliance on previously published research papers and the availability of these studies. For a deeper understanding of co-creation management and for developing models that can be used in real-life organizations, a broader theoretical, as well as empirical, research is necessary. Practical implications. Analysis of the
19. δ-Cut Decision-Theoretic Rough Set Approach: Model and Attribute Reductions
Directory of Open Access Journals (Sweden)
Hengrong Ju
2014-01-01
Full Text Available Decision-theoretic rough set is a quite useful rough set by introducing the decision cost into probabilistic approximations of the target. However, Yao’s decision-theoretic rough set is based on the classical indiscernibility relation; such a relation may be too strict in many applications. To solve this problem, a δ-cut decision-theoretic rough set is proposed, which is based on the δ-cut quantitative indiscernibility relation. Furthermore, with respect to criterions of decision-monotonicity and cost decreasing, two different algorithms are designed to compute reducts, respectively. The comparisons between these two algorithms show us the following: (1 with respect to the original data set, the reducts based on decision-monotonicity criterion can generate more rules supported by the lower approximation region and less rules supported by the boundary region, and it follows that the uncertainty which comes from boundary region can be decreased; (2 with respect to the reducts based on decision-monotonicity criterion, the reducts based on cost minimum criterion can obtain the lowest decision costs and the largest approximation qualities. This study suggests potential application areas and new research trends concerning rough set theory.
20. The demand-induced strain compensation model : renewed theoretical considerations and empirical evidence
NARCIS (Netherlands)
de Jonge, J.; Dormann, C.; van den Tooren, M.; Näswall, K.; Hellgren, J.; Sverke, M.
2008-01-01
This chapter presents a recently developed theoretical model on jobrelated stress and performance, the so-called Demand-Induced Strain Compensation (DISC) model. The DISC model predicts in general that adverse health effects of high job demands can best be compensated for by matching job resources
1. Nursing management of sensory overload in psychiatry – Theoretical densification and modification of the framework model
Science.gov (United States)
Scheydt, Stefan; Needham, Ian; Behrens, Johann
2017-01-01
Background: Within the scope of the research project on the subjects of sensory overload and stimulus regulation, a theoretical framework model of the nursing care of patients with sensory overload in psychiatry was developed. In a second step, this theoretical model should now be theoretically compressed and, if necessary, modified. Aim: Empirical verification as well as modification, enhancement and theoretical densification of the framework model of nursing care of patients with sensory overload in psychiatry. Method: Analysis of 8 expert interviews by summarizing and structuring content analysis methods based on Meuser and Nagel (2009) as well as Mayring (2010). Results: The developed framework model (Scheydt et al., 2016b) could be empirically verified, theoretically densificated and extended by one category (perception modulation). Thus, four categories of nursing care of patients with sensory overload can be described in inpatient psychiatry: removal from stimuli, modulation of environmental factors, perceptual modulation as well as help somebody to help him- or herself / coping support. Conclusions: Based on the methodological approach, a relatively well-saturated, credible conceptualization of a theoretical model for the description of the nursing care of patients with sensory overload in stationary psychiatry could be worked out. In further steps, these measures have to be further developed, implemented and evaluated regarding to their efficacy.
2. Anticipatory Cognitive Systems: a Theoretical Model
Science.gov (United States)
Terenzi, Graziano
This paper deals with the problem of understanding anticipation in biological and cognitive systems. It is argued that a physical theory can be considered as biologically plausible only if it incorporates the ability to describe systems which exhibit anticipatory behaviors. The paper introduces a cognitive level description of anticipation and provides a simple theoretical characterization of anticipatory systems on this level. Specifically, a simple model of a formal anticipatory neuron and a model (i.e. the τ-mirror architecture) of an anticipatory neural network which is based on the former are introduced and discussed. The basic feature of this architecture is that a part of the network learns to represent the behavior of the other part over time, thus constructing an implicit model of its own functioning. As a consequence, the network is capable of self-representation; anticipation, on a oscopic level, is nothing but a consequence of anticipation on a microscopic level. Some learning algorithms are also discussed together with related experimental tasks and possible integrations. The outcome of the paper is a formal characterization of anticipation in cognitive systems which aims at being incorporated in a comprehensive and more general physical theory.
3. The theoretical aspects of UrQMD & AMPT models
Energy Technology Data Exchange (ETDEWEB)
Saini, Abhilasha, E-mail: [email protected] [Research Scholar, Department of Physics, Suresh Gyan vihar University, Jaipur (India); Bhardwaj, Sudhir, E-mail: [email protected] [Assistant professor, Govt. College of Engineering & Technology, Bikaner (India)
2016-05-06
The field of high energy physics is very challenging in carrying out theories and experiments to unlock the secrets of heavy ion collisions and still not cracked and solved completely. There are many theoretical queries; some may be due to the inherent causes like the non-perturbative nature of QCD in the strong coupling limit, also due to the multi-particle production and evolution during the heavy ion collisions which increase the complexity of the phenomena. So for the purpose of understanding the phenomena, variety of theories and ideas are developed which are usually implied in the form of Monte-Carlo codes. The UrQMD model and the AMPT model are discussed here in detail. These methods are useful in modeling the nuclear collisions.
4. Prediction of density limits in tokamaks: Theory, comparison with experiment, and application to the proposed Fusion Ignition Research Experiment
International Nuclear Information System (INIS)
Stacey, Weston M.
2002-01-01
A framework for the predictive calculation of density limits in future tokamaks is proposed. Theoretical models for different density limit phenomena are summarized, and the requirements for additional models are identified. These theoretical density limit models have been incorporated into a relatively simple, but phenomenologically comprehensive, integrated numerical calculation of the core, edge, and divertor plasmas and of the recycling neutrals, in order to obtain plasma parameters needed for the evaluation of the theoretical models. A comparison of these theoretical predictions with observed density limits in current experiments is summarized. A model for the calculation of edge pedestal parameters, which is needed in order to apply the density limit predictions to future tokamaks, is summarized. An application to predict the proximity to density limits and the edge pedestal parameters of the proposed Fusion Ignition Research Experiment is described
5. Modeling Organizational Design - Applying A Formalism Model From Theoretical Physics
Directory of Open Access Journals (Sweden)
Robert Fabac
2008-06-01
Full Text Available Modern organizations are exposed to diverse external environment influences. Currently accepted concepts of organizational design take into account structure, its interaction with strategy, processes, people, etc. Organization design and planning aims to align this key organizational design variables. At the higher conceptual level, however, completely satisfactory formulation for this alignment doesn’t exist. We develop an approach originating from the application of concepts of theoretical physics to social systems. Under this approach, the allocation of organizational resources is analyzed in terms of social entropy, social free energy and social temperature. This allows us to formalize the dynamic relationship between organizational design variables. In this paper we relate this model to Galbraith's Star Model and we also suggest improvements in the procedure of the complex analytical method in organizational design.
6. The neural mediators of kindness-based meditation: a theoretical model
Directory of Open Access Journals (Sweden)
Jennifer Streiffer Mascaro
2015-02-01
Full Text Available Although kindness-based contemplative practices are increasingly employed by clinicians and cognitive researchers to enhance prosocial emotions, social cognitive skills, and well-being, and as a tool to understand the basic workings of the social mind, we lack a coherent theoretical model with which to test the mechanisms by which kindness-based meditation may alter the brain and body. Here we link contemplative accounts of compassion and loving-kindness practices with research from social cognitive neuroscience and social psychology to generate predictions about how diverse practices may alter brain structure and function and related aspects of social cognition. Contingent on the nuances of the practice, kindness-based meditation may enhance the neural systems related to faster and more basic perceptual or motor simulation processes, simulation of another’s affective body state, slower and higher-level perspective-taking, modulatory processes such as emotion regulation and self/other discrimination, and combinations thereof. This theoretical model will be discussed alongside best practices for testing such a model and potential implications and applications of future work.
7. A Theoretical Model for the Prediction of Siphon Breaking Phenomenon
International Nuclear Information System (INIS)
Bae, Youngmin; Kim, Young-In; Seo, Jae-Kwang; Kim, Keung Koo; Yoon, Juhyeon
2014-01-01
A siphon phenomenon or siphoning often refers to the movement of liquid from a higher elevation to a lower one through a tube in an inverted U shape (whose top is typically located above the liquid surface) under the action of gravity, and has been used in a variety of reallife applications such as a toilet bowl and a Greedy cup. However, liquid drainage due to siphoning sometimes needs to be prevented. For example, a siphon breaker, which is designed to limit the siphon effect by allowing the gas entrainment into a siphon line, is installed in order to maintain the pool water level above the reactor core when a loss of coolant accident (LOCA) occurs in an open-pool type research reactor. In this paper, we develop a theoretical model to predict the siphon breaking phenomenon. In this paper, a theoretical model to predict the siphon breaking phenomenon is developed. It is shown that the present model predicts well the fundamental features of the siphon breaking phenomenon and undershooting height
8. A Theoretical Model for the Prediction of Siphon Breaking Phenomenon
Energy Technology Data Exchange (ETDEWEB)
Bae, Youngmin; Kim, Young-In; Seo, Jae-Kwang; Kim, Keung Koo; Yoon, Juhyeon [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2014-10-15
A siphon phenomenon or siphoning often refers to the movement of liquid from a higher elevation to a lower one through a tube in an inverted U shape (whose top is typically located above the liquid surface) under the action of gravity, and has been used in a variety of reallife applications such as a toilet bowl and a Greedy cup. However, liquid drainage due to siphoning sometimes needs to be prevented. For example, a siphon breaker, which is designed to limit the siphon effect by allowing the gas entrainment into a siphon line, is installed in order to maintain the pool water level above the reactor core when a loss of coolant accident (LOCA) occurs in an open-pool type research reactor. In this paper, we develop a theoretical model to predict the siphon breaking phenomenon. In this paper, a theoretical model to predict the siphon breaking phenomenon is developed. It is shown that the present model predicts well the fundamental features of the siphon breaking phenomenon and undershooting height.
9. Theoretical investigation of solar humidification-dehumidification desalination system using parabolic trough concentrators
International Nuclear Information System (INIS)
Mohamed, A.M.I.; El-Minshawy, N.A.
2011-01-01
Highlights: → We evaluated the performance of sea water HDD system powered by solar PTC. → The proposed design to the expected desalination plant performance was introduced. → The collector thermal efficiency was a function of solar radiation value. → The highest fresh water productivity is found to be in the summer season. → The production time reaches 42% of the day time in the summer season. - Abstract: This paper deals with the status of solar energy as a clean and renewable energy applications in desalination. The object of this research is to theoretically investigate the principal operating parameters of a proposed desalination system based on air humidification-dehumidification principles. A parabolic trough solar collector is adapted to drive and optimize the considered desalination system. A test set-up of the desalination system was designed and a theoretical simulation model was constructed to evaluate the performance and productivity of the proposed solar humidification-dehumidification desalination system. The theoretical simulation model was developed in which the thermodynamic models of each component of the considered were set up respectively. The study showed that, parabolic trough solar collector is the suitable to drive the proposed desalination system. A comparison study had been presented to show the effect of the different parameters on the performance and the productivity of the system. The productivity of the proposed system showed also an increase with the increase of the day time till an optimum value and then decreased. The highest fresh water productivity is found to be in the summer season, when high direct solar radiation and long solar time are always expected. The production time reaches a maximum value in the summer season, which is 42% of the day.
10. A theoretical model to explain the smart technology adoption behaviors of elder consumers (Elderadopt).
Science.gov (United States)
Golant, Stephen M
2017-08-01
11. Uranium dioxide-sodium interactions. Development of a theoretical model. Fitting of this model to the experimental results
International Nuclear Information System (INIS)
Syrmalenios, Panayotis
1973-01-01
This research thesis addresses the issue of safety of fast neutron reactors, and more particularly is a contribution of the study of mechanisms of interaction between molten fuel and sodium. It aims at developing tools of prediction of consequences of three main types of accidents: local fusion of a fuel rod and contact of the fuel with the surrounding sodium, failure of an assembly due to the fusion of several rods and fuel-coolant interaction within the assembly, and fuel-coolant interaction at the level of the reactor core. The author first proposes a bibliographical analysis of experimental and theoretical studies related to this issue of interaction between a hot body and a cold liquid, and of its consequences. Then, he introduces a mathematical model and its resolution method, and reports the use of the associated code (Corfou) for the interpretation of experimental results: expulsion of cold sodium column by expansion of an overheated sodium mass, fusion of a rod by Joule effect, interaction between UO_2 molten by high frequency with liquid sodium. Finally, the author discusses a comparison between the Corfou code and other models which are being currently developed [fr
12. Theoretical and experimental studies on the daily accumulative heat gain from cool roofs
International Nuclear Information System (INIS)
Qin, Yinghong; Zhang, Mingyi; Hiller, Jacob E.
2017-01-01
Cool roofs are gaining popularity as passive building cooling techniques, but the correlation between energy savings and rooftop albedo has not been understood completely. Here we theoretically model the daily accumulative inward heat (DAIH) from building roofs with different albedo values, correlating the heat gain of the building roof to both the rooftop albedo and the incident solar radiation. According to this model, the DAIH increases linearly with the daily zenith solar radiation, but decreases linearly with the rooftop albedo. A small building cell was constructed to monitor the heat gain of the building under the conditions of non-insulated and insulated roofs. The observational DAIH is highly coincident with the theoretical one, validating the theoretical model. It was found that insulating the roof, increasing the rooftop albedo, or both options can effectively curtail the heat gain in buildings during the summer season. The proposed theoretical model would be a powerful tool for evaluating the heat gain of the buildings and estimating the energy savings potential of high-reflective cool roofs. - Highlights: • Daily accumulative heat gain from a building roof is theoretically modeled. • Daily accumulative heat gain from a building roof increases linearly with rooftop absorptivity. • Increasing the roof insulation tapers the effect of the rooftop absorptivity. • The theoretical model is powerful for estimating energy savings of reflective roofs.
13. Proposed Core Competencies and Empirical Validation Procedure in Competency Modeling: Confirmation and Classification.
Science.gov (United States)
Baczyńska, Anna K; Rowiński, Tomasz; Cybis, Natalia
2016-01-01
Competency models provide insight into key skills which are common to many positions in an organization. Moreover, there is a range of competencies that is used by many companies. Researchers have developed core competency terminology to underline their cross-organizational value. The article presents a theoretical model of core competencies consisting of two main higher-order competencies called performance and entrepreneurship. Each of them consists of three elements: the performance competency includes cooperation, organization of work and goal orientation, while entrepreneurship includes innovativeness, calculated risk-taking and pro-activeness. However, there is lack of empirical validation of competency concepts in organizations and this would seem crucial for obtaining reliable results from organizational research. We propose a two-step empirical validation procedure: (1) confirmation factor analysis, and (2) classification of employees. The sample consisted of 636 respondents (M = 44.5; SD = 15.1). Participants were administered a questionnaire developed for the study purpose. The reliability, measured by Cronbach's alpha, ranged from 0.60 to 0.83 for six scales. Next, we tested the model using a confirmatory factor analysis. The two separate, single models of performance and entrepreneurial orientations fit quite well to the data, while a complex model based on the two single concepts needs further research. In the classification of employees based on the two higher order competencies we obtained four main groups of employees. Their profiles relate to those found in the literature, including so-called niche finders and top performers. Some proposal for organizations is discussed.
14. Imitative Modeling as a Theoretical Base for Instructing Language-Disordered Children
Science.gov (United States)
Courtright, John A.; Courtright, Illene C.
1976-01-01
A modification of A. Bandura's social learning theory (imitative modeling) was employed as a theoretical base for language instruction with eight language disordered children (5 to 10 years old). (Author/SBH)
15. A theoretical approach to artificial intelligence systems in medicine.
Science.gov (United States)
Spyropoulos, B; Papagounos, G
1995-10-01
The various theoretical models of disease, the nosology which is accepted by the medical community and the prevalent logic of diagnosis determine both the medical approach as well as the development of the relevant technology including the structure and function of the A.I. systems involved. A.I. systems in medicine, in addition to the specific parameters which enable them to reach a diagnostic and/or therapeutic proposal, entail implicitly theoretical assumptions and socio-cultural attitudes which prejudice the orientation and the final outcome of the procedure. The various models -causal, probabilistic, case-based etc. -are critically examined and their ethical and methodological limitations are brought to light. The lack of a self-consistent theoretical framework in medicine, the multi-faceted character of the human organism as well as the non-explicit nature of the theoretical assumptions involved in A.I. systems restrict them to the role of decision supporting "instruments" rather than regarding them as decision making "devices". This supporting role and, especially, the important function which A.I. systems should have in the structure, the methods and the content of medical education underscore the need of further research in the theoretical aspects and the actual development of such systems.
16. Thermodynamic analysis on theoretical models of cycle combined heat exchange process: The reversible heat exchange process
International Nuclear Information System (INIS)
Zhang, Chenghu; Li, Yaping
2017-01-01
Concept of reversible heat exchange process as the theoretical model of the cycle combined heat exchanger could be useful to determine thermodynamics characteristics and the limitation values in the isolated heat exchange system. In this study, the classification of the reversible heat exchange processes is presented, and with the numerical method, medium temperature variation tendency and the useful work production and usage in the whole process are investigated by the construction and solution of the mathematical descriptions. Various values of medium inlet temperatures and heat capacity ratio are considered to analyze the effects of process parameters on the outlet temperature lift/drop. The maximum process work transferred from the Carnot cycle region to the reverse cycle region is also researched. Moreover, influence of the separating point between different sub-processes on temperature variation profile and the process work production are analyzed. In addition, the heat-exchange-enhancement-factor is defined to study the enhancement effect of the application of the idealized process in the isolated heat exchange system, and the variation degree of this factor with process parameters change is obtained. The research results of this paper can be a theoretical guidance to construct the cycle combined heat exchange process in the practical system. - Highlights: • A theoretical model of Cycle combined heat exchange process is proposed. • The classification of reversible heat exchange process are presented. • Effects of Inlet temperatures and heat capacity ratio on process are analyzed. • Process work transmission through the whole process is studied. • Heat-exchange-enhancement-factor can be a criteria to express the application effect of the idealized process.
17. Graph theoretical model of a sensorimotor connectome in zebrafish.
Science.gov (United States)
Stobb, Michael; Peterson, Joshua M; Mazzag, Borbala; Gahtan, Ethan
2012-01-01
Mapping the detailed connectivity patterns (connectomes) of neural circuits is a central goal of neuroscience. The best quantitative approach to analyzing connectome data is still unclear but graph theory has been used with success. We present a graph theoretical model of the posterior lateral line sensorimotor pathway in zebrafish. The model includes 2,616 neurons and 167,114 synaptic connections. Model neurons represent known cell types in zebrafish larvae, and connections were set stochastically following rules based on biological literature. Thus, our model is a uniquely detailed computational representation of a vertebrate connectome. The connectome has low overall connection density, with 2.45% of all possible connections, a value within the physiological range. We used graph theoretical tools to compare the zebrafish connectome graph to small-world, random and structured random graphs of the same size. For each type of graph, 100 randomly generated instantiations were considered. Degree distribution (the number of connections per neuron) varied more in the zebrafish graph than in same size graphs with less biological detail. There was high local clustering and a short average path length between nodes, implying a small-world structure similar to other neural connectomes and complex networks. The graph was found not to be scale-free, in agreement with some other neural connectomes. An experimental lesion was performed that targeted three model brain neurons, including the Mauthner neuron, known to control fast escape turns. The lesion decreased the number of short paths between sensory and motor neurons analogous to the behavioral effects of the same lesion in zebrafish. This model is expandable and can be used to organize and interpret a growing database of information on the zebrafish connectome.
18. Anticipation in stuttering: A theoretical model of the nature of stutter prediction.
Science.gov (United States)
Garcia-Barrera, Mauricio A; Davidow, Jason H
2015-06-01
The fact that some people who stutter have the ability to anticipate a stuttering moment is essential for several theories of stuttering and important for maximum effectiveness of many currently used treatment techniques. The "anticipation effect," however, is poorly understood despite much investigation into this phenomenon. In the present paper, we combine (1) behavioral evidence from the stuttering-anticipation literature, (2) speech production models, and (3) models of error detection to propose a theoretical model of anticipation. Integrating evidence from theories such as Damasio's Somatic Marker Hypothesis, Levelt's Perceptual Monitoring Theory, Guenther's The Directions Into Velocities of Articulators (DIVA) model, Postma's Covert Repair Hypothesis, among others, our central thesis is that the anticipation of a stuttering moment occurs as an outcome of the interactions between previous learning experiences (i.e., learnt associations between stuttered utterances and any self-experienced or environmental consequence) and error monitoring. Possible neurological mechanisms involved in generating conscious anticipation are also discussed, along with directions for future research. The reader will be able to: (a) describe historical theories that explain how PWS may learn to anticipate stuttering; (b) state some traditional sources of evidence of anticipation in stuttering; (c) describe how PWS may be sensitive to the detection of a stuttering; (d) state some of the neural correlates that may underlie anticipation in stuttering; and (e) describe some of the possible utilities of incorporating anticipation into stuttering interventions. Copyright © 2015 Elsevier Inc. All rights reserved.
19. A theoretical model of strong and moderate El Niño regimes
Science.gov (United States)
Takahashi, Ken; Karamperidou, Christina; Dewitte, Boris
2018-02-01
The existence of two regimes for El Niño (EN) events, moderate and strong, has been previously shown in the GFDL CM2.1 climate model and also suggested in observations. The two regimes have been proposed to originate from the nonlinearity in the Bjerknes feedback, associated with a threshold in sea surface temperature (T_c ) that needs to be exceeded for deep atmospheric convection to occur in the eastern Pacific. However, although the recent 2015-16 EN event provides a new data point consistent with the sparse strong EN regime, it is not enough to statistically reject the null hypothesis of a unimodal distribution based on observations alone. Nevertheless, we consider the possibility suggestive enough to explore it with a simple theoretical model based on the nonlinear Bjerknes feedback. In this study, we implemented this nonlinear mechanism in the recharge-discharge (RD) ENSO model and show that it is sufficient to produce the two EN regimes, i.e. a bimodal distribution in peak surface temperature (T) during EN events. The only modification introduced to the original RD model is that the net damping is suppressed when T exceeds T_c , resulting in a weak nonlinearity in the system. Due to the damping, the model is globally stable and it requires stochastic forcing to maintain the variability. The sustained low-frequency component of the stochastic forcing plays a key role for the onset of strong EN events (i.e. for T>T_c ), at least as important as the precursor positive heat content anomaly (h). High-frequency forcing helps some EN events to exceed T_c , increasing the number of strong events, but the rectification effect is small and the overall number of EN events is little affected by this forcing. Using the Fokker-Planck equation, we show how the bimodal probability distribution of EN events arises from the nonlinear Bjerknes feedback and also propose that the increase in the net feedback with increasing T is a necessary condition for bimodality in the RD
20. Recent evolution of theoretical models in inner shell photoionization
International Nuclear Information System (INIS)
Combet Farnoux, F.
1978-01-01
This paper is a brief review of various atomic theoretical models recently developed to calculate photoionization cross sections in the low energy range (from the far ultraviolet to the soft X ray region). For both inner and outer shells concerned, we emphasize the necessity to go beyond the independent particle models by means of the introduction of correlation effects in both initial and final states. The basic physical ideas of as elaborated models as Random Phase Approximation with exchange, Many Body Perturbation Theory and R matrix Theory are outlined and summarized. As examples, the results of some calculations are shown and compared with experiment
1. Theoretical proposals in bullying research: a review
Directory of Open Access Journals (Sweden)
Silvia Postigo
2013-05-01
Full Text Available Four decades of research into peer bullying have produced an extensive body of knowledge. This work attempts to provide an integrative theoretical framework, which includes the specific theories and observations. The main aim is to organize the available knowledge in order to guide the development of effective interventions. To that end, several psychological theories are described that have been used and/or adapted with the aim of understanding peer bullying. All of them, at different ecological levels and different stages of the process, may describe it in terms of the relational dynamics of power. It is concluded that research needs to take this integrative framework into account, that is to say to consider multi-causal and holistic approaches to bullying. For the intervention, regardless of the format or the target population, the empowerment of the individuals, and the social awareness of the use and abuse of personal power are suggested.
2. Theoretical Basis for the CE-QUAL-W2 River Basin Model
National Research Council Canada - National Science Library
Wells, Scott
2000-01-01
This report describes the theoretical development for CE-QUAL-W2, Version 3, that will allow the application of the model to entire water basins including multiple reservoirs, steeply sloping rivers, and estuaries...
3. Theoretical model of an evacuated tube heat pipe solar collector integrated with phase change material
International Nuclear Information System (INIS)
Naghavi, M.S.; Ong, K.S.; Badruddin, I.A.; Mehrali, M.; Silakhori, M.; Metselaar, H.S.C.
2015-01-01
The purpose of this paper is to model theoretically a solar hot water system consisting of an array of ETHPSC (evacuated tube heat pipe solar collectors) connected to a common manifold filled with phase change material and acting as a LHTES (latent heat thermal energy storage) tank. Solar energy incident on the ETHPSC is collected and stored in the LHTES tank. The stored heat is then transferred to the domestic hot water supply via a finned heat exchanger pipe placed inside the tank. A combination of mathematical algorithms is used to model a complete process of the heat absorption, storage and release modes of the proposed system. The results show that for a large range of flow rates, the thermal performance of the ETHPSC-LHTES system is higher than that of a similar system without latent heat storage. Furthermore, the analysis shows that the efficiency of the introduced system is less sensitive to the draw off water flowrate than a conventional system. Analysis indicates that this system could be applicable as a complementary part to conventional ETHPSC systems to be able to produce hot water at night time or at times with weak radiation. - Highlights: • The ETHPSC is integrated with PCM at manifold side for night hot water demands. • The thermal performance of the ETHPSC-PCM is often higher than the baseline model. • The efficiency of the proposed model is stable for different flow rates. • Using PCM as thermal storage increases reliability on the performance of the system.
4. Theoretical Expectations for the Muon's Electric Dipole Moment
CERN Document Server
Feng, J L; Shadmi, Y; Feng, Jonathan L; Matchev, Konstantin T.; Shadmi, Yael
2001-01-01
We examine the muon's electric dipole moment $\\dmu$ from a variety of theoretical perspectives. We point out that the reported deviation in the muon's g-2 can be due partially or even entirely to a new physics contribution to the muon's {\\em electric} dipole moment. In fact, the recent g-2 measurement provides the most stringent bound on $\\dmu$ to date. This ambiguity could be definitively resolved by the dedicated search for $\\dmu$ recently proposed. We then consider both model-independent and supersymmetric frameworks. Under the assumptions of scalar degeneracy, proportionality, and flavor conservation, the theoretical expectations for $\\dmu$ in supersymmetry fall just below the proposed sensitivity. However, non-degeneracy can give an order of magnitude enhancement, and lepton flavor violation can lead to $\\dmu$ of order $10^{-22}$ e cm, two orders of magnitude above the sensitivity of the $\\dmu$ experiment. We present compact expressions for leptonic dipole moments and lepton flavor violating amplitudes. ...
5. A utility-theoretic model for QALYs and willingness to pay.
Science.gov (United States)
Klose, Thomas
2003-01-01
Despite the widespread use of quality-adjusted life years (QALY) in economic evaluation studies, their utility-theoretic foundation remains unclear. A model for preferences over health, money, and time is presented in this paper. Under the usual assumptions of the original QALY-model, an additive separable presentation of the utilities in different periods exists. In contrast to the usual assumption that QALY-weights do solely depend on aspects of health-related quality of life, wealth-standardized QALY-weights might vary with the wealth level in the presented extension of the original QALY-model resulting in an inconsistent measurement of QALYs. Further assumptions are presented to make the measurement of QALYs consistent with lifetime preferences over health and money. Even under these strict assumptions, QALYs and WTP (which also can be defined in this utility-theoretic model) are not equivalent preference-based measures of the effects of health technologies on an individual level. The results suggest that the individual WTP per QALY can depend on the magnitude of the QALY-gain as well as on the disease burden, when health influences the marginal utility of wealth. Further research seems to be indicated on this structural aspect of preferences over health and wealth and to quantify its impact. Copyright 2002 John Wiley & Sons, Ltd.
6. Application of a two fluid theoretical plasma transport model on current tokamak reactor designs
International Nuclear Information System (INIS)
Ibrahim, E.; Fowler, T.K.
1987-06-01
In this work, the new theoretical transport models to TIBER II design calculations are described and the results are compared with recent experimental data in large tokamaks (TFTR, JET). Tang's method is extended to a two-fluid model treating ions and electrons separately. This allows for different ion and electron temperatures, as in recent low-density experiments in TFTR, and in the TIBER II design itself. The discussion is divided into two parts: (1) Development of the theoretical transport model and (2) calibration against experiments and application to TIBER II
7. Redesigning Orientation in an Intensive Care Unit Using 2 Theoretical Models.
Science.gov (United States)
Kozub, Elizabeth; Hibanada-Laserna, Maribel; Harget, Gwen; Ecoff, Laurie
2015-01-01
To accommodate a higher demand for critical care nurses, an orientation program in a surgical intensive care unit was revised and streamlined. Two theoretical models served as a foundation for the revision and resulted in clear clinical benchmarks for orientation progress evaluation. The purpose of the project was to integrate theoretical frameworks into practice to improve the unit orientation program. Performance improvement methods served as a framework for the revision, and outcomes were measured before and after implementation. The revised orientation program increased 1- and 2-year nurse retention and decreased turnover. Critical care knowledge increased after orientation for both the preintervention and postintervention groups. Incorporating a theoretical basis for orientation has been shown to be successful in increasing the number of nurses completing orientation and improving retention, turnover rates, and knowledge gained.
8. An Emerging Theoretical Model of Music Therapy Student Development.
Science.gov (United States)
Dvorak, Abbey L; Hernandez-Ruiz, Eugenia; Jang, Sekyung; Kim, Borin; Joseph, Megan; Wells, Kori E
2017-07-01
Music therapy students negotiate a complex relationship with music and its use in clinical work throughout their education and training. This distinct, pervasive, and evolving relationship suggests a developmental process unique to music therapy. The purpose of this grounded theory study was to create a theoretical model of music therapy students' developmental process, beginning with a study within one large Midwestern university. Participants (N = 15) were music therapy students who completed one 60-minute intensive interview, followed by a 20-minute member check meeting. Recorded interviews were transcribed, analyzed, and coded using open and axial coding. The theoretical model that emerged was a six-step sequential developmental progression that included the following themes: (a) Personal Connection, (b) Turning Point, (c) Adjusting Relationship with Music, (d) Growth and Development, (e) Evolution, and (f) Empowerment. The first three steps are linear; development continues in a cyclical process among the last three steps. As the cycle continues, music therapy students continue to grow and develop their skills, leading to increased empowerment, and more specifically, increased self-efficacy and competence. Further exploration of the model is needed to inform educators' and other key stakeholders' understanding of student needs and concerns as they progress through music therapy degree programs. © the American Music Therapy Association 2017. All rights reserved. For permissions, please e-mail: [email protected]
9. Theoretical modelling of semiconductor surfaces microscopic studies of electrons and photons
CERN Document Server
Srivastava, G P
1999-01-01
The state-of-the-art theoretical studies of ground state properties, electronic states and atomic vibrations for bulk semiconductors and their surfaces by the application of the pseudopotential method are discussed. Studies of bulk and surface phonon modes have been extended by the application of the phenomenological bond charge model. The coverage of the material, especially of the rapidly growing and technologically important topics of surface reconstruction and chemisorption, is up-to-date and beyond what is currently available in book form. Although theoretical in nature, the book provides
10. Determination of cognitive development: postnonclassical theoretical model
Directory of Open Access Journals (Sweden)
Irina N. Pogozhina
2015-09-01
Full Text Available The aim of this research is to develop a postnonclassical cognitive processes content determination model in which mental processes are considered as open selfdeveloping, self-organizing systems. Three types of systems (dynamic, statistical, developing were analysed and compared on the basis of the description of the external and internal characteristics of causation, types of causal chains (dependent, independent and their interactions, as well as the nature of the relationship between the elements of the system (hard, probabilistic, mixed. Mechanisms of open non-equilibrium nonlinear systems (dissipative and four dissipative structures emergence conditions are described. Determination models of mental and behaviour formation and development that were developed under various theoretical approaches (associationism, behaviorism, gestaltism, psychology of intelligence by Piaget, Vygotsky culture historical approach, activity approach and others are mapped on each other as the models that describe behaviour of the three system types mentioned above. The development models of the mental sphere are shown to be different by the following criteria: 1 allocated determinants amount; 2 presence or absence of the system own activity that results in selecting the model not only external, but also internal determinants; 3 types of causal chains (dependent-independent-blended; 4 types of relationships between the causal chain that ultimately determines the subsequent system determination type as decisive (a tough dynamic pattern or stochastic (statistical regularity. The continuity of postnonclassical, classical and non-classical models of mental development determination are described. The process of gradual refinement, complexity, «absorption» of the mental determination by the latter models is characterized. The human mental can be deemed as the functioning of the open developing non-equilibrium nonlinear system (dissipative. The mental sphere is
11. Heavy ion-induced lesions in DNA: A theoretical model for the initial induction of DNA strand breaks and chromatin breaks
International Nuclear Information System (INIS)
Schmidt, J.B.
1993-01-01
A theoretical model has been developed and used to calculate yields and spatial distributions of DNA strand breaks resulting from the interactions of heavy ions with chromatin in aqueous systems. The three dimensional spatial distribution of ionizing events has been modeled for charged particles as a function of charge and velocity. Chromatin has been modeled as a 30 nm diameter solenoid of nucleosomal DNA. The Monte Carlo methods used by Chatterjee et al. have been applied to DNA in a chromatin conformation. Refinements to their methods include: a combined treatment of primary and low energy (<2 keV) secondary electron interactions, an improved low energy delta ray model, and the combined simulation of direct energy deposition on the DNA and attack by diffusing hydroxyl radicals. Individual particle tracks are treated independently, which is assumed to be applicable to low fluence irradiations in which multiple particle effects are negligible. Single strand break cross section open-quotes hooksclose quotes seen in experiments at very high LET appear to be due to the collapsing radial extent of the track, as predicted in the open-quotes deep sieveclose quotes hypothesis proposed by Tobias et al. Spatial distributions of lesions produced by particles have been found to depend on chromatin structure. In the future, heavy ions may be used as a tool to probe the organization of DNA in chromatin. A Neyman A-binomial variation of the open-quotes cluster modelclose quotes for the distribution of chromatin breaks per irradiated cell has been theoretically tested. The model includes a treatment of the chromatin fragment detection technique's resolution, which places a limitation on the minimum size of fragments which can be detected. The model appears to fit some of the experimental data reasonably well. However, further experimental and theoretical refinements are desirable
12. Graph theoretical model of a sensorimotor connectome in zebrafish.
Directory of Open Access Journals (Sweden)
Michael Stobb
Full Text Available Mapping the detailed connectivity patterns (connectomes of neural circuits is a central goal of neuroscience. The best quantitative approach to analyzing connectome data is still unclear but graph theory has been used with success. We present a graph theoretical model of the posterior lateral line sensorimotor pathway in zebrafish. The model includes 2,616 neurons and 167,114 synaptic connections. Model neurons represent known cell types in zebrafish larvae, and connections were set stochastically following rules based on biological literature. Thus, our model is a uniquely detailed computational representation of a vertebrate connectome. The connectome has low overall connection density, with 2.45% of all possible connections, a value within the physiological range. We used graph theoretical tools to compare the zebrafish connectome graph to small-world, random and structured random graphs of the same size. For each type of graph, 100 randomly generated instantiations were considered. Degree distribution (the number of connections per neuron varied more in the zebrafish graph than in same size graphs with less biological detail. There was high local clustering and a short average path length between nodes, implying a small-world structure similar to other neural connectomes and complex networks. The graph was found not to be scale-free, in agreement with some other neural connectomes. An experimental lesion was performed that targeted three model brain neurons, including the Mauthner neuron, known to control fast escape turns. The lesion decreased the number of short paths between sensory and motor neurons analogous to the behavioral effects of the same lesion in zebrafish. This model is expandable and can be used to organize and interpret a growing database of information on the zebrafish connectome.
13. A theoretical model for prediction of deposition efficiency in cold spraying
International Nuclear Information System (INIS)
Li Changjiu; Li Wenya; Wang Yuyue; Yang Guanjun; Fukanuma, H.
2005-01-01
The deposition behavior of a spray particle stream with a particle size distribution was theoretically examined for cold spraying in terms of deposition efficiency as a function of particle parameters and spray angle. The theoretical relation was established between the deposition efficiency and spray angle. The experiments were conducted by measuring deposition efficiency at different driving gas conditions and different spray angles using gas-atomized copper powder. It was found that the theoretically estimated results agreed reasonably well with the experimental ones. Based on the theoretical model and experimental results, it was revealed that the distribution of particle velocity resulting from particle size distribution influences significantly the deposition efficiency in cold spraying. It was necessary for the majority of particles to achieve a velocity higher than the critical velocity in order to improve the deposition efficiency. The normal component of particle velocity contributed to the deposition of the particle under the off-nomal spray condition. The deposition efficiency of sprayed particles decreased owing to the decrease of the normal velocity component as spray was performed at off-normal angle
14. A general theoretical framework for decoherence in open and closed systems
International Nuclear Information System (INIS)
Castagnino, Mario; Fortin, Sebastian; Laura, Roberto; Lombardi, Olimpia
2008-01-01
A general theoretical framework for decoherence is proposed, which encompasses formalisms originally devised to deal just with open or closed systems. The conditions for decoherence are clearly stated and the relaxation and decoherence times are compared. Finally, the spin-bath model is developed in detail from the new perspective
15. Exploring Environmental Factors in Nursing Workplaces That Promote Psychological Resilience: Constructing a Unified Theoretical Model.
Science.gov (United States)
Cusack, Lynette; Smith, Morgan; Hegney, Desley; Rees, Clare S; Breen, Lauren J; Witt, Regina R; Rogers, Cath; Williams, Allison; Cross, Wendy; Cheung, Kin
2016-01-01
Building nurses' resilience to complex and stressful practice environments is necessary to keep skilled nurses in the workplace and ensuring safe patient care. A unified theoretical framework titled Health Services Workplace Environmental Resilience Model (HSWERM), is presented to explain the environmental factors in the workplace that promote nurses' resilience. The framework builds on a previously-published theoretical model of individual resilience, which identified the key constructs of psychological resilience as self-efficacy, coping and mindfulness, but did not examine environmental factors in the workplace that promote nurses' resilience. This unified theoretical framework was developed using a literary synthesis drawing on data from international studies and literature reviews on the nursing workforce in hospitals. The most frequent workplace environmental factors were identified, extracted and clustered in alignment with key constructs for psychological resilience. Six major organizational concepts emerged that related to a positive resilience-building workplace and formed the foundation of the theoretical model. Three concepts related to nursing staff support (professional, practice, personal) and three related to nursing staff development (professional, practice, personal) within the workplace environment. The unified theoretical model incorporates these concepts within the workplace context, linking to the nurse, and then impacting on personal resilience and workplace outcomes, and its use has the potential to increase staff retention and quality of patient care.
16. Supersymmetric field-theoretic models on a supermanifold
Energy Technology Data Exchange (ETDEWEB)
Franco, D.H.T. [Centro de Estudos de Fisica Teorica, Belo Horizonte, MG (Brazil); Polito, Caio M.M. [Centro Brasileiro de Pesquisas Fisicas (CBPF), Rio de Janeiro, RJ (Brazil). Coordenacao de Teoria de Campos e Particulas
2003-04-01
We propose an extension of some structural aspects that have successfully been applied in the development of the theory of quantum fields propagating on a general spacetime manifold so as to include superfield models on a super manifold. (author)
17. A THEORETICAL MODEL OF SUPPORTING OPEN SOURCE FRONT END INNOVATION THROUGH IDEA MANAGEMENT
DEFF Research Database (Denmark)
Aagaard, Annabeth
2013-01-01
to overcome these various challenges companies are looking for new models to support FEI. This theoretical paper explores in what way idea management may be applied as a tool in facilitation of front end innovation and how this facilitation may be captured in a conceptual model. First, I show through...... a literature study, how idea management and front end innovation are related and how they may support each other. Secondly, I present a theoretical model of how idea management may be applied in support of the open source front end of new product innovations. Thirdly, I present different venues of further...... exploration of active facilitation of open source front end innovation through idea management....
18. Game-theoretic modeling of curtailment rules and network investments with distributed generation
International Nuclear Information System (INIS)
Andoni, Merlinda; Robu, Valentin; Früh, Wolf-Gerrit; Flynn, David
2017-01-01
Highlights: •Comparative study on curtailment rules and their effects on RES profitability. •Proposal of novel fair curtailment rule which minimises generators’ disruption. •Modeling of private network upgrade as leader-follower (Stackelberg) game. •New model incorporating stochastic generation and variable demand. •New methodology for setting transmission charges in private network upgrade. -- Abstract: Renewable energy has achieved high penetration rates in many areas, leading to curtailment, especially if existing network infrastructure is insufficient and energy generated cannot be exported. In this context, Distribution Network Operators (DNOs) face a significant knowledge gap about how to implement curtailment rules that achieve desired operational objectives, but at the same time minimise disruption and economic losses for renewable generators. In this work, we study the properties of several curtailment rules widely used in UK renewable energy projects, and their effect on the viability of renewable generation investment. Moreover, we propose a new curtailment rule which guarantees fair allocation of curtailment amongst all generators with minimal disruption. Another key knowledge gap faced by DNOs is how to incentivise private network upgrades, especially in settings where several generators can use the same line against the payment of a transmission fee. In this work, we provide a solution to this problem by using tools from algorithmic game theory. Specifically, this setting can be modelled as a Stackelberg game between the private transmission line investor and local renewable generators, who are required to pay a transmission fee to access the line. We provide a method for computing the equilibrium of this game, using a model that captures the stochastic nature of renewable energy generation and demand. Finally, we use the practical setting of a grid reinforcement project from the UK and a large dataset of wind speed measurements and demand
19. Meta-Theoretical Contributions to the Constitution of a Model-Based Didactics of Science
Science.gov (United States)
Ariza, Yefrin; Lorenzano, Pablo; Adúriz-Bravo, Agustín
2016-10-01
There is nowadays consensus in the community of didactics of science (i.e. science education understood as an academic discipline) regarding the need to include the philosophy of science in didactical research, science teacher education, curriculum design, and the practice of science education in all educational levels. Some authors have identified an ever-increasing use of the concept of theoretical model', stemming from the so-called semantic view of scientific theories. However, it can be recognised that, in didactics of science, there are over-simplified transpositions of the idea of model (and of other meta-theoretical ideas). In this sense, contemporary philosophy of science is often blurred or distorted in the science education literature. In this paper, we address the discussion around some meta-theoretical concepts that are introduced into didactics of science due to their perceived educational value. We argue for the existence of a semantic family', and we characterise four different versions of semantic views existing within the family. In particular, we seek to contribute to establishing a model-based didactics of science mainly supported in this semantic family.
20. Theoretical model for plasma expansion generated by hypervelocity impact
International Nuclear Information System (INIS)
Ju, Yuanyuan; Zhang, Qingming; Zhang, Dongjiang; Long, Renrong; Chen, Li; Huang, Fenglei; Gong, Zizheng
2014-01-01
The hypervelocity impact experiments of spherical LY12 aluminum projectile diameter of 6.4 mm on LY12 aluminum target thickness of 23 mm have been conducted using a two-stage light gas gun. The impact velocity of the projectile is 5.2, 5.7, and 6.3 km/s, respectively. The experimental results show that the plasma phase transition appears under the current experiment conditions, and the plasma expansion consists of accumulation, equilibrium, and attenuation. The plasma characteristic parameters decrease as the plasma expands outward and are proportional with the third power of the impact velocity, i.e., (T e , n e ) ∝ v p 3 . Based on the experimental results, a theoretical model on the plasma expansion is developed and the theoretical results are consistent with the experimental data
1. Theoretical model for plasma expansion generated by hypervelocity impact
Energy Technology Data Exchange (ETDEWEB)
Ju, Yuanyuan; Zhang, Qingming, E-mail: [email protected]; Zhang, Dongjiang; Long, Renrong; Chen, Li; Huang, Fenglei [State Key Laboratory of Explosion Science and Technology, Beijing Institute of Technology, Beijing 100081 (China); Gong, Zizheng [National Key Laboratory of Science and Technology on Reliability and Environment Engineering, Beijing Institute of Spacecraft Environment Engineering, Beijing 100094 (China)
2014-09-15
The hypervelocity impact experiments of spherical LY12 aluminum projectile diameter of 6.4 mm on LY12 aluminum target thickness of 23 mm have been conducted using a two-stage light gas gun. The impact velocity of the projectile is 5.2, 5.7, and 6.3 km/s, respectively. The experimental results show that the plasma phase transition appears under the current experiment conditions, and the plasma expansion consists of accumulation, equilibrium, and attenuation. The plasma characteristic parameters decrease as the plasma expands outward and are proportional with the third power of the impact velocity, i.e., (T{sub e}, n{sub e}) ∝ v{sub p}{sup 3}. Based on the experimental results, a theoretical model on the plasma expansion is developed and the theoretical results are consistent with the experimental data.
2. What Factors Lead Companies to Adopt Social Media in their processes: Proposal and Test of a Measurement Model
Directory of Open Access Journals (Sweden)
Jozé Braz de Araújo
2016-01-01
Full Text Available The objective of this study was to understand which factors lead companies to use social media to achieve results. For that, a theoretical model was proposed and tested. Data was collected using a survey of 237 companies. In the analysis we analysis used the structural eq uation modeling technique. The results show that the social media relative advantage and its observability were important factors to social media organizational adoption. We also found that big companies with more formalized organizational structure (OS t end to adopt social media more than small ones with no formal OS. The companies studied showed strong organizational disposition for innovation adoption.
3. Control Theoretic Modeling and Generated Flow Patterns of a Fish-Tail Robot
Science.gov (United States)
Massey, Brian; Morgansen, Kristi; Dabiri, Dana
2003-11-01
Many real-world engineering problems involve understanding and manipulating fluid flows. One of the challenges to further progress in the area of active flow control is the lack of appropriate models that are amenable to control-theoretic studies and algorithm design and also incorporate reasonably realistic fluid dynamic effects. We focus here on modeling and model-verification of bio-inspired actuators (fish-fin type structures) used to control fluid dynamic artifacts that will affect speed, agility, and stealth of Underwater Autonomous Vehicles (UAVs). Vehicles using fish-tail type systems are more maneuverable, can turn in much shorter and more constrained spaces, have lower drag, are quieter and potentially more efficient than those using propellers. We will present control-theoretic models for a simple prototype coupled fluid and mechanical actuator where fluid effects are crudely modeled by assuming only lift, drag, and added mass, while neglecting boundary effects. These models will be tested with different control input parameters on an experimental fish-tail robot with the resulting flow captured with DPIV. Relations between the model, the control function choices, the obtained thrust and drag, and the corresponding flow patterns will be presented and discussed.
4. Theoretical Modeling of Magnesium Ion Imprints in the Raman Scattering of Water
Czech Academy of Sciences Publication Activity Database
Kapitán, J.; Dračínský, Martin; Kaminský, Jakub; Benda, Ladislav; Bouř, Petr
2010-01-01
Roč. 114, č. 10 (2010), s. 3574-3582 ISSN 1520-6106 R&D Projects: GA ČR GA202/07/0732; GA AV ČR IAA400550702; GA AV ČR IAA400550701; GA ČR GPP208/10/P356 Grant - others:AV ČR(CZ) M200550902 Institutional research plan: CEZ:AV0Z40550506 Keywords : Raman spectroscopy * theoretical modelling * CPMD Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 3.603, year: 2010
5. Exploring complex dynamics in multi agent-based intelligent systems: Theoretical and experimental approaches using the Multi Agent-based Behavioral Economic Landscape (MABEL) model
Science.gov (United States)
Alexandridis, Konstantinos T.
This dissertation adopts a holistic and detailed approach to modeling spatially explicit agent-based artificial intelligent systems, using the Multi Agent-based Behavioral Economic Landscape (MABEL) model. The research questions that addresses stem from the need to understand and analyze the real-world patterns and dynamics of land use change from a coupled human-environmental systems perspective. Describes the systemic, mathematical, statistical, socio-economic and spatial dynamics of the MABEL modeling framework, and provides a wide array of cross-disciplinary modeling applications within the research, decision-making and policy domains. Establishes the symbolic properties of the MABEL model as a Markov decision process, analyzes the decision-theoretic utility and optimization attributes of agents towards comprising statistically and spatially optimal policies and actions, and explores the probabilogic character of the agents' decision-making and inference mechanisms via the use of Bayesian belief and decision networks. Develops and describes a Monte Carlo methodology for experimental replications of agent's decisions regarding complex spatial parcel acquisition and learning. Recognizes the gap on spatially-explicit accuracy assessment techniques for complex spatial models, and proposes an ensemble of statistical tools designed to address this problem. Advanced information assessment techniques such as the Receiver-Operator Characteristic curve, the impurity entropy and Gini functions, and the Bayesian classification functions are proposed. The theoretical foundation for modular Bayesian inference in spatially-explicit multi-agent artificial intelligent systems, and the ensembles of cognitive and scenario assessment modular tools build for the MABEL model are provided. Emphasizes the modularity and robustness as valuable qualitative modeling attributes, and examines the role of robust intelligent modeling as a tool for improving policy-decisions related to land
6. Improving the theoretical foundations of the multi-mode transport model
International Nuclear Information System (INIS)
Bateman, G.; Kritz, A.H.; Redd, A.J.; Erba, M.; Rewoldt, G.; Weiland, J.; Strand, P.; Kinsey, J.E.; Scott, B.
1999-01-01
A new version of the Multi-Mode transport model, designated MMM98, is being developed with improved theoretical foundations, in an ongoing effort to predict the temperature and density profiles in tokamaks. For transport near the edge of the plasma, MMM98 uses a new model based on 3-D nonlinear simulations of drift Alfven mode turbulence. Flow shear stabilization effects have been added to the Weiland model for Ion Temperature Gradient and Trapped Electron Modes, which usually dominates in most of the plasma core. For transport near the magnetic axis at high beta, a new kinetic ballooning mode model has been constructed based on FULL stability code computations. (author)
7. Improving the theoretical foundations of the multi-mode transport model
International Nuclear Information System (INIS)
Bateman, G.; Kritz, A.H.; Redd, A.J.; Erba, M.; Rewoldt, G.; Weiland, J.; Strand, P.; Kinsey, J.E.; Scott, B.
2001-01-01
A new version of the Multi-Mode transport model, designated MMM98, is being developed with improved theoretical foundations, in an ongoing effort to predict the temperature and density profiles in tokamaks. For transport near the edge of the plasma, MMM98 uses a new model based on 3-D nonlinear simulations of drift Alfven mode turbulence. Flow shear stabilization effects have been added to the Weiland model for Ion Temperature Gradient and Trapped Electron Modes, which usually dominates in most of the plasma core. For transport near the magnetic axis at high beta, a new kinetic ballooning mode model has been constructed based on FULL stability code computations. (author)
8. Theoretical modeling and experimental analyses of laminated wood composite poles
Science.gov (United States)
Cheng Piao; Todd F. Shupe; Vijaya Gopu; Chung Y. Hse
2005-01-01
Wood laminated composite poles consist of trapezoid-shaped wood strips bonded with synthetic resin. The thick-walled hollow poles had adequate strength and stiffness properties and were a promising substitute for solid wood poles. It was necessary to develop theoretical models to facilitate the manufacture and future installation and maintenance of this novel...
9. Organizational Learning and Product Design Management: Towards a Theoretical Model.
Science.gov (United States)
Chiva-Gomez, Ricardo; Camison-Zornoza, Cesar; Lapiedra-Alcami, Rafael
2003-01-01
Case studies of four Spanish ceramics companies were used to construct a theoretical model of 14 factors essential to organizational learning. One set of factors is related to the conceptual-analytical phase of the product design process and the other to the creative-technical phase. All factors contributed to efficient product design management…
10. Consumers’ Acceptance and Use of Information and Communications Technology: A UTAUT and Flow Based Theoretical Model
Directory of Open Access Journals (Sweden)
Saleh Alwahaishi
2013-03-01
Full Text Available The world has changed a lot in the past years. The rapid advances in technology and the changing of the communication channels have changed the way people work and, for many, where do they work from. The Internet and mobile technology, the two most dynamic technological forces in modern information and communications technology (ICT are converging into one ubiquitous mobile Internet service, which will change our way of both doing business and dealing with our daily routine activities. As the use of ICT expands globally, there is need for further research into cultural aspects and implications of ICT. The acceptance of Information Technology (IT has become a fundamental part of the research plan for most organizations (Igbaria 1993. In IT research, numerous theories are used to understand users’ adoption of new technologies. Various models were developed including the Technology Acceptance Model, Theory of Reasoned Action, Theory of Planned Behavior, and recently, the Unified Theory of Acceptance and Use of Technology. Each of these models has sought to identify the factors which influence a citizen’s intention or actual use of information technology. Drawing on the UTAUT model and Flow Theory, this research composes a new hybrid theoretical framework to identify the factors affecting the acceptance and use of Mobile Internet -as an ICT application- in a consumer context. The proposed model incorporates eight constructs: Performance Expectancy, Effort Expectancy, Facilitating Conditions, Social Influences, Perceived Value, Perceived Playfulness, Attention Focus, and Behavioral intention. Data collected online from 238 respondents in Saudi Arabia were tested against the research model, using the structural equation modeling approach. The proposed model was mostly supported by the empirical data. The findings of this study provide several crucial implications for ICT and, in particular, mobile Internet service practitioners and researchers
11. Theoretical Models of Deliberative Democracy: A Critical Analysis
Directory of Open Access Journals (Sweden)
Tutui Viorel
2015-07-01
Full Text Available Abstract: My paper focuses on presenting and analyzing some of the most important theoretical models of deliberative democracy and to emphasize their limits. Firstly, I will mention James Fishkin‟s account of deliberative democracy and its relations with other democratic models. He differentiates between four democratic theories: competitive democracy, elite deliberation, participatory democracy and deliberative democracy. Each of these theories makes an explicit commitment to two of the following four “principles”: political equality, participation, deliberation, nontyranny. Deliberative democracy is committed to political equality and deliberation. Secondly, I will present Philip Pettit‟s view concerning the main constraints of deliberative democracy: the inclusion constraint, the judgmental constraint and the dialogical constraint. Thirdly, I will refer to Amy Gutmann and Dennis Thompson‟s conception regarding the “requirements” or characteristics of deliberative democracy: the reason-giving requirement, the accessibility of reasons, the binding character of the decisions and the dynamic nature of the deliberative process. Finally, I will discuss Joshua Cohen‟s “ideal deliberative procedure” which has the following features: it is free, reasoned, the parties are substantively equal and the procedure aims to arrive at rationally motivated consensus. After presenting these models I will provide a critical analysis of each one of them with the purpose of revealing their virtues and limits. I will make some suggestions in order to combine the virtues of these models, to transcend their limitations and to offer a more systematical account of deliberative democracy. In the next four sections I will take into consideration four main strategies for combining political and epistemic values (“optimistic”, “deliberative”, “democratic” and “pragmatic” and the main objections they have to face. In the concluding section
12. Evolutionary adaptations: theoretical and practical implications for visual ergonomics.
Science.gov (United States)
Fostervold, Knut Inge; Watten, Reidulf G; Volden, Frode
2014-01-01
The literature discussing visual ergonomics often mention that human vision is adapted to light emitted by the sun. However, theoretical and practical implications of this viewpoint is seldom discussed or taken into account. The paper discusses some of the main theoretical implications of an evolutionary approach to visual ergonomics. Based on interactional theory and ideas from ecological psychology an evolutionary stress model is proposed as a theoretical framework for future research in ergonomics and human factors. The model stresses the importance of developing work environments that fits with our evolutionary adaptations. In accordance with evolutionary psychology, the environment of evolutionary adaptedness (EEA) and evolutionarily-novel environments (EN) are used as key concepts. Using work with visual display units (VDU) as an example, the paper discusses how this knowledge can be utilized in an ergonomic analysis of risk factors in the work environment. The paper emphasises the importance of incorporating evolutionary theory in the field of ergonomics. Further, the paper encourages scientific practices that further our understanding of any phenomena beyond the borders of traditional proximal explanations.
13. Regional differences of outpatient physician supply as a theoretical economic and empirical generalized linear model.
Science.gov (United States)
Scholz, Stefan; Graf von der Schulenburg, Johann-Matthias; Greiner, Wolfgang
2015-11-17
Regional differences in physician supply can be found in many health care systems, regardless of their organizational and financial structure. A theoretical model is developed for the physicians' decision on office allocation, covering demand-side factors and a consumption time function. To test the propositions following the theoretical model, generalized linear models were estimated to explain differences in 412 German districts. Various factors found in the literature were included to control for physicians' regional preferences. Evidence in favor of the first three propositions of the theoretical model could be found. Specialists show a stronger association to higher populated districts than GPs. Although indicators for regional preferences are significantly correlated with physician density, their coefficients are not as high as population density. If regional disparities should be addressed by political actions, the focus should be to counteract those parameters representing physicians' preferences in over- and undersupplied regions.
14. 4. Valorizations of Theoretical Models of Giftedness and Talent in Defining of Artistic Talent
OpenAIRE
Anghel Ionica Ona
2016-01-01
Artistic talent has been defined in various contexts and registers a variety of meanings, more or less operational. From the perspective of pedagogical intervention, it is imperative understanding artistic talent trough the theoretical models of giftedness and talent. So, the aim of the study is to realize a review of the most popular of the theoretical models of giftedness and talent, with identification of the place of artistic talent and the new meanings that artistic talent has in each on...
15. Mathematical model and computer programme for theoretical calculation of calibration curves of neutron soil moisture probes with highly effective counters
International Nuclear Information System (INIS)
Kolev, N.A.
1981-07-01
A mathematical model based on the three group theory for theoretical calculation by means of computer of the calibration curves of neutron soil moisture probes with highly effective counters, is described. Methods for experimental correction of the mathematical model are discussed and proposed. The computer programme described allows the calibration of neutron probes with high or low effective counters, and central or end geometry, with or without linearizing of the calibration curve. The use of two calculation variants and printing of output data gives the possibility not only for calibration, but also for other researches. The separate data inputs for soil and probe temperature allow the temperature influence analysis. The computer programme and calculation examples are given. (author)
16. A Primer on Theoretically Exploring the Field of Business Model Innovation
OpenAIRE
Gassmann, Oliver; Frankenberger, Karolin; Sauer, Roman
2017-01-01
Companies like Amazon, Uber, and Skype have become business strategy icons and the way they transformed industries can hardly be explained with classic strategy research. This article explores the topic of Business Model Innovation, which has become the cornerstone for the competitiveness of many successful firms, from a theoretical perspective. It gives an overview and introduction to the book "Exploring the Field of Business Model Innovation".
17. Mechanical Behaviour of 3D Multi-layer Braided Composites: Experimental, Numerical and Theoretical Study
Science.gov (United States)
Deng, Jian; Zhou, Guangming; Ji, Le; Wang, Xiaopei
2017-12-01
Mechanical properties and failure mechanisms of a newly designed 3D multi-layer braided composites are evaluated by experimental, numerical and theoretical studies. The microstructure of the composites is introduced. The unit cell technique is employed to address the periodic arrangement of the structure. The volume averaging method is used in theoretical solutions while FEM with reasonable periodic boundary conditions and meshing technique in numerical simulations. Experimental studies are also conducted to verify the feasibility of the proposed models. Predicted elastic properties agree well with the experimental data, indicating the feasibility of the proposed models. Numerical evaluation is more accurate than theoretical assessment. Deformations and stress distributions of the unit cell under tension shows displacement and traction continuity, guaranteeing the rationality of the applied periodic boundary conditions. Although compression and tension modulus are close, the compressive strength only reaches 70% of the tension strength. This indicates that the composites can be weakened in compressive loading. Additionally, by analysing the micrograph of fracture faces and strain-stress curves, a brittle failure mechanism is observed both in composites under tension and compression.
18. Evaluating the Theoretic Adequacy and Applied Potential of Computational Models of the Spacing Effect.
Science.gov (United States)
Walsh, Matthew M; Gluck, Kevin A; Gunzelmann, Glenn; Jastrzembski, Tiffany; Krusmark, Michael
2018-03-02
The spacing effect is among the most widely replicated empirical phenomena in the learning sciences, and its relevance to education and training is readily apparent. Yet successful applications of spacing effect research to education and training is rare. Computational modeling can provide the crucial link between a century of accumulated experimental data on the spacing effect and the emerging interest in using that research to enable adaptive instruction. In this paper, we review relevant literature and identify 10 criteria for rigorously evaluating computational models of the spacing effect. Five relate to evaluating the theoretic adequacy of a model, and five relate to evaluating its application potential. We use these criteria to evaluate a novel computational model of the spacing effect called the Predictive Performance Equation (PPE). Predictive Performance Equation combines elements of earlier models of learning and memory including the General Performance Equation, Adaptive Control of Thought-Rational, and the New Theory of Disuse, giving rise to a novel computational account of the spacing effect that performs favorably across the complete sets of theoretic and applied criteria. We implemented two other previously published computational models of the spacing effect and compare them to PPE using the theoretic and applied criteria as guides. © 2018 Cognitive Science Society, Inc.
19. Improving statistical reasoning theoretical models and practical implications
CERN Document Server
Sedlmeier, Peter
1999-01-01
This book focuses on how statistical reasoning works and on training programs that can exploit people''s natural cognitive capabilities to improve their statistical reasoning. Training programs that take into account findings from evolutionary psychology and instructional theory are shown to have substantially larger effects that are more stable over time than previous training regimens. The theoretical implications are traced in a neural network model of human performance on statistical reasoning problems. This book apppeals to judgment and decision making researchers and other cognitive scientists, as well as to teachers of statistics and probabilistic reasoning.
20. A review of theoretical ideas on the EMC effect
International Nuclear Information System (INIS)
Krzywicki, A.
1985-01-01
This paper is a shortened version of a review presented at a nuclear physics conference held in Paris in July 1985. The author concentrates on a sample of representative theoretical ideas. The old dogma, claiming the identity of the structure functions of bound and free nucleons respectively, has been abandoned. Contemplating the plethora of models proposed to explain the EMC effect, the author realizes how unfounded the old dogma has been. However, considerable experimental uncertainties persist (low x region, sea vs. valence, gluon distribution). Also, the predictive power of theoretical models is poor. The author does not see any contradiction between the data and the calculations based on conventional nuclear theory. In this sense, the future theory of the EMC effect will perhaps resemble the rescaling models but, of course, with an improved justification. In any case, it is both important and interesting to achieve a better understanding of the role of QCD degrees of freedom in nuclei
1. Predicting Freshman Persistence and Voluntary Dropout Decisions from a Theoretical Model.
Science.gov (United States)
Pascarella, Ernest T.; Terenzini, Patrick T.
1980-01-01
A five-scale instrument developed from a theoretical model of college attrition correctly identified the persistence/voluntary withdrawal decisions of 78.5 percent of 773 freshmen in a large, residential university. Findings showed that student relationships with faculty were particularly important. (Author/PHR)
2. Exploring Environmental Factors in Nursing Workplaces That Promote Psychological Resilience: Constructing a Unified Theoretical Model
OpenAIRE
Cusack, Lynette; Smith, Morgan; Hegney, Desley; Rees, Clare S.; Breen, Lauren J.; Witt, Regina R.; Rogers, Cath; Williams, Allison; Cross, Wendy; Cheung, Kin
2016-01-01
Building nurses' resilience to complex and stressful practice environments is necessary to keep skilled nurses in the workplace and ensuring safe patient care. A unified theoretical framework titled Health Services Workplace Environmental Resilience Model (HSWERM), is presented to explain the environmental factors in the workplace that promote nurses' resilience. The framework builds on a previously-published theoretical model of individual resilience, which identified the key constructs of p...
3. Correlates of emotional congruence with children in sexual offenders against children: a test of theoretical models in an incarcerated sample.
Science.gov (United States)
McPhail, Ian V; Hermann, Chantal A; Fernandez, Yolanda M
2014-02-01
Emotional congruence with children is a psychological construct theoretically involved in the etiology and maintenance of sexual offending against children. Research conducted to date has not examined the relationship between emotional congruence with children and other psychological meaningful risk factors for sexual offending against children. The current study derived potential correlates of emotional congruence with children from the published literature and proposed three models of emotional congruence with children that contain relatively unique sets of correlates: the blockage, sexual deviance, and psychological immaturity models. Using Area under the Curve analysis, we assessed the relationship between emotional congruence with children and offense characteristics, victim demographics, and psychologically meaningful risk factors in a sample of incarcerated sexual offenders against children (n=221). The sexual deviance model received the most support: emotional congruence with children was significantly associated with deviant sexual interests, sexual self-regulation problems, and cognition that condones and supports child molestation. The blockage model received partial support, and the immaturity model received the least support. Based on the results, we propose a set of further predictions regarding the relationships between emotional congruence with children and other psychologically meaningful risk factors to be examined in future research. Copyright © 2013 Elsevier Ltd. All rights reserved.
4. Bioactivity of Isoflavones: Assessment through a Theoretical Model as a Way to Obtain a “Theoretical Efficacy Related to Estradiol (TERE)”
Science.gov (United States)
Campos, Maria da Graça R.; Matos, Miguel Pires
2010-01-01
The increase of human life span will have profound implications in Public Health in decades to come. By 2030, there will be an estimated 1.2 billion women in post-menopause. Hormone Replacement Therapy with synthetic hormones is still full of risks and according to the latest developments, should be used for the shortest time possible. Searching for alternative drugs is inevitable in this scenario and science must provide physicians with other substances that can be used to treat the same symptoms with less side effects. Systematic research carried out on this field of study is focusing now on isoflavones but the randomised controlled trials and reviews of meta-analysis concerning post-menopause therapy, that could have an important impact on human health, are very controversial. The aim of the present work was to establish a theoretical calculation suitable for use as a way to estimate the “Theoretical Efficacy (TE)” of a mixture with different bioactive compounds as a way to obtain a “Theoretical Efficacy Related to Estradiol (TERE)”. The theoretical calculation that we propose in this paper integrates different knowledge about this subject and sets methodological boundaries that can be used to analyse already published data. The outcome should set some consensus for new clinical trials using isoflavones (isolated or included in mixtures) that will be evaluated to assess their therapeutically activity. This theoretical method for evaluation of a possible efficacy could probably also be applied to other herbal drug extracts when a synergistic or contradictory bio-effect is not verified. In this way, it we may contribute to enlighten and to the development of new therapeutic approaches. PMID:20386649
5. Obesity in sub-Saharan Africa: development of an ecological theoretical framework.
Science.gov (United States)
Scott, Alison; Ejikeme, Chinwe Stella; Clottey, Emmanuel Nii; Thomas, Joy Goens
2013-03-01
The prevalence of overweight and obesity is increasing in sub-Saharan Africa (SSA). There is a need for theoretical frameworks to catalyze further research and to inform the development of multi-level, context-appropriate interventions. In this commentary, we propose a preliminary ecological theoretical framework to conceptualize factors that contribute to increases in overweight and obesity in SSA. The framework is based on a Causality Continuum model [Coreil et al. Social and Behavioral Foundations of Public Health. Sage Publications, Thousand Oaks] that considers distant, intermediate and proximate influences. The influences incorporated in the model include globalization and urbanization as distant factors; occupation, social relationships, built environment and cultural perceptions of weight as intermediate factors and caloric intake, physical inactivity and genetics as proximate factors. The model illustrates the interaction of factors along a continuum, from the individual to the global marketplace, in shaping trends in overweight and obesity in SSA. The framework will be presented, each influence elucidated and implications for research and intervention development discussed. There is a tremendous need for further research on obesity in SSA. An improved evidence base will serve to validate and develop the proposed framework further.
6. The Role of Interpersonal Relations in Healthcare Team Communication and Patient Safety: A Proposed Model of Interpersonal Process in Teamwork.
Science.gov (United States)
Lee, Charlotte Tsz-Sum; Doran, Diane Marie
2017-06-01
Patient safety is compromised by medical errors and adverse events related to miscommunications among healthcare providers. Communication among healthcare providers is affected by human factors, such as interpersonal relations. Yet, discussions of interpersonal relations and communication are lacking in healthcare team literature. This paper proposes a theoretical framework that explains how interpersonal relations among healthcare team members affect communication and team performance, such as patient safety. We synthesized studies from health and social science disciplines to construct a theoretical framework that explicates the links among these constructs. From our synthesis, we identified two relevant theories: framework on interpersonal processes based on social relation model and the theory of relational coordination. The former involves three steps: perception, evaluation, and feedback; and the latter captures relational communicative behavior. We propose that manifestations of provider relations are embedded in the third step of the framework on interpersonal processes: feedback. Thus, varying team-member relationships lead to varying collaborative behavior, which affects patient-safety outcomes via a change in team communication. The proposed framework offers new perspectives for understanding how workplace relations affect healthcare team performance. The framework can be used by nurses, administrators, and educators to improve patient safety, team communication, or to resolve conflicts.
7. Ecological Dynamics as a Theoretical Framework for Development of Sustainable Behaviours towards the Environment
Science.gov (United States)
Brymer, Eric; Davids, Keith
2013-01-01
This paper proposes how the theoretical framework of ecological dynamics can provide an influential model of the learner and the learning process to pre-empt effective behaviour changes. Here we argue that ecological dynamics supports a well-established model of the learner ideally suited to the environmental education context because of its…
8. Proposed reliability cost model
Science.gov (United States)
Delionback, L. M.
1973-01-01
The research investigations which were involved in the study include: cost analysis/allocation, reliability and product assurance, forecasting methodology, systems analysis, and model-building. This is a classic example of an interdisciplinary problem, since the model-building requirements include the need for understanding and communication between technical disciplines on one hand, and the financial/accounting skill categories on the other. The systems approach is utilized within this context to establish a clearer and more objective relationship between reliability assurance and the subcategories (or subelements) that provide, or reenforce, the reliability assurance for a system. Subcategories are further subdivided as illustrated by a tree diagram. The reliability assurance elements can be seen to be potential alternative strategies, or approaches, depending on the specific goals/objectives of the trade studies. The scope was limited to the establishment of a proposed reliability cost-model format. The model format/approach is dependent upon the use of a series of subsystem-oriented CER's and sometimes possible CTR's, in devising a suitable cost-effective policy.
9. Theoretical prediction method of subcooled flow boiling CHF
Energy Technology Data Exchange (ETDEWEB)
Kwon, Young Min; Chang, Soon Heung [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)
1999-12-31
A theoretical critical heat flux (CHF ) model, based on lateral bubble coalescence on the heated wall, is proposed to predict the subcooled flow boiling CHF in a uniformly heated vertical tube. The model is based on the concept that a single layer of bubbles contacted to the heated wall prevents a bulk liquid from reaching the wall at near CHF condition. Comparisons between the model predictions and experimental data result in satisfactory agreement within less than 9.73% root-mean-square error by the appropriate choice of the critical void fraction in the bubbly layer. The present model shows comparable performance with the CHF look-up table of Groeneveld et al.. 28 refs., 11 figs., 1 tab. (Author)
10. Theoretical prediction method of subcooled flow boiling CHF
Energy Technology Data Exchange (ETDEWEB)
Kwon, Young Min; Chang, Soon Heung [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)
1998-12-31
A theoretical critical heat flux (CHF ) model, based on lateral bubble coalescence on the heated wall, is proposed to predict the subcooled flow boiling CHF in a uniformly heated vertical tube. The model is based on the concept that a single layer of bubbles contacted to the heated wall prevents a bulk liquid from reaching the wall at near CHF condition. Comparisons between the model predictions and experimental data result in satisfactory agreement within less than 9.73% root-mean-square error by the appropriate choice of the critical void fraction in the bubbly layer. The present model shows comparable performance with the CHF look-up table of Groeneveld et al.. 28 refs., 11 figs., 1 tab. (Author)
11. Adaptive supervision: a theoretical model for social workers.
Science.gov (United States)
Latting, J E
1986-01-01
Two models of leadership styles are prominent in the management field: Blake and Mouton's managerial Grid and Hersey and Blanchard's Situational Leadership Model. Much of the research on supervisory styles in social work has been based on the former. A recent public debate between the two sets of theorists suggests that both have strengths and limitations. Accordingly, an adaptive model of social work supervision that combines elements of both theories is proposed.
12. Theoretical frameworks informing family-based child and adolescent obesity interventions
DEFF Research Database (Denmark)
Alulis, Sarah; Grabowski, Dan
2017-01-01
into focus. However, the use of theoretical frameworks to strengthen these interventions is rare and very uneven. OBJECTIVE AND METHOD: To conduct a qualitative meta-synthesis of family-based interventions for child and adolescent obesity to identify the theoretical frameworks applied, thus understanding how...... inconsistencies and a significant void between research results and health care practice. Based on the analysis, this article proposes three themes to be used as focus points when designing future interventions and when selecting theories for the development of solid, theory-based frameworks for application...... cognitive, self-efficacy and Family Systems Theory appeared most frequently. The remaining 24 were classified as theory-related as theoretical elements of self-monitoring; stimulus control, reinforcement and modelling were used. CONCLUSION: The designs of family-based interventions reveal numerous...
13. Choosing where to work at work - towards a theoretical model of benefits and risks of activity-based flexible offices.
Science.gov (United States)
Wohlers, Christina; Hertel, Guido
2017-04-01
Although there is a trend in today's organisations to implement activity-based flexible offices (A-FOs), only a few studies examine consequences of this new office type. Moreover, the underlying mechanisms why A-FOs might lead to different consequences as compared to cellular and open-plan offices are still unclear. This paper introduces a theoretical framework explaining benefits and risks of A-FOs based on theories from work and organisational psychology. After deriving working conditions specific for A-FOs (territoriality, autonomy, privacy, proximity and visibility), differences in working conditions between A-FOs and alternative office types are proposed. Further, we suggest how these differences in working conditions might affect work-related consequences such as well-being, satisfaction, motivation and performance on the individual, the team and the organisational level. Finally, we consider task-related (e.g. task variety), person-related (e.g. personality) and organisational (e.g. leadership) moderators. Based on this model, future research directions as well as practical implications are discussed. Practitioner Summary: Activity-based flexible offices (A-FOs) are popular in today's organisations. This article presents a theoretical model explaining why and when working in an A-FO evokes benefits and risks for individuals, teams and organisations. According to the model, A-FOs are beneficial when management encourages employees to use the environment appropriately and supports teams.
14. From representing to modelling knowledge: Proposing a two-step training for excellence in concept mapping
Directory of Open Access Journals (Sweden)
Joana G. Aguiar
2017-09-01
Full Text Available Training users in the concept mapping technique is critical for ensuring a high-quality concept map in terms of graphical structure and content accuracy. However, assessing excellence in concept mapping through structural and content features is a complex task. This paper proposes a two-step sequential training in concept mapping. The first step requires the fulfilment of low-order cognitive objectives (remember, understand and apply to facilitate novices’ development into good Cmappers by honing their knowledge representation skills. The second step requires the fulfilment of high-order cognitive objectives (analyse, evaluate and create to grow good Cmappers into excellent ones through the development of knowledge modelling skills. Based on Bloom’s revised taxonomy and cognitive load theory, this paper presents theoretical accounts to (1 identify the criteria distinguishing good and excellent concept maps, (2 inform instructional tasks for concept map elaboration and (3 propose a prototype for training users on concept mapping combining online and face-to-face activities. The proposed training application and the institutional certification are the next steps for the mature use of concept maps for educational as well as business purposes.
15. Multiscale modeling of complex materials phenomenological, theoretical and computational aspects
CERN Document Server
Trovalusci, Patrizia
2014-01-01
The papers in this volume deal with materials science, theoretical mechanics and experimental and computational techniques at multiple scales, providing a sound base and a framework for many applications which are hitherto treated in a phenomenological sense. The basic principles are formulated of multiscale modeling strategies towards modern complex multiphase materials subjected to various types of mechanical, thermal loadings and environmental effects. The focus is on problems where mechanics is highly coupled with other concurrent physical phenomena. Attention is also focused on the historical origins of multiscale modeling and foundations of continuum mechanics currently adopted to model non-classical continua with substructure, for which internal length scales play a crucial role.
16. Theoretical-empirical model of the steam-water cycle of the power unit
Directory of Open Access Journals (Sweden)
Grzegorz Szapajko
2010-06-01
Full Text Available The diagnostics of the energy conversion systems’ operation is realised as a result of collecting, processing, evaluatingand analysing the measurement signals. The result of the analysis is the determination of the process state. It requires a usageof the thermal processes models. Construction of the analytical model with the auxiliary empirical functions built-in brings satisfyingresults. The paper presents theoretical-empirical model of the steam-water cycle. Worked out mathematical simulation model containspartial models of the turbine, the regenerative heat exchangers and the condenser. Statistical verification of the model is presented.
17. Theoretical & Experimental Research in Weak, Electromagnetic & Strong Interactions
Energy Technology Data Exchange (ETDEWEB)
Nandi, Satyanarayan [Oklahoma State Univ., Stillwater, OK (United States); Babu, Kaladi [Oklahoma State Univ., Stillwater, OK (United States); Rizatdinova, Flera [Oklahoma State Univ., Stillwater, OK (United States); Khanov, Alexander [Oklahoma State Univ., Stillwater, OK (United States); Haley, Joseph [Oklahoma State Univ., Stillwater, OK (United States)
2015-09-17
The conducted research spans a wide range of topics in the theoretical, experimental and phenomenological aspects of elementary particle interactions. Theory projects involve topics in both the energy frontier and the intensity frontier. The experimental research involves energy frontier with the ATLAS Collaboration at the Large Hadron Collider (LHC). In theoretical research, novel ideas going beyond the Standard Model with strong theoretical motivations were proposed, and their experimental tests at the LHC and forthcoming neutrino facilities were outlined. These efforts fall into the following broad categories: (i) TeV scale new physics models for LHC Run 2, including left-right symmetry and trinification symmetry, (ii) unification of elementary particles and forces, including the unification of gauge and Yukawa interactions, (iii) supersummetry and mechanisms of supersymmetry breaking, (iv) superworld without supersymmetry, (v) general models of extra dimensions, (vi) comparing signals of extra dimensions with those of supersymmetry, (vii) models with mirror quarks and mirror leptons at the TeV scale, (viii) models with singlet quarks and singlet Higgs and their implications for Higgs physics at the LHC, (ix) new models for the dark matter of the universe, (x) lepton flavor violation in Higgs decays, (xi) leptogenesis in radiative models of neutrino masses, (xii) light mediator models of non-standard neutrino interactions, (xiii) anomalous muon decay and short baseline neutrino anomalies, (xiv) baryogenesis linked to nucleon decay, and (xv) a new model for recently observed diboson resonance at the LHC and its other phenomenological implications. The experimental High Energy Physics group has been, and continues to be, a successful and productive contributor to the ATLAS experiment at the LHC. Members of the group performed search for gluinos decaying to stop and top quarks, new heavy gauge bosons decaying to top and bottom quarks, and vector-like quarks
18. Theoretical models of DNA flexibility
Czech Academy of Sciences Publication Activity Database
Dršata, Tomáš; Lankaš, Filip
2013-01-01
Roč. 3, č. 4 (2013), s. 355-363 ISSN 1759-0876 Institutional support: RVO:61388963 Keywords : molecular dynamics simulations * base pair level * indirect readout Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 9.041, year: 2013
19. A theoretical model of the M87 jet
International Nuclear Information System (INIS)
Falle, S.A.E.G.; Wilson, M.J.
1985-01-01
This paper describes a theoretical model of the knots in the M87 jet based on the idea that it is a steady fluid jet propagating through a non-uniform atmosphere. It is argued that knots D, E and F can be explained by the jet being underexpanded as it emerges from the central source, while knot A is due to reconfinement of the jet. Very high resolution numerical calculations are used to show that good agreement with the observed positions of the knots can be obtained with reasonable jet parameters and an atmosphere consistent with the X-ray observations. (author)
20. Theoretical model of the SOS effect
Energy Technology Data Exchange (ETDEWEB)
Darznek, S A; Mesyats, G A; Rukin, S N; Tsiranov, S N [Russian Academy of Sciences, Ural Division, Ekaterinburg (Russian Federation). Institute of Electrophysics
1997-12-31
Physical principles underlying the operation of semiconductor opening switches (SOS) are highlighted. The SOS effect occurs at a current density of up to 60 kA/cm{sup 2} in silicon p{sup +}-p-n-n{sup +} structures filled with residual electron-hole plasma. Using a theoretical model developed for plasma dynamic calculations, the mechanism by which current passes through the structure at the stage of high conduction and the processes that take place at the stage of current interruption were analyzed. The dynamics of the processes taking place in the structure was calculated with allowance for both diffusive and drift mechanisms of carrier transport. In addition, two recombination types, viz. recombination via impurities and impact Auger recombination, were included in the model. The effect of the structure on the pumping-circuit current and voltage was also taken into account. The real distribution of the doped impurity in the structure and the avalanche mechanism of carrier multiplication were considered. The results of calculations of a typical SOS are presented. The dynamics of the electron-hole plasma is analyzed. It is shown that the SOS effect represents a qualitatively new mechanism of current interruption in semiconductor structures. (author). 4 figs., 7 refs.
1. On the development of LWR fuel analysis code (1). Analysis of the FEMAXI code and proposal of a new model
International Nuclear Information System (INIS)
Lemehov, Sergei; Suzuki, Motoe
2000-01-01
This report summarizes the review on the modeling features of FEMAXI code and proposal of a new theoretical equation model of clad creep on the basis of irradiation-induced microstructure change. It was pointed out that plutonium build-up in fuel matrix and non-uniform radial power profile at high burn-up affect significantly fuel behavior through the interconnected effects with such phenomena as clad irradiation-induced creep, fission gas release, fuel thermal conductivity degradation, rim porous band formation and associated fuel swelling. Therefore, these combined effects should be properly incorporated into the models of the FEMAXI code so that the code can carry out numerical analysis at the level of accuracy and elaboration that modern experimental data obtained in test reactors have. Also, the proposed new mechanistic clad creep model has a general formalism which allows the model to be flexibly applied for clad behavior analysis under normal operation conditions and power transients as well for Zr-based clad materials by the use of established out-of-pile mechanical properties. The model has been tested against experimental data, while further verification is needed with specific emphasis on power ramps and transients. (author)
2. Theoretical Model of Pricing Behavior on the Polish Wholesale Fuel Market
Directory of Open Access Journals (Sweden)
Bejger Sylwester
2016-12-01
Full Text Available In this paper, we constructed a theoretical model of strategic pricing behavior of the players in a Polish wholesale fuel market. This model is consistent with the characteristics of the industry, the wholesale market, and the players. The model is based on the standard methodology of repeated games with a built-in adjustment to a focal price, which resembles the Import Parity Pricing (IPP mechanism. From the equilibrium of the game, we conclude that the focal price policy implies a parallel pricing strategic behavior on the market.
3. Theoretical Study on the Flow of Refilling Stage in a Safety Injection Tank
Energy Technology Data Exchange (ETDEWEB)
Park, Jun Sang [Halla Univ. Daejeon (Korea, Republic of)
2017-10-15
In this study, a theoretical analysis was performed to the flow of refilling stage in a safety injection tank, which is the core cooling system of nuclear power plant in an emergency. A theoretical model was proposed with a nonlinear governing equation defining on the flow of the refilling process of the coolant. Utilizing the Taylor-series expansion, the 1st - order approximation flow equation was obtained, along with its analytic solution of closed type, which could predict accurately the variations of free surface height and flow rate of the coolant. The availability of theoretical result was confirmed by comparing with previous experimental results.
4. Theoretical Analysis and Design of Ultrathin Broadband Optically Transparent Microwave Metamaterial Absorbers
Science.gov (United States)
Deng, Ruixiang; Li, Meiling; Muneer, Badar; Zhu, Qi; Shi, Zaiying; Song, Lixin; Zhang, Tao
2018-01-01
Optically Transparent Microwave Metamaterial Absorber (OTMMA) is of significant use in both civil and military field. In this paper, equivalent circuit model is adopted as springboard to navigate the design of OTMMA. The physical model and absorption mechanisms of ideal lightweight ultrathin OTMMA are comprehensively researched. Both the theoretical value of equivalent resistance and the quantitative relation between the equivalent inductance and equivalent capacitance are derived for design. Frequency-dependent characteristics of theoretical equivalent resistance are also investigated. Based on these theoretical works, an effective and controllable design approach is proposed. To validate the approach, a wideband OTMMA is designed, fabricated, analyzed and tested. The results reveal that high absorption more than 90% can be achieved in the whole 6~18 GHz band. The fabricated OTMMA also has an optical transparency up to 78% at 600 nm and is much thinner and lighter than its counterparts. PMID:29324686
5. Theoretical Analysis and Design of Ultrathin Broadband Optically Transparent Microwave Metamaterial Absorbers
Directory of Open Access Journals (Sweden)
Ruixiang Deng
2018-01-01
Full Text Available Optically Transparent Microwave Metamaterial Absorber (OTMMA is of significant use in both civil and military field. In this paper, equivalent circuit model is adopted as springboard to navigate the design of OTMMA. The physical model and absorption mechanisms of ideal lightweight ultrathin OTMMA are comprehensively researched. Both the theoretical value of equivalent resistance and the quantitative relation between the equivalent inductance and equivalent capacitance are derived for design. Frequency-dependent characteristics of theoretical equivalent resistance are also investigated. Based on these theoretical works, an effective and controllable design approach is proposed. To validate the approach, a wideband OTMMA is designed, fabricated, analyzed and tested. The results reveal that high absorption more than 90% can be achieved in the whole 6~18 GHz band. The fabricated OTMMA also has an optical transparency up to 78% at 600 nm and is much thinner and lighter than its counterparts.
6. Sound transmission through lightweight double-leaf partitions: theoretical modelling
Science.gov (United States)
Wang, J.; Lu, T. J.; Woodhouse, J.; Langley, R. S.; Evans, J.
2005-09-01
This paper presents theoretical modelling of the sound transmission loss through double-leaf lightweight partitions stiffened with periodically placed studs. First, by assuming that the effect of the studs can be replaced with elastic springs uniformly distributed between the sheathing panels, a simple smeared model is established. Second, periodic structure theory is used to develop a more accurate model taking account of the discrete placing of the studs. Both models treat incident sound waves in the horizontal plane only, for simplicity. The predictions of the two models are compared, to reveal the physical mechanisms determining sound transmission. The smeared model predicts relatively simple behaviour, in which the only conspicuous features are associated with coincidence effects with the two types of structural wave allowed by the partition model, and internal resonances of the air between the panels. In the periodic model, many more features are evident, associated with the structure of pass- and stop-bands for structural waves in the partition. The models are used to explain the effects of incidence angle and of the various system parameters. The predictions are compared with existing test data for steel plates with wooden stiffeners, and good agreement is obtained.
7. Exploring patient satisfaction predictors in relation to a theoretical model.
Science.gov (United States)
Grøndahl, Vigdis Abrahamsen; Hall-Lord, Marie Louise; Karlsson, Ingela; Appelgren, Jari; Wilde-Larsson, Bodil
2013-01-01
The aim is to describe patients' care quality perceptions and satisfaction and to explore potential patient satisfaction predictors as person-related conditions, external objective care conditions and patients' perception of actual care received ("PR") in relation to a theoretical model. A cross-sectional design was used. Data were collected using one questionnaire combining questions from four instruments: Quality from patients' perspective; Sense of coherence; Big five personality trait; and Emotional stress reaction questionnaire (ESRQ), together with questions from previous research. In total, 528 patients (83.7 per cent response rate) from eight medical, three surgical and one medical/surgical ward in five Norwegian hospitals participated. Answers from 373 respondents with complete ESRQ questionnaires were analysed. Sequential multiple regression analysis with ESRQ as dependent variable was run in three steps: person-related conditions, external objective care conditions, and PR (p person-related conditions) explained 51.7 per cent of the ESRQ variance. Step 2 (external objective care conditions) explained an additional 2.4 per cent. Step 3 (PR) gave no significant additional explanation (0.05 per cent). Steps 1 and 2 contributed statistical significance to the model. Patients rated both quality-of-care and satisfaction highly. The paper shows that the theoretical model using an emotion-oriented approach to assess patient satisfaction can explain 54 per cent of patient satisfaction in a statistically significant manner.
8. [Self-Determination in Medical Rehabilitation - Development of a Conceptual Model for Further Theoretical Discussion].
Science.gov (United States)
Senin, Tatjana; Meyer, Thorsten
2018-01-22
Aim was to gather theoretical knowledge about self-determination and to develop a conceptual model for medical rehabilitation- which serves as a basis for discussion. We performed a literature research in electronic databases. Various theories and research results were adopted and transferred to the context of medical rehabilitation and into a conceptual model. The conceptual model of self-determination reflects on a continuum which forms of self-determination may be present in situations of medical rehabilitation treatments. The location on the continuum depends theoretically on the manifestation of certain internal and external factors that may influence each other. The model provides a first conceptualization of self-determination focusing on medical rehabilitation which should be further refined and tested empirically. © Georg Thieme Verlag KG Stuttgart · New York.
9. Modeling oscillatory dynamics in brain microcircuits as a way to help uncover neurological disease mechanisms: A proposal
Energy Technology Data Exchange (ETDEWEB)
Skinner, F. K. [Toronto Western Research Institute, University Health Network, Krembil Discovery Tower, Toronto Western Hospital, 60 Leonard Street, 7th floor, 7KD411, Toronto, Ontario M5T 2S8 (Canada); Department of Medicine (Neurology), University of Toronto, 200 Elizabeth Street, Toronto, Ontario M5G 2C4 (Canada); Department of Physiology, University of Toronto Medical Sciences Building, 3rd Floor, 1 King' s College Circle, Toronto, Ontario M5S 1A8 (Canada); Ferguson, K. A. [Toronto Western Research Institute, University Health Network, Krembil Discovery Tower, Toronto Western Hospital, 60 Leonard Street, 7th floor, 7KD411, Toronto, Ontario M5T 2S8 (Canada); Department of Physiology, University of Toronto Medical Sciences Building, 3rd Floor, 1 King' s College Circle, Toronto, Ontario M5S 1A8 (Canada)
2013-12-15
There is an undisputed need and requirement for theoretical and computational studies in Neuroscience today. Furthermore, it is clear that oscillatory dynamical output from brain networks is representative of various behavioural states, and it is becoming clear that one could consider these outputs as measures of normal and pathological brain states. Although mathematical modeling of oscillatory dynamics in the context of neurological disease exists, it is a highly challenging endeavour because of the many levels of organization in the nervous system. This challenge is coupled with the increasing knowledge of cellular specificity and network dysfunction that is associated with disease. Recently, whole hippocampus in vitro preparations from control animals have been shown to spontaneously express oscillatory activities. In addition, when using preparations derived from animal models of disease, these activities show particular alterations. These preparations present an opportunity to address challenges involved with using models to gain insight because of easier access to simultaneous cellular and network measurements, and pharmacological modulations. We propose that by developing and using models with direct links to experiment at multiple levels, which at least include cellular and microcircuit, a cycling can be set up and used to help us determine critical mechanisms underlying neurological disease. We illustrate our proposal using our previously developed inhibitory network models in the context of these whole hippocampus preparations and show the importance of having direct links at multiple levels.
10. Organizational intellectual capital and the role of the nurse manager: A proposed conceptual model.
Science.gov (United States)
Gilbert, Jason H; Von Ah, Diane; Broome, Marion E
11. NMR relaxation induced by iron oxide particles: testing theoretical models.
Science.gov (United States)
Gossuin, Y; Orlando, T; Basini, M; Henrard, D; Lascialfari, A; Mattea, C; Stapf, S; Vuong, Q L
2016-04-15
Superparamagnetic iron oxide particles find their main application as contrast agents for cellular and molecular magnetic resonance imaging. The contrast they bring is due to the shortening of the transverse relaxation time T 2 of water protons. In order to understand their influence on proton relaxation, different theoretical relaxation models have been developed, each of them presenting a certain validity domain, which depends on the particle characteristics and proton dynamics. The validation of these models is crucial since they allow for predicting the ideal particle characteristics for obtaining the best contrast but also because the fitting of T 1 experimental data by the theory constitutes an interesting tool for the characterization of the nanoparticles. In this work, T 2 of suspensions of iron oxide particles in different solvents and at different temperatures, corresponding to different proton diffusion properties, were measured and were compared to the three main theoretical models (the motional averaging regime, the static dephasing regime, and the partial refocusing model) with good qualitative agreement. However, a real quantitative agreement was not observed, probably because of the complexity of these nanoparticulate systems. The Roch theory, developed in the motional averaging regime (MAR), was also successfully used to fit T 1 nuclear magnetic relaxation dispersion (NMRD) profiles, even outside the MAR validity range, and provided a good estimate of the particle size. On the other hand, the simultaneous fitting of T 1 and T 2 NMRD profiles by the theory was impossible, and this occurrence constitutes a clear limitation of the Roch model. Finally, the theory was shown to satisfactorily fit the deuterium T 1 NMRD profile of superparamagnetic particle suspensions in heavy water.
12. A game theoretic framework for evaluation of the impacts of hackers diversity on security measures
International Nuclear Information System (INIS)
2012-01-01
Game theoretical methods offer new insights into quantitative evaluation of dependability and security. Currently, there is a wide range of useful game theoretic approaches to model the behaviour of intelligent agents. However, it is necessary to revise these approaches if there is a community of hackers with significant diversity in their behaviours. In this paper, we introduce a novel approach to extend the basic ideas of applying game theory in stochastic modelling. The proposed method classifies the community of hackers based on two main criteria used widely in hacker classifications, which are motivation and skill. We use Markov chains to model the system and compute the transition rates between the states based on the preferences and the skill distributions of hacker classes. The resulting Markov chains can be solved to obtain the desired security measures. We also present the results of an illustrative example using the proposed approach, which examines the relation between the attributes of the community of hackers and the security measures.
13. Theoretical aspects of the equivalence principle
International Nuclear Information System (INIS)
Damour, Thibault
2012-01-01
We review several theoretical aspects of the equivalence principle (EP). We emphasize the unsatisfactory fact that the EP maintains the absolute character of the coupling constants of physics, while general relativity and its generalizations (Kaluza–Klein, …, string theory) suggest that all absolute structures should be replaced by dynamical entities. We discuss the EP-violation phenomenology of dilaton-like models, which is likely to be dominated by the linear superposition of two effects: a signal proportional to the nuclear Coulomb energy, related to the variation of the fine-structure constant, and a signal proportional to the surface nuclear binding energy, related to the variation of the light quark masses. We recall various theoretical arguments (including a recently proposed anthropic argument) suggesting that the EP be violated at a small, but not unmeasurably small level. This motivates the need for improved tests of the EP. These tests are probing new territories in physics that are related to deep, and mysterious, issues in fundamental physics. (paper)
14. A Proposal for a Flexible Trend Specification in DSGE Models
Directory of Open Access Journals (Sweden)
Slanicay Martin
2016-06-01
Full Text Available In this paper I propose a flexible trend specification for estimating DSGE models on log differences. I demonstrate this flexible trend specification on a New Keynesian DSGE model of two economies, which I consequently estimate on data from the Czech economy and the euro area, using Bayesian techniques. The advantage of the trend specification proposed is that the trend component and the cyclical component are modelled jointly in a single model. The proposed trend specification is flexible in the sense that smoothness of the trend can be easily modified by different calibration of some of the trend parameters. The results suggest that this method is capable of finding a very reasonable trend in the data. Moreover, comparison of forecast performance reveals that the proposed specification offers more reliable forecasts than the original variant of the model.
15. Physics of human cooperation: experimental evidence and theoretical models
Science.gov (United States)
Sánchez, Angel
2018-02-01
In recent years, many physicists have used evolutionary game theory combined with a complex systems perspective in an attempt to understand social phenomena and challenges. Prominent among such phenomena is the issue of the emergence and sustainability of cooperation in a networked world of selfish or self-focused individuals. The vast majority of research done by physicists on these questions is theoretical, and is almost always posed in terms of agent-based models. Unfortunately, more often than not such models ignore a number of facts that are well established experimentally, and are thus rendered irrelevant to actual social applications. I here summarize some of the facts that any realistic model should incorporate and take into account, discuss important aspects underlying the relation between theory and experiments, and discuss future directions for research based on the available experimental knowledge.
16. Theoretical modeling of mechanical homeostasis of a mammalian cell under gravity-directed vector.
Science.gov (United States)
Zhou, Lüwen; Zhang, Chen; Zhang, Fan; Lü, Shouqin; Sun, Shujin; Lü, Dongyuan; Long, Mian
2018-02-01
Translocation of dense nucleus along gravity vector initiates mechanical remodeling of a eukaryotic cell. In our previous experiments, we quantified the impact of gravity vector on cell remodeling by placing an MC3T3-E1 cell onto upward (U)-, downward (D)-, or edge-on (E)- orientated substrate. Our experimental data demonstrate that orientation dependence of nucleus longitudinal translocation is positively correlated with cytoskeletal (CSK) remodeling of their expressions and structures and also is associated with rearrangement of focal adhesion complex (FAC). However, the underlying mechanism how CSK network and FACs are reorganized in a mammalian cell remains unclear. In this paper, we developed a theoretical biomechanical model to integrate the mechanosensing of nucleus translocation with CSK remodeling and FAC reorganization induced by a gravity vector. The cell was simplified as a nucleated tensegrity structure in the model. The cell and CSK filaments were considered to be symmetrical. All elements of CSK filaments and cytomembrane that support the nucleus were simplified as springs. FACs were simplified as an adhesion cluster of parallel bonds with shared force. Our model proposed that gravity vector-directed translocation of the cell nucleus is mechanically balanced by CSK remodeling and FAC reorganization induced by a gravitational force. Under gravity, dense nucleus tends to translocate and exert additional compressive or stretching force on the cytoskeleton. Finally, changes of the tension force acting on talin by microfilament alter the size of FACs. Results from our model are in qualitative agreement with those from experiments.
17. Experimental-theoretical analysis of laminar internal forced convection with nanofluids
Energy Technology Data Exchange (ETDEWEB)
Cerqueira, Ivana G.; Cotta, Renato M. [Lab. of Transmission and Technology of Heat-LTTC. Mechanical Eng. Dept. - POLI and COPPE/UFRJ, Rio de Janeiro, RJ (Brazil)], E-mail: [email protected]; Mota, Carlos Alberto A. [Conselho Nacional de Pesquisas - CNPq, Brasilia, DF (Brazil)], e-mail: [email protected]; Nunes, Jeziel S. [INPI, Rio de Janeiro, RJ (Brazil)], e-mail: [email protected]
2010-07-01
This work reports fundamental experimental-theoretical research related to heat transfer enhancement in laminar channel flow with nanofluids, which are essentially modifications of the base fluid with the dispersion of metal oxide nanoparticles. The theoretical work was performed by making use of mixed symbolic-numerical computation (Mathematica 7.0 platform) and a hybrid numerical-analytical methodology (Generalized Integral Transform Technique - GITT) in accurately handling the governing partial differential equations for the heat and fluid flow problem formulation with temperature dependency in all the thermophysical properties. Experimental work was also undertaken based on a thermohydraulic circuit built for this purpose, and sample results are presented to verify the proposed model. The aim is to illustrate detailed modeling and robust simulation attempting to reach an explanation of the controversial heat transfer enhancement observed in laminar forced convection with nanofluids. (author)
18. Theoretical analysis of ejector refrigeration system performance under overall modes
International Nuclear Information System (INIS)
Chen, Weixiong; Shi, Chaoyin; Zhang, Shuangping; Chen, Huiqiang; Chong, Daotong; Yan, Junjie
2017-01-01
Highlights: • Real gas theoretical model is used to get ejector performance at critical/sub-critical modes. • The model has a better accuracy against the experiment results compared to ideal gas model. • The overall performances of two refrigerants are analyzed based on the parameter analysis. - Abstract: The ejector refrigeration integrated in the air-conditioning system is a promising technology, because it could be driven by the low grade energy. In the present study, a theoretical calculation based on the real gas property is put forward to estimate the ejector refrigeration system performance under overall modes (critical/sub-critical modes). The experimental data from literature are applied to validate the proposed model. The findings show that the proposed model has higher accuracy compared to the model using the ideal gas law, especially when the ejector operates at sub-critical mode. Then, the performances of the ejector refrigeration circle using different refrigerants are analyzed. R290 and R134a are selected as typical refrigerants by considering the aspects of COP, environmental impact, safety and economy. Finally, the ejector refrigeration performance is investigated under variable operation conditions with R290 and R134a as refrigerants. The results show that the R290 ejector circle has higher COP under critical mode and could operate at low evaporator temperature. However, the performance would decrease rapidly at high condenser temperature. The performance of R134a ejector circle is the opposite, with relatively lower COP, and higher COP at high condenser temperature compared to R290.
19. The Safety Culture Enactment Questionnaire (SCEQ): Theoretical model and empirical validation.
Science.gov (United States)
de Castro, Borja López; Gracia, Francisco J; Tomás, Inés; Peiró, José M
2017-06-01
This paper presents the Safety Culture Enactment Questionnaire (SCEQ), designed to assess the degree to which safety is an enacted value in the day-to-day running of nuclear power plants (NPPs). The SCEQ is based on a theoretical safety culture model that is manifested in three fundamental components of the functioning and operation of any organization: strategic decisions, human resources practices, and daily activities and behaviors. The extent to which the importance of safety is enacted in each of these three components provides information about the pervasiveness of the safety culture in the NPP. To validate the SCEQ and the model on which it is based, two separate studies were carried out with data collection in 2008 and 2014, respectively. In Study 1, the SCEQ was administered to the employees of two Spanish NPPs (N=533) belonging to the same company. Participants in Study 2 included 598 employees from the same NPPs, who completed the SCEQ and other questionnaires measuring different safety outcomes (safety climate, safety satisfaction, job satisfaction and risky behaviors). Study 1 comprised item formulation and examination of the factorial structure and reliability of the SCEQ. Study 2 tested internal consistency and provided evidence of factorial validity, validity based on relationships with other variables, and discriminant validity between the SCEQ and safety climate. Exploratory Factor Analysis (EFA) carried out in Study 1 revealed a three-factor solution corresponding to the three components of the theoretical model. Reliability analyses showed strong internal consistency for the three scales of the SCEQ, and each of the 21 items on the questionnaire contributed to the homogeneity of its theoretically developed scale. Confirmatory Factor Analysis (CFA) carried out in Study 2 supported the internal structure of the SCEQ; internal consistency of the scales was also supported. Furthermore, the three scales of the SCEQ showed the expected correlation
20. Theoretical model of the density of states of random binary alloys
International Nuclear Information System (INIS)
Zekri, N.; Brezini, A.
1991-09-01
A theoretical formulation of the density of states for random binary alloys is examined based on a mean field treatment. The present model includes both diagonal and off-diagonal disorder and also short-range order. Extensive results are reported for various concentrations and compared to other calculations. (author). 22 refs, 6 figs
1. Developing a theoretical maintenance model for disordered eating in Type 1 diabetes.
Science.gov (United States)
Treasure, J; Kan, C; Stephenson, L; Warren, E; Smith, E; Heller, S; Ismail, K
2015-12-01
According to the literature, eating disorders are an increasing problem for more than a quarter of people with Type 1 diabetes and they are associated with accentuated diabetic complications. The clinical outcomes in this group when given standard eating disorder treatments are disappointing. The Medical Research Council guidelines for developing complex interventions suggest that the first step is to develop a theoretical model. To review existing literature to build a theoretical maintenance model for disordered eating in people with Type 1 diabetes. The literature in diabetes relating to models of eating disorder (Fairburn's transdiagnostic model and the dual pathway model) and food addiction was examined and assimilated. The elements common to all eating disorder models include weight/shape concern and problems with mood regulation. The predisposing traits of perfectionism, low self-esteem and low body esteem and the interpersonal difficulties from the transdiagnostic model are also relevant to diabetes. The differences include the use of insulin mismanagement to compensate for breaking eating rules and the consequential wide variations in plasma glucose that may predispose to 'food addiction'. Eating disorder symptoms elicit emotionally driven reactions and behaviours from others close to the individual affected and these are accentuated in the context of diabetes. The next stage is to test the assumptions within the maintenance model with experimental medicine studies to facilitate the development of new technologies aimed at increasing inhibitory processes and moderating environmental triggers. © 2015 The Authors. Diabetic Medicine © 2015 Diabetes UK.
2. A Theoretical Model for Estimation of Yield Strength of Fiber Metal Laminate
Science.gov (United States)
Bhat, Sunil; Nagesh, Suresh; Umesh, C. K.; Narayanan, S.
2017-08-01
The paper presents a theoretical model for estimation of yield strength of fiber metal laminate. Principles of elasticity and formulation of residual stress are employed to determine the stress state in metal layer of the laminate that is found to be higher than the stress applied over the laminate resulting in reduced yield strength of the laminate in comparison with that of the metal layer. The model is tested over 4A-3/2 Glare laminate comprising three thin aerospace 2014-T6 aluminum alloy layers alternately bonded adhesively with two prepregs, each prepreg built up of three uni-directional glass fiber layers laid in longitudinal and transverse directions. Laminates with prepregs of E-Glass and S-Glass fibers are investigated separately under uni-axial tension. Yield strengths of both the Glare variants are found to be less than that of aluminum alloy with use of S-Glass fiber resulting in higher laminate yield strength than with the use of E-Glass fiber. Results from finite element analysis and tensile tests conducted over the laminates substantiate the theoretical model.
3. Theoretical Aspects of Erroneous Actions During the Process of Decision Making by Air Traffic Control
Directory of Open Access Journals (Sweden)
Andersone Silva
2017-08-01
Full Text Available The Theoretical Aspects of Erroneous Actions During the Process of Decision Making by Air Traffic Control evaluates the factors affecting the operational decision-making of a human air traffic controller, interacting in a dynamic environment with the flight crew, surrounding aircraft traffic and environmental conditions of the airspace. This article reviews the challenges of air traffic control in different conditions, ranging from normal and complex to emergency and catastrophic. Workload factors and operating conditions make an impact on air traffic controllers’ decision-making. The proposed model compares various operating conditions within an assumed air traffic control environment subsequently comparing them against a theoretically “perfect” air traffic control system. A mathematical model of flight safety assessment has been proposed for the quantitative assessment of various hazards arising during the process of Air Traffic Control. The model assumes events of various severity and probability ranging from high frequency and low severity up to less likely and catastrophic ones. Certain limitations of the model have been recognised and further improvements for effective hazard evaluation have been suggested.
4. Universe in the theoretical model «Evolving matter»
Directory of Open Access Journals (Sweden)
Bazaluk Oleg
2013-04-01
Full Text Available The article critically examines modern model of the Universe evolution constructed by efforts of a group of scientists (mathematicians, physicists and cosmologists from the world's leading universities (Oxford and Cambridge Universities, Yale, Columbia, New York, Rutgers and the UC Santa Cruz. The author notes its strengths, but also points to shortcomings. Author believes that this model does not take into account the most important achievements in the field of biochemistry and biology (molecular, physical, developmental, etc., as well as neuroscience and psychology. Author believes that in the construction of model of the Universe evolution, scientists must take into account (with great reservations the impact of living and intelligent matter on space processes. As an example, the author gives his theoretical model "Evolving matter". In this model, he shows not only the general dependence of the interaction of cosmic processes with inert, living and intelligent matter, but also he attempts to show the direct influence of systems of living and intelligent matter on the acceleration of the Universe's expansion.
5. The Theoretical and Empirical Approaches to the Definition of Audit Risk
Directory of Open Access Journals (Sweden)
Berezhniy Yevgeniy B.
2017-12-01
Full Text Available The risk category is one of the key factors in planning the audit and assessing its results. The article is aimed at generalizing the theoretical and empirical approaches to the definition of audit risk and methods of its reduction. The structure of audit risk was analyzed and it has been determined, that each of researchers approached to structuring of audit risk from the subjective point of view. The author’s own model of audit risk has been proposed. The basic methods of assessment of audit risk are generalized, the theoretical and empirical approaches to its definition are allocated, also it is noted, that application of any of the given models can be suitable rather for approximate estimation, than for exact calculation of an audit risk, as it is accompanied by certain shortcomings.
6. Development Mechanism of an Integrated Model for Training of a Specialist and Conceptual-Theoretical Activity of a Teacher
Science.gov (United States)
Marasulov, Akhmat; Saipov, Amangeldi; ?rymbayeva, Kulimkhan; Zhiyentayeva, Begaim; Demeuov, Akhan; Konakbaeva, Ulzhamal; Bekbolatova, Akbota
2016-01-01
The aim of the study is to examine the methodological-theoretical construction bases for development mechanism of an integrated model for a specialist's training and teacher's conceptual-theoretical activity. Using the methods of generalization of teaching experience, pedagogical modeling and forecasting, the authors determine the urgent problems…
7. Information-Theoretic Properties of Auditory Sequences Dynamically Influence Expectation and Memory.
Science.gov (United States)
Agres, Kat; Abdallah, Samer; Pearce, Marcus
2018-01-01
A basic function of cognition is to detect regularities in sensory input to facilitate the prediction and recognition of future events. It has been proposed that these implicit expectations arise from an internal predictive coding model, based on knowledge acquired through processes such as statistical learning, but it is unclear how different types of statistical information affect listeners' memory for auditory stimuli. We used a combination of behavioral and computational methods to investigate memory for non-linguistic auditory sequences. Participants repeatedly heard tone sequences varying systematically in their information-theoretic properties. Expectedness ratings of tones were collected during three listening sessions, and a recognition memory test was given after each session. Information-theoretic measures of sequential predictability significantly influenced listeners' expectedness ratings, and variations in these properties had a significant impact on memory performance. Predictable sequences yielded increasingly better memory performance with increasing exposure. Computational simulations using a probabilistic model of auditory expectation suggest that listeners dynamically formed a new, and increasingly accurate, implicit cognitive model of the information-theoretic structure of the sequences throughout the experimental session. Copyright © 2017 Cognitive Science Society, Inc.
8. Theoretical model of gravitational perturbation of current collector axisymmetric flow field
Science.gov (United States)
Walker, John S.; Brown, Samuel H.; Sondergaard, Neal A.
1990-05-01
Some designs of liquid-metal current collectors in homopolar motors and generators are essentially rotating liquid-metal fluids in cylindrical channels with free surfaces and will, at critical rotational speeds, become unstable. An investigation at David Taylor Research Center is being performed to understand the role of gravity in modifying this ejection instability. Some gravitational effects can be theoretically treated by perturbation techniques on the axisymmetric base flow of the liquid metal. This leads to a modification of previously calculated critical-current-collector ejection values neglecting gravity effects. The purpose of this paper is to document the derivation of the mathematical model which determines the perturbation of the liquid-metal base flow due to gravitational effects. Since gravity is a small force compared with the centrifugal effects, the base flow solutions can be expanded in inverse powers of the Froude number and modified liquid-flow profiles can be determined as a function of the azimuthal angle. This model will be used in later work to theoretically study the effects of gravity on the ejection point of the current collector.
9. Theoretical models to predict the mechanical behavior of thick composite tubes
Directory of Open Access Journals (Sweden)
Volnei Tita
2012-02-01
Full Text Available This paper shows theoretical models (analytical formulations to predict the mechanical behavior of thick composite tubes and how some parameters can influence this behavior. Thus, firstly, it was developed the analytical formulations for a pressurized tube made of composite material with a single thick ply and only one lamination angle. For this case, the stress distribution and the displacement fields are investigated as function of different lamination angles and reinforcement volume fractions. The results obtained by the theoretical model are physic consistent and coherent with the literature information. After that, the previous formulations are extended in order to predict the mechanical behavior of a thick laminated tube. Both analytical formulations are implemented as a computational tool via Matlab code. The results obtained by the computational tool are compared to the finite element analyses, and the stress distribution is considered coherent. Moreover, the engineering computational tool is used to perform failure analysis, using different types of failure criteria, which identifies the damaged ply and the mode of failure.
10. Theoretical study on the inverse modeling of deep body temperature measurement
International Nuclear Information System (INIS)
Huang, Ming; Chen, Wenxi
2012-01-01
We evaluated the theoretical aspects of monitoring the deep body temperature distribution with the inverse modeling method. A two-dimensional model was built based on anatomical structure to simulate the human abdomen. By integrating biophysical and physiological information, the deep body temperature distribution was estimated from cutaneous surface temperature measurements using an inverse quasilinear method. Simulations were conducted with and without the heat effect of blood perfusion in the muscle and skin layers. The results of the simulations showed consistently that the noise characteristics and arrangement of the temperature sensors were the major factors affecting the accuracy of the inverse solution. With temperature sensors of 0.05 °C systematic error and an optimized 16-sensor arrangement, the inverse method could estimate the deep body temperature distribution with an average absolute error of less than 0.20 °C. The results of this theoretical study suggest that it is possible to reconstruct the deep body temperature distribution with the inverse method and that this approach merits further investigation. (paper)
11. Anatomy of the Higgs fits: A first guide to statistical treatments of the theoretical uncertainties
Directory of Open Access Journals (Sweden)
Sylvain Fichet
2016-04-01
Full Text Available The studies of the Higgs boson couplings based on the recent and upcoming LHC data open up a new window on physics beyond the Standard Model. In this paper, we propose a statistical guide to the consistent treatment of the theoretical uncertainties entering the Higgs rate fits. Both the Bayesian and frequentist approaches are systematically analysed in a unified formalism. We present analytical expressions for the marginal likelihoods, useful to implement simultaneously the experimental and theoretical uncertainties. We review the various origins of the theoretical errors (QCD, EFT, PDF, production mode contamination…. All these individual uncertainties are thoroughly combined with the help of moment-based considerations. The theoretical correlations among Higgs detection channels appear to affect the location and size of the best-fit regions in the space of Higgs couplings. We discuss the recurrent question of the shape of the prior distributions for the individual theoretical errors and find that a nearly Gaussian prior arises from the error combinations. We also develop the bias approach, which is an alternative to marginalisation providing more conservative results. The statistical framework to apply the bias principle is introduced and two realisations of the bias are proposed. Finally, depending on the statistical treatment, the Standard Model prediction for the Higgs signal strengths is found to lie within either the 68% or 95% confidence level region obtained from the latest analyses of the 7 and 8 TeV LHC datasets.
12. Anatomy of the Higgs fits: A first guide to statistical treatments of the theoretical uncertainties
Science.gov (United States)
Fichet, Sylvain; Moreau, Grégory
2016-04-01
The studies of the Higgs boson couplings based on the recent and upcoming LHC data open up a new window on physics beyond the Standard Model. In this paper, we propose a statistical guide to the consistent treatment of the theoretical uncertainties entering the Higgs rate fits. Both the Bayesian and frequentist approaches are systematically analysed in a unified formalism. We present analytical expressions for the marginal likelihoods, useful to implement simultaneously the experimental and theoretical uncertainties. We review the various origins of the theoretical errors (QCD, EFT, PDF, production mode contamination…). All these individual uncertainties are thoroughly combined with the help of moment-based considerations. The theoretical correlations among Higgs detection channels appear to affect the location and size of the best-fit regions in the space of Higgs couplings. We discuss the recurrent question of the shape of the prior distributions for the individual theoretical errors and find that a nearly Gaussian prior arises from the error combinations. We also develop the bias approach, which is an alternative to marginalisation providing more conservative results. The statistical framework to apply the bias principle is introduced and two realisations of the bias are proposed. Finally, depending on the statistical treatment, the Standard Model prediction for the Higgs signal strengths is found to lie within either the 68% or 95% confidence level region obtained from the latest analyses of the 7 and 8 TeV LHC datasets.
13. Diffusion-controlled interface kinetics-inclusive system-theoretic propagation models for molecular communication systems
Science.gov (United States)
Chude-Okonkwo, Uche A. K.; Malekian, Reza; Maharaj, B. T.
2015-12-01
Inspired by biological systems, molecular communication has been proposed as a new communication paradigm that uses biochemical signals to transfer information from one nano device to another over a short distance. The biochemical nature of the information transfer process implies that for molecular communication purposes, the development of molecular channel models should take into consideration diffusion phenomenon as well as the physical/biochemical kinetic possibilities of the process. The physical and biochemical kinetics arise at the interfaces between the diffusion channel and the transmitter/receiver units. These interfaces are herein termed molecular antennas. In this paper, we present the deterministic propagation model of the molecular communication between an immobilized nanotransmitter and nanoreceiver, where the emission and reception kinetics are taken into consideration. Specifically, we derived closed-form system-theoretic models and expressions for configurations that represent different communication systems based on the type of molecular antennas used. The antennas considered are the nanopores at the transmitter and the surface receptor proteins/enzymes at the receiver. The developed models are simulated to show the influence of parameters such as the receiver radius, surface receptor protein/enzyme concentration, and various reaction rate constants. Results show that the effective receiver surface area and the rate constants are important to the system's output performance. Assuming high rate of catalysis, the analysis of the frequency behavior of the developed propagation channels in the form of transfer functions shows significant difference introduce by the inclusion of the molecular antennas into the diffusion-only model. It is also shown that for t > > 0 and with the information molecules' concentration greater than the Michaelis-Menten kinetic constant of the systems, the inclusion of surface receptors proteins and enzymes in the models
14. Three General Theoretical Models in Sociology: An Articulated ?(Disunity?
Directory of Open Access Journals (Sweden)
Thaís García-Pereiro
2015-01-01
Full Text Available After merely a brief, comparative reconstruction of the three most general theoretical models underlying contemporary Sociology (atomic, systemic, and fluid it becomes necessary to review the question about the unity or plurality of Sociology, which is the main objective of this paper. To do so, the basic terms of the question are firstly updated by following the hegemonic trends in current studies of science. Secondly the convergences and divergences among the three models discussed are shown. Following some additional discussion, the conclusion is reached that contemporary Sociology is not unitary, and need not be so. It is plural, but its plurality is limited and articulated by those very models. It may therefore be portrayed as integrated and commensurable, to the extent that a partial and unstable (disunity may be said to exist in Sociology, which is not too far off from what happens in the natural sciences.
15. Psychotherapy Integration via Theoretical Unification
Directory of Open Access Journals (Sweden)
Warren W. Tryon
2017-01-01
Full Text Available Meaningful psychotherapy integration requires theoretical unification because psychotherapists can only be expected to treat patients with the same diagnoses similarly if they understand these disorders similarly and if they agree on the mechanisms by which effective treatments work. Tryon (in press has proposed a transtheoretic transdiagnostic psychotherapy based on an Applied Psychological Science (APS clinical orientation, founded on a BioPsychology Network explanatory system that provides sufficient theoretical unification to support meaningful psychotherapy integration. That proposal focused mainly on making a neuroscience argument. This article makes a different argument for theoretical unification and consequently psychotherapy integration. The strength of theories of psychotherapy, like all theory, is to focus on certain topics, goals, and methods. But this strength is also a weakness because it can blind one to alternative perspectives and thereby promote unnecessary competition among therapies. This article provides a broader perspective based on learning and memory that is consistent with the behavioral, cognitive, cognitive-behavioral, psychodynamic, pharmacologic, and Existential/Humanistic/Experiential clinical orientations. It thereby provides a basis for meaningful psychotherapy integration.
16. Adaptive information-theoretic bounded rational decision-making with parametric priors
OpenAIRE
Grau-Moya, Jordi; Braun, Daniel A.
2015-01-01
Deviations from rational decision-making due to limited computational resources have been studied in the field of bounded rationality, originally proposed by Herbert Simon. There have been a number of different approaches to model bounded rationality ranging from optimality principles to heuristics. Here we take an information-theoretic approach to bounded rationality, where information-processing costs are measured by the relative entropy between a posterior decision strategy and a given fix...
17. Theoretical modeling of diluted antiferromagnetic systems
International Nuclear Information System (INIS)
Pozo, J; Elgueta, R; Acevedo, R
2000-01-01
Some magnetic properties of a Diluted Antiferromagnetic System (DAFS) are studied. The model of the two sub-networks for antiferromagnetism is used and a Heisenberg Hamiltonian type is proposed, where the square operators are expressed in terms of boson operators with the approach of spin waves. The behavior of the diluted system's fundamental state depends basically on the competition effect between the anisotropy field and the Weiss molecular field. The approach used allows the diluted system to be worked for strong anisotropies as well as when these are very weak
18. Theoretical expectations for the muon's electric dipole moment
International Nuclear Information System (INIS)
Feng, Jonathan L.; Matchev, Konstantin T.; Shadmi, Yael
2001-01-01
We examine the muon's electric dipole moment d μ from a variety of theoretical perspectives. We point out that the reported deviation in the muon's g-2 can be due partially or even entirely to a new physics contribution to the muon's electric dipole moment. In fact, the recent g-2 measurement provides the most stringent bound on d μ to date. This ambiguity could be definitively resolved by the dedicated search for d μ recently proposed. We then consider both model-independent and supersymmetric frameworks. Under the assumptions of scalar degeneracy, proportionality, and flavor conservation, the theoretical expectations for d μ in supersymmetry fall just below the proposed sensitivity. However, nondegeneracy can give an order of magnitude enhancement, and lepton flavor violation can lead to d μ ∼10 -22 e cm, two orders of magnitude above the sensitivity of the d μ experiment. We present compact expressions for leptonic dipole moments and lepton flavor violating amplitudes. We also derive new limits on the amount of flavor violation allowed and demonstrate that approximations previously used to obtain such limits are highly inaccurate in much of parameter space
19. A proposed best practice model validation framework for banks
Directory of Open Access Journals (Sweden)
Pieter J. (Riaan de Jongh
2017-06-01
Full Text Available Background: With the increasing use of complex quantitative models in applications throughout the financial world, model risk has become a major concern. The credit crisis of 2008–2009 provoked added concern about the use of models in finance. Measuring and managing model risk has subsequently come under scrutiny from regulators, supervisors, banks and other financial institutions. Regulatory guidance indicates that meticulous monitoring of all phases of model development and implementation is required to mitigate this risk. Considerable resources must be mobilised for this purpose. The exercise must embrace model development, assembly, implementation, validation and effective governance. Setting: Model validation practices are generally patchy, disparate and sometimes contradictory, and although the Basel Accord and some regulatory authorities have attempted to establish guiding principles, no definite set of global standards exists. Aim: Assessing the available literature for the best validation practices. Methods: This comprehensive literature study provided a background to the complexities of effective model management and focussed on model validation as a component of model risk management. Results: We propose a coherent ‘best practice’ framework for model validation. Scorecard tools are also presented to evaluate if the proposed best practice model validation framework has been adequately assembled and implemented. Conclusion: The proposed best practice model validation framework is designed to assist firms in the construction of an effective, robust and fully compliant model validation programme and comprises three principal elements: model validation governance, policy and process.
20. Theoretical investigations of the new Cokriging method for variable-fidelity surrogate modeling
DEFF Research Database (Denmark)
Zimmermann, Ralf; Bertram, Anna
2018-01-01
Cokriging is a variable-fidelity surrogate modeling technique which emulates a target process based on the spatial correlation of sampled data of different levels of fidelity. In this work, we address two theoretical questions associated with the so-called new Cokriging method for variable fidelity...
1. Health Professionals' Explanations of Suicidal Behaviour: Effects of Professional Group, Theoretical Intervention Model, and Patient Suicide Experience.
Science.gov (United States)
Rothes, Inês Areal; Henriques, Margarida Rangel
2017-12-01
In a help relation with a suicidal person, the theoretical models of suicidality can be essential to guide the health professional's comprehension of the client/patient. The objectives of this study were to identify health professionals' explanations of suicidal behaviors and to study the effects of professional group, theoretical intervention models, and patient suicide experience in professionals' representations. Two hundred and forty-two health professionals filled out a self-report questionnaire. Exploratory principal components analysis was used. Five explanatory models were identified: psychological suffering, affective cognitive, sociocommunicational, adverse life events, and psychopathological. Results indicated that the psychological suffering and psychopathological models were the most valued by the professionals, while the sociocommunicational was seen as the least likely to explain suicidal behavior. Differences between professional groups were found. We concluded that training and reflection on theoretical models in general and in communicative issues in particular are needed in the education of health professionals.
2. Theoretical modeling of the dynamics of a semiconductor laser subject to double-reflector optical feedback
Energy Technology Data Exchange (ETDEWEB)
Bakry, A. [King Abdulaziz University, 80203, Department of Physics, Faculty of Science (Saudi Arabia); Abdulrhmann, S. [Jazan University, 114, Department of Physics, Faculty of Sciences (Saudi Arabia); Ahmed, M., E-mail: [email protected] [King Abdulaziz University, 80203, Department of Physics, Faculty of Science (Saudi Arabia)
2016-06-15
We theoretically model the dynamics of semiconductor lasers subject to the double-reflector feedback. The proposed model is a new modification of the time-delay rate equations of semiconductor lasers under the optical feedback to account for this type of the double-reflector feedback. We examine the influence of adding the second reflector to dynamical states induced by the single-reflector feedback: periodic oscillations, period doubling, and chaos. Regimes of both short and long external cavities are considered. The present analyses are done using the bifurcation diagram, temporal trajectory, phase portrait, and fast Fourier transform of the laser intensity. We show that adding the second reflector attracts the periodic and perioddoubling oscillations, and chaos induced by the first reflector to a route-to-continuous-wave operation. During this operation, the periodic-oscillation frequency increases with strengthening the optical feedback. We show that the chaos induced by the double-reflector feedback is more irregular than that induced by the single-reflector feedback. The power spectrum of this chaos state does not reflect information on the geometry of the optical system, which then has potential for use in chaotic (secure) optical data encryption.
3. A game theoretic model of the Northwestern European electricity market-market power and the environment
NARCIS (Netherlands)
Lise, W.; Linderhof, V.G.M.; Kuik, O.; Kemfert, C.; Ostling, R.; Heinzow, T.
2006-01-01
This paper develops a static computational game theoretic model. Illustrative results for the liberalising European electricity market are given to demonstrate the type of economic and environmental results that can be generated with the model. The model is empirically calibrated to eight
4. Kinetic Adsorption Study of Silver Nanoparticles on Natural Zeolite: Experimental and Theoretical Models
Directory of Open Access Journals (Sweden)
Alvaro Ruíz-Baltazar
2015-12-01
Full Text Available In this research, the adsorption capacity of Ag nanoparticles on natural zeolite from Oaxaca is presented. In order to describe the adsorption mechanism of silver nanoparticles on zeolite, experimental adsorption models for Ag ions and Ag nanoparticles were carried out. These experimental data obtained by the atomic absorption spectrophotometry technique were compared with theoretical models such as Lagergren first-order, pseudo-second-order, Elovich, and intraparticle diffusion. Correlation factors R2 of the order of 0.99 were observed. Analysis by transmission electron microscopy describes the distribution of the silver nanoparticles on the zeolite outer surface. Additionally, a chemical characterization of the material was carried out through a dilution process with lithium metaborate. An average value of 9.3 in the Si/Al ratio was observed. Factors such as the adsorption behavior of the silver ions and the Si/Al ratio of the zeolite are very important to support the theoretical models and establish the adsorption mechanism of Ag nanoparticles on natural zeolite.
5. Experimental analysis and theoretical model for anomalously high ideality factors in ZnO/diamond p-n junction diode
International Nuclear Information System (INIS)
Wang Chengxin; Yang Guowei; Liu Hongwu; Han Yonghao; Luo Jifeng; Gao Chunxiao; Zou Guangtian
2004-01-01
High-quality heterojunctions between p-type diamond single-crystalline films and highly oriented n-type ZnO films were fabricated by depositing the p-type diamond single-crystal films on the I o -type diamond single crystal using a hot filament chemical vapor deposition, and later growing a highly oriented n-type ZnO film on the p-type diamond single-crystal film by magnetron sputtering. Interestingly, anomalously high ideality factors (n>>2.0) in the prepared ZnO/diamond p-n junction diode in the interim bias voltage range were measured. For this, detailed electronic characterizations of the fabricated p-n junction were conducted, and a theoretical model was proposed to clarify the much higher ideality factors of the special heterojunction diode
6. Photoluminescence of crystalline and disordered BTO:Mn powder: Experimental and theoretical modeling
International Nuclear Information System (INIS)
Gurgel, M.F.C.; Espinosa, J.W.M.; Campos, A.B.; Rosa, I.L.V.; Joya, M.R.; Souza, A.G.; Zaghete, M.A.; Pizani, P.S.; Leite, E.R.; Varela, J.A.; Longo, E.
2007-01-01
Disordered and crystalline Mn-doped BaTiO 3 (BTO:Mn) powders were synthesized by the polymeric precursor method. After heat treatment, the nature of visible photoluminescence (PL) at room temperature in amorphous BTO:Mn was discussed, considering results of experimental and theoretical studies. X-ray diffraction (XRD), PL, and UV-vis were used to characterize this material. Rietveld refinement of the BTO:Mn from XRD data was used to built two models, which represent the crystalline BTO:Mn (BTO:Mn c ) and disordered BTO:Mn (BTO:Mn d ) structures. Theses models were analyzed by the periodic ab initio quantum mechanical calculations using the CRYSTAL98 package within the framework of density functional theory at the B3LYP level. The experimental and theoretical results indicated that PL is related with the degree of disorder in the BTO:Mn powders and also suggests the presence of localized states in the disordered structure
7. A PROPOSED TAXONOMY OF THE PERCEPTUAL DOMAIN AND SOME SUGGESTED APPLICATIONS.
Science.gov (United States)
MOORE, MAXINE R.
THIS PROPOSAL FOR A PRELIMINARY TAXONOMY OF THE PERCEPTUAL DOMAIN, ORGANIZED ON THE PRINCIPLE OF INTEGRATION, DREW ON GUILFORD'S THEORETICAL AND FACTOR-ANALYTICAL WORK, ON WITKIN'S FIGURE-GROUND STUDIES, AND ON THE "TAXONOMY OF EDUCATIONAL OBJECTIVES" MODELS. THE TAXONOMY CATEGORIES ARE SENSATION, FIGURE PERCEPTION, SYMBOL PERCEPTION, PERCEPTION…
8. Theoretical Background for the Decision-Making Process Modelling under Controlled Intervention Conditions
OpenAIRE
Bakanauskienė Irena; Baronienė Laura
2017-01-01
This article is intended to theoretically justify the decision-making process model for the cases, when active participation of investing entities in controlling the activities of an organisation and their results is noticeable. Based on scientific literature analysis, a concept of controlled conditions is formulated, and using a rational approach to the decision-making process, a model of the 11-steps decision-making process under controlled intervention is presented. Also, there have been u...
9. Modeling opinion dynamics: Theoretical analysis and continuous approximation
International Nuclear Information System (INIS)
Pinasco, Juan Pablo; Semeshenko, Viktoriya; Balenzuela, Pablo
2017-01-01
Highlights: • We study a simple model of persuasion dynamics with long range pairwise interactions. • The continuous limit of the master equation is a nonlinear, nonlocal, first order partial differential equation. • We compute the analytical solutions to this equation, and compare them with the simulations of the dynamics. - Abstract: Frequently we revise our first opinions after talking over with other individuals because we get convinced. Argumentation is a verbal and social process aimed at convincing. It includes conversation and persuasion and the agreement is reached because the new arguments are incorporated. Given the wide range of opinion formation mathematical approaches, there are however no models of opinion dynamics with nonlocal pair interactions analytically solvable. In this paper we present a novel analytical framework developed to solve the master equations with non-local kernels. For this we used a simple model of opinion formation where individuals tend to get more similar after each interactions, no matter their opinion differences, giving rise to nonlinear differential master equation with non-local terms. Simulation results show an excellent agreement with results obtained by the theoretical estimation.
10. Concept of the Cooling System of the ITS for ALICE: Technical Proposals, Theoretical Estimates, Experimental Results
CERN Document Server
Godisov, O N; Yudkin, M I; Gerasimov, S F; Feofilov, G A
1994-01-01
Contradictory demands raised by the application of different types of sensitive detectors in 5 layers of the Inner Tracking System (ITS) for ALICE stipulate the simultaneous use of different schemes of heat drain: gaseous cooling of the 1st layer (uniform heat production over the sensitive surface) and evaporative cooling for the 2nd-5th layers (localised heat production). The last system is also a must for the thermostabilization of Si-drift detectors within 0.1 degree C. Theoretical estimates of gaseous, evaporative and liquid cooling systems are done for all ITS layers. The results of the experiments done for evaporative and liquid heat drain systems are presented and discussed. The major technical problems of the evaporative systems' design are being considered: i) control of liquid supply; ii) vapour pressure control. Two concepts of the evaporative systems are proposed: 1) One channel system for joint transfer of two phases (liquid + gas); 2) Two channels system with separate transfer of phases. Both sy...
11. Quantification of uncertainties in turbulence modeling: A comparison of physics-based and random matrix theoretic approaches
International Nuclear Information System (INIS)
Wang, Jian-Xun; Sun, Rui; Xiao, Heng
2016-01-01
Highlights: • Compared physics-based and random matrix methods to quantify RANS model uncertainty. • Demonstrated applications of both methods in channel ow over periodic hills. • Examined the amount of information introduced in the physics-based approach. • Discussed implications to modeling turbulence in both near-wall and separated regions. - Abstract: Numerical models based on Reynolds-Averaged Navier-Stokes (RANS) equations are widely used in engineering turbulence modeling. However, the RANS predictions have large model-form uncertainties for many complex flows, e.g., those with non-parallel shear layers or strong mean flow curvature. Quantification of these large uncertainties originating from the modeled Reynolds stresses has attracted attention in the turbulence modeling community. Recently, a physics-based Bayesian framework for quantifying model-form uncertainties has been proposed with successful applications to several flows. Nonetheless, how to specify proper priors without introducing unwarranted, artificial information remains challenging to the current form of the physics-based approach. Another recently proposed method based on random matrix theory provides the prior distributions with maximum entropy, which is an alternative for model-form uncertainty quantification in RANS simulations. This method has better mathematical rigorousness and provides the most non-committal prior distributions without introducing artificial constraints. On the other hand, the physics-based approach has the advantages of being more flexible to incorporate available physical insights. In this work, we compare and discuss the advantages and disadvantages of the two approaches on model-form uncertainty quantification. In addition, we utilize the random matrix theoretic approach to assess and possibly improve the specification of priors used in the physics-based approach. The comparison is conducted through a test case using a canonical flow, the flow past
12. Nuclear energy policy analysis under uncertainties : applications of new utility theoretic approaches
International Nuclear Information System (INIS)
Ra, Ki Yong
1992-02-01
For the purpose of analyzing the nuclear energy policy under uncertainties, new utility theoretic approaches were applied. The main discoveries of new utility theories are that, firstly, the consequences can affect the perceived probabilities, secondly, the utilities are not fixed but can change, and finally, utilities and probabilities thus should be combined dependently to determine the overall worth of risky option. These conclusions were applied to develop the modified expected utility model and to establish the probabilistic nuclear safety criterion. The modified expected utility model was developed in order to resolve the inconsistencies between the expected utility model and the actual decision behaviors. Based on information theory and Bayesian inference, the modified probabilities were obtained as the stated probabilities times substitutional factors. The model theoretically predicts that the extreme value outcomes are perceived as to be more likely to occur than medium value outcomes. This prediction is consistent with the first finding of new utility theories that the consequences can after the perceived probabilities. And further with this theoretical prediction, the decision behavior of buying lottery ticket, of paying for insurance and of nuclear catastrophic risk aversion can well be explained. Through the numerical application, it is shown that the developed model can well explain the common consequence effect, common ratio effect and reflection effect. The probabilistic nuclear safety criterion for core melt frequency was established: Firstly, the distribution of the public's safety goal (DPSG) was proposed for representing the public's group preference under risk. Secondly, a new probabilistic safety criterion (PSC) was established, in which the DPSG was used as a benchmark for evaluating the results of probabilistic safety assessment. Thirdly, a log-normal distribution was proposed as the appropriate DPSG for core melt frequency using the
13. The role of the intervertebral disc in correction of scoliotic curves. A theoretical model of idiopathic scoliosis pathogenesis.
Science.gov (United States)
Grivas, T B; Vasiliadis, E S; Rodopoulos, G; Bardakos, N
2008-01-01
Wedging of the scoliotic inter-vertebral disc (IVD) was previously reported as a contributory factor for progression of idiopathic scoliotic (IS) curves. The present study introduces a theoretical model of IVD's role in IS pathogenesis and examines if, by reversing IVD wedging with conservative treatment (full- and night-time braces and exercises) or fusionless IS surgery with staples, we can correct the deformity of the immature spine. The proposed model implies the role of the diurnal variation and the asymmetric water distribution in the scoliotic IVD and the subsequent alteration of the mechanical environment of the adjacent vertebral growth plates. Modulation of the IVD by applying corrective forces on the scoliotic curve restores a close-to-normal force application on the vertebral growth plates through the Hueter-Volkmann principle and consequently prevents curve progression. The forces are now transmitted evenly to the growth plate and increase the rate of proliferation of chondrocytes at the corrected pressure side, the concave. Application of appropriately directed forces, ideally opposite to the apex of the deformity, likely leads to optimal correction. The wedging of the elastic IVD in the immature scoliotic spine could be reversed by application of corrective forces on it. Reversal of IVD wedging is thus amended into a "corrective", rather than "progressive", factor of the deformity. Through the proposed model, treatment of progressive IS with braces, exercises and fusionless surgery by anterior stapling could be effective.
14. Language acquisition is model-based rather than model-free.
Science.gov (United States)
Wang, Felix Hao; Mintz, Toben H
2016-01-01
Christiansen & Chater (C&C) propose that learning language is learning to process language. However, we believe that the general-purpose prediction mechanism they propose is insufficient to account for many phenomena in language acquisition. We argue from theoretical considerations and empirical evidence that many acquisition tasks are model-based, and that different acquisition tasks require different, specialized models.
15. Theoretical modeling of a gas clearance phase regulation mechanism for a pneumatically-driven split-Stirling-cycle cryocooler
Science.gov (United States)
Zhang, Cun-quan; Zhong, Cheng
2015-03-01
The concept of a new type of pneumatically-driven split-Stirling-cycle cryocooler with clearance-phase-adjustor is proposed. In this implementation, the gap between the phase-adjusting part and the cylinder of the spring chamber is used, instead of dry friction acting on the pneumatically-driven rod to control motion damping of the displacer and to adjust the phase difference between the compression piston and displacer. It has the advantages of easy damping adjustment, low cost, and simplified manufacturing and assembly. A theoretical model has been established to simulate its dynamic performance. The linear compressor is modeled under adiabatic conditions, and the displacement of the compression piston is experimentally rectified. The working characteristics of the compressor motor and the principal losses of cooling, including regenerator inefficiency loss, solid conduction loss, shuttle loss, pump loss and radiation loss, are taken into account. The displacer motion was modeled as a single-degree-of-freedom (SDOF) forced system. A set of governing equations can be solved numerically to simulate the cooler's performance. The simulation is useful for understanding the physical processes occurring in the cooler and for predicting the cooler's performance.
16. A theoretical derivation of the Hoek–Brown failure criterion for rock materials
Directory of Open Access Journals (Sweden)
Jianping Zuo
2015-08-01
Full Text Available This study uses a three-dimensional crack model to theoretically derive the Hoek–Brown rock failure criterion based on the linear elastic fracture theory. Specifically, we argue that a failure characteristic factor needs to exceed a critical value when macro-failure occurs. This factor is a product of the micro-failure orientation angle (characterizing the density and orientation of damaged micro-cracks and the changing rate of the angle with respect to the major principal stress (characterizing the microscopic stability of damaged cracks. We further demonstrate that the factor mathematically leads to the empirical Hoek–Brown rock failure criterion. Thus, the proposed factor is able to successfully relate the evolution of microscopic damaged crack characteristics to macro-failure. Based on this theoretical development, we also propose a quantitative relationship between the brittle–ductile transition point and confining pressure, which is consistent with experimental observations.
17. THEORETICAL FLOW MODEL THROUGH A CENTRIFUGAL PUMP USED FOR WATER SUPPLY IN AGRICULTURE IRRIGATION
Directory of Open Access Journals (Sweden)
SCHEAUA Fanel Dorel
2017-05-01
motion of the rotor. A theoretical model for calculating the flow of the working fluid through the interior of a centrifugal pump model is presented in this paper as well as the numerical analysis on the virtual model performed with the ANSYS CFX software in order to highlight the flow parameters and flow path-lines that are formed during centrifugal pump operation.
18. Theoretical microbial ecology without species
Science.gov (United States)
Tikhonov, Mikhail
2017-09-01
Ecosystems are commonly conceptualized as networks of interacting species. However, partitioning natural diversity of organisms into discrete units is notoriously problematic and mounting experimental evidence raises the intriguing question whether this perspective is appropriate for the microbial world. Here an alternative formalism is proposed that does not require postulating the existence of species as fundamental ecological variables and provides a naturally hierarchical description of community dynamics. This formalism allows approaching the species problem from the opposite direction. While the classical models treat a world of imperfectly clustered organism types as a perturbation around well-clustered species, the presented approach allows gradually adding structure to a fully disordered background. The relevance of this theoretical construct for describing highly diverse natural ecosystems is discussed.
19. A theoretical approach to sputtering due to molecular ion bombardment, 1
International Nuclear Information System (INIS)
Karashima, Shosuke; Ootoshi, Tsukuru; Kamiyama, Masahide; Kim, Pil-Hyon; Namba, Susumu.
1981-01-01
A shock wave model is proposed to explain theoretically the non-linear effects in sputtering phenomena by molecular ion bombardments. In this theory the sputtering processes are separated into two parts; one is due to linear effects and another is due to non-linear effects. The treatment of the linear parts is based on the statistical model by Schwarz and Helms concerning a broad range of atomic collision cascades. The non-linear parts are treated by the model of shock wave due to overlapping cascades, and useful equations to calculate the sputtering yields and the dynamical quantities in the system are derived. (author)
20. Using Mathematics, Mathematical Applications, Mathematical Modelling, and Mathematical Literacy: A Theoretical Study
Science.gov (United States)
Mumcu, Hayal Yavuz
2016-01-01
The purpose of this theoretical study is to explore the relationships between the concepts of using mathematics in the daily life, mathematical applications, mathematical modelling, and mathematical literacy. As these concepts are generally taken as independent concepts in the related literature, they are confused with each other and it becomes…
1. Experimental and Theoretical Deflections of Hybrid Composite Sandwich Panel under Four-point Bending Load
Directory of Open Access Journals (Sweden)
Jauhar Fajrin
2017-03-01
Full Text Available This paper presents a comparison of theoretical and experimental deflection of a hybrid sandwich panel under four-point bending load. The paper initially presents few basic equations developed under three-point load, followed by development of model under four-point bending load and a comparative analysis between theoretical and experimental results. It was found that the proposed model for predicting the deflection of hybrid sandwich panels provided fair agreement with the experimental values. Most of the sandwich panels showed theoretical deflection values higher than the experimental values, which is desirable in the design. It was also noticed that the introduction of intermediate layer does not contribute much to reduce the deflection of sandwich panel as the main contributor for the total deflection was the shear deformation of the core that mostly determined by the geometric of the samples and the thickness of the core.
2. Oxidation of organics in water in microfluidic electrochemical reactors: Theoretical model and experiments
International Nuclear Information System (INIS)
Scialdone, Onofrio; Guarisco, Chiara; Galia, Alessandro
2011-01-01
The electrochemical oxidation of organics in water performed in micro reactors on boron doped diamond (BDD) anode was investigated both theoretically and experimentally in order to find the influence of various operative parameters on the conversion and the current efficiency CE of the process. The electrochemical oxidation of formic acid (FA) was selected as a model case. High conversions for a single passage of the electrolytic solution inside the cell were obtained by operating with proper residence times and low distances between cathode and anode. The effect of initial concentration, flow rate and current density was investigated in detail. Theoretical predictions were in very good agreement with experimental results for both mass transfer control, oxidation reaction control and mixed kinetic regimes in spite of the fact that no adjustable parameters was used. Mass transfer process was successfully modelled by considering for simplicity a constant Sh number (e.g., a constant mass transfer coefficient k m ) for a process performed with no high values of the current intensity to minimize the effect of the gas bubbling on the flowdynamic pattern. For mixed kinetic regimes, two different modelling approaches were used. In the first one, the oxidation of organics at BDD was assumed to be mass transfer controlled and to occur with an intrinsic 100% CE when applied current density is higher than the limiting current density. In the second case, the CE of the process was modelled assuming that the competition between organic and water oxidation depends only on the electrodic material and on the nature and the concentration of the organic. In the latter case a better agreement between experimental data and theoretical predictions was observed.
3. Delayed hydride cracking: theoretical model testing to predict cracking velocity
International Nuclear Information System (INIS)
Mieza, Juan I.; Vigna, Gustavo L.; Domizzi, Gladys
2009-01-01
Pressure tubes from Candu nuclear reactors as any other component manufactured with Zr alloys are prone to delayed hydride cracking. That is why it is important to be able to predict the cracking velocity during the component lifetime from parameters easy to be measured, such as: hydrogen concentration, mechanical and microstructural properties. Two of the theoretical models reported in literature to calculate the DHC velocity were chosen and combined, and using the appropriate variables allowed a comparison with experimental results of samples from Zr-2.5 Nb tubes with different mechanical and structural properties. In addition, velocities measured by other authors in irradiated materials could be reproduced using the model described above. (author)
4. Modeling cognitive behavior in nuclear power plants: An overview of contributing theoretical traditions
International Nuclear Information System (INIS)
Woods, D.D.; Roth, E.M.
1986-01-01
This paper reviews the major theoretical literatures that are relevant to modeling human cognitive activities important to nuclear power plant safety. The traditions considered include control theory, communication theory, statistical decision theory, information processing models and symbolic processing models. The review reveals a gradual convergence towards models that incorporate elements from multiple traditions. Models from the control theory tradition have gradually evolved to include rich knowledge representations borrowed from the symbolic processing work. At the same time theorists in the symbolic processing tradition are beginning to grapple with some of the critical issues involved in modeling complex real world domain
5. By-product mutualism and the ambiguous effects of harsher environments - A game-theoretic model
NARCIS (Netherlands)
De Jaegher, Kris; Hoyer, Britta
2016-01-01
We construct two-player two-strategy game-theoretic models of by-product mutualism, where our focus lies on the way in which the probability of cooperation among players is affected by the degree of adversity facing the players. In our first model, cooperation consists of the production of a public
6. A proposed wind shift model for the German reactor safety study
International Nuclear Information System (INIS)
Aldrich, D.C.; Bayer, A.; Schueckler, M.
1979-04-01
To account for hourly wind direction changes, a wind shift model has been proposed. Using hourly recorded wind speed and direction data, the model modifies the angular distribution of radionuclide concentrations calculated by a straightline model, and is intended to better represent the concentrations in areas close to the reactor where potential doses might exceed the threshold level for early fatalities. 115 weather sequences were used, both with and without the proposed wind shift model, to calculate probability distributions for early fatalities. The use of the proposed model results in a reduction of the mean and peak values of that distribution by 36% and 29%, respectively. (orig./HP) [de
7. Physics of mind: Experimental confirmations of theoretical predictions.
Science.gov (United States)
Schoeller, Félix; Perlovsky, Leonid; Arseniev, Dmitry
2018-02-02
What is common among Newtonian mechanics, statistical physics, thermodynamics, quantum physics, the theory of relativity, astrophysics and the theory of superstrings? All these areas of physics have in common a methodology, which is discussed in the first few lines of the review. Is a physics of the mind possible? Is it possible to describe how a mind adapts in real time to changes in the physical world through a theory based on a few basic laws? From perception and elementary cognition to emotions and abstract ideas allowing high-level cognition and executive functioning, at nearly all levels of study, the mind shows variability and uncertainties. Is it possible to turn psychology and neuroscience into so-called "hard" sciences? This review discusses several established first principles for the description of mind and their mathematical formulations. A mathematical model of mind is derived from these principles. This model includes mechanisms of instincts, emotions, behavior, cognition, concepts, language, intuitions, and imagination. We clarify fundamental notions such as the opposition between the conscious and the unconscious, the knowledge instinct and aesthetic emotions, as well as humans' universal abilities for symbols and meaning. In particular, the review discusses in length evolutionary and cognitive functions of aesthetic emotions and musical emotions. Several theoretical predictions are derived from the model, some of which have been experimentally confirmed. These empirical results are summarized and we introduce new theoretical developments. Several unsolved theoretical problems are proposed, as well as new experimental challenges for future research. Copyright © 2017. Published by Elsevier B.V.
8. Evaluating the Staff at Enterprise: Several Theoretical and Methodological Aspects
Directory of Open Access Journals (Sweden)
Girman Alla P.
2017-03-01
Full Text Available The article is aimed at generalizing and systematizing various knowledge, related to evaluation of staff, on a common theoretical-methodological basis. Concept, objectives, directions, methods, and indicators for evaluating staff in the contemporary economy were analyzed. The topicality of using the theoretical developments on staff evaluation in actual practice of functioning of enterprises has been substantiated. A new approach to the procedure of evaluation of the total human resource of enterprise, based on the life cycle of organization, has been proposed. On the basis of the proposed scientific algorithmic step-by-step approach to the evaluation of staff, managers of companies can design their own models for staff evaluation, develop its separate elements. Prospects for further researches in this direction involve relation of staff evaluation to the life cycle of employee no less than the life cycle of enterprise. Management of the life cycle of employee represents methods for management of his development that would change the level of the employee’s professional maturity as result of a system impact.
9. Theoretical models for Type I and Type II supernova
International Nuclear Information System (INIS)
Woosley, S.E.; Weaver, T.A.
1985-01-01
Recent theoretical progress in understanding the origin and nature of Type I and Type II supernovae is discussed. New Type II presupernova models characterized by a variety of iron core masses at the time of collapse are presented and the sensitivity to the reaction rate 12 C(α,γ) 16 O explained. Stars heavier than about 20 M/sub solar/ must explode by a ''delayed'' mechanism not directly related to the hydrodynamical core bounce and a subset is likely to leave black hole remnants. The isotopic nucleosynthesis expected from these massive stellar explosions is in striking agreement with the sun. Type I supernovae result when an accreting white dwarf undergoes a thermonuclear explosion. The critical role of the velocity of the deflagration front in determining the light curve, spectrum, and, especially, isotopic nucleosynthesis in these models is explored. 76 refs., 8 figs
10. Exploring the relationship between volunteering and hospice sustainability in the UK: a theoretical model.
Science.gov (United States)
Scott, Ros; Jindal-Snape, Divya; Manwaring, Gaye
2018-05-02
To explore the relationship between volunteering and the sustainability of UK voluntary hospices. A narrative literature review was conducted to inform the development of a theoretical model. Eight databases were searched: CINAHL (EBSCO), British Nursing Index, Intute: Health and Life Sciences, ERIC, SCOPUS, ASSIA (CSA), Cochrane Library and Google Scholar. A total of 90 documents were analysed. Emerging themes included the importance of volunteering to the hospice economy and workforce, the quality of services, and public and community support. Findings suggest that hospice sustainability is dependent on volunteers; however, the supply and retention of volunteers is affected by internal and external factors. A theoretical model was developed to illustrate the relationship between volunteering and hospice sustainability. It demonstrates the factors necessary for hospice sustainability and the reciprocal impact that these factors and volunteering have on each other. The model has a practical application as an assessment framework and strategic planning tool.
11. Developing a theoretical model to investigate thermal performance of a thin membrane heat-pipe solar collector
International Nuclear Information System (INIS)
Riffat, S.B.; Zhao, X.; Doherty, P.S.
2005-01-01
A thin membrane heat-pipe solar collector was designed and constructed to allow heat from solar radiation to be collected at a relatively high efficiency while keeping the capital cost low. A theoretical model incorporating a set of heat balance equations was developed to analyse heat transfer processes occurring in separate regions of the collector, i.e., the top cover, absorber and condenser/manifold areas, and examine their relationship. The thermal performance of the collector was investigated using the theoretical model. The modelling predictions were validated using the experimental data from a referred source. The test efficiency was found to be in the range 40-70%, which is a bitter lower than the values predicted by modelling. The factors influencing these results were investigated
12. An exploratory game-theoretic analysis of biomass electricity generation supply chain
International Nuclear Information System (INIS)
Nasiri, Fuzhan; Zaccour, Georges
2009-01-01
This study proposes a game-theoretic approach to model and analyze the process of utilizing biomass for power generation considering three players: distributor, facility developer, and participating farmer. We characterize the Nash equilibrium of the sequential game and discuss its features. A special attention is devoted to the analysis of the impact of incentives and initial target on the equilibrium, in which the biomass is part of electricity production.
13. Theoretical results on the tandem junction solar cell based on its Ebers-Moll transistor model
Science.gov (United States)
Goradia, C.; Vaughn, J.; Baraona, C. R.
1980-01-01
A one-dimensional theoretical model of the tandem junction solar cell (TJC) with base resistivity greater than about 1 ohm-cm and under low level injection has been derived. This model extends a previously published conceptual model which treats the TJC as an npn transistor. The model gives theoretical expressions for each of the Ebers-Moll type currents of the illuminated TJC and allows for the calculation of the spectral response, I(sc), V(oc), FF and eta under variation of one or more of the geometrical and material parameters and 1MeV electron fluence. Results of computer calculations based on this model are presented and discussed. These results indicate that for space applications, both a high beginning of life efficiency, greater than 15% AM0, and a high radiation tolerance can be achieved only with thin (less than 50 microns) TJC's with high base resistivity (greater than 10 ohm-cm).
14. A Communication Theoretical Modeling of Axonal Propagation in Hippocampal Pyramidal Neurons.
Science.gov (United States)
Ramezani, Hamideh; Akan, Ozgur B
2017-06-01
Understanding the fundamentals of communication among neurons, known as neuro-spike communication, leads to reach bio-inspired nanoscale communication paradigms. In this paper, we focus on a part of neuro-spike communication, known as axonal transmission, and propose a realistic model for it. The shape of the spike during axonal transmission varies according to previously applied stimulations to the neuron, and these variations affect the amount of information communicated between neurons. Hence, to reach an accurate model for neuro-spike communication, the memory of axon and its effect on the axonal transmission should be considered, which are not studied in the existing literature. In this paper, we extract the important factors on the memory of axon and define memory states based on these factors. We also describe the transition among these states and the properties of axonal transmission in each of them. Finally, we demonstrate that the proposed model can follow changes in the axonal functionality properly by simulating the proposed model and reporting the root mean square error between simulation results and experimental data.
15. An Experimental and Theoretical Investigation of Micropiiting in Wind Turbine Gears and Bearings
Energy Technology Data Exchange (ETDEWEB)
Kahraman, Ahmet
2012-03-28
In this research study, the micro-pitting related contact failures of wind turbine gearbox components were investigated both experimentally and theoretically. On the experimental side, a twin-disk type test machine was used to simulate wind turbine transmission contacts in terms of their kinematic (rolling and sliding speeds), surface roughnesses, material parameters and lubricant conditions. A test matrix that represents the ranges of contact conditions of the wind turbine gear boxes was defined and executed to bring an empirical understanding to the micro-pitting problem in terms of key contact parameters and operating conditions. On the theoretical side, the first deterministic micro-pitting model based on a mixed elastohydrodynamic lubrication formulations and multi-axial near-surface crack initiation model was developed. This physics-based model includes actual instantaneous asperity contacts associated with real surface roughness profiles for predicting the onset of the micro-pit formation. The predictions from the theoretical model were compared to the experimental data for validation of the models. The close agreement between the model and measurements was demonstrated. With this, the proposed model can be deemed suitable for identifying the mechanisms leading to micro-pitting of gear and bearing surfaces of wind turbine gear boxes, including all key material, lubricant and surface engineering aspects of the problem, and providing solutions to these micro-pitting problems.
16. A proposed residual stress model for oblique turning
International Nuclear Information System (INIS)
Elkhabeery, M. M.
2001-01-01
A proposed mathematical model is presented for predicting the residual stresses caused by turning. Effects of change in tool free length, cutting speed, feed rate, and the tensile strength of work piece material on the maximum residual stress are investigated. The residual stress distribution in the surface region due to turning under unlubricated condition is determined using a deflection etching technique. To reduce the number of experiments required and build the mathematical model for these variables, Response Surface Methodology (RSM) is used. In addition, variance analysis and an experimental check are conducted to determine the prominent parameters and the adequacy of the model. The results show that the tensile stress of the work piece material, cutting speed, and feed rate have significant effects on the maximum residual stresses. The proposed model, that offering good correlation between the experimental and predicted results, is useful in selecting suitable cutting parameters for the machining of different materials. (author)
17. Experimental study and theoretical modelling of two-phase flow in a converging diverging nozzle
International Nuclear Information System (INIS)
Selmer-Olsen, Stale
1991-01-01
A theoretical and experimental study of high quality two-phase flows in converging-diverging nozzles is presented. The main objectives are the prediction of critical (choked) flow rates and the evolution of characteristic parameters towards the nozzle outlet. First, a thorough analysis of available models shows the importance of a correct modelling of the mechanical and thermal interactions between the gas and liquid phases. As a second step, a purely dispersed flow model is considered. The solution algorithm which is utilized describes accurately the critical (choked) flow conditions as well as the topology of the solutions. The dispersed flow model accounts for effects on the gas flow rate of the upstream and the downstream pressures, the liquid flow rate and the nozzle geometry. The pressure profile along the nozzle and the location of the critical cross-section are also well predicted. The flow is shown to switch from critical to sub-critical when the liquid flow rate is increased, all other control parameters at the inlet and the outlet maintained. This new finding is interpreted as a result of the possible location of the critical cross-section anywhere in the diverging part of the nozzle. Moreover, the experiments show that the critical (choked) gas flow rate depends on the inlet configuration of gas/liquid. In the third step, a careful analysis of the data is used as a basis for proposing a new dispersed-annular flow model. This model accounts for the liquid flowing both as a liquid film and as entrained droplets in the core, non-developed flow is accounted for as well as flow separation in the diffuser. Finally, advanced local measuring techniques of pressure, film thickness and film velocity have been developed in the course of the work. In particular film thickness measurements allowed the development of the flow structure to be understood. (author) [fr
18. Theoretical modeling of a new structure of III-V tandem solar cells by ...
African Journals Online (AJOL)
junction solar cell is theoretically investigated by optimizing the thickness of GaAs and GaInPandusing a new optical model to separate the junction between the two solar cell in order to solve problems of tunnel junction and difficulties of fabrication.
19. An effectiveness analysis of healthcare systems using a systems theoretic approach
Directory of Open Access Journals (Sweden)
Inder Kerry
2009-10-01
Full Text Available Abstract Background The use of accreditation and quality measurement and reporting to improve healthcare quality and patient safety has been widespread across many countries. A review of the literature reveals no association between the accreditation system and the quality measurement and reporting systems, even when hospital compliance with these systems is satisfactory. Improvement of health care outcomes needs to be based on an appreciation of the whole system that contributes to those outcomes. The research literature currently lacks an appropriate analysis and is fragmented among activities. This paper aims to propose an integrated research model of these two systems and to demonstrate the usefulness of the resulting model for strategic research planning. Methods/design To achieve these aims, a systematic integration of the healthcare accreditation and quality measurement/reporting systems is structured hierarchically. A holistic systems relationship model of the administration segment is developed to act as an investigation framework. A literature-based empirical study is used to validate the proposed relationships derived from the model. Australian experiences are used as evidence for the system effectiveness analysis and design base for an adaptive-control study proposal to show the usefulness of the system model for guiding strategic research. Results Three basic relationships were revealed and validated from the research literature. The systemic weaknesses of the accreditation system and quality measurement/reporting system from a system flow perspective were examined. The approach provides a system thinking structure to assist the design of quality improvement strategies. The proposed model discovers a fourth implicit relationship, a feedback between quality performance reporting components and choice of accreditation components that is likely to play an important role in health care outcomes. An example involving accreditation
20. An effectiveness analysis of healthcare systems using a systems theoretic approach.
Science.gov (United States)
Chuang, Sheuwen; Inder, Kerry
2009-10-24
The use of accreditation and quality measurement and reporting to improve healthcare quality and patient safety has been widespread across many countries. A review of the literature reveals no association between the accreditation system and the quality measurement and reporting systems, even when hospital compliance with these systems is satisfactory. Improvement of health care outcomes needs to be based on an appreciation of the whole system that contributes to those outcomes. The research literature currently lacks an appropriate analysis and is fragmented among activities. This paper aims to propose an integrated research model of these two systems and to demonstrate the usefulness of the resulting model for strategic research planning. To achieve these aims, a systematic integration of the healthcare accreditation and quality measurement/reporting systems is structured hierarchically. A holistic systems relationship model of the administration segment is developed to act as an investigation framework. A literature-based empirical study is used to validate the proposed relationships derived from the model. Australian experiences are used as evidence for the system effectiveness analysis and design base for an adaptive-control study proposal to show the usefulness of the system model for guiding strategic research. Three basic relationships were revealed and validated from the research literature. The systemic weaknesses of the accreditation system and quality measurement/reporting system from a system flow perspective were examined. The approach provides a system thinking structure to assist the design of quality improvement strategies. The proposed model discovers a fourth implicit relationship, a feedback between quality performance reporting components and choice of accreditation components that is likely to play an important role in health care outcomes. An example involving accreditation surveyors is developed that provides a systematic search for
1. A proposed experiment on ball lightning model
International Nuclear Information System (INIS)
Ignatovich, Vladimir K.; Ignatovich, Filipp V.
2011-01-01
Highlights: → We propose to put a glass sphere inside an excited gas. → Then to put a light ray inside the glass in a whispering gallery mode. → If the light is resonant to gas excitation, it will be amplified at every reflection. → In ms time the light in the glass will be amplified, and will melt the glass. → A liquid shell kept integer by electrostriction forces is the ball lightning model. -- Abstract: We propose an experiment for strong light amplification at multiple total reflections from active gaseous media.
2. DETECTION OF EARNINGS MANAGEMENT - A PROPOSED FRAMEWORK BASED ON ACCRUALS APPROACH RESEARCH DESIGNS
OpenAIRE
Vladu Alina Beattrice; Cuzdriorean Dan Dacian
2011-01-01
The scope of this theoretical research is to outline recommendations for improving the complex process of detection of accounts manipulation. In this respect we turned to the previous literature and assessed empirical studies in order to be able to develop a robust model for understand the process of detection for accounts manipulation and further to ease the path of detection by proposing as we stated above a theoretical framework in this respect. Since there is a constant conjecture between...
3. Theoretical and experimental stress analyses of ORNL thin-shell cylinder-to-cylinder model 3
International Nuclear Information System (INIS)
Gwaltney, R.C.; Bolt, S.E.; Corum, J.M.; Bryson, J.W.
1975-06-01
4. Theoretical and experimental stress analyses of ORNL thin-shell cylinder-to-cylinder model 4
International Nuclear Information System (INIS)
Gwaltney, R.C.; Bolt, S.E.; Bryson, J.W.
1975-06-01
5. A theoretical global optimization method for vapor-compression refrigeration systems based on entransy theory
International Nuclear Information System (INIS)
Xu, Yun-Chao; Chen, Qun
2013-01-01
The vapor-compression refrigeration systems have been one of the essential energy conversion systems for humankind and exhausting huge amounts of energy nowadays. Surrounding the energy efficiency promotion of the systems, there are lots of effectual optimization methods but mainly relied on engineering experience and computer simulations rather than theoretical analysis due to the complex and vague physical essence. We attempt to propose a theoretical global optimization method based on in-depth physical analysis for the involved physical processes, i.e. heat transfer analysis for condenser and evaporator, through introducing the entransy theory and thermodynamic analysis for compressor and expansion valve. The integration of heat transfer and thermodynamic analyses forms the overall physical optimization model for the systems to describe the relation between all the unknown parameters and known conditions, which makes theoretical global optimization possible. With the aid of the mathematical conditional extremum solutions, an optimization equation group and the optimal configuration of all the unknown parameters are analytically obtained. Eventually, via the optimization of a typical vapor-compression refrigeration system with various working conditions to minimize the total heat transfer area of heat exchangers, the validity and superior of the newly proposed optimization method is proved. - Highlights: • A global optimization method for vapor-compression systems is proposed. • Integrating heat transfer and thermodynamic analyses forms the optimization model. • A mathematical relation between design parameters and requirements is derived. • Entransy dissipation is introduced into heat transfer analysis. • The validity of the method is proved via optimization of practical cases
6. Mechanisms of plasma-assisted catalyzed growth of carbon nanofibres: a theoretical modeling
Science.gov (United States)
Gupta, R.; Sharma, S. C.; Sharma, R.
2017-02-01
A theoretical model is developed to study the nucleation and catalytic growth of carbon nanofibers (CNFs) in a plasma environment. The model includes the charging of CNFs, the kinetics of the plasma species (neutrals, ions and electrons), plasma pretreatment of the catalyst film, and various processes unique to a plasma-exposed catalyst surface such as adsorption of neutrals, thermal dissociation of neutrals, ion induced dissociation, interaction between neutral species, stress exerted by the growing graphene layers and the growth of CNFs. Numerical calculations are carried out for typical glow discharge plasma parameters. It is found that the growth rate of CNFs decreases with the catalyst nanoparticle size. In addition, the effect of hydrogen on the catalyst nanoparticle size, CNF tip diameter, CNF growth rate, and the tilt angle of the graphene layers to the fiber axis are investigated. Moreover, it is also found that the length of CNFs increases with hydrocarbon number density. Our theoretical findings are in good agreement with experimental observations and can be extended to enhance the field emission characteristics of CNFs.
7. Theoretical Modeling and Simulation of Phase-Locked Loop (PLL for Clock Data Recovery (CDR
Directory of Open Access Journals (Sweden)
2012-01-01
Full Text Available Modern communication and computer systems require rapid (Gbps, efficient and large bandwidth data transfers. Agressive scaling of digital integrated systems allow buses and communication controller circuits to be integrated with the microprocessor on the same chip. The Peripheral Component Interconnect Express (PCIe protocol handles all communcation between the central processing unit (CPU and hardware devices. PCIe buses require efficient clock data recovery circuits (CDR to recover clock signals embedded in data during transmission. This paper describes the theoretical modeling and simulation of a phase-locked loop (PLL used in a CDR circuit. A simple PLL architecture for a 5 GHz CDR circuit is proposed and elaborated in this work. Simulations were carried out using a Hardware Description Language, Verilog-AMS. The effect of jitter on the proposed design is also simulated and evaluated in this work. It was found that the proposed design is robust against both input and VCO jitter.ABSTRAK: Sistem komunikasi dan komputer moden memerlukan pemindahan data yang cekap (Gbps, dan bandwidth yang besar. Pengecilan agresif menggunakan teknik sistem digital bersepadu membenarkan bas dan litar pengawal komunikasi disatukan dengan mikroprocessor dalam cip yang sama. Protokol persisian komponen sambung tara ekspres (PCIe mengendalikan semua komunikasi antara unit pemprosesan pusat (CPU dan peranti perkakasan. Bas PCIe memerlukan litar jam pemulihan data (CDR yang cekap untuk mendapatkan kembali isyarat jam yang tertanam dalam data semasa transmisi. Karya ini menerangkan teori pemodelan dan simulasi gelung fasa terkunci (PLL untuk CDR. Rekabentuk 5 GHz PLL yang mudah telah dicadangkan dalm kertas kerja ini. Simulasi telah dijalankan menggunakan perisian verilog-AMS. Simulasi mengunnakan kesan ketar dalam reka bentuk yang dicadangkan telah dinilai. Reka bentuk yang dicadangkan terbukti teguh mengatasi ganguan ketar di input dan VCO.KEY WORDS
8. An assessment of some theoretical models used for the calculation of the refractive index of InXGa1-xAs
Science.gov (United States)
Engelbrecht, J. A. A.
2018-04-01
Theoretical models used for the determination of the refractive index of InXGa1-XAs are reviewed and compared. Attention is drawn to some problems experienced with some of the models. Models also extended to the mid-infrared region of the electromagnetic spectrum. Theoretical results in the mid-infrared region are then compared to previously published experimental results.
9. A New Theoretical Approach to Postsecondary Student Disability: Disability-Diversity (Dis)Connect Model
Science.gov (United States)
Aquino, Katherine C.
2016-01-01
Disability is often viewed as an obstacle to postsecondary inclusion, but not a characteristic of student diversity. Additionally, current theoretical frameworks isolate disability from other student diversity characteristics. In response, a new conceptual framework, the Disability-Diversity (Dis)Connect Model (DDDM), was created to address…
10. A Theoretical Model for Meaning Construction through Constructivist Concept Learning
DEFF Research Database (Denmark)
The central focus of this Ph.D. research is on ‘Logic and Cognition’ and, more specifically, this research covers the quintuple (Logic and Logical Philosophy, Philosophy of Education, Educational Psychology, Cognitive Science, Computer Science). The most significant contributions of this Ph.D. di...... of ‘learning’, ‘mentoring’, and ‘knowledge’ within learning and knowledge acquisition systems. Constructivism as an epistemology and as a model of knowing and, respectively as a theoretical model of learning builds up the central framework of this research........D. dissertation are conceptual, logical, terminological, and semantic analysis of Constructivist Concept Learning (specifically, in the context of humans’ interactions with their environment and with other agents). This dissertation is concerned with the specification of the conceptualisation of the phenomena...
11. Reduced analysis and confirmatory research on co-adaptability theoretical solution to conflicting events in construction engineering projects
Institute of Scientific and Technical Information of China (English)
2010-01-01
The co-adaptability theoretical solution to conflicting events in construction engineering projects has three problems. First, the transformation of constraint conditions of theoretical solution is very difficult in practical engineering applications; second, some coefficients are difficult to be determined; third, there are overfull circular arithmetic operations involved in it. To resolve these problems, a new method to reduce the theoretical solution complications is proposed. By analyzing the operating mechanism of theoretical solution model, redundancies in the theoretical solution can be eliminated, and the ISM mapping with the co-adaptability solution can be set up. Based on this approach, a procedure to solve practical conflicting events in construction projects is established by replacing characteristic variables with mathematic variables. The research results show that the procedure can replace the co-adaptability theoretical solution effectively and solve practical conflicting events in construction projects.
12. A theoretical interpretation of EPR and ENDOR
International Nuclear Information System (INIS)
Matos, M.O.M. de.
1975-08-01
To interpret the EPR and ENDOR results of the U 2 center in SrF 2 , two wavefunctions are proposed to describe the unpaired electron of the defect. Use is made of two different models in order to obtain the wavefunctions: the Heitler-London and that of molecular orbitals models. The Pauli repulsion (overlap of wavefunctions) is discussed as well as covalency mechanisms and their influence in the calculation of the hyperfine constants due to magnetic interaction of the unpaired electron and the magnetic nucleus of the cristal. A small amount of covalency between the ground state of the interstitial Hydrogen atom and the 2p shell of the F - ions of the first cristaline shell is introduced fenomenologically in the molecular orbitals model. Both methods are discussed by comparing the theoretical calculations of the hyperfine constants with the measured experimental values obtained with the EPR and ENDOR techniques. (Author) [pt
13. Lung Cancer Screening Participation: Developing a Conceptual Model to Guide Research.
Science.gov (United States)
Carter-Harris, Lisa; Davis, Lorie L; Rawl, Susan M
2016-11-01
To describe the development of a conceptual model to guide research focused on lung cancer screening participation from the perspective of the individual in the decision-making process. Based on a comprehensive review of empirical and theoretical literature, a conceptual model was developed linking key psychological variables (stigma, medical mistrust, fatalism, worry, and fear) to the health belief model and precaution adoption process model. Proposed model concepts have been examined in prior research of either lung or other cancer screening behavior. To date, a few studies have explored a limited number of variables that influence screening behavior in lung cancer specifically. Therefore, relationships among concepts in the model have been proposed and future research directions presented. This proposed model is an initial step to support theoretically based research. As lung cancer screening becomes more widely implemented, it is critical to theoretically guide research to understand variables that may be associated with lung cancer screening participation. Findings from future research guided by the proposed conceptual model can be used to refine the model and inform tailored intervention development.
14. INTRODUCTION: Theoretical Models as Mass Media Practice: Perspectives from the West
DEFF Research Database (Denmark)
Thomsen, Line
2007-01-01
What is journalism? How does it exist and why? How does journalism define itself and in what ways can we make use of looking theoretically at the practice of it? These were the central themes of our workshop; Theoretical Models as Mass Media Practice held at the ‘Minding the Gap’ conference...... an exceptional framework for understanding the workings of mass media while helping the press reflect over these workings too. In a time of change for the journalistic profession, when media convergence is growing; the media is marked by deregulation and fewer journalists are being asked to do more...... at Reuters Institute in May 2007, from which this collection of papers has been selected. As with the other workshops during the conference, the majority of our panellists were themselves once media practitioners. It is my opinion that this background and inside knowledge of the field in itself can provide...
15. Activity systems modeling as a theoretical lens for social exchange studies
Directory of Open Access Journals (Sweden)
Ernest Jones
2016-01-01
Full Text Available The social exchange perspective seeks to acknowledge, understand and predict the dynamics of social interactions. Empirical research involving social exchange constructs have grown to be highly technical including confirmatory factor analysis to assess construct distinctiveness and structural equation modeling to assess construct causality. Each study seemingly strives to assess how underlying social exchange theoretic constructs interrelate. Yet despite this methodological depth and resultant explanatory and predictive power, a significant number of studies report findings that, once synthesized, suggest an underlying persistent threat of conceptual or construct validity brought about by a search for epistemological parsimony. Further, it is argued that a methodological approach that embraces inherent complexity such as activity systems modeling facilitates the search for simplified models while not ignoring contextual factors.
16. A theoretical and empirical evaluation and extension of the Todaro migration model.
Science.gov (United States)
Salvatore, D
1981-11-01
"This paper postulates that it is theoretically and empirically preferable to base internal labor migration on the relative difference in rural-urban real income streams and rates of unemployment, taken as separate and independent variables, rather than on the difference in the expected real income streams as postulated by the very influential and often quoted Todaro model. The paper goes on to specify several important ways of extending the resulting migration model and improving its empirical performance." The analysis is based on Italian data. excerpt
17. Theoretical studies in medium-energy nuclear and hadronic physics
International Nuclear Information System (INIS)
Horowitz, C.J.; Macfarlane, M.H.; Matsui, T.; Serot, B.D.
1993-01-01
A proposal for theoretical nuclear physics research is made for the period April 1, 1993 through March 31, 1996. Research is proposed in the following areas: relativistic many-body theory of nuclei and nuclear matter, quasifree electroweak scattering and strange quarks in nuclei, dynamical effects in (e,e'p) scattering at large momentum transfer, investigating the nucleon's parton sea with polarized leptoproduction, physics of ultrarelativistic nucleus endash nucleus collisions, QCD sum rules and hadronic properties, non-relativistic models of nuclear reactions, and spin and color correlations in a quark-exchange model of nuclear matter. Highlights of recent research, vitae of principal investigators, and lists of publications and invited talks are also given. Recent research dealt primarily with medium-energy nuclear physics, relativistic theories of nuclei and the nuclear response, the nuclear equation of state under extreme conditions, the dynamics of the quark endash gluon plasma in relativistic heavy-ion collisions, and theories of the nucleon endash nucleon force
18. A reduced theoretical model for estimating condensation effects in combustion-heated hypersonic tunnel
Science.gov (United States)
Lin, L.; Luo, X.; Qin, F.; Yang, J.
2018-03-01
As one of the combustion products of hydrocarbon fuels in a combustion-heated wind tunnel, water vapor may condense during the rapid expansion process, which will lead to a complex two-phase flow inside the wind tunnel and even change the design flow conditions at the nozzle exit. The coupling of the phase transition and the compressible flow makes the estimation of the condensation effects in such wind tunnels very difficult and time-consuming. In this work, a reduced theoretical model is developed to approximately compute the nozzle-exit conditions of a flow including real-gas and homogeneous condensation effects. Specifically, the conservation equations of the axisymmetric flow are first approximated in the quasi-one-dimensional way. Then, the complex process is split into two steps, i.e., a real-gas nozzle flow but excluding condensation, resulting in supersaturated nozzle-exit conditions, and a discontinuous jump at the end of the nozzle from the supersaturated state to a saturated state. Compared with two-dimensional numerical simulations implemented with a detailed condensation model, the reduced model predicts the flow parameters with good accuracy except for some deviations caused by the two-dimensional effect. Therefore, this reduced theoretical model can provide a fast, simple but also accurate estimation of the condensation effect in combustion-heated hypersonic tunnels.
19. Toward a Theoretical Model of Decision-Making and Resistance to Change among Higher Education Online Course Designers
Science.gov (United States)
Dodd, Bucky J.
2013-01-01
Online course design is an emerging practice in higher education, yet few theoretical models currently exist to explain or predict how the diffusion of innovations occurs in this space. This study used a descriptive, quantitative survey research design to examine theoretical relationships between decision-making style and resistance to change…
20. Experimental and theoretical study of magnetohydrodynamic ship models.
Science.gov (United States)
Cébron, David; Viroulet, Sylvain; Vidal, Jérémie; Masson, Jean-Paul; Viroulet, Philippe
2017-01-01
Magnetohydrodynamic (MHD) ships represent a clear demonstration of the Lorentz force in fluids, which explains the number of students practicals or exercises described on the web. However, the related literature is rather specific and no complete comparison between theory and typical small scale experiments is currently available. This work provides, in a self-consistent framework, a detailed presentation of the relevant theoretical equations for small MHD ships and experimental measurements for future benchmarks. Theoretical results of the literature are adapted to these simple battery/magnets powered ships moving on salt water. Comparison between theory and experiments are performed to validate each theoretical step such as the Tafel and the Kohlrausch laws, or the predicted ship speed. A successful agreement is obtained without any adjustable parameter. Finally, based on these results, an optimal design is then deduced from the theory. Therefore this work provides a solid theoretical and experimental ground for small scale MHD ships, by presenting in detail several approximations and how they affect the boat efficiency. Moreover, the theory is general enough to be adapted to other contexts, such as large scale ships or industrial flow measurement techniques.
1. Experimental and theoretical study of magnetohydrodynamic ship models.
Directory of Open Access Journals (Sweden)
David Cébron
Full Text Available Magnetohydrodynamic (MHD ships represent a clear demonstration of the Lorentz force in fluids, which explains the number of students practicals or exercises described on the web. However, the related literature is rather specific and no complete comparison between theory and typical small scale experiments is currently available. This work provides, in a self-consistent framework, a detailed presentation of the relevant theoretical equations for small MHD ships and experimental measurements for future benchmarks. Theoretical results of the literature are adapted to these simple battery/magnets powered ships moving on salt water. Comparison between theory and experiments are performed to validate each theoretical step such as the Tafel and the Kohlrausch laws, or the predicted ship speed. A successful agreement is obtained without any adjustable parameter. Finally, based on these results, an optimal design is then deduced from the theory. Therefore this work provides a solid theoretical and experimental ground for small scale MHD ships, by presenting in detail several approximations and how they affect the boat efficiency. Moreover, the theory is general enough to be adapted to other contexts, such as large scale ships or industrial flow measurement techniques.
2. Status of molten fuel coolant interaction studies and theoretical modelling work at IGCAR
International Nuclear Information System (INIS)
Rao, P.B.; Singh, Om Pal; Singh, R.S.
1994-01-01
The status of Molten Fuel Coolant Interaction (MFCI) studies is reviewed and some of the important observations made are presented. A new model for MFCI that is developed at IGCAR by considering the various mechanisms in detail is described. The model is validated and compared with the available experimental data and theoretical work at different stages of its development. Several parametric studies that are carried using this model are described. The predictions from this model have been found to be satisfactory, considering the complexity of the MFCI. A need for more comprehensive and MFCI-specific experimental tests is brought out. (author)
3. A Game-theoretic Framework for Network Coding Based Device-to-Device Communications
KAUST Repository
Douik, Ahmed S.; Sorour, Sameh; Tembine, Hamidou; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim
2016-01-01
This paper investigates the delay minimization problem for instantly decodable network coding (IDNC) based deviceto- device (D2D) communications. In D2D enabled systems, users cooperate to recover all their missing packets. The paper proposes a game theoretic framework as a tool for improving the distributed solution by overcoming the need for a central controller or additional signaling in the system. The session is modeled by self-interested players in a non-cooperative potential game. The utility functions are designed so as increasing individual payoff results in a collective behavior achieving both a desirable system performance in a shared network environment and the Nash equilibrium. Three games are developed whose first reduces the completion time, the second the maximum decoding delay and the third the sum decoding delay. The paper, further, improves the formulations by including a punishment policy upon collision occurrence so as to achieve the Nash bargaining solution. Learning algorithms are proposed for systems with complete and incomplete information, and for the imperfect feedback scenario. Numerical results suggest that the proposed game-theoretical formulation provides appreciable performance gain against the conventional point-to-multipoint (PMP), especially for reliable user-to-user channels.
4. A Game-theoretic Framework for Network Coding Based Device-to-Device Communications
KAUST Repository
Douik, Ahmed
2016-06-29
This paper investigates the delay minimization problem for instantly decodable network coding (IDNC) based deviceto- device (D2D) communications. In D2D enabled systems, users cooperate to recover all their missing packets. The paper proposes a game theoretic framework as a tool for improving the distributed solution by overcoming the need for a central controller or additional signaling in the system. The session is modeled by self-interested players in a non-cooperative potential game. The utility functions are designed so as increasing individual payoff results in a collective behavior achieving both a desirable system performance in a shared network environment and the Nash equilibrium. Three games are developed whose first reduces the completion time, the second the maximum decoding delay and the third the sum decoding delay. The paper, further, improves the formulations by including a punishment policy upon collision occurrence so as to achieve the Nash bargaining solution. Learning algorithms are proposed for systems with complete and incomplete information, and for the imperfect feedback scenario. Numerical results suggest that the proposed game-theoretical formulation provides appreciable performance gain against the conventional point-to-multipoint (PMP), especially for reliable user-to-user channels.
5. Clusters of DNA induced by ionizing radiation: formation of short DNA fragments. I. Theoretical modeling
Science.gov (United States)
Holley, W. R.; Chatterjee, A.
1996-01-01
We have developed a general theoretical model for the interaction of ionizing radiation with chromatin. Chromatin is modeled as a 30-nm-diameter solenoidal fiber comprised of 20 turns of nucleosomes, 6 nucleosomes per turn. Charged-particle tracks are modeled by partitioning the energy deposition between primary track core, resulting from glancing collisions with 100 eV or less per event, and delta rays due to knock-on collisions involving energy transfers >100 eV. A Monte Carlo simulation incorporates damages due to the following molecular mechanisms: (1) ionization of water molecules leading to the formation of OH, H, eaq, etc.; (2) OH attack on sugar molecules leading to strand breaks: (3) OH attack on bases; (4) direct ionization of the sugar molecules leading to strand breaks; (5) direct ionization of the bases. Our calculations predict significant clustering of damage both locally, over regions up to 40 bp and over regions extending to several kilobase pairs. A characteristic feature of the regional damage predicted by our model is the production of short fragments of DNA associated with multiple nearby strand breaks. The shapes of the spectra of DNA fragment lengths depend on the symmetries or approximate symmetries of the chromatin structure. Such fragments have subsequently been detected experimentally and are reported in an accompanying paper (B. Rydberg, Radiat, Res. 145, 200-209, 1996) after exposure to both high- and low-LET radiation. The overall measured yields agree well quantitatively with the theoretical predictions. Our theoretical results predict the existence of a strong peak at about 85 bp, which represents the revolution period about the nucleosome. Other peaks at multiples of about 1,000 bp correspond to the periodicity of the particular solenoid model of chromatin used in these calculations. Theoretical results in combination with experimental data on fragmentation spectra may help determine the consensus or average structure of the
6. Representing general theoretical concepts in structural equation models: The role of composite variables
Science.gov (United States)
Grace, J.B.; Bollen, K.A.
2008-01-01
Structural equation modeling (SEM) holds the promise of providing natural scientists the capacity to evaluate complex multivariate hypotheses about ecological systems. Building on its predecessors, path analysis and factor analysis, SEM allows for the incorporation of both observed and unobserved (latent) variables into theoretically-based probabilistic models. In this paper we discuss the interface between theory and data in SEM and the use of an additional variable type, the composite. In simple terms, composite variables specify the influences of collections of other variables and can be helpful in modeling heterogeneous concepts of the sort commonly of interest to ecologists. While long recognized as a potentially important element of SEM, composite variables have received very limited use, in part because of a lack of theoretical consideration, but also because of difficulties that arise in parameter estimation when using conventional solution procedures. In this paper we present a framework for discussing composites and demonstrate how the use of partially-reduced-form models can help to overcome some of the parameter estimation and evaluation problems associated with models containing composites. Diagnostic procedures for evaluating the most appropriate and effective use of composites are illustrated with an example from the ecological literature. It is argued that an ability to incorporate composite variables into structural equation models may be particularly valuable in the study of natural systems, where concepts are frequently multifaceted and the influence of suites of variables are often of interest. ?? Springer Science+Business Media, LLC 2007.
7. Clusters of DNA damage induced by ionizing radiation: Formation of short DNA fragments. I. Theoretical modeling
International Nuclear Information System (INIS)
Holley, W.R.; Chatterjee, A.
1996-01-01
We have developed a general theoretical model for the interaction of ionizing radiation with chromatin. Chromatin is modeled as a 30-nm-diameter solenoidal fiber composed of 20 turns of nucleosomes, 6 nucleosomes per turn. Charged-particle tracks are modeled by partitioning the energy deposition between primary track core, resulting from glancing collisions with 100 eV or less per event, and δ rays due to knock-on collisions involving energy transfers > 100 eV. A Monte Carlo simulation incorporates damages due to the following molecular mechanisms: (1) ionization of water molecules leading to the formation of circ OH, circ H, e aq , etc.; circ OH attack on sugar molecules leading to strand breaks; circ OH attack on bases; direct ionization of the sugar molecules leading to strand breaks; direct ionization of the bases. Our calculations predict significant clustering of damage both locally, over regions up to 40 hp and over regions extending to several kilobase pairs. A characteristic feature of the regional damage predicted by our model is the production of short fragments of DNA associated with multiple nearby strand breaks. Such fragments have subsequently been detected experimentally and are reported in an accompanying paper after exposure to both high- and low-LET radiation. The overall measured yields agree well quantitatively with the theoretical predictions. Our theoretical results predict the existence of a strong peak at about 85 bp, which represents the revolution period about the nucleosome. Other peaks at multiples of about 1,000 bp correspond to the periodicity of the particular solenoid model of chromatin used in these calculations. Theoretical results in combination with experimental data on fragmentation spectra may help determine the consensus or average structure of the chromatin fibers in mammalian DNA. 27 refs., 7 figs
8. Comparison between theoretical and experimental results of the 1/6 scale concrete model under internal pressure
International Nuclear Information System (INIS)
Riviere, J.; Barbe, B.; Millard, A.; Koundy, V.
1988-01-01
The prevision of the behavior of the 1/6 scale concrete model under internal pressure was realized by means of two computations, the first one with an infinite soil rigidity, the second one with a soil rigidity equal to 61.26 MPa/m. These two computations, that assumed a perfectly axisymetric structure gave theoretical and experimental results in good agreement, except the raft of which the theoretical uplift was three times higher than the experimental one. The main conclusions of this study are as follow: the soil stiffness has no influence on the ultimate behavior of the model, the dead concrete rigidity decreases the raft uplift in an important way, the model is destroyed because the hoop stress reaches the ultimate strength
9. The role of tourism public-private partnerships in regional development: a conceptual model proposal
Directory of Open Access Journals (Sweden)
Mário Franco
Full Text Available Tourism is characterized as being a sector that stands out as one of the business activities with the greatest potential for worldwide expansion, and as an engine for economic growth. If at the national level, the appeal of tourism is significant, on the local level this sector presents itself as an essential tool in regional development, as a means to avoid regional desertification and stagnation, stimulating the potential of more undeveloped regions. In such a competitive sector as tourism, companies should develop synergies and achieve competitive advantage. In this context, public-private partnerships play an important role in regional development. The aim of this paper is to present a theoretical context that combines different concepts and elements to explain and understand the public-private partnership phenomenon in tourism. A conceptual model of the role of public-private partnerships will be proposed in order to contribute to successful regional development.
10. Theoretical study and control optimization of an integrated pest management predator-prey model with power growth rate.
Science.gov (United States)
Sun, Kaibiao; Zhang, Tonghua; Tian, Yuan
2016-09-01
This work presents a pest control predator-prey model, where rate of change in prey density follows a scaling law with exponent less than one and the control is by an integrated management strategy. The aim is to investigate the change in system dynamics and determine a pest control level with minimum control price. First, the dynamics of the proposed model without control is investigated by taking the exponent as an index parameter. And then, to determine the frequency of spraying chemical pesticide and yield releases of the predator, the existence of the order-1 periodic orbit of the control system is discussed in cases. Furthermore, to ensure a certain robustness of the adopted control, i.e., for an inaccurately detected species density or a deviation, the control system could be stabilized at the order-1 periodic orbit, the stability of the order-1 periodic orbit is verified by an stability criterion for a general semi-continuous dynamical system. In addition, to minimize the total cost input in pest control, an optimization problem is formulated and the optimum pest control level is obtained. At last, the numerical simulations with a specific model are carried out to complement the theoretical results. Copyright © 2016 Elsevier Inc. All rights reserved.
11. Modeling Instruction of David Hestenes: a proposal of thematic modeling cycle and discussion of scientific literacy
Directory of Open Access Journals (Sweden)
Ednilson Sergio Ramalho de Souza
2016-07-01
Full Text Available The pedagogical work with mathematical modeling assumes investigate situations of reality. However, mental models formed from the contact with the experiential world are generally incompatible with the conceptual models. So David Hestenes supports the view that one of the biggest challenges of teaching and learning in science and mathematics is to coordinate conceptual models with mental models, which led to the elaboration of a didactic in mathematical modeling: Modeling Instruction. Our goal is to present a proposal for thematic modeling cycle drawn up in hestenesianos assumptions and discuss possibilities for scientific literacy. The main question was to know how to emerge indicators for scientific literacy for the proposed cycle. This is a bibliographic research in order to identify the available literature contributions on the subject and raise the possibility and challenges for the brazilian teaching science and mathematics. Preliminary results indicate that the proposed modeling cycle can develop indicators for scientific literacy of different natures.
12. A New Proposed Cost Model for List Accessing Problem using Buffering
OpenAIRE
Mohanty, Rakesh; Bhoi, Seetaya; Tripathy, Sasmita
2011-01-01
There are many existing well known cost models for the list accessing problem. The standard cost model developed by Sleator and Tarjan is most widely used. In this paper, we have made a comprehensive study of the existing cost models and proposed a new cost model for the list accessing problem. In our proposed cost model, for calculating the processing cost of request sequence using a singly linked list, we consider the access cost, matching cost and replacement cost. The cost of processing a...
13. Tacit knowledge in academia: a proposed model and measurement scale.
Science.gov (United States)
Leonard, Nancy; Insch, Gary S
2005-11-01
The authors propose a multidimensional model of tacit knowledge and develop a measure of tacit knowledge in academia. They discuss the theory and extant literature on tacit knowledge and propose a 6-factor model. Experiment 1 is a replication of a recent study of academic tacit knowledge using the scale developed and administered at an Israeli university (A. Somech & R. Bogler, 1999). The results of the replication differed from those found in the original study. For Experiment 2, the authors developed a domain-specific measure of academic tacit knowledge, the Academic Tacit Knowledge Scale (ATKS), and used this measure to explore the multidimensionality of tacit knowledge proposed in the model. The results of an exploratory factor analysis (n=142) followed by a confirmatory factor analysis (n=286) are reported. The sample for both experiments was 428 undergraduate students enrolled at a large public university in the eastern United States. Results indicated that a 5-factor model of academic tacit knowledge provided a strong fit for the data.
14. Accuracy Analysis of a Box-wing Theoretical SRP Model
Science.gov (United States)
Wang, Xiaoya; Hu, Xiaogong; Zhao, Qunhe; Guo, Rui
2016-07-01
For Beidou satellite navigation system (BDS) a high accuracy SRP model is necessary for high precise applications especially with Global BDS establishment in future. The BDS accuracy for broadcast ephemeris need be improved. So, a box-wing theoretical SRP model with fine structure and adding conical shadow factor of earth and moon were established. We verified this SRP model by the GPS Block IIF satellites. The calculation was done with the data of PRN 1, 24, 25, 27 satellites. The results show that the physical SRP model for POD and forecast for GPS IIF satellite has higher accuracy with respect to Bern empirical model. The 3D-RMS of orbit is about 20 centimeters. The POD accuracy for both models is similar but the prediction accuracy with the physical SRP model is more than doubled. We tested 1-day 3-day and 7-day orbit prediction. The longer is the prediction arc length, the more significant is the improvement. The orbit prediction accuracy with the physical SRP model for 1-day, 3-day and 7-day arc length are 0.4m, 2.0m, 10.0m respectively. But they are 0.9m, 5.5m and 30m with Bern empirical model respectively. We apply this means to the BDS and give out a SRP model for Beidou satellites. Then we test and verify the model with Beidou data of one month only for test. Initial results show the model is good but needs more data for verification and improvement. The orbit residual RMS is similar to that with our empirical force model which only estimate the force for along track, across track direction and y-bias. But the orbit overlap and SLR observation evaluation show some improvement. The remaining empirical force is reduced significantly for present Beidou constellation.
15. The relationship between structural and functional connectivity: graph theoretical analysis of an EEG neural mass model
NARCIS (Netherlands)
Ponten, S.C.; Daffertshofer, A.; Hillebrand, A.; Stam, C.J.
2010-01-01
We investigated the relationship between structural network properties and both synchronization strength and functional characteristics in a combined neural mass and graph theoretical model of the electroencephalogram (EEG). Thirty-two neural mass models (NMMs), each representing the lump activity
16. Theoretical model of intravascular paramagnetic tracers effect on tissue relaxation
DEFF Research Database (Denmark)
Kjølby, Birgitte Fuglsang; Østergaard, Leif; Kiselev, Valerij G
2006-01-01
The concentration of MRI tracers cannot be measured directly by MRI and is commonly evaluated indirectly using their relaxation effect. This study develops a comprehensive theoretical model to describe the transverse relaxation in perfused tissue caused by intravascular tracers. The model takes...... into account a number of individual compartments. The signal dephasing is simulated in a semianalytical way by embedding Monte Carlo simulations in the framework of analytical theory. This approach yields a tool for fast, realistic simulation of the change in the transverse relaxation. The results indicate...... with bulk blood. The enhancement of relaxation in tissue is due to the contrast in magnetic susceptibility between blood vessels and parenchyma induced by the presence of paramagnetic tracer. Beyond the perfusion measurements, the results can be applied to quantitation of functional MRI and to vessel size...
17. Theoretical and Experimental Study on Electromechanical Coupling Properties of Multihammer Synchronous Vibration System
Directory of Open Access Journals (Sweden)
Xin Lai
2016-01-01
Full Text Available Industrial simulation of real external load using multiple exciting points or increasing exciting force by synchronizing multiple exciting forces requires multiple vibration hammers to be coordinated and work together. Multihammer vibration system which consists of several hammers is a complex electromechanical system with complex electromechanical coupling. In this paper, electromechanical coupling properties of such a multihammer vibration system were studied in detail using theoretical derivation, numerical simulation, and experiment. A kinetic model of multihammer synchronous vibration system was established, and approximate expressions for electromechanical coupling strength were solved using a small parameter periodic averaging method. Basic coupling rules and reasons were obtained. Self-synchronization and frequency hopping phenomenon were also analyzed. Subsequently, numerical simulations were carried out and electromechanical coupling process was obtained for different parameters. Simulation results verify correctness of the proposed model and results. Finally, experiments were carried out, self-synchronization and frequency hopping phenomenon were both observed, and results agree well with theoretical deduction and simulation results. These results provide theoretical foundations for multihammer synchronous vibration system and its synchronous control.
18. Theoretical model for ultracold molecule formation via adaptive feedback control
International Nuclear Information System (INIS)
Poschinger, Ulrich; Salzmann, Wenzel; Wester, Roland; Weidemueller, Matthias; Koch, Christiane P; Kosloff, Ronnie
2006-01-01
We theoretically investigate pump-dump photoassociation of ultracold molecules with amplitude- and phase-modulated femtosecond laser pulses. For this purpose, a perturbative model for light-matter interaction is developed and combined with a genetic algorithm for adaptive feedback control of the laser pulse shapes. The model is applied to the formation of 85 Rb 2 molecules in a magneto-optical trap. We find that optimized pulse shapes may maximize the formation of ground state molecules in a specific vibrational state at a pump-dump delay time for which unshaped pulses lead to a minimum of the formation rate. Compared to the maximum formation rate obtained for unshaped pulses at the optimum pump-dump delay, the optimized pulses lead to a significant improvement of about 40% for the target level population. Since our model yields the spectral amplitudes and phases of the optimized pulses, the results are directly applicable in pulse shaping experiments
19. Preparation of theoretical scanning tunneling microscope images of adsorbed molecules: a theoretical study of benzene on the Cu(110) surface
International Nuclear Information System (INIS)
Shapter, J.G.; Rogers, B.L.; Ford, M.J.
2003-01-01
Full text: Since its development in 1982, the Scanning Tunneling Microscope (STM) has developed into a powerful tool for the study of surfaces and adsorbates. However, the utility of the technique can be further enhanced through the development of techniques for generating theoretical STM images. This is particularly true when studying molecules adsorbed on a substrate, as the results are often interpreted superficially due to an inadequate understanding of the orbital overlap probed in the experiment. A method of preparing theoretical scanning tunneling microscope (STM) images using comparatively inexpensive desktop computers and the commercially available CRYSTAL98 package is presented through a study of benzene adsorbed on the Cu(110) surface. Density Functional Theory (DFT) and Hartree-Fock (HF) methods are used to model clean Cu(110) slabs of various thicknesses and to simulate the adsorption of benzene onto these slabs. Eight possible orientations of benzene on the Cu(110) surface are proposed, and the optimum orientation according to the calculations is presented. Theoretical STM images of the Cu(110) surface and benzene adsorbed on the Cu(110) surface are compared with experimental STM images of the system from a published study. Significant differences are observed and are examined in detail
20. A game-theoretical model of private power production
International Nuclear Information System (INIS)
Xing, W.; Wu, F.F.
2001-01-01
Private power production has sprung up all over the world. The build-operate-transfer (BOT) arrangement has emerged as one of the most important options for private power production, especially in developing countries with rapidly growing demand and financial shortages. Based on oligopoly theory, the paper proposes a Stackelberg game model between a BOT investor and an electric utility whereby they can negotiate a long-term energy contract. Asymmetric pricing schemes are taken into account such that a host utility purchases electricity from a BOT company at its ''avoided cost'', and sells its electricity to end users at its ''average cost''. Our Stackelberg game model is transferred into a two-level optimization problem, and then solved by an iterative algorithm. The game model is demonstrated by an illustrative example. (author)
1. Theoretical modeling of transport barriers in helical plasmas
International Nuclear Information System (INIS)
Toda, S.; Itoh, K.; Ohyabu, N.
2008-10-01
A unified transport modelling to explain electron Internal Transport Barriers (e-ITB) in helical plasmas and Internal Diffusion Barriers (IDB) observed in Large Helical Device (LHD) is proposed. The e-ITB can be predicted with the effect of zonal flows to obtain the e-ITB in the low collisional regime when the radial variation of the particle anomalous diffusivity is included. Transport analysis in this article can newly show that the particle fuelling induces the IDB formation when this unified transport modelling is used in the high collisional regime. The density limit for the IDB in helical plasmas is also examined including the effect of the radiation loss. (author)
2. A Game Theoretic Optimization Method for Energy Efficient Global Connectivity in Hybrid Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
JongHyup Lee
2016-08-01
Full Text Available For practical deployment of wireless sensor networks (WSN, WSNs construct clusters, where a sensor node communicates with other nodes in its cluster, and a cluster head support connectivity between the sensor nodes and a sink node. In hybrid WSNs, cluster heads have cellular network interfaces for global connectivity. However, when WSNs are active and the load of cellular networks is high, the optimal assignment of cluster heads to base stations becomes critical. Therefore, in this paper, we propose a game theoretic model to find the optimal assignment of base stations for hybrid WSNs. Since the communication and energy cost is different according to cellular systems, we devise two game models for TDMA/FDMA and CDMA systems employing power prices to adapt to the varying efficiency of recent wireless technologies. The proposed model is defined on the assumptions of the ideal sensing field, but our evaluation shows that the proposed model is more adaptive and energy efficient than local selections.
3. A Game Theoretic Optimization Method for Energy Efficient Global Connectivity in Hybrid Wireless Sensor Networks
Science.gov (United States)
Lee, JongHyup; Pak, Dohyun
2016-01-01
For practical deployment of wireless sensor networks (WSN), WSNs construct clusters, where a sensor node communicates with other nodes in its cluster, and a cluster head support connectivity between the sensor nodes and a sink node. In hybrid WSNs, cluster heads have cellular network interfaces for global connectivity. However, when WSNs are active and the load of cellular networks is high, the optimal assignment of cluster heads to base stations becomes critical. Therefore, in this paper, we propose a game theoretic model to find the optimal assignment of base stations for hybrid WSNs. Since the communication and energy cost is different according to cellular systems, we devise two game models for TDMA/FDMA and CDMA systems employing power prices to adapt to the varying efficiency of recent wireless technologies. The proposed model is defined on the assumptions of the ideal sensing field, but our evaluation shows that the proposed model is more adaptive and energy efficient than local selections. PMID:27589743
4. Theoretical modeling of heating and structure alterations in cartilage under laser radiation with regard to water evaporation and diffusion dominance
Science.gov (United States)
Sobol, Emil N.; Kitai, Moishe S.; Jones, Nicholas; Sviridov, Alexander P.; Milner, Thomas E.; Wong, Brian
1998-05-01
We develop a theoretical model to calculate the temperature field and the size of modified structure area in cartilaginous tissue. The model incorporates both thermal and mass transfer in a tissue regarding bulk absorption of laser radiation, water evaporation from a surface and temperature dependence of diffusion coefficient. It is proposed that due to bound- to free-phase transition of water in cartilage heated to about 70 degrees Celsius, some parts of cartilage matrix (proteoglycan units) became more mobile. The movement of these units takes place only when temperature exceed 70 degrees Celsius and results in alteration of tissue structure (denaturation). It is shown that (1) the maximal temperature is reached not on the surface irradiated at some distance from the surface; (2) surface temperature reaches a plateau quicker that the maximal temperature; (3) the depth of denatured area strongly depends on laser fluence and wavelength, exposure time and thickness of cartilage. The model allows to predict and control temperature and depth of structure alterations in the course of laser reshaping and treatment of cartilage.
5. Patients’ Acceptance of Smartphone Health Technology for Chronic Disease Management: A Theoretical Model and Empirical Test
Science.gov (United States)
Dou, Kaili; Yu, Ping; Liu, Fang; Guan, YingPing; Li, Zhenye; Ji, Yumeng; Du, Ningkai; Lu, Xudong; Duan, Huilong
2017-01-01
Background Chronic disease patients often face multiple challenges from difficult comorbidities. Smartphone health technology can be used to help them manage their conditions only if they accept and use the technology. Objective The aim of this study was to develop and test a theoretical model to predict and explain the factors influencing patients’ acceptance of smartphone health technology for chronic disease management. Methods Multiple theories and factors that may influence patients’ acceptance of smartphone health technology have been reviewed. A hybrid theoretical model was built based on the technology acceptance model, dual-factor model, health belief model, and the factors identified from interviews that might influence patients’ acceptance of smartphone health technology for chronic disease management. Data were collected from patient questionnaire surveys and computer log records about 157 hypertensive patients’ actual use of a smartphone health app. The partial least square method was used to test the theoretical model. Results The model accounted for .412 of the variance in patients’ intention to adopt the smartphone health technology. Intention to use accounted for .111 of the variance in actual use and had a significant weak relationship with the latter. Perceived ease of use was affected by patients’ smartphone usage experience, relationship with doctor, and self-efficacy. Although without a significant effect on intention to use, perceived ease of use had a significant positive influence on perceived usefulness. Relationship with doctor and perceived health threat had significant positive effects on perceived usefulness, countering the negative influence of resistance to change. Perceived usefulness, perceived health threat, and resistance to change significantly predicted patients’ intentions to use the technology. Age and gender had no significant influence on patients’ acceptance of smartphone technology. The study also
6. A Proposed Systems Model for Socializing the Graduate Writer
Science.gov (United States)
Jones, David R.
2018-01-01
Although researchers chorus the need to support graduate students toward higher levels of writing proficiency, their findings lack a holistic model for doing so. A model emerges upon scrutiny of the factors that have been implicated in supporting writing proficiency. In the proposed model, a socialization theory fits as a proximal process into the…
7. Theoretical modelling of physiologically stretched vessel in magnetisable stent assisted magnetic drug targeting application
International Nuclear Information System (INIS)
2011-01-01
The magnetisable stent assisted magnetic targeted drug delivery system in a physiologically stretched vessel is considered theoretically. The changes in the mechanical behaviour of the vessel are analysed under the influence of mechanical forces generated by blood pressure. In this 2D mathematical model a ferromagnetic, coiled wire stent is implanted to aid collection of magnetic drug carrier particles in an elastic tube, which has similar mechanical properties to the blood vessel. A cyclic mechanical force is applied to the elastic tube to mimic the mechanical stress and strain of both the stent and vessel while in the body due to pulsatile blood circulation. The magnetic dipole-dipole and hydrodynamic interactions for multiple particles are included and agglomeration of particles is also modelled. The resulting collection efficiency of the mathematical model shows that the system performance can decrease by as much as 10% due to the effects of the pulsatile blood circulation. - Research highlights: →Theoretical modelling of magnetic drug targeting on a physiologically stretched stent-vessel system. →Cyclic mechanical force applied to mimic the mechanical stress and strain of both stent and vessel. →The magnetic dipole-dipole and hydrodynamic interactions for multiple particles is modelled. →Collection efficiency of the mathematical model is calculated for different physiological blood flow and magnetic field strength.
8. A Research Proposal to Examine Entrepreneurship in Family Business
Directory of Open Access Journals (Sweden)
2012-01-01
Full Text Available This paper builds on existing theoretical and empirical studies in the areas of family business and entrepreneurship. It uses Dubin´s theory building framework to propose a model for conducting research of family businesses and its linkage to entrepreneurial activities in Mexico. This works starts by describing the concepts of family business and explains the importance that these definitions can have on the variables to be included in the research. After that, the paper explains how the concept of “familiness” relates to the essence definition of family business. Using the resource-based view (RBV, agency theory, and social capital theories we describe how social capital resources are the basis for building firm capabilities and competitive advantages that influence firm’s performances. Based on this perspective, a theoretical model, laws of interaction, a set of propositions and suggestions for further research are provided.
9. Model of twelve properties of a set of organic solvents with graph-theoretical and/or experimental parameters.
Science.gov (United States)
Pogliani, Lionello
2010-01-30
Twelve properties of a highly heterogeneous class of organic solvents have been modeled with a graph-theoretical molecular connectivity modified (MC) method, which allows to encode the core electrons and the hydrogen atoms. The graph-theoretical method uses the concepts of simple, general, and complete graphs, where these last types of graphs are used to encode the core electrons. The hydrogen atoms have been encoded by the aid of a graph-theoretical perturbation parameter, which contributes to the definition of the valence delta, delta(v), a key parameter in molecular connectivity studies. The model of the twelve properties done with a stepwise search algorithm is always satisfactory, and it allows to check the influence of the hydrogen content of the solvent molecules on the choice of the type of descriptor. A similar argument holds for the influence of the halogen atoms on the type of core electron representation. In some cases the molar mass, and in a minor way, special "ad hoc" parameters have been used to improve the model. A very good model of the surface tension could be obtained by the aid of five experimental parameters. A mixed model method based on experimental parameters plus molecular connectivity indices achieved, instead, to consistently improve the model quality of five properties. To underline is the importance of the boiling point temperatures as descriptors in these last two model methodologies. Copyright 2009 Wiley Periodicals, Inc.
10. Modeling the economic impact of medication adherence in type 2 diabetes: a theoretical approach.
Science.gov (United States)
Cobden, David S; Niessen, Louis W; Rutten, Frans Fh; Redekop, W Ken
2010-09-07
While strong correlations exist between medication adherence and health economic outcomes in type 2 diabetes, current economic analyses do not adequately consider them. We propose a new approach to incorporate adherence in cost-effectiveness analysis. We describe a theoretical approach to incorporating the effect of adherence when estimating the long-term costs and effectiveness of an antidiabetic medication. This approach was applied in a Markov model which includes common diabetic health states. We compared two treatments using hypothetical patient cohorts: injectable insulin (IDM) and oral (OAD) medications. Two analyses were performed, one which ignored adherence (analysis 1) and one which incorporated it (analysis 2). Results from the two analyses were then compared to explore the extent to which adherence may impact incremental cost-effectiveness ratios. In both analyses, IDM was more costly and more effective than OAD. When adherence was ignored, IDM generated an incremental cost-effectiveness of $12,097 per quality-adjusted life-year (QALY) gained versus OAD. Incorporation of adherence resulted in a slightly higher ratio ($16,241/QALY). This increase was primarily due to better adherence with OAD than with IDM, and the higher direct medical costs for IDM. Incorporating medication adherence into economic analyses can meaningfully influence the estimated cost-effectiveness of type 2 diabetes treatments, and should therefore be considered in health care decision-making. Future work on the impact of adherence on health economic outcomes, and validation of different approaches to modeling adherence, is warranted.
11. Theoretical modeling and experimental validation of transport and separation properties of carbon nanotube electrospun membrane distillation
KAUST Repository
Lee, Jung Gil; Lee, Eui-Jong; Jeong, Sanghyun; Guo, Jiaxin; An, Alicia Kyoungjin; Guo, Hong; Kim, Joonha; Leiknes, TorOve; Ghaffour, NorEddine
2016-01-01
Developing a high flux and selective membrane is required to make membrane distillation (MD) a more attractive desalination process. Amongst other characteristics membrane hydrophobicity is significantly important to get high vapor transport and low wettability. In this study, a laboratory fabricated carbon nanotubes (CNTs) composite electrospun (E-CNT) membrane was tested and has showed a higher permeate flux compared to poly(vinylidene fluoride-co-hexafluoropropylene) (PH) electrospun membrane (E-PH membrane) in a direct contact MD (DCMD) configuration. Only 1% and 2% of CNTs incorporation resulted in an enhanced permeate flux with lower sensitivity to feed salinity while treating a 35 and 70 g/L NaCl solutions. Experimental results and the mechanisms of E-CNT membrane were validated by a proposed new step-modeling approach. The increased vapor transport in E-CNT membranes could not be elucidated by an enhancement of mass transfer only at a given physico-chemical properties. However, the theoretical modeling approach considering the heat and mass transfers simultaneously enabled to explain successfully the enhanced flux in the DCMD process using E-CNT membranes. This indicates that both mass and heat transfers improved by CNTs are attributed to the enhanced vapor transport in the E-CNT membrane.
12. Theoretical modeling and experimental validation of transport and separation properties of carbon nanotube electrospun membrane distillation
KAUST Repository
Lee, Jung Gil
2016-12-27
Developing a high flux and selective membrane is required to make membrane distillation (MD) a more attractive desalination process. Amongst other characteristics membrane hydrophobicity is significantly important to get high vapor transport and low wettability. In this study, a laboratory fabricated carbon nanotubes (CNTs) composite electrospun (E-CNT) membrane was tested and has showed a higher permeate flux compared to poly(vinylidene fluoride-co-hexafluoropropylene) (PH) electrospun membrane (E-PH membrane) in a direct contact MD (DCMD) configuration. Only 1% and 2% of CNTs incorporation resulted in an enhanced permeate flux with lower sensitivity to feed salinity while treating a 35 and 70 g/L NaCl solutions. Experimental results and the mechanisms of E-CNT membrane were validated by a proposed new step-modeling approach. The increased vapor transport in E-CNT membranes could not be elucidated by an enhancement of mass transfer only at a given physico-chemical properties. However, the theoretical modeling approach considering the heat and mass transfers simultaneously enabled to explain successfully the enhanced flux in the DCMD process using E-CNT membranes. This indicates that both mass and heat transfers improved by CNTs are attributed to the enhanced vapor transport in the E-CNT membrane.
13. A study of insider threat in nuclear security analysis using game theoretic modeling
International Nuclear Information System (INIS)
Kim, Kyo-Nam; Yim, Man-Sung; Schneider, Erich
2017-01-01
Highlights: • Implications of an insider threat in nuclear security were quantitatively analyzed. • The analysis was based on of a hypothetical nuclear facility and using game theoretic approach. • Through a sensitivity analysis, vulnerable paths and important parameters were identified. • The methodology can be utilized to prioritize the implementation of PPS improvements in a facility. - Abstract: An Insider poses a greater threat to the security system of a nuclear power plant (NPP) because of their ability to take advantage of their access rights and knowledge of a facility, to bypass dedicated security measures. If an insider colludes with an external terrorist group, this poses a key threat to the safety-security interface. However, despite the importance of the insider threat, few studies have been conducted to quantitatively analyze an insider threat. This research examines the quantitative framework for investigating the implications of insider threat, taking a novel approach. Conventional tools assessing the security threats to nuclear facilities focus on a limited number of attack pathways. These are defined by the modeler and are based on simple probabilistic calculations. They do not capture the adversary’s intentions nor do they account for their response and adaptation to defensive investments. As an alternative way of performing physical protection analysis, this research explores the use of game theoretic modeling of Physical Protection Systems (PPS) analysis by incorporating the implications of an insider threat, to address the issues of intentionality and interactions. The game theoretic approach has the advantage of modeling an intelligent adversary and insider who has an intention to do harm and complete knowledge of the facility. Through a quantitative assessment and sensitivity analysis, vulnerable but important parameters in this model were identified. This made it possible to determine which insider threat is more important. The
14. A game-theoretical model of private power production
Energy Technology Data Exchange (ETDEWEB)
Xing, W.; Wu, F.F. [University of Hong Kong (China). Dept. of Electrical and Electronic Engineering
2001-03-01
Private power production has sprung up all over the world. The build-operate-transfer (BOT) arrangement has emerged as one of the most important options for private power production, especially in developing countries with rapidly growing demand and financial shortages. Based on oligopoly theory, the paper proposes a Stackelberg game model between a BOT investor and an electric utility whereby they can negotiate a long-term energy contract. Asymmetric pricing schemes are taken into account such that a host utility purchases electricity from a BOT company at its ''avoided cost'', and sells its electricity to end users at its ''average cost''. Our Stackelberg game model is transferred into a two-level optimization problem, and then solved by an iterative algorithm. The game model is demonstrated by an illustrative example. (author)
15. Theoretical model of two-phase drift flow on natural circulation
International Nuclear Information System (INIS)
Yang Xingtuan; Jiang Shengyao; Zhang Youjie
2002-01-01
Some expressions, such as sub-cooled boiling in the heating section, condensation near the riser inlet, flashing in the riser, and pressure balance in the steam-space, have been theoretically deduced from the physical model of 5 MW heating reactor test loop. The thermodynamics un-equilibrium etc have been considered too. A entire drift model with four equations has been formed, which can be applied to natural circulation system with low pressure and low steam quality. By means of introducing the concept of condensation layer, condensing of bubbles in the sub-cooled liquid has been formulated for the first time. The restrictive equations of the steam space pressure and liquid level have been offered. The equations can be solved by means of integral method, then by using Rung-Kutta-Verner method the final results is obtained
16. Theoretical performance model for single image depth from defocus.
Science.gov (United States)
Trouvé-Peloux, Pauline; Champagnat, Frédéric; Le Besnerais, Guy; Idier, Jérôme
2014-12-01
In this paper we present a performance model for depth estimation using single image depth from defocus (SIDFD). Our model is based on an original expression of the Cramér-Rao bound (CRB) in this context. We show that this model is consistent with the expected behavior of SIDFD. We then study the influence on the performance of the optical parameters of a conventional camera such as the focal length, the aperture, and the position of the in-focus plane (IFP). We derive an approximate analytical expression of the CRB away from the IFP, and we propose an interpretation of the SIDFD performance in this domain. Finally, we illustrate the predictive capacity of our performance model on experimental data comparing several settings of a consumer camera.
17. Information-theoretic metamodel of organizational evolution
Science.gov (United States)
Sepulveda, Alfredo
2011-12-01
18. Rolling force prediction for strip casting using theoretical model and artificial intelligence
Institute of Scientific and Technical Information of China (English)
CAO Guang-ming; LI Cheng-gang; ZHOU Guo-ping; LIU Zhen-yu; WU Di; WANG Guo-dong; LIU Xiang-hua
2010-01-01
Rolling force for strip casting of 1Cr17 ferritic stainless steel was predicted using theoretical model and artificial intelligence.Solution zone was classified into two parts by kiss point position during casting strip.Navier-Stokes equation in fluid mechanics and stream function were introduced to analyze the rheological property of liquid zone and mushy zone,and deduce the analytic equation of unit compression stress distribution.The traditional hot rolling model was still used in the solid zone.Neural networks based on feedforward training algorithm in Bayesian regularization were introduced to build model for kiss point position.The results show that calculation accuracy for verification data of 94.67% is in the range of+7.0%,which indicates that the predicting accuracy of this model is very high.
19. A Game-Theoretic Approach to Branching Time Abstract-Check-Refine Process
Science.gov (United States)
Wang, Yi; Tamai, Tetsuo
2009-01-01
Since the complexity of software systems continues to grow, most engineers face two serious problems: the state space explosion problem and the problem of how to debug systems. In this paper, we propose a game-theoretic approach to full branching time model checking on three-valued semantics. The three-valued models and logics provide successful abstraction that overcomes the state space explosion problem. The game style model checking that generates counter-examples can guide refinement or identify validated formulas, which solves the system debugging problem. Furthermore, output of our game style method will give significant information to engineers in detecting where errors have occurred and what the causes of the errors are.
20. How Do Trading Firms Upgrade Skills and Technology: A Theoretical Model
Directory of Open Access Journals (Sweden)
Mojca Lindic
2015-12-01
Full Text Available This paper studies the mechanisms of skill upgrading in trading firms by developing a theoretical model that relates the individual’s incentives for acquiring higher skills to the profit-maximizing behaviour of trading firms. The model shows that only the high ability individuals have incentives for acquiring higher skills, as long as they are compensated with higher wages after entering employment. Furthermore, high-productive firms have incentives for investing in higher technology, to employ high-skilled labour, and to engage in international trade. The decisions for technology dress-up and skill upgrading coincide with firm’s decisions to start importing and exporting as the latter requires higher technology and high-skilled labour. Contributions of the paper are twofold: gaining new insights by combining fragments of models on individual’s and firm’s behaviours, and broadening the content of the Melitz (2003 model by introducing importers and controlling for skilled and unskilled labour.
1. Relational models for knowledge sharing behavior
NARCIS (Netherlands)
Boer, N.I.; Berends, J.J.; Baalen, P.
2011-01-01
In this paper we explore the relational dimension of knowledge sharing behavior by proposing a comprehensive theoretical framework for studying knowledge sharing in organizations. This theoretical framework originates from (Fiske, 1991) and (Fiske, 1992) Relational Models Theory (RMT). The RMT
2. Theoretical thermal dosimetry produced by an annular phased array system in CT-based patient models
International Nuclear Information System (INIS)
Paulsen, K.D.; Strohbehn, J.W.; Lynch, D.R.
1984-01-01
Theoretical calculations for the specific absorption rate (SAR) and the resulting temperature distributions produced by an annular phased array (APA) type system are made. The finite element numerical method is used in the formulation of both the electromagnetic (EM) and the thermal boundary value problems. A number of detailed patient models based on CT-scan data from the pelvic, visceral, and thoracic regions are generated to stimulate a variety of tumor locations and surrounding normal tissues. The SAR values from the EM solution are input into the bioheat transfer equation, and steady-rate temperature distributions are calculated for a wide variety of blood flow rates. Based on theoretical modeling, the APA shows no preferential heating of superficial over deep-seated tumors. However, in most cases satisfactory thermal profiles (therapeutic volume near 60%) are obtained in all three regions of the human trunk only for tumors with little or no blood flow. Unsatisfactory temperature patterns (therapeutic volume <50%) are found for tumors with moderate to high perfusion rates. These theoretical calculations should aid the clinician in the evaluation of the effectiveness of APA type devices in heating tumors located in the trunk region
3. Theoretical analysis of surface stress for a microcantilever with varying widths
International Nuclear Information System (INIS)
Li Xianfang; Peng Xulong
2008-01-01
A theoretical model of surface stress is developed in this paper for a microcantilever with varying widths, and a method for calculating the surface stress via static deflection, slope angle or radius at curvature of the cantilever beam is presented. This model assumes that surface stresses are uniformly distributed on one surface of the cantilever beam. Based on this stressor model and using the small deformation Euler-Bernoulli beam theory, a fourth-order ordinary differential governing equation with varying coefficients or an equivalent second-order integro-differential equation is derived. A simple approach is then proposed to determine the solution of the resulting equation, and a closed-form approximate solution with high accuracy can be obtained. For rectangular and V-shaped microfabricated cantilevers, the dependences of transverse deflection, slope and curvature of the beam on the surface stresses are given explicitly. The obtained results indicate that the zeroth order approximation of the stressor model reduces to the end force model with a linear curvature for a rectangular cantilever. For larger surface stresses, the curvature exhibits a non-linear behaviour. The predictions through the stressor model give higher accuracy than those from the end moment and end force models and satisfactorily agree with experimental data. The derived closed-form solution can serve as a theoretical benchmark for verifying numerically obtained results for microcantilevers as atomic force microscopy and micromechanical sensors
4. Theoretical modelling, analysis and validation of the shaft motion and dynamic forces during rotor–stator contact
DEFF Research Database (Denmark)
Lahriri, Said; Santos, Ilmar
2013-01-01
and stator. Expressions for the restoring magnetic forces are derived using Biot Savart law for uniformed magnetised bar magnets and the contact forces are derived by use of a compliant contact force model. The theoretical mathematical model is verified with experimental results, and shows good agreements...
5. Main features of the proposed NCRP respiratory tract model
International Nuclear Information System (INIS)
Phalen, R.F.; Fisher, G.L.; Moss, O.R.; Schlesinger, R.B.; Swift, D.L.
1991-01-01
The proposed NCRP respiratory tract dosimetry model regions include the naso-oro-pharyngo-laryngeal (NOPL), the tracheobronchial (TB), the pulmonary (P), and the lymph nodes (LN). Input aerosol concentrations are derived from a consideration of particle-size-dependent inspirability. Particle deposition in the respiratory tract is modelled using the mechanisms of inertial impaction, sedimentation and diffusion. The rates of absorption of particles, and transport to the blood, have been derived from clearance data from people and laboratory animals. The effect of body growth on particle deposition is considered. Particle clearance rates are assumed to be independent of age. The proposed respiratory tract model differs significantly from the 1966 Task Group Model in that (1) inspirability is considered; (2) new sub-regions of the respiratory tract are considered; (3) absorption of materials by the blood is treated in a more sophisticated fashion; and (4) body size (and thus age) is taken into account. (author)
6. Green accounts for sulphur and nitrogen deposition in Sweden. Implementation of a theoretical model in practice
International Nuclear Information System (INIS)
Ahlroth, S.
2001-01-01
This licentiate thesis tries to bridge the gap between the theoretical and the practical studies in the field of environmental accounting. In the paper, 1 develop an optimal control theory model for adjusting NDP for the effects Of SO 2 and NO x emissions, and subsequently insert empirically estimated values. The model includes correction entries for the effects on welfare, real capital, health and the quality and quantity of renewable natural resources. In the empirical valuation study, production losses were estimated with dose-response functions. Recreational and other welfare values were estimated by the contingent valuation (CV) method. Effects on capital depreciation are also included. For comparison, abatement costs and environmental protection expenditures for reducing sulfur and nitrogen emissions were estimated. The theoretical model was then utilized to calculate the adjustment to NDP in a consistent manner
7. Green accounts for sulphur and nitrogen deposition in Sweden. Implementation of a theoretical model in practice
Energy Technology Data Exchange (ETDEWEB)
Ahlroth, S.
2001-01-01
This licentiate thesis tries to bridge the gap between the theoretical and the practical studies in the field of environmental accounting. In the paper, 1 develop an optimal control theory model for adjusting NDP for the effects Of SO{sub 2} and NO{sub x} emissions, and subsequently insert empirically estimated values. The model includes correction entries for the effects on welfare, real capital, health and the quality and quantity of renewable natural resources. In the empirical valuation study, production losses were estimated with dose-response functions. Recreational and other welfare values were estimated by the contingent valuation (CV) method. Effects on capital depreciation are also included. For comparison, abatement costs and environmental protection expenditures for reducing sulfur and nitrogen emissions were estimated. The theoretical model was then utilized to calculate the adjustment to NDP in a consistent manner.
8. A theoretical model of ultrasonic examination of smooth flat cracks
International Nuclear Information System (INIS)
Chapman, R.K.; Coffey, J.M.
1984-01-01
This chapter proposes a mathematical model which combines approximate descriptions of the defect, the defect-sound interaction, and the transmission and reception of the sound by the probes, all in a framework of the component geometry. Topics considered include scattering from cracks, a model of the probe beam, the geometry of the inspection, and extensions of the model using generalized ray theory. The objective is to devise a practical, yet accurate and reliable model for the overall inspection process which can be readily adapted to different inspection geometries and conditions, and which does not involve an inordinate amount of computing time
9. Formal and relational contracts between organizations: proposal of a model for analysis of the transactional and governance structure characteristics of comparative cases
Directory of Open Access Journals (Sweden)
Luciana Cardoso Siqueira Ambrozini
Full Text Available Abstract The literature indicates that the use of formal and relational governance structures have a fundamental role in the conduct and maintenance of inter-organizational relationships. Nevertheless, there are possibilities for discussions about the composition and function of these structures in the presence of different transactional characteristics. Thus, a model based on the literatures of formal contracts, inter-organizational relationships, Relational Contract Theory, and Transaction Cost Economics is proposed. Since this is a qualitative exploratory research, six structured interviews were carried out and interpreted by means of Content Analysis for case comparison and discussion of theoretical propositions. It was observed that some transactional characteristics, when present with greater intensity in the context of a transaction, tend to corroborate the theoretical propositions of formal contractual function, demonstrating that the intensity of these characteristics is a relevant factor for analyzing the adequacy of governance structures. Likewise, the use of different relational norms presents variations within each characteristic analyzed. Other aspects explored in the Content Analysis are suggested in the composition of the analysis model. The propositions explored regarding the composition of the transaction context and the complementarity of governance structure of inter-organizational relationships are also discussed.
10. Use of Graph-Theoretic Models in Technological Preparation of Assembly Plant
Directory of Open Access Journals (Sweden)
Peter Franzevich Yurchik
2015-05-01
Full Text Available The article examines the existing ways of describing the structural and technological properties of the product in the process of building and repair. It turned out that the main body of work on the preparation process of assembling production uses graph-theoretic model of the product. It is shown that, in general, the structural integrity of many-form connections and relations on the set of components that can not be adequately described by binary structures, such as graphs, networks or trees.
11. An e-Learning Theoretical Framework
Science.gov (United States)
Aparicio, Manuela; Bacao, Fernando; Oliveira, Tiago
2016-01-01
E-learning systems have witnessed a usage and research increase in the past decade. This article presents the e-learning concepts ecosystem. It summarizes the various scopes on e-learning studies. Here we propose an e-learning theoretical framework. This theory framework is based upon three principal dimensions: users, technology, and services…
12. The role of the bacterial mismatch repair system in SOS-induced mutagenesis: a theoretical background
International Nuclear Information System (INIS)
Belov, O.V.; Kapralov, M.I.; Chuluunbaatar, O.; Sweilam, N.H.
2012-01-01
A theoretical study is performed of the possible role of the methyl-directed mismatch repair system in the ultraviolet-induced mutagenesis of Escherichia coli bacterial cells. For this purpose, a mathematical model of the bacterial mismatch repair system is developed. Within this model, the key pathways of this type of repair are simulated on the basis of modern experimental data related to its mechanisms. Here we have modelled in detail five main pathways of DNA misincorporation removal with different DNA exonucleases. Using our calculations, we have tested the hypothesis that the bacterial mismatch repair system is responsible for the removal of the nucleotides misincorporated by DNA polymerase V (the UmuD' 2 C complex) during ultraviolet-induced SOS response. For the theoretical analysis of the mutation frequency, we have combined the proposed mathematical approach with the model of SOS-induced mutagenesis in the E.coli bacterial cell developed earlier. Our calculations support the hypothesis that methyl-directed mismatch repair influences the mutagenic effect of ultraviolet radiation
13. Hospital nurses' wellbeing at work: a theoretical model.
Science.gov (United States)
Utriainen, Kati; Ala-Mursula, Leena; Kyngäs, Helvi
2015-09-01
To develop a theoretical model of hospital nurses' wellbeing at work. The concept of wellbeing at work is presented without an exact definition and without considering different contents. A model was developed in a deductive manner and empirical data collected from nurses (n = 233) working in a university hospital. Explorative factor analysis was used. The main concepts were: patients' experience of high-quality care; assistance and support among nurses; nurses' togetherness and cooperation; fluent practical organisation of work; challenging and meaningful work; freedom to express diverse feelings in the work community; well-conducted everyday nursing; status related to the work itself; fair and supportive leadership; opportunities for professional development; fluent communication with other professionals; and being together with other nurses in an informal way. Themes included: collegial relationships; enhancing high-quality patient care; supportive and fair leadership; challenging, meaningful and well organised work; and opportunities for professional development. Object-dependent wellbeing was supported. Managers should focus on strengthening the positive aspect of wellbeing at work, focusing on providing fluently organised work practices, fair and supportive leadership and togetherness while allowing nurses to implement their own ideas and promote the experience of meaningfulness. © 2014 John Wiley & Sons Ltd.
14. A theoretical perspective on road safety communication campaigns.
Science.gov (United States)
Elvik, Rune
2016-12-01
15. Theoretical modelling of quantum circuit systems
International Nuclear Information System (INIS)
Stiffell, Peter Barry
2002-01-01
The work in this thesis concentrates on the interactions between circuit systems operating in the quantum regime. The main thrust of this work involves the use of a new model for investigating the way in which different components in such systems behave when coupled together. This is achieved by utilising the matrix representation of quantum mechanics, in conjunction with a number of other theoretical techniques (such as Wigner functions and entanglement entropies). With these tools in place it then becomes possible to investigate and review different quantum circuit systems. These investigations cover systems ranging from simple electromagnetic (cm) field oscillators in isolation to coupled SQUID rings in more sophisticated multi-component arrangements. Primarily, we look at the way SQUID rings couple to em fields, and how the ring-field interaction can be mediated by the choice of external flux, Φ x , applied to the SQUID ring. A lot of interest is focused on the transfer of energy between the system modes. However, we also investigate the statistical properties of the system, including squeezing, entropy and entanglement. Among the phenomena uncovered in this research we note the ability to control coupling in SQUID rings via the external flux, the capacity for entanglement between quantum circuit modes, frequency conversions of photons, flux squeezing and the existence of Schroedinger Cat states. (author)
16. An Alternative Theoretical Model for Economic Reforms in Africa ...
African Journals Online (AJOL)
This paper offers an alternative model for economic reforms in Africa. It proposes that Africa can still get on the pathway of sustained economic growth if economic reforms can focus on a key variable, namely, the price of non-tradables. Prices of non-tradables are generally less in Africa than in advanced economies, and the ...
17. 6 essays about auctions: a theoretical and empirical analysis. Application to power markets
International Nuclear Information System (INIS)
Lamy, L.
2007-06-01
This thesis is devoted to a theoretical and empirical analysis of auction mechanisms. Motivated by allocation issues in network industries, in particular by the liberalization of the electricity sector, it focus on auctions with externalities (either allocative or informational) and on multi-objects auctions. After an introduction which provides a survey of the use and the analysis of auctions in power markets, six chapters make this thesis. The first one considers standard auctions in Milgrom-Weber's model with interdependent valuations when the seller can not commit not to participate in the auction. The second and third chapters study the combinatorial auction mechanism proposed by Ausubel and Milgrom. The first of these two studies proposes a modification of this format with a final discount stage and clarifies the theoretical status of those formats, in particular the conditions such that truthful reporting is a dominant strategy. Motivated by the robustness issues of the generalizations of the Ausubel-Milgrom and the Vickrey combinatorial auctions to environments with allocative externalities between joint-purchasers, the second one characterizes the buyer-sub-modularity condition in a general model with allocative identity-dependent externalities between purchasers. In a complete information setup, the fourth chapter analyses the optimal design problem when the commitment abilities of the principal are reduced, namely she can not commit to a simultaneous participation game. The fifth chapter is devoted to the structural analysis of the private value auction model for a single-unit when the econometrician can not observe bidders' identities. The asymmetric independent private value (IPV) model is identified. A multi-step kernel-based estimator is proposed and shown to be asymptotically optimal. Using auctions data for the anglo-french electric Interconnector, the last chapter analyses a multi-unit ascending auctions through reduced forms. (author)
18. Demystifying Theoretical Sampling in Grounded Theory Research
Directory of Open Access Journals (Sweden)
Jenna Breckenridge BSc(Hons,Ph.D.Candidate
2009-06-01
Full Text Available Theoretical sampling is a central tenet of classic grounded theory and is essential to the development and refinement of a theory that is ‘grounded’ in data. While many authors appear to share concurrent definitions of theoretical sampling, the ways in which the process is actually executed remain largely elusive and inconsistent. As such, employing and describing the theoretical sampling process can present a particular challenge to novice researchers embarking upon their first grounded theory study. This article has been written in response to the challenges faced by the first author whilst writing a grounded theory proposal. It is intended to clarify theoretical sampling for new grounded theory researchers, offering some insight into the practicalities of selecting and employing a theoretical sampling strategy. It demonstrates that the credibility of a theory cannot be dissociated from the process by which it has been generated and seeks to encourage and challenge researchers to approach theoretical sampling in a way that is apposite to the core principles of the classic grounded theory methodology.
19. Wireless Networks under a Backoff Attack: A Game Theoretical Perspective.
Science.gov (United States)
Parras, Juan; Zazo, Santiago
2018-01-30
We study a wireless sensor network using CSMA/CA in the MAC layer under a backoff attack: some of the sensors of the network are malicious and deviate from the defined contention mechanism. We use Bianchi's network model to study the impact of the malicious sensors on the total network throughput, showing that it causes the throughput to be unfairly distributed among sensors. We model this conflict using game theory tools, where each sensor is a player. We obtain analytical solutions and propose an algorithm, based on Regret Matching, to learn the equilibrium of the game with an arbitrary number of players. Our approach is validated via simulations, showing that our theoretical predictions adjust to reality.
20. Choice of theoretical model for beam scattering at accelerator output foil for particle energy determination
International Nuclear Information System (INIS)
Balagyra, V.S.; Ryabka, P.M.
1999-01-01
For measuring the charged particle energy calculations of mean square angles of electron beam multiple Coulomb scattering at output combined accelerator target were undertaken according to seven theoretical models. Mollier method showed the best agreement with experiments
1. Nonlinear local electrovascular coupling. I: A theoretical model.
Science.gov (United States)
Riera, Jorge J; Wan, Xiaohong; Jimenez, Juan Carlos; Kawashima, Ryuta
2006-11-01
Here we present a detailed biophysical model of how brain electrical and vascular dynamics are generated within a basic cortical unit. The model was obtained from coupling a canonical neuronal mass and an expandable vasculature. In this proposal, we address several aspects related to electroencephalographic and functional magnetic resonance imaging data fusion: (1) the impact of the cerebral architecture (at different physical levels) on the observations; (2) the physiology involved in electrovascular coupling; and (3) energetic considerations to gain a better understanding of how the glucose budget is used during neuronal activity. The model has three components. The first is the canonical neural mass model of three subpopulations of neurons that respond to incoming excitatory synaptic inputs. The generation of the membrane potentials in the somas of these neurons and the electric currents flowing in the neuropil are modeled by this component. The second and third components model the electrovascular coupling and the dynamics of vascular states in an extended balloon approach, respectively. In the first part we describe, in some detail, the biophysical model and establish its face validity using simulations of visually evoked responses under different flickering frequencies and luminous contrasts. In a second part, a recursive optimization algorithm is developed and used to make statistical inferences about this forward/generative model from actual data. Copyright 2006 Wiley-Liss, Inc.
2. Theoretical analysis of the mode coupling induced by heat of large-pitch micro-structured fibers
International Nuclear Information System (INIS)
Zhang Hai-Tao; Hao Jie; Yan Ping; Gong Ma-Li; Chen Dan
2015-01-01
In this paper, a theoretical model to analyze the mode coupling induced by heat, when the fiber amplifier works at high power configuration, is proposed. The model mainly takes into consideration the mode field change due to the thermally induced refractive index change and the coupling between modes. A method to predict the largest average output power of fiber is also proposed according to the mode coupling theory. The largest average output power of a large pitch fiber with a core diameter of 190 μm and an available pulse energy of 100 mJ is predicted to be 540 W, which is the highest in large mode field fibers. (paper)
3. A didactic proposal about Rutherford backscattering spectrometry with theoretic, experimental, simulation and application activities
Science.gov (United States)
Corni, Federico; Michelini, Marisa
2018-01-01
Rutherford backscattering spectrometry is a nuclear analysis technique widely used for materials science investigation. Despite the strict technical requirements to perform the data acquisition, the interpretation of a spectrum is within the reach of general physics students. The main phenomena occurring during a collision between helium ions—with energy of a few MeV—and matter are: elastic nuclear collision, elastic scattering, and, in the case of non-surface collision, ion stopping. To interpret these phenomena, we use classical physics models: material point elastic collision, unscreened Coulomb scattering, and inelastic energy loss of ions with electrons, respectively. We present the educational proposal for Rutherford backscattering spectrometry, within the framework of the model of educational reconstruction, following a rationale that links basic physics concepts with quantities for spectra analysis. This contribution offers the opportunity to design didactic specific interventions suitable for undergraduate and secondary school students.
4. Self-rated health, multimorbidity and depression in Mexican older adults: Proposal and evaluation of a simple conceptual model.
Science.gov (United States)
Bustos-Vázquez, Eduardo; Fernández-Niño, Julián Alfredo; Astudillo-Garcia, Claudia Iveth
2017-04-01
Self-rated health is an individual and subjective conceptualization involving the intersection of biological, social and psychological factors. It provides an invaluable and unique evaluation of a person's general health status. To propose and evaluate a simple conceptual model to understand self-rated health and its relationship to multimorbidity, disability and depressive symptoms in Mexican older adults. We conducted a cross-sectional study based on a national representative sample of 8,874 adults of 60 years of age and older. Self-perception of a positive health status was determined according to a Likert-type scale based on the question: "What do you think is your current health status?" Intermediate variables included multimorbidity, disability and depressive symptoms, as well as dichotomous exogenous variables (sex, having a partner, participation in decision-making and poverty). The proposed conceptual model was validated using a general structural equation model with a logit link function for positive self-rated health. A direct association was found between multimorbidity and positive self-rated health (OR=0.48; 95% CI: 0.42-0.55), disability and positive self-rated health (OR=0.35; 95% CI: 0.30-0.40), depressive symptoms and positive self-rated health (OR=0.38; 95% CI: 0.34-0.43). The model also validated indirect associations between disability and depressive symptoms (OR=2.25; 95% CI: 2.01- 2.52), multimorbidity and depressive symptoms (OR=1.79; 95% CI: 1.61-2.00) and multimorbidity and disability (OR=1.98; 95% CI: 1.78-2.20). A parsimonious theoretical model was empirically evaluated, which enabled identifying direct and indirect associations with positive self-rated health.
5. Optimization of rootkit revealing system resources – A game theoretic approach
Directory of Open Access Journals (Sweden)
K. Muthumanickam
2015-10-01
Full Text Available Malicious rootkit is a collection of programs designed with the intent of infecting and monitoring the victim computer without the user’s permission. After the victim has been compromised, the remote attacker can easily cause further damage. In order to infect, compromise and monitor, rootkits adopt Native Application Programming Interface (API hooking technique. To reveal the hidden rootkits, current rootkit detection techniques check different data structures which hold reference to Native APIs. To verify these data structures, a large amount of system resources are required. This is because of the number of APIs in these data structures being quite large. Game theoretic approach is a useful mathematical tool to simulate network attacks. In this paper, a mathematical model is framed to optimize resource consumption using game-theory. To the best of our knowledge, this is the first work to be proposed for optimizing resource consumption while revealing rootkit presence using game theory. Non-cooperative game model is taken to discuss the problem. Analysis and simulation results show that our game theoretic model can effectively reduce the resource consumption by selectively monitoring the number of APIs in windows platform.
6. Satellite, climatological, and theoretical inputs for modeling of the diurnal cycle of fire emissions
Science.gov (United States)
Hyer, E. J.; Reid, J. S.; Schmidt, C. C.; Giglio, L.; Prins, E.
2009-12-01
The diurnal cycle of fire activity is crucial for accurate simulation of atmospheric effects of fire emissions, especially at finer spatial and temporal scales. Estimating diurnal variability in emissions is also a critical problem for construction of emissions estimates from multiple sensors with variable coverage patterns. An optimal diurnal emissions estimate will use as much information as possible from satellite fire observations, compensate known biases in those observations, and use detailed theoretical models of the diurnal cycle to fill in missing information. As part of ongoing improvements to the Fire Location and Monitoring of Burning Emissions (FLAMBE) fire monitoring system, we evaluated several different methods of integrating observations with different temporal sampling. We used geostationary fire detections from WF_ABBA, fire detection data from MODIS, empirical diurnal cycles from TRMM, and simple theoretical diurnal curves based on surface heating. Our experiments integrated these data in different combinations to estimate the diurnal cycles of emissions for each location and time. Hourly emissions estimates derived using these methods were tested using an aerosol transport model. We present results of this comparison, and discuss the implications of our results for the broader problem of multi-sensor data fusion in fire emissions modeling.
7. Patients' Acceptance of Smartphone Health Technology for Chronic Disease Management: A Theoretical Model and Empirical Test.
Science.gov (United States)
Dou, Kaili; Yu, Ping; Deng, Ning; Liu, Fang; Guan, YingPing; Li, Zhenye; Ji, Yumeng; Du, Ningkai; Lu, Xudong; Duan, Huilong
2017-12-06
Chronic disease patients often face multiple challenges from difficult comorbidities. Smartphone health technology can be used to help them manage their conditions only if they accept and use the technology. The aim of this study was to develop and test a theoretical model to predict and explain the factors influencing patients' acceptance of smartphone health technology for chronic disease management. Multiple theories and factors that may influence patients' acceptance of smartphone health technology have been reviewed. A hybrid theoretical model was built based on the technology acceptance model, dual-factor model, health belief model, and the factors identified from interviews that might influence patients' acceptance of smartphone health technology for chronic disease management. Data were collected from patient questionnaire surveys and computer log records about 157 hypertensive patients' actual use of a smartphone health app. The partial least square method was used to test the theoretical model. The model accounted for .412 of the variance in patients' intention to adopt the smartphone health technology. Intention to use accounted for .111 of the variance in actual use and had a significant weak relationship with the latter. Perceived ease of use was affected by patients' smartphone usage experience, relationship with doctor, and self-efficacy. Although without a significant effect on intention to use, perceived ease of use had a significant positive influence on perceived usefulness. Relationship with doctor and perceived health threat had significant positive effects on perceived usefulness, countering the negative influence of resistance to change. Perceived usefulness, perceived health threat, and resistance to change significantly predicted patients' intentions to use the technology. Age and gender had no significant influence on patients' acceptance of smartphone technology. The study also confirmed the positive relationship between intention to use
8. Theoretical modeling of yields for proton-induced reactions on natural and enriched molybdenum targets
Energy Technology Data Exchange (ETDEWEB)
Celler, A; Hou, X [University of British Columbia, Vancouver, BC, Canada, (Canada); Benard, F; Ruth, T, E-mail: [email protected], E-mail: [email protected], E-mail: [email protected], E-mail: [email protected] [BC Cancer Agency, Vancouver, BC (Canada)
2011-09-07
Recent acute shortage of medical radioisotopes prompted investigations into alternative methods of production and the use of a cyclotron and {sup 100}Mo(p,2n){sup 99m}Tc reaction has been considered. In this context, the production yields of {sup 99m}Tc and various other radioactive and stable isotopes which will be created in the process have to be investigated, as these may affect the diagnostic outcome and radiation dosimetry in human studies. Reaction conditions (beam and target characteristics, and irradiation and cooling times) need to be optimized in order to maximize the amount of {sup 99m}Tc and minimize impurities. Although ultimately careful experimental verification of these conditions must be performed, theoretical calculations can provide the initial guidance allowing for extensive investigations at little cost. We report the results of theoretically determined reaction yields for {sup 99m}Tc and other radioactive isotopes created when natural and enriched molybdenum targets are irradiated by protons. The cross-section calculations were performed using a computer program EMPIRE for the proton energy range 6-30 MeV. A computer graphical user interface for automatic calculation of production yields taking into account various reaction channels leading to the same final product has been created. The proposed approach allows us to theoretically estimate the amount of {sup 99m}Tc and its ratio relative to {sup 99g}Tc and other radioisotopes which must be considered reaction contaminants, potentially contributing to additional patient dose in diagnostic studies.
9. Theoretical modeling of yields for proton-induced reactions on natural and enriched molybdenum targets.
Science.gov (United States)
Celler, A; Hou, X; Bénard, F; Ruth, T
2011-09-07
Recent acute shortage of medical radioisotopes prompted investigations into alternative methods of production and the use of a cyclotron and ¹⁰⁰Mo(p,2n)(99m)Tc reaction has been considered. In this context, the production yields of (99m)Tc and various other radioactive and stable isotopes which will be created in the process have to be investigated, as these may affect the diagnostic outcome and radiation dosimetry in human studies. Reaction conditions (beam and target characteristics, and irradiation and cooling times) need to be optimized in order to maximize the amount of (99m)Tc and minimize impurities. Although ultimately careful experimental verification of these conditions must be performed, theoretical calculations can provide the initial guidance allowing for extensive investigations at little cost. We report the results of theoretically determined reaction yields for (99m)Tc and other radioactive isotopes created when natural and enriched molybdenum targets are irradiated by protons. The cross-section calculations were performed using a computer program EMPIRE for the proton energy range 6-30 MeV. A computer graphical user interface for automatic calculation of production yields taking into account various reaction channels leading to the same final product has been created. The proposed approach allows us to theoretically estimate the amount of (99m)Tc and its ratio relative to (99g)Tc and other radioisotopes which must be considered reaction contaminants, potentially contributing to additional patient dose in diagnostic studies.
10. Experimental and theoretical requirements for fuel modelling
International Nuclear Information System (INIS)
Gatesoupe, J.P.
1979-01-01
From a scientific point of view it may be considered that any event in the life of a fuel pin under irradiation should be perfectly well understood and foreseen from that deterministic point of view, the whole behaviour of the pin maybe analysed and dismantled with a specific function for every component part and each component part related to one basic phenomenon which can be independently studied on pure physical grounds. When extracted from the code structure the subroutine is studied for itself by specialists who try to keep as close as possible to the physics involved in the phenomenon; that often leads to an impressive luxury in details and a subsequent need for many unavailable input data. It might seem more secure to follow that approach since it tries to be firmly based on theoretical grounds. One should think so if the phenomenological situation in the pin were less complex than it is. The codes would not be adequate for off-normal operating conditions since for the accidental transient conditions the key-phenomena would not be the same as for steady-state or slow transient conditions. The orientation given to fuel modelling is based on our two main technological constraints which are: no fuel melting; no cladding failure; no excessive cladding deformation. In this context, the only relevant models are those which have a significant influence on the maximum temperatures in the fuel or on the cladding damage hence the selection between key models and irrelevant models which will next be done. A rather pragmatic view is kept on codification with a special focus on a few determinant aspects of fuel behaviour and no attention to models which are nothing but decorative. Fuel modeling is merely considered as a link between experimental knowledge; it serves as a guide for further improvements in fuel design and as so happens to be quite useful. On this basis the main lacks in of fuel behaviour is described. These are mainly concerning: thermal transfer through
11. Theoretical model of Orion gamma emission: acceleration, propagation and interaction of energetic particles in the interstellar medium
International Nuclear Information System (INIS)
Parizot, Etienne
1997-01-01
This research thesis reports the development of a general model for the study of the propagation and interaction of energetic particles (cosmic rays, and so on) in the interstellar medium (ISM). The first part addresses the development of theoretical and numerical tools. The author presents cosmic rays and energetic particles, presents and describes the various processes related to high-energy particles (matter ionisation, synchrotron and Bremsstrahlung radiation, Compton scattering, nuclear processes), addresses the transport and acceleration of energetic particles (plasmas, magnetic fields and energetic particles, elements of kinetic theory, transport and acceleration of energetic particles), and describes the general model of production of γ nuclear lines and of secondary nuclei. The second part addresses the gamma signature of a massive star in a dense medium: presentation and description of massive stars and of the circumstellar medium, life, death and gamma resurrection of a massive star at the heart of a cloud. The third part addresses the case of the gamma emission by Orion, and more particularly presents a theoretical model of this emission. Some generalities and perspectives (theoretical as well as observational) are then stated [fr
12. Being present in action: a theoretical model about the interlocking between intentions and environmental affordances
Directory of Open Access Journals (Sweden)
Stefano eTriberti
2016-01-01
Full Text Available Recent neuropsychological evidence suggest that a key role in linking perceptions and intentions is played by sense of presence. Despite this phenomenon having been studied primarily in the field of virtual reality (conceived as the illusion of being in the virtual space, recent research highlighted that it is a fundamental feature of everyday experience. Specifically, the function of presence as a cognitive process is to locate the Self in a physical space or situation, based on the perceived possibility to act in it; so, the variations in sense of presence allow one to continuously adapt his own action to the external environment. Indeed intentions, as the cognitive antecedents of action, are not static representations of the desired outcomes, but dynamic processes able to adjust their own representational content according to the opportunities/restrictions emerging in the environment. Focusing on the peculiar context of action mediated by interactive technologies, we here propose a theoretical model showing how each level of an intentional hierarchy (future-directed; present directed; and motor intentions can interlock with environmental affordances in order to promote a continuous stream of action and activity.
13. “Sweet Science:” A Proposal for Integral Macropolitics
Directory of Open Access Journals (Sweden)
Daniel Gustav Anderson
2010-03-01
Full Text Available This treatise proposes the practice of becoming-responsible as a basis for integral micropolitics, defined as taking active responsibility for the well-being of the totality of living beings without exception, for the sake of that well-being alone. After reviewing two extant integral models for political action and interaction, demonstrating some of the limitations inherent in them, some ways are outlined in which the characteristic features of becoming-responsible—including critical clarity, compassion, competence, and consciousness—can be expressed in the realm of public concern; first, theoretically, drawing on a model proposed by poet and artist William Blake, and second, also historically, reflecting on an experiment in radical democracy in Chile (1970-1973, such that both examples critique and advance the claims and methods of mainstream integral theory as well as the alternative approach elaborated in this essay.
14. Theoretical cytotoxicity models for combined exposure of cells to different radiations
International Nuclear Information System (INIS)
Scott, B.R.
1981-01-01
Theoretical cytotoxicity models for predicting cell survival after sequential or simultaneous exposure of cells to high and low linear energy transfer (LET) radiation are discussed. Major findings are that (1) ordering of sequential exposures can influence the level of cell killing achieved; (2) synergism is unimportant at low doses; (3) effects at very low doses should be additive; (4) use of the conventional relative biological effectiveness approach for predicting combined effects of different radiations is unnecessary at very low doses and can lead to overestimation of risk at moderate and high doses
15. Theoretical Studies of Small-System Thermodynamics in Energetic Materials
Science.gov (United States)
2016-01-06
SECURITY CLASSIFICATION OF: This is a comprehensive theoretical research program to investigate the fundamental principles of small-system thermodynamics ...a.k.a. nanothermodynamics). The proposed work is motivated by our desire to better understand the fundamental dynamics and thermodynamics of...for Public Release; Distribution Unlimited Final Report: Theoretical Studies of Small-System Thermodynamics in Energetic Materials The views, opinions
16. A theoretical model to predict customer satisfaction in relation to service quality in selected university libraries in Sri Lanka
Directory of Open Access Journals (Sweden)
Chaminda Jayasundara
2009-01-01
Full Text Available University library administrators in Sri Lanka have begun to search for alternative ways to satisfy their clientele on the basis of service quality. This article aims at providing a theoretical model to facilitate the identification of service quality attributes and domains that may be used to predict customer satisfaction from a service quality perspective. The effectiveness of existing service quality models such as LibQUAL, SERVQUAL and SERVPREF have been questioned. In that regard, this study developed a theoretical model for academic libraries in Sri Lanka based on the disconfirmation and performance-only paradigms. These perspectives were considered by researchers to be the core mechanism to develop service quality/customer satisfaction models. The attributes and domain identification of service quality was carried out with a stratified sample of 263 participants selected from postgraduate and undergraduate students and academic staff members from the faculties of Arts in four universities in Sri Lanka. The study established that responsiveness, supportiveness, building environment, collection and access, furniture and facilities, technology, Web services and service delivery were quality domains which can be used to predict customer satisfaction. The theoretical model is unique in its domain structure compared to the existing models. The model needs to be statistically tested to make it valid and parsimonious.
17. Can theory be embedded in visual interventions to promote self-management? A proposed model and worked example.
Science.gov (United States)
Williams, B; Anderson, A S; Barton, K; McGhee, J
2012-12-01
Nurses are increasingly involved in a range of strategies to encourage patient behaviours that improve self-management. If nurses are to be involved in, or indeed lead, the development of such interventions then processes that enhance the likelihood that they will lead to evidence that is both robust and usable in practice are required. Although behavioural interventions have been predominantly based on written text or the spoken word increasing numbers are now drawing on visual media to communicate their message, despite only a growing evidence base to support it. The use of such media in health interventions is likely to increase due to technological advances enabling easier and cheaper production, and an increasing social preference for visual forms of communication. However, the development of such media is often highly pragmatic and developed intuitively rather than with theory and evidence informing their content and form. Such a process may be at best inefficient and at worst potentially harmful. This paper performs two functions. Firstly, it discusses and argues why visual based interventions may be a powerful media for behaviour change; and secondly, it proposes a model, developed from the MRC Framework for the Development and Evaluation of Complex Interventions, to guide the creation of theory informed visual interventions. It employs a case study of the development of an intervention to motivate involvement in a lifestyle intervention among people with increased cardiovascular risk. In doing this we argue for a step-wise model which includes: (1) the identification of a theoretical basis and associated concepts; (2) the development of visual narrative to establish structure; (3) the visual rendering of narrative and concepts; and (4) the assessment of interpretation and impact among the intended patient group. We go on to discuss the theoretical and methodological limitations of the model. Copyright © 2012 Elsevier Ltd. All rights reserved.
18. Information Theoretic-Learning Auto-Encoder
OpenAIRE
Santana, Eder; Emigh, Matthew; Principe, Jose C
2016-01-01
We propose Information Theoretic-Learning (ITL) divergence measures for variational regularization of neural networks. We also explore ITL-regularized autoencoders as an alternative to variational autoencoding bayes, adversarial autoencoders and generative adversarial networks for randomly generating sample data without explicitly defining a partition function. This paper also formalizes, generative moment matching networks under the ITL framework.
19. Search for Hidden Particles (SHiP): a new experiment proposal
Science.gov (United States)
De Lellis, G.
2015-06-01
Searches for new physics with accelerators are being performed at the LHC, looking for high massive particles coupled to matter with ordinary strength. We propose a new experimental facility meant to search for very weakly coupled particles in the few GeV mass domain. The existence of such particles, foreseen in different theoretical models beyond the Standard Model, is largely unexplored from the experimental point of view. A beam dump facility, built at CERN in the north area, using 400 GeV protons is a copious factory of charmed hadrons and could be used to probe the existence of such particles. The beam dump is also an ideal source of tau neutrinos, the less known particle in the Standard Model. In particular, tau anti-neutrinos have not been observed so far. We therefore propose an experiment to search for hidden particles and study tau neutrino physics at the same time.
20. A theoretical model for predicting the Peak Cutting Force of conical picks
Directory of Open Access Journals (Sweden)
Gao Kuidong
2014-01-01
Full Text Available In order to predict the PCF (Peak Cutting Force of conical pick in rock cutting process, a theoretical model is established based on elastic fracture mechanics theory. The vertical fracture model of rock cutting fragment is also established based on the maximum tensile criterion. The relation between vertical fracture angle and associated parameters (cutting parameter and ratio B of rock compressive strength to tensile strength is obtained by numerical analysis method and polynomial regression method, and the correctness of rock vertical fracture model is verified through experiments. Linear regression coefficient between the PCF of prediction and experiments is 0.81, and significance level less than 0.05 shows that the model for predicting the PCF is correct and reliable. A comparative analysis between the PCF obtained from this model and Evans model reveals that the result of this prediction model is more reliable and accurate. The results of this work could provide some guidance for studying the rock cutting theory of conical pick and designing the cutting mechanism.
1. Proposal of a new biokinetic model for niobium
International Nuclear Information System (INIS)
Oliveira, Roges
2006-01-01
There are two niobium isotopes generated in nuclear power plants: 95 Nb and 94 Nb. Workers and members of the public are subjects to intake these radionuclides in accident situation. For dose calculation purpose, it is very important to develop a model that describes in a more realistic way the kinetics of niobium inside of the human body. Presently the model adopted by ICRP (ICRP, 1989) is based on animal studies and describes the behavior of niobium in human being in a simple manner. The new model proposal describes the kinetics of the niobium from the intake into the blood until the excretion, doing this in a more realistic form and considering not only data from animals but data from human beings as well. For this objective, a workers group of a niobium extraction and processing industry exposed to stable niobium (93 Nb) in oxide insoluble form with associated uranium, was monitored for uranium and niobium determination in urinary and fecal excretion, by mass spectrometry. Based in the ratios of the niobium concentration in urinary and faecal excretion of this workers and animal data study, a new biokinetic model for niobium was proposed, with the followings modifications relative to ICRP model: a new compartment that represents muscular tissue; the fractions which are deposited into the compartment are modified; a third component in the retention equation of the bone tissue; introduction of recirculation between organs and blood. The new model was applied for a case of accidental intake and described adequately the experimental data
2. Theoretical Tinnitus framework: A Neurofunctional Model
Directory of Open Access Journals (Sweden)
Iman Ghodratitoostani
2016-08-01
Full Text Available Subjective tinnitus is the conscious (attended awareness perception of sound in the absence of an external source and can be classified as an auditory phantom perception. The current tinnitus development models depend on the role of external events congruently paired with the causal physical events that precipitate the phantom perception. We propose a novel Neurofunctional tinnitus model to indicate that the conscious perception of phantom sound is essential in activating the cognitive-emotional value. The cognitive-emotional value plays a crucial role in governing attention allocation as well as developing annoyance within tinnitus clinical distress. Structurally, the Neurofunctional tinnitus model includes the peripheral auditory system, the thalamus, the limbic system, brain stem, basal ganglia, striatum and the auditory along with prefrontal cortices. Functionally, we assume the model includes presence of continuous or intermittent abnormal signals at the peripheral auditory system or midbrain auditory paths. Depending on the availability of attentional resources, the signals may or may not be perceived. The cognitive valuation process strengthens the lateral-inhibition and noise canceling mechanisms in the mid-brain, which leads to the cessation of sound perception and renders the signal evaluation irrelevant. However, the sourceless sound is eventually perceived and can be cognitively interpreted as suspicious or an indication of a disease in which the cortical top-down processes weaken the noise canceling effects. This results in an increase in cognitive and emotional negative reactions such as depression and anxiety. The negative or positive cognitive-emotional feedbacks within the top-down approach may have no relation to the previous experience of the patients. They can also be associated with aversive stimuli similar to abnormal neural activity in generating the phantom sound. Cognitive and emotional reactions depend on general
3. Theoretical Tinnitus Framework: A Neurofunctional Model.
Science.gov (United States)
Ghodratitoostani, Iman; Zana, Yossi; Delbem, Alexandre C B; Sani, Siamak S; Ekhtiari, Hamed; Sanchez, Tanit G
2016-01-01
Subjective tinnitus is the conscious (attended) awareness perception of sound in the absence of an external source and can be classified as an auditory phantom perception. Earlier literature establishes three distinct states of conscious perception as unattended, attended, and attended awareness conscious perception. The current tinnitus development models depend on the role of external events congruently paired with the causal physical events that precipitate the phantom perception. We propose a novel Neurofunctional Tinnitus Model to indicate that the conscious (attended) awareness perception of phantom sound is essential in activating the cognitive-emotional value. The cognitive-emotional value plays a crucial role in governing attention allocation as well as developing annoyance within tinnitus clinical distress. Structurally, the Neurofunctional Tinnitus Model includes the peripheral auditory system, the thalamus, the limbic system, brainstem, basal ganglia, striatum, and the auditory along with prefrontal cortices. Functionally, we assume the model includes presence of continuous or intermittent abnormal signals at the peripheral auditory system or midbrain auditory paths. Depending on the availability of attentional resources, the signals may or may not be perceived. The cognitive valuation process strengthens the lateral-inhibition and noise canceling mechanisms in the mid-brain, which leads to the cessation of sound perception and renders the signal evaluation irrelevant. However, the "sourceless" sound is eventually perceived and can be cognitively interpreted as suspicious or an indication of a disease in which the cortical top-down processes weaken the noise canceling effects. This results in an increase in cognitive and emotional negative reactions such as depression and anxiety. The negative or positive cognitive-emotional feedbacks within the top-down approach may have no relation to the previous experience of the patients. They can also be
4. RECENT DEVELOPMENTS OF THE FINANCIAL REPORTING MODEL: THEORETICAL STUDIES IN REVIEW
Directory of Open Access Journals (Sweden)
Bonaci Carmen Giorgiana
2011-07-01
Full Text Available Our paper analyzes the manner in which the financial reporting model evolved towards fair value accounting. After a brief introduction into the context of financial reporting at international level, the analysis focuses on the accounting model of fair value. This is done by synthesizing main studies in accounting research literature that analyze fair value accounting through a theoretical approach. The analysis being developed relies on literature review methodology. The main purpose of the developed analysis is to synthesize main pros and cons as being documented through accounting research literature. Our findings underline both the advantages and shortcomings of fair value accounting and of the recent mixed attribute in nowadays financial reporting practices. The concluding remarks synthesize the obtained results and possible future developments of our analysis.
5. Theoretical size distribution of fossil taxa: analysis of a null model
Directory of Open Access Journals (Sweden)
Hughes Barry D
2007-03-01
Full Text Available Abstract Background This article deals with the theoretical size distribution (of number of sub-taxa of a fossil taxon arising from a simple null model of macroevolution. Model New species arise through speciations occurring independently and at random at a fixed probability rate, while extinctions either occur independently and at random (background extinctions or cataclysmically. In addition new genera are assumed to arise through speciations of a very radical nature, again assumed to occur independently and at random at a fixed probability rate. Conclusion The size distributions of the pioneering genus (following a cataclysm and of derived genera are determined. Also the distribution of the number of genera is considered along with a comparison of the probability of a monospecific genus with that of a monogeneric family.
6. Theoretical chemistry advances and perspectives
CERN Document Server
Eyring, Henry
1980-01-01
Theoretical Chemistry: Advances and Perspectives, Volume 5 covers articles concerning all aspects of theoretical chemistry. The book discusses the mean spherical approximation for simple electrolyte solutions; the representation of lattice sums as Mellin-transformed products of theta functions; and the evaluation of two-dimensional lattice sums by number theoretic means. The text also describes an application of contour integration; a lattice model of quantum fluid; as well as the computational aspects of chemical equilibrium in complex systems. Chemists and physicists will find the book usef
7. Theoretical modeling of infrared spectra of the hydrogen and deuterium bond in aspirin crystal
Science.gov (United States)
Ghalla, Houcine; Rekik, Najeh; Michta, Anna; Oujia, Brahim; Flakus, Henryk T.
2010-01-01
An extended quantum theoretical approach of the ν IR lineshape of cyclic dimers of weakly H-bonded species is proposed. We have extended a previous approach [M.E.-A. Benmalti, P. Blaise, H.T. Flakus, O. Henri-Rousseau, Chem. Phys. 320 (2006) 267] by accounting for the anharmonicity of the slow mode which is described by a "Morse" potential in order to reproduce the polarized infrared spectra of the hydrogen and deuterium bond in acetylsalicylic acid (aspirin) crystals. From comparison of polarized IR spectra of isotopically neat and isotopically diluted aspirin crystals it resulted that centrosymmetric aspirin dimer was the bearer of the crystal main spectral properties. In this approach, the adiabatic approximation is performed for each separate H-bond bridge of the dimer and a strong non-adiabatic correction is introduced into the model via the resonant exchange between the fast mode excited states of the two moieties. Within the strong anharmonic coupling theory, according to which the X-H→⋯Y high-frequency mode is anharmonically coupled to the H-bond bridge, this model incorporated the Davydov coupling between the excited states of the two moieties, the quantum direct and indirect dampings and the anharmonicity for the H-bond bridge. The spectral density is obtained within the linear response theory by Fourier transform of the damped autocorrelation functions. The evaluated spectra are in fairly good agreement with the experimental ones by using a minimum number of independent parameters. The effect of deuteration has been well reproduced by reducing simply the angular frequency of the fast mode and the anharmonic coupling parameter.
8. Theoretical Modelling Methods for Thermal Management of Batteries
Directory of Open Access Journals (Sweden)
Bahman Shabani
2015-09-01
Full Text Available The main challenge associated with renewable energy generation is the intermittency of the renewable source of power. Because of this, back-up generation sources fuelled by fossil fuels are required. In stationary applications whether it is a back-up diesel generator or connection to the grid, these systems are yet to be truly emissions-free. One solution to the problem is the utilisation of electrochemical energy storage systems (ESS to store the excess renewable energy and then reusing this energy when the renewable energy source is insufficient to meet the demand. The performance of an ESS amongst other things is affected by the design, materials used and the operating temperature of the system. The operating temperature is critical since operating an ESS at low ambient temperatures affects its capacity and charge acceptance while operating the ESS at high ambient temperatures affects its lifetime and suggests safety risks. Safety risks are magnified in renewable energy storage applications given the scale of the ESS required to meet the energy demand. This necessity has propelled significant effort to model the thermal behaviour of ESS. Understanding and modelling the thermal behaviour of these systems is a crucial consideration before designing an efficient thermal management system that would operate safely and extend the lifetime of the ESS. This is vital in order to eliminate intermittency and add value to renewable sources of power. This paper concentrates on reviewing theoretical approaches used to simulate the operating temperatures of ESS and the subsequent endeavours of modelling thermal management systems for these systems. The intent of this review is to present some of the different methods of modelling the thermal behaviour of ESS highlighting the advantages and disadvantages of each approach.
9. A Modified Microfinance Model Proposed for the United States
Directory of Open Access Journals (Sweden)
Eldon H Bernstein
2014-07-01
While the goal in the traditional model in developing markets is the elimination of poverty, we show how those critical conditions help to explain the lack of success in the United States. We propose a modified model whose goal is the creation of an entrepreneurial venture or improving the performance of an existing small enterprise.
10. Online adaptive approach for a game-theoretic strategy for complete vehicle energy management
NARCIS (Netherlands)
Chen, H.; Kessels, J.T.B.A.; Weiland, S.
2015-01-01
This paper introduces an adaptive approach for a game-theoretic strategy on Complete Vehicle Energy Management. The proposed method enhances the game-theoretic approach such that the strategy is able to adapt to real driving behavior. The classical game-theoretic approach relies on one probability
11. Ergonomic evaluation model of operational room based on team performance
Directory of Open Access Journals (Sweden)
YANG Zhiyi
2017-05-01
Full Text Available A theoretical calculation model based on the ergonomic evaluation of team performance was proposed in order to carry out the ergonomic evaluation of the layout design schemes of the action station in a multitasking operational room. This model was constructed in order to calculate and compare the theoretical value of team performance in multiple layout schemes by considering such substantial influential factors as frequency of communication, distance, angle, importance, human cognitive characteristics and so on. An experiment was finally conducted to verify the proposed model under the criteria of completion time and accuracy rating. As illustrated by the experiment results,the proposed approach is conductive to the prediction and ergonomic evaluation of the layout design schemes of the action station during early design stages,and provides a new theoretical method for the ergonomic evaluation,selection and optimization design of layout design schemes.
12. Theoretical and Experimental Estimations of Volumetric Inductive Phase Shift in Breast Cancer Tissue
Science.gov (United States)
González, C. A.; Lozano, L. M.; Uscanga, M. C.; Silva, J. G.; Polo, S. M.
2013-04-01
Impedance measurements based on magnetic induction for breast cancer detection has been proposed in some studies. This study evaluates theoretical and experimentally the use of a non-invasive technique based on magnetic induction for detection of patho-physiological conditions in breast cancer tissue associated to its volumetric electrical conductivity changes through inductive phase shift measurements. An induction coils-breast 3D pixel model was designed and tested. The model involves two circular coils coaxially centered and a human breast volume centrally placed with respect to the coils. A time-harmonic numerical simulation study addressed the effects of frequency-dependent electrical properties of tumoral tissue on the volumetric inductive phase shift of the breast model measured with the circular coils as inductor and sensor elements. Experimentally; five female volunteer patients with infiltrating ductal carcinoma previously diagnosed by the radiology and oncology departments of the Specialty Clinic for Women of the Mexican Army were measured by an experimental inductive spectrometer and the use of an ergonomic inductor-sensor coil designed to estimate the volumetric inductive phase shift in human breast tissue. Theoretical and experimental inductive phase shift estimations were developed at four frequencies: 0.01, 0.1, 1 and 10 MHz. The theoretical estimations were qualitatively in agreement with the experimental findings. Important increments in volumetric inductive phase shift measurements were evident at 0.01MHz in theoretical and experimental observations. The results suggest that the tested technique has the potential to detect pathological conditions in breast tissue associated to cancer by non-invasive monitoring. Further complementary studies are warranted to confirm the observations.
13. Visit of Peters Higgs at Point 2 ALICE Experiment - British theoretical physicist, He worked on proposals to unify the weak and the electromagnetic forces into a single electroweak theory, The Boson of Higgs.
CERN Multimedia
Mona Schweizer
2008-01-01
Visit of Peters Higgs at Point 2 ALICE Experiment - British theoretical physicist, He worked on proposals to unify the weak and the electromagnetic forces into a single electroweak theory, The Boson of Higgs.
14. Theoretical Framework of Advanced Training in the Field of Conflict Management in Organization
Directory of Open Access Journals (Sweden)
Kilmashkina T.N.
2018-01-01
Full Text Available In this paper, we consider the theoretical framework for creating an advanced training course for professionals working in various organizations whose functional duties include activities aimed at managing conflict situations occurring within the organization. The article also considers such problem concepts as: essence and causes of conflicts, types of conflicts in the organization; organizational, psychological, sociological and cultural ways of managing conflicts in the organization. The proposed theoretical model of advanced professional training is constructed within the framework of the competence approach which, in this case, is based on the notion that a participant in the program should master a certain set of special competencies that include knowledge, skills and abilities necessary for the effective process management of various conflict situations.
15. Strategy for a numerical Rock Mechanics Site Descriptive Model. Further development of the theoretical/numerical approach
International Nuclear Information System (INIS)
Olofsson, Isabelle; Fredriksson, Anders
2005-05-01
The Swedish Nuclear and Fuel Management Company (SKB) is conducting Preliminary Site Investigations at two different locations in Sweden in order to study the possibility of a Deep Repository for spent fuel. In the frame of these Site Investigations, Site Descriptive Models are achieved. These products are the result of an interaction of several disciplines such as geology, hydrogeology, and meteorology. The Rock Mechanics Site Descriptive Model constitutes one of these models. Before the start of the Site Investigations a numerical method using Discrete Fracture Network (DFN) models and the 2D numerical software UDEC was developed. Numerical simulations were the tool chosen for applying the theoretical approach for characterising the mechanical rock mass properties. Some shortcomings were identified when developing the methodology. Their impacts on the modelling (in term of time and quality assurance of results) were estimated to be so important that the improvement of the methodology with another numerical tool was investigated. The theoretical approach is still based on DFN models but the numerical software used is 3DEC. The main assets of the programme compared to UDEC are an optimised algorithm for the generation of fractures in the model and for the assignment of mechanical fracture properties. Due to some numerical constraints the test conditions were set-up in order to simulate 2D plane strain tests. Numerical simulations were conducted on the same data set as used previously for the UDEC modelling in order to estimate and validate the results from the new methodology. A real 3D simulation was also conducted in order to assess the effect of the '2D' conditions in the 3DEC model. Based on the quality of the results it was decided to update the theoretical model and introduce the new methodology based on DFN models and 3DEC simulations for the establishment of the Rock Mechanics Site Descriptive Model. By separating the spatial variability into two parts, one
16. Effects of pump recycling technique on stimulated Brillouin scattering threshold: a theoretical model.
Science.gov (United States)
Al-Asadi, H A; Al-Mansoori, M H; Ajiya, M; Hitam, S; Saripan, M I; Mahdi, M A
2010-10-11
We develop a theoretical model that can be used to predict stimulated Brillouin scattering (SBS) threshold in optical fibers that arises through the effect of Brillouin pump recycling technique. Obtained simulation results from our model are in close agreement with our experimental results. The developed model utilizes single mode optical fiber of different lengths as the Brillouin gain media. For 5-km long single mode fiber, the calculated threshold power for SBS is about 16 mW for conventional technique. This value is reduced to about 8 mW when the residual Brillouin pump is recycled at the end of the fiber. The decrement of SBS threshold is due to longer interaction lengths between Brillouin pump and Stokes wave.
17. Job stress and cardiovascular disease: a theoretic critical review.
Science.gov (United States)
Kristensen, T S
1996-07-01
During the last 15 years, the research on job stress and cardiovascular diseases has been dominated by the job strain model developed by R. Karasek (1979) and colleagues (R. Karasek & T. Theorell, 1990). In this article the results of this research are briefly summarized, and the theoretical and methodological basis is discussed and criticized. A sociological interpretation of the model emphasizing theories of technological change, qualifications of the workers, and the organization of work is proposed. Furthermore, improvements with regard to measuring the job strain dimensions and to sampling the study base are suggested. Substantial improvements of the job strain research could be achieved if the principle of triangulation were used in the measurements of stressors, stress, and sickness and if occupation-based samples were used instead of large representative samples.
18. Theoretical Foundations for Website Design Courses.
Science.gov (United States)
Walker, Kristin
2002-01-01
Considers how theoretical foundations in website design courses can facilitate students learning the genres of Internet communication. Proposes ways that theories can be integrated into website design courses. Focuses on two students' website portfolios and ways they utilize genre theory and activity theory discussed in class to produce websites…
19. CO2 laser with modulated losses: Theoretical models and experiments in the chaotic regime
International Nuclear Information System (INIS)
Pando L, C.L.; Meucci, R.; Ciofini, M.; Arecchi, F.T.
1993-04-01
We compare two different theoretical models for a CO 2 laser, namely the two-and four-level model, and show that the second one traces with much better accuracy the experimental behavior in the case of a chaotic dynamics due to time modulation of the cavity losses. Even though the two-level model provides a qualitative explanation of the chaotic dynamics, only the four-level one assures a quantitative fitting. We also show that, at the onset of chaos, the chaotic dynamics is low dimensional and can be described in terms of a noninvertible unidimensional map. (author). 12 refs, 8 figs, 2 tabs
20. Toward a comprehensive, theoretical model of compassion fatigue: An integrative literature review.
Science.gov (United States)
Coetzee, Siedine K; Laschinger, Heather K S
2018-03-01
This study was an integrative literature review in relation to compassion fatigue models, appraising these models, and developing a comprehensive theoretical model of compassion fatigue. A systematic search on PubMed, EbscoHost (Academic Search Premier, E-Journals, Medline, PsycINFO, Health Source Nursing/Academic Edition, CINAHL, MasterFILE Premier and Health Source Consumer Edition), gray literature, and manual searches of included reference lists was conducted in 2016. The studies (n = 11) were analyzed, and the strengths and limitations of the compassion fatigue models identified. We further built on these models through the application of the conservation of resources theory and the social neuroscience of empathy. The compassion fatigue model shows that it is not empathy that puts nurses at risk of developing compassion fatigue, but rather a lack of resources, inadequate positive feedback, and the nurse's response to personal distress. By acting on these three aspects, the risk of developing compassion fatigue can be addressed, which could improve the retention of a compassionate and committed nurse workforce. © 2017 John Wiley & Sons Australia, Ltd.
1. A theoretical model for flow boiling CHF from short concave heaters
International Nuclear Information System (INIS)
Galloway, J.E.; Mudawar, I.
1995-01-01
Experiments were performed to enable the development of a new theoretical mode for the enhancement in CHF commonly observed with flow boiling on concave heater as compared to straight heaters. High-speed video imaging and photomicrography were employed to capture the trigger mechanism for CHF each type heater. A wavy vapor layer was observed to engulf the heater surface in each case, permitting liquid access to the surface only in regions where depressions (troughs) in the liquid vapor interface made contact with the surface. CHF in each case occurred when the pressure force exerted upon the wavy vapor-liquid inter ace in the contact region could no longer overcome the momentum of the vapor produced in these regional. Shorter interfacial wavelengths with greater curvature were measured on the curve, heater than on the straight heater, promoting a greater pressure force on the wave interface and a corresponding increase in CHF for the curved heater. A theoretics. CHF model is developed from these observations, based upon a new theory for hydrodynamic instability, along a curved interface. CHF data are predicted with good accuracy for both heaters. 23 refs., 9 figs
2. Strengthening Theoretical Testing in Criminology Using Agent-based Modeling.
Science.gov (United States)
Johnson, Shane D; Groff, Elizabeth R
2014-07-01
The Journal of Research in Crime and Delinquency ( JRCD ) has published important contributions to both criminological theory and associated empirical tests. In this article, we consider some of the challenges associated with traditional approaches to social science research, and discuss a complementary approach that is gaining popularity-agent-based computational modeling-that may offer new opportunities to strengthen theories of crime and develop insights into phenomena of interest. Two literature reviews are completed. The aim of the first is to identify those articles published in JRCD that have been the most influential and to classify the theoretical perspectives taken. The second is intended to identify those studies that have used an agent-based model (ABM) to examine criminological theories and to identify which theories have been explored. Ecological theories of crime pattern formation have received the most attention from researchers using ABMs, but many other criminological theories are amenable to testing using such methods. Traditional methods of theory development and testing suffer from a number of potential issues that a more systematic use of ABMs-not without its own issues-may help to overcome. ABMs should become another method in the criminologists toolbox to aid theory testing and falsification.
3. Software for energy modelling: a theoretical basis for improvements in the user interface
Energy Technology Data Exchange (ETDEWEB)
Siu, Y.L.
1989-09-01
A philosophical critique of the relationships between theory, knowledge and practice for a range of existing energy modelling styles is presented. In particular, Habermas's ideas are invoked regarding the three spheres of cognitive interest (i.e. technical, practical and emancipatory) and three levels of understanding of knowledge, the construction of an 'ideal speech situation', and the theory of communicative competence and action. These are adopted as a basis for revealing shortcomings of a representative selection of existing computer-based energy modelling styles, and as a springboard for constructing a new theoretical approach. (author).
4. Effective Drug Delivery in Diffuse Intrinsic Pontine Glioma : A Theoretical Model to Identify Potential Candidates
NARCIS (Netherlands)
El-Khouly, Fatma E; van Vuurden, Dannis G; Stroink, Thom; Hulleman, Esther; Kaspers, Gertjan J L; Hendrikse, N Harry; Veldhuijzen van Zanten, Sophie E M
2017-01-01
Despite decades of clinical trials for diffuse intrinsic pontine glioma (DIPG), patient survival does not exceed 10% at two years post-diagnosis. Lack of benefit from systemic chemotherapy may be attributed to an intact bloodbrain barrier (BBB). We aim to develop a theoretical model including
5. Measurement of thermal conductivity and diffusivity in situ: Literature survey and theoretical modelling of measurements
Energy Technology Data Exchange (ETDEWEB)
Kukkonen, I.; Suppala, I. [Geological Survey of Finland, Espoo (Finland)
1999-01-01
In situ measurements of thermal conductivity and diffusivity of bedrock were investigated with the aid of a literature survey and theoretical simulations of a measurement system. According to the surveyed literature, in situ methods can be divided into active drill hole methods, and passive indirect methods utilizing other drill hole measurements together with cutting samples and petrophysical relationships. The most common active drill hole method is a cylindrical heat producing probe whose temperature is registered as a function of time. The temperature response can be calculated and interpreted with the aid of analytical solutions of the cylindrical heat conduction equation, particularly the solution for an infinite perfectly conducting cylindrical probe in a homogeneous medium, and the solution for a line source of heat in a medium. Using both forward and inverse modellings, a theoretical measurement system was analysed with an aim at finding the basic parameters for construction of a practical measurement system. The results indicate that thermal conductivity can be relatively well estimated with borehole measurements, whereas thermal diffusivity is much more sensitive to various disturbing factors, such as thermal contact resistance and variations in probe parameters. In addition, the three-dimensional conduction effects were investigated to find out the magnitude of axial leak` of heat in long-duration experiments. The radius of influence of a drill hole measurement is mainly dependent on the duration of the experiment. Assuming typical conductivity and diffusivity values of crystalline rocks, the measurement yields information within less than a metre from the drill hole, when the experiment lasts about 24 hours. We propose the following factors to be taken as basic parameters in the construction of a practical measurement system: the probe length 1.5-2 m, heating power 5-20 Wm{sup -1}, temperature recording with 5-7 sensors placed along the probe, and
6. Measurement of thermal conductivity and diffusivity in situ: Literature survey and theoretical modelling of measurements
International Nuclear Information System (INIS)
Kukkonen, I.; Suppala, I.
1999-01-01
In situ measurements of thermal conductivity and diffusivity of bedrock were investigated with the aid of a literature survey and theoretical simulations of a measurement system. According to the surveyed literature, in situ methods can be divided into 'active' drill hole methods, and 'passive' indirect methods utilizing other drill hole measurements together with cutting samples and petrophysical relationships. The most common active drill hole method is a cylindrical heat producing probe whose temperature is registered as a function of time. The temperature response can be calculated and interpreted with the aid of analytical solutions of the cylindrical heat conduction equation, particularly the solution for an infinite perfectly conducting cylindrical probe in a homogeneous medium, and the solution for a line source of heat in a medium. Using both forward and inverse modellings, a theoretical measurement system was analysed with an aim at finding the basic parameters for construction of a practical measurement system. The results indicate that thermal conductivity can be relatively well estimated with borehole measurements, whereas thermal diffusivity is much more sensitive to various disturbing factors, such as thermal contact resistance and variations in probe parameters. In addition, the three-dimensional conduction effects were investigated to find out the magnitude of axial 'leak' of heat in long-duration experiments. The radius of influence of a drill hole measurement is mainly dependent on the duration of the experiment. Assuming typical conductivity and diffusivity values of crystalline rocks, the measurement yields information within less than a metre from the drill hole, when the experiment lasts about 24 hours. We propose the following factors to be taken as basic parameters in the construction of a practical measurement system: the probe length 1.5-2 m, heating power 5-20 Wm -1 , temperature recording with 5-7 sensors placed along the probe, and
7. Wireless Networks under a Backoff Attack: A Game Theoretical Perspective
Directory of Open Access Journals (Sweden)
Juan Parras
2018-01-01
Full Text Available We study a wireless sensor network using CSMA/CA in the MAC layer under a backoff attack: some of the sensors of the network are malicious and deviate from the defined contention mechanism. We use Bianchi’s network model to study the impact of the malicious sensors on the total network throughput, showing that it causes the throughput to be unfairly distributed among sensors. We model this conflict using game theory tools, where each sensor is a player. We obtain analytical solutions and propose an algorithm, based on Regret Matching, to learn the equilibrium of the game with an arbitrary number of players. Our approach is validated via simulations, showing that our theoretical predictions adjust to reality.
8. An information-theoretic approach to the modeling and analysis of whole-genome bisulfite sequencing data.
Science.gov (United States)
Jenkinson, Garrett; Abante, Jordi; Feinberg, Andrew P; Goutsias, John
2018-03-07
DNA methylation is a stable form of epigenetic memory used by cells to control gene expression. Whole genome bisulfite sequencing (WGBS) has emerged as a gold-standard experimental technique for studying DNA methylation by producing high resolution genome-wide methylation profiles. Statistical modeling and analysis is employed to computationally extract and quantify information from these profiles in an effort to identify regions of the genome that demonstrate crucial or aberrant epigenetic behavior. However, the performance of most currently available methods for methylation analysis is hampered by their inability to directly account for statistical dependencies between neighboring methylation sites, thus ignoring significant information available in WGBS reads. We present a powerful information-theoretic approach for genome-wide modeling and analysis of WGBS data based on the 1D Ising model of statistical physics. This approach takes into account correlations in methylation by utilizing a joint probability model that encapsulates all information available in WGBS methylation reads and produces accurate results even when applied on single WGBS samples with low coverage. Using the Shannon entropy, our approach provides a rigorous quantification of methylation stochasticity in individual WGBS samples genome-wide. Furthermore, it utilizes the Jensen-Shannon distance to evaluate differences in methylation distributions between a test and a reference sample. Differential performance assessment using simulated and real human lung normal/cancer data demonstrate a clear superiority of our approach over DSS, a recently proposed method for WGBS data analysis. Critically, these results demonstrate that marginal methods become statistically invalid when correlations are present in the data. This contribution demonstrates clear benefits and the necessity of modeling joint probability distributions of methylation using the 1D Ising model of statistical physics and of
9. Nuclear theory group. Progress report and renewal proposal
International Nuclear Information System (INIS)
1979-01-01
The work discussed covers a broad range of topics in theoretical nuclear and intermediate-energy physics and nuclear astrophysics. Primary emphasis is placed on understanding the underlying nucleon-nucleon and meson-nucleon interactions. The research is categorized as follows: fundamental interactions; intermediate-energy physics; effective interactions, nuclear models and many-body theory; structure of finite nuclei; nuclear astrophysics; heavy-ion physics; and numerical analysis. Page-length summaries of the work are given; completed work has been or will be published. Staff vitas, recent publications, and a proposed budget complete the report
10. Special course on modern theoretical and experimental approaches to turbulent flow structure and its modelling
Energy Technology Data Exchange (ETDEWEB)
1987-08-01
The large eddy concept in turbulent modeling and techniques for direct simulation are discussed. A review of turbulence modeling is presented along with physical and numerical aspects and applications. A closure model for turbulent flows is presented and routes to chaos by quasi-periodicity are discussed. Theoretical aspects of transition to turbulence by space/time intermittency are covered. The application to interpretation of experimental results of fractal dimensions and connection of spatial temporal chaos are reviewed. Simulation of hydrodynamic flow by using cellular automata is discussed.
11. The Effect of Private Benefits of Control on Minority Shareholders: A Theoretical Model and Empirical Evidence from State Ownership
Directory of Open Access Journals (Sweden)
Kerry Liu
2017-06-01
Full Text Available Purpose: The purpose of this paper is to examine the effect of private benefits of control on minority shareholders. Design/methodology/approach: A theoretical model is established. The empirical analysis includes hand-collected data from a wide range of data sources. OLS and 2SLS regression analysis are applied with Huber-White standard errors. Findings: The theoretical model shows that, while private benefits are generally harmful to minority shareholders, the overall effect depends on the size of large shareholder ownership. The empirical evidence from government ownership is consistent with theoretical analysis. Research limitations/implications: The empirical evidence is based on a small number of hand-collected data sets of government ownership. Further studies can be expanded to other types of ownership, such as family ownership and financial institutional ownership. Originality/value: This study is the first to theoretically analyse and empirically test the effect of private benefits. In general, this study significantly contributes to the understanding of the effect of large shareholder and corporate governance.
12. Theoretical basis of the new particles
International Nuclear Information System (INIS)
Rujula, A.
1977-01-01
The four-quark standard gauge field theory of weak, electromagnetic and strong interactions is reviewed and placed into a historical perspective since as early as 1961. Theoretical predictions of the model are compared to experimental observations available as of the Conference date, charm production in e + e - annihilation being in the spotlight. Virtues and shortcomings of the standard model are discussed. The model is concluded to have been an incredibly successful predictive tool. Some theoretical developments around the standard model are also discussed in view of CP violation in SU(2)xU(1) gauge theories, the Higgs' bosons and superunification of weak, strong and electromagnetic interactions
13. Modeling Exoplanetary Atmospheres using BART, TEA, and Drift-RHD; Theoretical studies and Observational Implications
Science.gov (United States)
Dobbs-Dixon, Ian
numerous published papers, further work is needed to couple them self-consistently. Our theoretical studies focus on a number of objectives. We will start by incorporating our kinetic, non-equilibrium cloud model within BART, allowing us to obtain a consistent solution for cloud characteristics. We will further test simple parameterizations against the full solution to explore the reliability of simpler models. Utilizing Drift-RHD, we will explore the role of horizontal advection on cloud distribution, investigate the validity of 1D retrievals by comparing them to selfconsistently generated 3D models, and develop a retrieval framework for wavelengthdependent phase-curves. TEA will be enhanced with additional databases and the inclusion of condensates, providing realistic initial cloudy-model for retrievals. To explore the importance of equilibrium chemistry and exclude non-plausible chemical compositions (often the outcome of many retrieval approaches) we will relax the assumption of non-equilibrium chemistry by utilizing an analytical chemical equilibrium approach in BART. To address observations, our OBS suit for generating synthetic observations will be adapted to interface with our models, allowing us to both compare to existing observations and make predictions for future observations. With these tools, we are particularly well suited to understand discriminants between classes of models and identifying which particular set of observations could most readily distinguish cloud constituents and temperature features. The proposed research is directly relevant to the Planetary Science and Astrophysics goals through furthering our understanding of compositions, dynamics, energetics, and chemical behaviors of exoplanetary atmospheres. In addition, to maximize NASA's investment and encourage open access, we have and will continue to make all of our codes public and available to the community throughout the course of the research.
14. Mathematical and theoretical neuroscience cell, network and data analysis
CERN Document Server
Nieus, Thierry
2017-01-01
This volume gathers contributions from theoretical, experimental and computational researchers who are working on various topics in theoretical/computational/mathematical neuroscience. The focus is on mathematical modeling, analytical and numerical topics, and statistical analysis in neuroscience with applications. The following subjects are considered: mathematical modelling in Neuroscience, analytical and numerical topics; statistical analysis in Neuroscience; Neural Networks; Theoretical Neuroscience. The book is addressed to researchers involved in mathematical models applied to neuroscience.
15. A theoretical model for analysing gender bias in medicine.
Science.gov (United States)
Risberg, Gunilla; Johansson, Eva E; Hamberg, Katarina
2009-08-03
During the last decades research has reported unmotivated differences in the treatment of women and men in various areas of clinical and academic medicine. There is an ongoing discussion on how to avoid such gender bias. We developed a three-step-theoretical model to understand how gender bias in medicine can occur and be understood. In this paper we present the model and discuss its usefulness in the efforts to avoid gender bias. In the model gender bias is analysed in relation to assumptions concerning difference/sameness and equity/inequity between women and men. Our model illustrates that gender bias in medicine can arise from assuming sameness and/or equity between women and men when there are genuine differences to consider in biology and disease, as well as in life conditions and experiences. However, gender bias can also arise from assuming differences when there are none, when and if dichotomous stereotypes about women and men are understood as valid. This conceptual thinking can be useful for discussing and avoiding gender bias in clinical work, medical education, career opportunities and documents such as research programs and health care policies. Too meet the various forms of gender bias, different facts and measures are needed. Knowledge about biological differences between women and men will not reduce bias caused by gendered stereotypes or by unawareness of health problems and discrimination associated with gender inequity. Such bias reflects unawareness of gendered attitudes and will not change by facts only. We suggest consciousness-rising activities and continuous reflections on gender attitudes among students, teachers, researchers and decision-makers.
16. Linear regression crash prediction models : issues and proposed solutions.
Science.gov (United States)
2010-05-01
The paper develops a linear regression model approach that can be applied to : crash data to predict vehicle crashes. The proposed approach involves novice data aggregation : to satisfy linear regression assumptions; namely error structure normality ...
17. Team Resilience as a Second-Order Emergent State: A Theoretical Model and Research Directions
Directory of Open Access Journals (Sweden)
Clint Bowers
2017-08-01
Full Text Available Resilience has been recognized as an important phenomenon for understanding how individuals overcome difficult situations. However, it is not only individuals who face difficulties; it is not uncommon for teams to experience adversity. When they do, they must be able to overcome these challenges without performance decrements.This manuscript represents a theoretical model that might be helpful in conceptualizing this important construct. Specifically, it describes team resilience as a second-order emergent state. We also include research propositions that follow from the model.
18. Theoretical modeling of the plasma-assisted catalytic growth and field emission properties of graphene sheet
International Nuclear Information System (INIS)
Sharma, Suresh C.; Gupta, Neha
2015-01-01
A theoretical modeling for the catalyst-assisted growth of graphene sheet in the presence of plasma has been investigated. It is observed that the plasma parameters can strongly affect the growth and field emission properties of graphene sheet. The model developed accounts for the charging rate of the graphene sheet; number density of electrons, ions, and neutral atoms; various elementary processes on the surface of the catalyst nanoparticle; surface diffusion and accretion of ions; and formation of carbon-clusters and large graphene islands. In our investigation, it is found that the thickness of the graphene sheet decreases with the plasma parameters, number density of hydrogen ions and RF power, and consequently, the field emission of electrons from the graphene sheet surface increases. The time evolution of the height of graphene sheet with ion density and sticking coefficient of carbon species has also been examined. Some of our theoretical results are in compliance with the experimental observations
19. A Proposed Model of Jazz Theory Knowledge Acquisition
Science.gov (United States)
Ciorba, Charles R.; Russell, Brian E.
2014-01-01
The purpose of this study was to test a hypothesized model that proposes a causal relationship between motivation and academic achievement on the acquisition of jazz theory knowledge. A reliability analysis of the latent variables ranged from 0.92 to 0.94. Confirmatory factor analyses of the motivation (standardized root mean square residual…
20. Tourism Management and Industrial Ecology: A Theoretical Review
Directory of Open Access Journals (Sweden)
Maria Claudia Lucchetti
2014-08-01
Full Text Available Industrial Ecology (IE is based on the relation between the natural ecosystem and economic ecosystem. The concept refers to the metaphorical relation between the natural and industrial ecosystems as a model for transforming unsustainable industrial systems. Several tools and strategies are particularly significant for the IE development. In other words, the primary purpose of industrial ecology is to assess and reduce the impact economic activities on the environment. Tourism, as an economic activity, resulting in a full range of environmental impacts, should be treated like any other industry. This paper propose uses a theoretical review focused on IE for to investigate what is the best way to implement industrial ecology in the tourism activities. It seemed interesting to search within the IE concept for a model for the tourism sector, one of the fields with the greatest environmental interaction and economic implications.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6419861912727356, "perplexity": 2517.4637489858965}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662540268.46/warc/CC-MAIN-20220521174536-20220521204536-00195.warc.gz"}
|
https://www.gamedev.net/forums/topic/361052-applying-metalicrobotic-effect-to-voices/
|
# Applying Metalic/Robotic Effect To Voices
## Recommended Posts
SpreeTree 396
Hi I have just recorded some voices for my current project, and they sound a bit dull. As its a science fiction based game, I wanted to add a more robotic or metalic sound to the voices, but I can't seem to do it. I am a total novice at this so it might be a simple solution. I have recorded the voices using Audacity, but the standard effect's are pretty lame, and the list of plug-ins is so long, I don't have the time to work through them all. Does anyone know of an application/plug-in I could use that would give me the desired effect? I can get my hands on some licenced software through work, so any applications you might know would be great! Thanks Spree
##### Share on other sites
Promit 13246
I think the effect you want is a vocoder, but I'm not sure how you'd use it in Audacity. Maybe you can look around and find something that suits your needs.
##### Share on other sites
If you can't get a vocoder (you'll need some sort of synth to act as a carrier signal for a vocoder too, which will add to the hassle, which it sounds like you don't want) then you could also try a ring modulator... which is essentially a very fast tremolo (sometimes called amplitude modulation). If you don't have a ring mod, try the tremolo with the 'speed' or 'frequency' setting set really high and it might get to ring modulation territory...
A vocoder can give you more varied results... but think more along the lines of the robot droids from the newer star wars film, a ring modulator will give more of a 'dalek' type sound. Obviously, it depends on the parameters though...
##### Share on other sites
Rain 7 100
While we are on the subject, does anyone know how to add these metallic undertones to audio in PRO TOOLS?
Not sure if Pro Tools has a vocoder...
##### Share on other sites
There is a free vocoder called Zerious Vocoder . A ring modulator will work OK, but it won't give you that same vocoder effect. Audacity has the ability to generate tones, so generate some static and use that as the carrier. If you have questions about how to work the vocoder, post back here or PM me, and I can run you through how a vocoder works.
Rain--there is a plugin for Pro Tools called Orange Vocoder. It is pretty pricey though, if I remember right--you might do better to just jury rig the one in Reason 3, or download the Zerious Vocoder from the link above. Good luck!
##### Share on other sites
SpreeTree 396
A vocoder was exactly what I wanted. I found a vocoder plug-in for an old copy of CoolEdit Pro I had a few years back and used that.
The sound isn't perfect, but hopefully it should do the trick. The voices are already sounding a lot more interesting.
Thanks for the help!
Spree
##### Share on other sites
Rain 7 100
Quote:
Original post by Blaise DourosRain--there is a plugin for Pro Tools called Orange Vocoder. It is pretty pricey though, if I remember right--you might do better to just jury rig the one in Reason 3, or download the Zerious Vocoder from the link above. Good luck!
Hmmm. Thank you VERY much my good man! I have project that will be greatly enhanced through this helpful tidbit.
I will try to use Zerious with ProTools.
##### Share on other sites
Guest Anonymous Poster
Yeah also you can import the files in to fruity loops, and what I do for robot voices is put it through the flanger effect and VERY LAZILY (I feel bad for using one, but it sounds so nice) Use the preset effect Vexed 5th maybe vexed 7t, im across the world from my comp, but its one of those, and I use it on ym vocal samples. Sounds dang good for robot or aliens. PLay with the parameters some yeah.
its right to keep it tight!
## Create an account or sign in to comment
You need to be a member in order to leave a comment
## Create an account
Sign up for a new account in our community. It's easy!
Register a new account
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.16981488466262817, "perplexity": 2289.1604565477555}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886104565.76/warc/CC-MAIN-20170818043915-20170818063915-00594.warc.gz"}
|
https://tex.stackexchange.com/questions/530790/drawing-arc-with-pgf
|
# Drawing arc with pgf
Using pgf, I'd like to draw an arc (part of a circle) with a given center. I don't know the exact co-ordinates of the center, because it was obtained as an intersection of other geometric objects. The desired center has a name, but no known co-ordinates. How can I do this?
Basically, I'd like to modify an instruction of the form
\node (H) [name path=H, draw, circle through=(A')] at (E) {};
to produce just a part of the circle.
• You should provide a Minimal Working example starting with \documentclass and ending with \end{document} that provides all the minimal necessary commands and instructions to understand what you want. You should also specify which libraries you are loading and what are you doing.
– gjkf
Mar 2, 2020 at 14:07
• I'm a bit unsure what's so hard to understand. I have points A' and E defined and would like to draw merely part of the circle through A' at E. Mar 2, 2020 at 14:11
• Of course, if you know the starting location and angle and radius (needed for arc with tikz), you can locate the center. The problem is usually finding the angles. Mar 3, 2020 at 2:06
• Possible duplicate: tex.stackexchange.com/q/66216/14500 Mar 3, 2020 at 6:24
You can use the arc command of Tikz, like so:
\documentclass[tikz]{standalone}
\begin{document}
\begin{tikzpicture}
\coordinate (O) at (0,0); % Your coordinate name
\draw (O) -- ++(0:1) arc (0:150:1); % start:end:radius
\end{tikzpicture}
\end{document}
which results in something like this
In general your last command will be
\draw (center) -- ++(start:radius) arc(start:end:radius) -- (center);
• That's great, but I'd just like the arc, without the two radii included. Mar 2, 2020 at 14:09
• \documentclass[tikz]{standalone} \begin{document} \begin{tikzpicture} \coordinate (O) at (0,0); % Your coordinate name \draw (O)+(0:1) arc (0:150:1); % start:end:radius \end{tikzpicture} \end{document} ? Mar 2, 2020 at 14:13
I did not really understand the question. I assume you want to create a circular node centered on E, which goes through A but draw partially!
I suggest the solution below
\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{calc,intersections}
\usetikzlibrary{through}
\usepackage{SIunitx}
\begin{document}
\begin{tikzpicture}
\node[label=A] (A) at (1,1) {+};
\node[label=E] (E) at (0,2){+};
\node (H) [name path=H, circle through=(A)] at (E) {};
\draw[red,thick]
let
\p1=(E.center), \p2=(A), \n1={veclen(\y2-\y1,\x2-\y1)}
in
(H.45) arc (45:180: \n1);
\draw (H) -- ++(3,4);
\draw (H) -- ++(-3,5);
\end{tikzpicture}
\end{document}
1) First of all if the center has a name then you can know its coordinates:
102.6 Extracting Coordinates There are two commands that can be used to “extract” the x- or y-coordinate of a coordinate.
\pgfextractx{\pgf@x}{\pgfpointanchor{E}{center}} \pgfextracty{\pgf@y}{\pgfpointanchor{E}{center}}
2) With tkz-euclideyou have:
\documentclass{standalone}
\usepackage{tkz-euclide}
\begin{document}
\begin{tikzpicture}
\coordinate[label=$O$] (O) at (3,1);
\coordinate [label=$A$](A) at (1,5);
\coordinate [label=$B$](B) at (2,4);
\coordinate [label=$C$](C) at (3,2);
\coordinate [label=$D$](D) at (5,0);
\coordinate [label=$E$](E) at (5,1);
\tkzCompass[thick,blue](O,A)
\tkzCompass[thick,red,delta=20](O,B)
\tkzCompass[thick,orange,length=2](O,C)
\tkzDrawArc[thick,brown](O,D)(E)
\foreach \point in {A,...,E,O}
\fill [black,opacity=.5] (\point) circle (1pt);
\end{tikzpicture}
\end{document}
a) The macro \tkzCompass can draw an arc with a center through a point. Without option (you can use TikZ's options) the arc has a length of 1 cm;
b) you can use the option length to change the default value length =2 for 2cm;
c) you can use the option delta. delta=20 means that the ends of the arc makes an angle of 40 degrees with the center;
d) more subtle is the last possibility. With \tkzDrawArc(O,D)(E) you draw an arc with center O passing through D and stopping on the half line [OE).
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9428093433380127, "perplexity": 2436.903682138607}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571222.74/warc/CC-MAIN-20220810222056-20220811012056-00091.warc.gz"}
|
https://www.arxiv-vanity.com/papers/cond-mat/9805228/
|
# Creation of gap solitons in Bose-Einstein condensates
O. Zobay, E. M. Wright, and P. Meystre Optical Sciences Center, University of Arizona, Tucson, Arizona 85721
###### Abstract
We discuss a method to launch gap soliton-like structures in atomic Bose-Einstein condensates confined in optical traps. Bright vector solitons consisting of a superposition of two hyperfine Zeeman sublevels can be created for both attractive and repulsive interactions between the atoms. Their formation relies on the dynamics of the atomic internal ground states in two far-off resonant counterpropagating -polarized laser beams which form the optical trap. Numerical simulations show that these solitons can be prepared from a one-component state provided with an initial velocity.
###### pacs:
PACS numbers: 03.75.Fi, 05.30.Jp, 32.80.Pj
## I Introduction
The Gross-Pitaevskii equation (GPE) has been used successfully in the recent past to explain various experiments on atomic Bose-Einstein condensates (see, e.g., the references in [1, 2]), and its validity for the description of the condensate dynamics at zero temperature is now well accepted. A further confirmation would be provided by the observation of solitary matter waves, the existence of which is generic to nonlinear Schrödinger wave equations such as the GPE [3]. Such solitary waves could also find applications in the future, e.g., in the diffractionless transport of condensates.
Various theoretical studies of this problem have already been performed, predicting in particular the existence of bright solitons, with corresponding spatially localized atomic density profiles, for condensates with attractive interactions [2]. Research on condensates with repulsive interactions has focused on the formation of gray solitons which correspond to dips in the atomic density. Their creation was investigated in Refs. [1, 4], their general properties were discussed in [5], and Ref. [2] worked out their analogy to the Josephson effect.
Complementary and previous to this work, the formation of atomic solitons was also examined theoretically in the context of nonlinear atom optics [6, 7, 8, 9, 10, 11]. In these studies the interaction between the atoms was assumed to result from laser-induced dipole-dipole forces, but this theory has not been experimentally tested so far.
The reliance on attractive interactions to achieve bright matter-wave solitons in Bose-Einstein condensates is of course a serious limitation, due to the difficulties associated with achieving condensation in the first place for such interactions. The purpose of the present article is the theoretical exposition of an experimentally realizable geometry that allows one to create bright gap soliton-like structures in Bose condensates, for both attractive and repulsive signs of the two-body scattering length. Gap solitons result from the balance of nonlinearity and the effective linear dispersion of a coupled system, e.g., counterpropagating waves in a grating structure, and appear in the gaps associated with avoided crossings. Gap solitons have previously been studied in a variety of physical contexts, but particularly in nonlinear optics [12]. They were also studied in the framework of nonlinear atom optics [7], but in this case the two states involved are connected by an optical transition, and the effects of spontaneous emission can cause significant problems [8].
Several main reasons motivate our renewed interest in this problem. First, we already mentioned that bright gap solitons are known to exist in nonlinear systems irrespective of whether the nonlinear interaction is repulsive or attractive [13]. With regard to atomic condensates this means that they should be observable, at least in principle, also for Na and Rb where the positive interatomic scattering length gives rise to a repulsive mean interaction. Further, the study of bright solitary waves is of interest as they might be easier to detect than gray ones, and they could find future applications, e.g. in atomic interferometry [11]. An additional reason to study atomic gap solitons is the fact that they consist inherently of a superposition of two internal states, in our case two different Zeeman sublevels of the atomic ground state. As such, they offer a further example of a multicomponent Bose condensate the study of which has already received much interest recently [14, 15, 16]. Finally, the recent demonstration of far-off resonant dipole traps for condensates opens up the way to the “easy” generation and manipulation of such spinor systems.
This paper is organized as follows. Section II describes our model. The physics relevant for the generation of gap solitons as well as orders of magnitudes for the various experimental parameters involved are discussed in Sec. III, while Sec. IV presents a summary of our numerical results. Finally, conclusions are given in Sec. V.
## Ii The model
The situation we consider for the generation of atomic gap solitons makes use of the recently achieved confinement of Bose condensates in far off-resonant optical dipole traps [17]. We consider explicitly a trap consisting of two focused laser beams of frequency counterpropagating in the -direction and with polarizations and , respectively. These lasers are used to confine a Bose condensate which is assumed for concreteness to consist of Na atoms. The condensate is initially prepared in the atomic ground state. For lasers far detuned from the resonance frequency of the nearest transition to an excited hyperfine multiplet the dynamics of a single atom in the trap can be described by an effective Hamiltonian of the form [18]
Heff = P22m+d0ℏδ′(R)|0⟩⟨0| +d1ℏδ′(R)(|−1⟩⟨−1|+|1⟩⟨1|) +d2ℏδ′(R)(|1⟩⟨−1|e2iKlZ+|−1⟩⟨1|e−2iKlZ),
which is derived by adiabatically eliminating the excited states in the dipole and rotating wave approximations. In the Hamiltonian (II), the operators and denote the center-of-mass position and momentum of the atom of mass , the ket labels the magnetic sublevel of the Na ground states, , and . Furthermore,
δ′(R)=δs(R)/2, (2)
where we have introduced the detuning and the position-dependent saturation parameter
s(R)=D2E2(R)δ2+Γ2/4≃D2E2(R)δ2. (3)
In this expression, denotes the reduced dipole moment between the states and , is the upper to lower state spontaneous emission rate, and is the slowly varying laser field amplitude at point , the plane-wave factors having already been removed from the counterpropagating waves. In the following, we assume that , which is identical for both fields, varies only in the transverse - and -directions and is constant along the trap axis : This approximation is valid if the longitudinal extension of the confined BEC is much less than the Rayleigh range of the trapping fields, a condition we assume is satisfied. The numerical coefficients , which depend on the specific value of , are of the order of or somewhat less than unity. Note that except insofar as appears in the saturation parameter , the effects of spontaneous emission are neglected in this description111In the discussion of the atomic dynamics and Eq. (II) we have assumed that the initial state is coupled only to one excited hyperfine multiplet. However, in the optical trap the detuning of the laser frequency is large compared even to the fine structure splitting of the excited states, so that in principle several different hyperfine multiplets should be taken into account. Fortunately, the coupling to any of these multiplets gives rise to addititional contributions to Eq. (II) which are of the same analytical structure as the one given above. Only the values of , , and are different. This means that Eq. (II) may still be used in this case, the effects of the additional multiplets being included as modifications of the values of the coefficients . For simplicity, however, we will use the values of the transition in the following, i.e., , , and [18]..
The first term in the single-particle Hamiltonian (II) describes the quantized center-of-mass atomic motion, the second and third terms the (position dependent) light-shifts of the states, and the final term, proportional to , describes the coupling between the states by the counterpropagating fields. For example, coupling between the and states arises from the process involving absorption of one photon and subsequent re-emission of a photon. However, since the circularly polarized fields are counterpropagating this process also involves a transfer of linear momentum along the -axis, and this accounts for the appearance of the spatially periodic factors, or gratings, in the coupling terms. As shown below, these gratings provide the effective linear dispersion which allows for gap solitons in combination with the nonlinearity due to many-body effects.
To describe the dynamics of the Bose condensate we introduce the macroscopic wave function normalized to the total number of particles . Here is omitted as it is coupled to neither by nor by the nonlinearity if it vanishes initially, which we assume in the following. The time evolution of the spinor is determined by the two-component Gross-Pitaevskii equation
iℏ∂Ψ∂t = HeffΨ(R,t) + ([Ua|ψ1(R,t)|2+Ub|ψ−1(R,t)|2]ψ1(R,t)[Ub|ψ1(R,t)|2+Ua|ψ−1(R,t)|2]ψ−1(R,t)).
In the following we approximate the nonlinearity coefficients by with the -wave scattering length.
To identify the key physical parameters for gap soliton formation, and to facilitate numerical simulations, it is convenient to re-express Eq. (II) in a dimensionless form by introducing scaled variables , and with
tc = 1/(d2δ′0), (5) lc = tc⋅ℏK/m, (6) ρc = |d2ℏδ′0/U|,, (7)
where . Note that for our choice of , and for red detuning, we have . Equation (II) then reads
i∂ψ∂τ = ⎡⎢ ⎢⎣−MΔ+d1δ′(r)d2δ′0e2iklzδ′(r)/δ′0e−2iklzδ′(r)/δ′0−MΔ+d1δ′(r)d2δ′0⎤⎥ ⎥⎦(ψ1ψ−1) (8) +[sgn(d2ℏδ′0/U)(|ψ1|2+|ψ−1|2)](ψ1ψ−1),
where is the Laplacian in scaled variables, and we have introduced the dimensionless mass-related parameter
M=d2δ′0m/(2ℏK2l), (9)
so that .
## Iii Gap solitons
In this section we discuss the conditions under which Eqs. (8) yield gap soliton solutions. Rather than reproducing the explicit analytic forms of these solutions, which are readily available in the literature, here we introduce the reduced equations which yield gap solitons, and discuss the physics underlying their formation. Estimates for the orders of magnitude of various parameters characterizing atomic gap solitons are also given.
### iii.1 Reduced soliton equations
Two key approximations underly the appearance of gap solitons: First, we neglect all transverse variations of the electromagnetic and atomic fields, thereby reducing the problem to one spatial variable . Furthermore, we can set . Second we express the atomic fields in the form
ψ±1(z,t)=exp{i[±klz−(1/(4M)−1)τ]}ϕ±1(z,t), (10)
and we assume that the atomic field envelopes vary slowly in space in comparison to the plane-wave factors that have been separated out, so that only first-order spatial derivatives of the field envelopes need be retained and only the spatial harmonics indicated included. Under these assumptions Eqs. (8) reduce to
i(∂∂τ±2kl∂∂z)(ϕ1ϕ−1)=(0110)(ϕ1ϕ−1) +sgn(d2ℏδ′0/U)(|ϕ1|2+|ϕ−1|2)(ϕ1ϕ−1). (11)
Aceves and Wabnitz [13] have shown that these dimensionless equations have explicit travelling solitary wave solutions of hyperbolic secant form. Thus, the optical trapping geometry we propose here can support atomic gap solitons under the appropriate conditions.
Having established that our system can support gap solitons our goal in the remainder of this paper is to demonstrate through numerical simulations that these solitons, or at least a remnant of them, can arise for realistic atomic properties and that they can be created from physically reasonable initial conditions. In particular, the exact gap solitons solutions are coherent superpositions of the states where the phase and amplitude of the superposition varies spatially in a specific manner: it is not a priori clear that these gap solitons can be accessed from an initial state purely in the state for example. Furthermore, inclusion of transverse variations and spatial derivatives beyond the slowly-varying envelope approximation introduced above could, in principle, destroy the solitons [19]. For the numerical simulations to be presented here we work directly with Eqs. (8) which does not invoke these approximations.
### iii.2 Intuitive soliton picture
A simple and intuitively appealing explanation of the reason why Eq. (8) supports soliton solutions goes as follows: Consider first the one-dimensional nonlinear Schrödinger equation
i˙ψ=−M∂2ψ/∂z2+g|ψ|2ψ. (12)
This equation has bright soliton solutions if the effects of dispersion and nonlinearity can cancel each other. For this to happen, it is necessary that . In the usual case the mass-related coefficient is positive, so that bright solitons can only exist in condensates with attractive interactions . But consider now the dispersion relation for the linear part of Eq. (8) obtained after neglecting the transverse dimensions and performing the transformation
ψ±1=a±1exp{i[k±1z−ω(k)τ]}. (13)
Thereby, for the states, and is a relative longitudinal wave vector. The dispersion relation consists of two branches, which in the absence of linear coupling take the form of two parabolas corresponding to the free dynamics of the internal states . However, the linear coupling between these states results in an avoided crossing at , see Fig. 1. If the system is in a superposition of eigenstates pertaining to the lower branch of the dispersion relation, then at the crossing it can be ascribed a negative effective mass. One can thus expect that in this case the system can support soliton solutions even though the interaction is repulsive. For an attractive interaction, in contrast, soliton creation should be possible in all regions of the spectrum with positive effective mass.
From the dispersion relation picture, one can easily infer further properties of repulsive interaction solitons. First, they will only exist for weak enough dispersion, as the lower branch of the dispersion curve has a region with negative curvature only as long as the dimensionless mass . Also, the maximum possible velocity can be estimated to be of the order of which is the group velocity at the points of vanishing curvature in the dispersion relation. Finally, for a soliton at rest the contributions of the internal states and will approximately be equal, but solitons traveling with increasing positive, resp. negative, velocity will be increasingly dominated by the , resp. , contribution.
This qualitative discussion is in agreement with the analytic results of Ref.[13]. More precisely, the solutions of Ref. [13] are solitary waves. In the following, we will be concerned with the creation of long-lived localized wave packet structures which are brought about by the interplay between nonlinearity and dispersion described above. We will continue to refer to these structures as gap solitons for simplicity.
### iii.3 Soliton estimates
We now turn to a discussion of the typical orders of magnitude which characterize the soliton solutions of Eq. (II). We note from the outset that the analytical solutions of Ref. [13], as well as our numerical simulations, indicate that these characteristic scales can be directly inferred from the scale variables in Eqs. (5)–(7) which bring the Gross-Pitaevskii equations into dimensionless form. For example, the spatial extension of the scaled wave function , as well as the total norm are of order unity.222Note that the precise values of and are only of relevance for the scaling between Eqs. (II) and (8). They do not influence the essential physics of the system.
In order to obtain estimates for the characteristic length, time and density introduced in Eqs. (5)–(7) we use the parameter values of the Na experiment of Ref. [17] as a guidance. For Sodium, MHz, the saturation intensity mW/cm and the resonance wavelength nm. Choosing the trap wavelength to be far red-detuned from this value, with nm, and a maximum laser intensity kW/cm one obtains a characteristic scale for the time evolution of the condensate of the order of 50 s. The characteristic length is obtained by multiplication with the recoil velocity =1.8 cm/s, which yields m. This yields the dimensionless mass . Finally, the order of magnitude of the characteristic density cm, which means that a soliton typically contains of the order of atoms. These estimates are confirmed by our numerical simulations, which show that the typical extension of a soliton is several in the -direction, about one in the transverse direction and it contains about 1000 atoms.
Our numerical simulations show that the maximum dimensionless atomic density in a soliton is always of the order of , which appears to produce the nonlinearity necessary to balance the effects of dispersion. From this value, it is possible to obtain a first estimate of the transverse confinement of the condensate required in our two-dimensional model: We assume that the transverse spatial dependence of the atomic density can be modeled as the normalized ground state density of the harmonic trap potential [5]. The soliton density can hence be roughly estimated as , where is the Heaviside step function, a typical total number of atoms in the soliton and its length. From this condition, and using the typical values for and previoulsy discussed leads to a lower limit for in the range between 100 and 1000 Hz. Altogether, these various estimates are well within experimental reach.
## Iv Numerical results
Having characterized the idealized gap soliton solutions of Eq. (II) we now investigate whether they can be accessed from realistic initial conditions. To this end, we study numerically the following situation. A condensate of atoms in the internal state is initially prepared in a conventional optical dipole trap which provides only a tranverse confinement potential . This potential is assumed to be Gaussian, with a trapping frequency at the bottom. Axially, the condensate is confined by a harmonic magnetic trap of frequency . At the magnetic trap is turned off and the polarizations of the trapping light fields are switched to the configuration, with being unchanged.
The simulations are performed in two spatial dimensions only, and , as this can already be expected to capture the relevant physics without requiring excessive computational resources. We concentrate on the more interesting case of a condensate with repulsive interactions since that is the new case in which solitons are expected. In order to estimate atom numbers the transformation between Eqs. (8) and (II) is performed after replacing by where the variance is determined from the two-dimensional wave packet structure in question.
The main purpose of the numerical simulations is to show that gap solitons can be formed out of condensate wave functions whose initial parameters lie within a relatively broad range. It is only necessary to choose , , and such that the spatial extension of the initial condensate and the atom number are comparable to typical soliton values. They need not take on precisely defined values and the initial wave function does not have to match closely the form of a soliton.
However, the condensate will not couple effectively to a soliton if it is at rest initially. Such a situation corresponds to the point with in Fig. 1 where the effective mass is still positive (). The key to an efficient generation of solitons is therefore to provide the condensate with an initial velocity close to the recoil velocity in the -direction. This may be achieved, e.g., by suddenly displacing the center of the magnetic trap. The initial wave function can then be written as with the ground state of the combined optical and magnetic trap and [3]. It is thus placed in the vicinity of the avoided crossing. Experimentally, condensates have already been accelerated to velocities in this range by a similar method in connection with the excitation of dipole oscillations [20].
Figure 2 shows an illustrative example for the formation of a soliton out of the initial distribution. It depicts the evolution of the transverse averaged atomic density
N(z;τ)=∫dx(|ψ1(x,z;τ)|2+|ψ−1(x,z;τ)|2), (14)
as a function of the scaled variables and . In this example, nm, the maximum intensity kW/cm, s, s, and . The characteristic scales are thus m and s, the coefficient . We choose which corresponds to an initial atom number of 2900, approximately. Figure 2 shows the formation of a soliton after an initial transient phase having a duration of 50 , approximately. This transient phase is characterized by strong “radiation losses”. They occur because half of the initial state pertains to the upper branch of the dispersion relation which cannot sustain solitons. The shape of the created soliton is not stationary in time but appears to oscillate. Further examination shows that norm of the state is slightly larger than that of the state, as is expected for a soliton moving slowly in negative -direction [13]. Figure 3 shows the atomic density in the soliton at . The soliton contains about 500 atoms. The inset depicts the longitudinally integrated density and the transverse confinement potential in the shape of an inverted Gaussian. The soliton thus spreads out over half the width of the potential well, approximately.
Various numerical simulations were performed in order to assess the dependence of soliton formation on the various initial parameters. When changing the atom number a soliton was formed over the whole investigated range between 500 and 4000 atoms. With increasing and thus increasing effects of the nonlinearity the final velocity of the soliton changed from negative to positive values. For large a tendency to form two soliton wave packets out of the initial state was observed, however, the formation of each of these solitons is accompanied by large radiation losses which destabilize the other one. As to the transverse confinement parameter soliton formation was observed for s. At s a stable structure was no longer attained which is in rough agreement with the estimate given above. Whereas these numbers indicate a relatively large freedom in the choice of and (for similar results can be expected) a somewhat more restrictive condition is placed on the initial velocity . Its value should be chosen from the interval between 0.8 and 1.0 in order to guarantee soliton formation, the lower bound being determined by the point of vanishing curvature in the dispersion relation. For the initial wave function is situated more and more on the upper branch of the dispersion relation so that the tendency to form solitons is diminished rapidly.
## V Summary and conclusion
In conclusion, we have demonstrated that gap soliton-like structures can be created in a Bose condensate confined in an optical dipole trap formed by two counterpopagating -polarized laser beams. Bright solitons can be formed not only for atomic species with attractive interactions but also in the repulsive case. This is rendered possible because the atoms can be ascribed a negative effective mass if their velocity is close to the recoil velocity. The repulsive interaction solitons are inherently superpositions of two hyperfine Zeeman sublevels. The discussion of characteristic scales and numerical simulations indicated that the actual observation of these structures should be achievable within the realm of current experimental possibilities.
In our theoretical treatment spontaneous emission was neglected, an approximation justified by the large detunings in the optical trap [17]. The effects of anti-resonant terms, which were also ignored, might be of more importance. This question, as well as three-dimensional numerical studies, could be the subject of future work.
###### Acknowledgements.
We have benefited from numerous discussions with E. V. Goldstein. This work is supported in part by the U.S. Office of Naval Research Contract No. 14-91-J1205, by the National Science Foundation Grant PHY95-07639, by the U.S. Army Research Office and by the Joint Services Optics Program.
## References
• [1] T. F. Scott, R. J. Ballagh, and K. Burnett, Report No. cond–mat/9711111.
• [2] W. P. Reinhardt and C. W. Clark, J. Phys. B 30, L785 (1997).
• [3] S. A. Morgan, R. J. Ballagh, and K. Burnett, Phys. Rev. A 55, 4338 (1997).
• [4] R. Dum, J. I. Cirac, M. Lewenstein, and P. Zoller, Report No. cond–mat/9710238.
• [5] A. D. Jackson, G. M. Kavoulakis, and C. J. Pethick, Report No. cond–mat/9803116.
• [6] G. Lenz, P. Meystre, and E. W. Wright, Phys. Rev. Lett. 71, 3271 (1993).
• [7] G. Lenz, P. Meystre, and E. W. Wright, Phys. Rev. A 50, 1681 (1994).
• [8] K. J. Schernthanner, G. Lenz, and P. Meystre, Phys. Rev. A 50, 4170 (1994).
• [9] W. Zhang, D. F. Walls, and B. C. Sanders, Phys. Rev. Lett. 72, 60 (1994).
• [10] S. Dyrting, Weiping Zhang, and B. C. Sanders, Phys. Rev. A 56, 2051 (1997).
• [11] M. Holzmann and J. Audretsch, Europhys. Lett. 40, 31 (1997).
• [12] C. M. de Sterke and J. E. Sipe, in Progress in Optics, edited by E. Wolf (Elsevier, Amsterdam, 1994), Vol. XXXIII.
• [13] A. B. Aceves and S. Wabnitz, Phys. Lett. A 141, 37 (1989).
• [14] T.-L. Ho and V. B. Shenoy, Phys. Rev. Lett. 77, 2595 (1996).
• [15] E. V. Goldstein and P. Meystre, Phys. Rev. A 55, 2935 (1997).
• [16] C. J. Myatt, E. A. Burt, R. W. Ghrist, E. A. Cornell, and C. E. Wieman, Phys. Rev. Lett. 78, 586 (1997).
• [17] D. M. Stamper-Kurn, M. R. Andrews, A. P. Chikkatur, S. Inouye, H.-J. Miesner, J. Stenger, and W. Ketterle, Phys. Rev. Lett. 80, 2027 (1998).
• [18] C. Cohen-Tannoudji, in Fundamental Systems in Quantum Optics, edited by J. Dalibard, J.-M. Raimond, and J. Zinn-Justin (North-Holland, Amsterdam, 1992).
• [19] A. R. Champneys, B. A. Malomed, and M. J. Friedman, Phys. Rev. Lett. 80, 4169 (1998).
• [20] D. S. Durfee and W. Ketterle, Opt. Express 2, 299 (1998).
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9216679334640503, "perplexity": 645.9979865466472}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703550617.50/warc/CC-MAIN-20210124173052-20210124203052-00126.warc.gz"}
|
https://www.statsmodels.org/devel/generated/statsmodels.discrete.discrete_model.MultinomialResults.pred_table.html
|
# statsmodels.discrete.discrete_model.MultinomialResults.pred_table¶
MultinomialResults.pred_table()[source]
Returns the J x J prediction table.
Notes
pred_table[i,j] refers to the number of times “i” was observed and the model predicted “j”. Correct predictions are along the diagonal.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9971136450767517, "perplexity": 2459.9427429926377}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348492295.88/warc/CC-MAIN-20200604223445-20200605013445-00021.warc.gz"}
|
https://www.mersenneforum.org/showpost.php?s=facd549be2b7782a9b722fc0280f24d8&p=388904&postcount=6
|
View Single Post
2014-12-02, 16:06 #6 rogue "Mark" Apr 2003 Between here and the 145028 Posts I've fixed various issues: Code: Fix factor rate calculation. Fix writing ABC file as the wrong line was generated. Fix reading ABC file since it always failed. Various other code cleanup issues. Last fiddled with by rogue on 2020-09-24 at 19:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24642227590084076, "perplexity": 21355.13427475591}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964360881.12/warc/CC-MAIN-20211201173718-20211201203718-00456.warc.gz"}
|
https://sml-group.cc/blog/2020-active-meta-learning/
|
Probabilistic Active Meta Learning (PAML)
Meta-learning can make machine learning algorithms more data-efficient, using experience from prior tasks to learn related tasks quicker. Since some tasks will be more-or-less informative with respect to performance on any given task in a domain, an interesting question to consider is, how might a meta-learning algorithm automatically choose an informative task to learn? Here we summarise probabilistic active meta-learning (PAML): a meta-learning algorithm that uses latent task representations to rank and select informative tasks to learn next.
PAML
In our paper1, we consider a setting where the goal is to actively explore a task domain. We assume that the meta-learning algorithm is given a set of task descriptive observations (task descriptors) to select the next task (akin to a continuous or discrete action space). For example, task descriptors might be fully or partially observed task parameterisations (e.g., weights of robot links), high-dimensional descriptors of tasks (e.g., image data of different objects for grasping), or simply a few observations from the task itself.
PAML is based on the intuition that by formalising meta-learning as a latent variable model23, the learned task embeddings will represent task differences in a way that can be exploited to make decisions about what task to learn next. Figure 2 shows the graphical model for active meta-learning that underpins PAML.
Given a set of training datasets, learning and inference is done jointly by maximising a lower on the log model evidence (ELBO), with respect to global model parameters $\theta$ and the variational parameters $\phi$ that approximate the posterior over the latent task variables $\boldsymbol{h}_i$. Since the variational posterior is chosen to be computable in closed form (e.g. Gaussian), we can naturally define a utility function as the self-information (or surprise) of a point under a mixture distribution defined by the training tasks, i.e.,
$$u(\boldsymbol{h}^{*}) := -\log \sum\nolimits_{i=1}^N q_{\phi_i}(\boldsymbol{h}^*) + \log N,$$ where $N$ is the number of training tasks and $\boldsymbol{h}^*$ is the point being evaluated. The full PAML algorithm is illustrated in Figure 3.
Results
In the paper1, we run experiments on simulated robotic systems. We test PAML’s performance on varying types of task-descriptors. We generate tasks within domains by varying configuration parameters of the simulator, such as the masses and lengths of parts of the system. We then perform experiments where the learning algorithm observes: (i) fully observed task parameters, (ii) partially observed task parameters, (iii) noisy task parameters and (iv) high-dimensional image descriptors. We compare PAML to uniform sampling (UNI), used in recent meta-learning work4 and equivalent to domain randomization5, Latin hypercube sampling (LHS) and an oracle.
Figure 4 shows the results for observed task descriptors and Figure 5 for image task descriptors. In all experiments, we see a noticeable improvement in data-efficiency, measured as the predictive performance—i.e. RMSE and negative log-likelihood (NLL)—on a set of test tasks, plotted against the number of training tasks added.
Conclusion
To summarise, PAML is a probabilistic formulation of active meta-learning. By exploiting learned task representations and their relationship in latent space, PAML can use prior experience to select more informative tasks. The flexibility of the underlying active meta-learning model enables PAML to do this even when the task descriptors—the representation of the tasks observed by the model—are partially observed or even when they are images.
References
1. J. Kaddour, S. Sæmundsson, and M. Deisenroth, Probabilistic Active Meta-Learning, NeurIPS 2020. ↩︎
2. S. Sæmundsson, K. Hofmann, and M. Deisenroth. Meta Reinforcement Learning with Latent Variable Gaussian Processes. UAI, 2018. ↩︎
3. J. Gordon, J. Bronskill, M. Bauer, S. Nowozin, and R. Turner. Meta-learning Probabilistic Inference for Prediction. ICLR, 2019. ↩︎
4. C. Finn, P. Abbeel, and S. Levine. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. ICML, 2017. ↩︎
5. B. Mehta, M. Diaz, F. Golemo, C. J. Pal, and Liam Paull. Active Domain Randomization, CoRL 2020. ↩︎
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8723245859146118, "perplexity": 2250.2090188774005}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704824728.92/warc/CC-MAIN-20210127121330-20210127151330-00407.warc.gz"}
|
https://kammouriaudit.com/w40iu/article.php?id=logistic-regression-hyperparameters-r-643005
|
The objective of the dataset is to assess health care quality. To keep things simple, we will focus on a linear model, the logistic regression model, and the common hyperparameters tuned for this model. i.e 100/(100+5), Specificity = TN/(TN + FP) .When it’s actually no, how often does it predict no?? On the other hand, at the point (0.6, 0.9), we’re correctly labeling about 90% of the poor care cases, but have a false positive rate of 60%. linear regression, logistic regression, regularized regression) discussed algorithms that are intrinsically linear.Many of these models can be adapted to nonlinear patterns in the data by manually adding model terms (i.e. We’ll use these predictions to create our ROC curve. James Bergstra’s first proposed solution was so entertaining because, absent evidence that it works, it seems almost flippant to even propose: he suggested replacing Grid Search with Random Search. Jasper Snoek, Hugo Larochelle and Ryan Adams suggest that one use a. Generate the tuning grid for tuning hyperparameters Usage The output of a Logistic regression model is a probability. This is what we’ll try to beat with our logistic regression model. It returns predicted class labels. The ROC curve captures all thresholds simultaneously. I’ll go through the traditional approach, then describe the newer and cleaner methods. Typically, hyperparameters are set using the Grid Search algorithm, which works as follows: Grid Search is about the worst algorithm one could possibly use, but it’s in widespread use because (A) machine learning experts seem to have less familiarity with derivative-free optimization techniques than with gradient-based optimization methods and (B) machine learning culture does not traditionally think of hyperparameter tuning as a formal optimization problem. To get the best set of hyperparameters we can use Grid Search. James’ argument is this: most ML models have low-effective dimension, which means that a small number of parameters really affect the cost function and most have almost no effect. Our results demonstrate that our attacks can accurately steal hyperparameters. Single-variate logistic regression is the most straightforward case of logistic regression. Booster: It helps to select the type of models for each iteration. Click here Guide to Machine Learning(in R) for Beginners: Linear Regression. This two-part minimization problem is similar in many ways to stepwise regression. We take a 70:30 ratio keeping 70% of the data for training and 30% for testing. Although this appears to be the only solution, it isn’t the best one. Logistic regression can be binomial or multinomial. When it predicts yes, how often is it correct?100/(10+100), A ROC(Receiver Operator Characteristic Curve) can help in deciding the best threshold value. LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True, intercept_scaling=1, max_iter=100, multi_class='ovr', n_jobs=1, penalty='l2', random_state=None, solver='liblinear', … We should always predict on the unseen observations but here we want to get the value of the threshold , hence the predictions on the train set. It’s an extension of linear regression where the dependent variable is categorical and not continuous. Since we are building the model on training data, we use qualityTrain .The family argument tells the glm function to build a logistic regression model. We evaluate the effectiveness of our attacks both theoretically and empirically. (In the case of Linear regression, the baseline model predicts the average of all data points as the outcome). We first split the dataset into train and test. In this section, we will explore hyperparameter optimization of the logistic regression model on the sonar dataset. Features such as tenure_group, Contract, PaperlessBilling, MonthlyCharges and InternetService appear to play a role in customer churn. However, the worth … Continue reading → Related. Since this approach seems like it might be worst than Grid Search, it’s worth pondering why it should work. Logistic Regression in R. In this article, we’ll be working with the Framingham Dataset. It returns predicted class probabilities. It is implemented in the linear_model library. Version 3 of 3. Hyperparameters can be classified as model hyperparameters, that cannot be inferred while fitting the machine to the training set because they refer to the model selection task, or algorithm hyperparameters, that in principle have no influence on the performance of the model but affect the speed and quality of the learning process. There is a TRUE or FALSE value for each of our observations.TRUE means that we should put that observation in the training set, and FALSE means that we should put that observation in the testing set. 2.3 Logistic Regression. 2. So which threshold value one should pick? Wrapping Up: Final comments and some exercises to test your skills. An R script file with all of the commands used in this lecture can also be downloaded from my Github repository. One should select the best threshold for the trade-off one wants to make. Add the dataset that you want to use for training, and connect it to the middle input of Tune Model Hyperparameters. The goal is to determine the optimum hyperparameters for a machine learning model. 29. In the multiclass case, the training algorithm uses the one-vs-rest (OvR) scheme if the ‘multi_class’ option is set to ‘ovr’, and uses the cross-entropy loss if the ‘multi_class’ option is set to ‘multinomial’. There are two types of errors that this model can make:1. where the model predicts 1, or poor care, but the actual outcome is 0. Logistic regression is implemented in LogisticRegression. At MSR this week, we had two very good talks on algorithmic methods for tuning the hyperparameters of machine learning models. 10/10/2020; 7 minutes to read; In this article. In mikropml: User-Friendly R Package for Supervised Machine Learning Pipelines. If the probability is greater than this threshold value, the event is predicted to happen otherwise it is predicted not to happen. We implemented super learning using the SuperLearner package in the R programming language. The trade-off parameter of logistic regression that determines the strength of the regularization is called C, and higher values of C correspond … We have constructed the most basic of regression ANNs without modifying any of the default hyperparameters associated with the neuralnet() function. You also decide a cut off value/threshold and then conclude that people with a probability higher than the threshold will buy the product and vice versa. We found that there are 11 missing values in “TotalCharges” columns. For splitting the data we will use the caTools Package. Like stepwise regression, it feels like an opportunity for clean abstraction is being passed over, but it’s not clear to me (or anyone I think) if there is any analytic way to solve this problem more abstractly. Lasso regression. 2y ago. Next, you can use this set of hyperparameters to train a model and test on the unseen dataset to see whether the model generalizes on the unseen dataset. To make this discussion a little more quantitative, we use what’s called a confusion matrix or classification matrix. In an optimization problem regarding model’s hyperparameters, the aim is to identify : where ffis an expensive function. It returns class probabilities; multi:softmax - multiclassification using softmax objective. Recall that we made predictions on our training set and called them predictTrain. Apart from starting the hyperparameter jobs, the logs of the jobs and the results of the best found hyperparameters can also be seen in the Jobs dashboard. scikit-learn Classification using Logistic Regression Example. So our baseline model has an accuracy of 75%. There … We can select a threshold value. The following output shows the default hyperparemeters used in sklearn. We will first do a simple linear regression, then move to the Support Vector Regression so that you can see how the two behave with the same data. Our results demonstrate that our attacks can accurately steal hyperparameters. Hyperparameters. Logistic Regression. i.e 50/(50+10), Precision = TP/predicted yes. Selecting appropriate settings for hyperparameters is a constant problem in machine learning, which is somewhat surprising given how much expertise the machine learning community has in optimization theory. 1st Regression ANN: Constructing a 1-hidden layer ANN with 1 neuron. Since it gives the probability of people who are more likely to buy a product, it enables the company, to focus only on the customers who are most likely to say Yes. If, for example, we plan to use L2-regularized linear regression to solve our problem, we will use the training set and validation set to select a value for the $$\lambda$$ hyperparameter that is used to determine the strength of the penalty for large coefficients relative to the penalty for errors in predictions. D&D’s Data Science Platform (DSP) – making healthcare analytics easier, High School Swimming State-Off Tournament Championship California (1) vs. Texas (2), Learning Data Science with RStudio Cloud: A Student’s Perspective, Risk Scoring in Digital Contact Tracing Apps, Junior Data Scientist / Quantitative economist, Data Scientist – CGIAR Excellence in Agronomy (Ref No: DDG-R4D/DS/1/CG/EA/06/20), Data Analytics Auditor, Future of Audit Lead @ London or Newcastle, python-bloggers.com (python/data-science news), Python Musings #4: Why you shouldn’t use Google Forms for getting Data- Simulating Spam Attacks with Selenium, Building a Chatbot with Google DialogFlow, LanguageTool: Grammar and Spell Checker in Python, Click here to close (This popup will not appear again). Posted on July 20, 2012 by John Myles White in R bloggers | 0 Comments. You built a simple Logistic Regression classifier in Python with the help of scikit-learn. It shrinks the regression coefficients toward zero by penalizing the regression model with a penalty term called L1-norm, which is the sum of the absolute coefficients.. The model can accurately identify patients receiving low-quality care with test set accuracy being equal to 78% which is greater than our baseline model. Almost certainly (B) is more important than (A). Hyperparameters study, experiments and finding best hyperparameters for the task; I think hyperparameters thing is really important because it is important to understand how to tune your hyperparameters because they might affect both performance and accuracy. We should try and improve the network by modifying its basic structure and hyperparameter modification. Selecting appropriate settings for hyperparameters is a constant problem in machine learning, which is somewhat surprising given how much expertise the machine learning community has in optimization theory. Copyright © 2020 | MH Corporate basic by MH Themes, Frank Hutter, Holger H. Hoos and Kevin Leyton-Brown, Click here if you're looking to post or find an R/data-science job, Introducing our new book, Tidy Modeling with R, How to Explore Data: {DataExplorer} Package, R – Sorting a data frame by the contents of a column, Multi-Armed Bandit with Thompson Sampling, 100 Time Series Data Mining Questions – Part 4, Whose dream is this? To make sure that we all get the same split, we’ll set our seed. Tuning is a vital part of the process of working with logistic regression. In this tutorial we introduce a neural network used for numeric predictions and cover: 1. XGBoost provides a large range of hyperparameters. And for all of the true good care cases, we predict an average probability of about 0.19.This is good because it looks like we’re predicting a higher probability of the actual poor care cases. Anchors. i.e 100+50/165. Along the way you will learn some best practice tips & tricks for choosing which hyperparameters to tune and what values to set & build learning curves to analyze your hyperparameter choices. We call our attacks hyperparameter stealing attacks. For the … Tuning is a vital part of the process of working with logistic regression. Suppos… There are two popular ways to do this: label encoding and one hot encoding. Out of those 165 cases, the classifier predicted “yes” 110 times, and “no” 55 times. Multivariate Adaptive Regression Splines. The only way that appears is to contact every person on the list and ask them whether they will buy the product or not. Here, we are using the R style formula. For each parameter $$p_{i}$$ the researcher selects a list of values to test empirically. So now let’s create our training and testing sets using the subset function. Each row represents a customer, each column contains that customer’s attributes: The raw data contains 7043 rows (customers) and 21 columns (features). On the other hand, if one is more concerned with having a high sensitivity or high true positive rate, one should pick a threshold that minimizes the false positive rate. So, it will have more design decisions and hence large hyperparameters. Admission_binary predicted by (~) CGPA … Logistic Regression (aka logit, MaxEnt) classifier. But if the probability of poor care is less than the threshold value, t, then we predict good quality care. 1.General Hyperparameters. It predicts the probability of the outcome variable. Additionally, the table provides a Likelihood ratio test. The lower the threshold, or closer to (1,1), the higher the sensitivity and lower the specificity. ... Validation for finding Best Model and Hyperparameters. the tapply function computes the average prediction for each of the true outcomes. Tuning the Hyperparameters of a Logistic Regression Model This section contains Python code for the analysis in the CASL version of this example, which contains details about the results. data mycas. Hyperparameters are certain values or weights that determine the learning process of an algorithm. Hyper-parameters of logistic regression. We will also use an argument called type=” response” which gives us the probabilities. We evaluate the effectiveness of our attacks both theoretically and empirically. In machine learning, a hyperparameter is a parameter whose value is used to control the learning process. The McFadden Pseudo R-squared value is the commonly reported metric for binary logistic regression model fit.The table result showed that the McFadden Pseudo R-squared value is 0.282, which indicates a decent model fit. gbtree is used by default. Linear Regression: Implementation, Hyperparameters and their Optimizations One of the company’s task is to send out e-mail offers to customers with a proposal to buy certain products. The following DATA step creates the getStarted data set in a data table named mycas.getStarted. C(x_{Test}, y_{Test}, \theta_{Train + Validation}(\lambda_{Validation}^{*}), \lambda_{Validation}^{*}) When and how to use the Keras Functional API, Moving on as Head of Solutions and AI at Draper and Dash. At the point (0, 0.4), we’re correctly labeling about 40% of the poor care cases with a very small false positive rate. While I have yet to see it published, I’d like to see more people try the Nelder-Mead method for tuning hyperparameters. Hyperparameter gradients might also not be available. solver in [‘newton-cg’, ‘lbfgs’, ‘liblinear’, ‘sag’, ‘saga’] Regularization ( penalty) can sometimes be helpful. We see that by increasing the threshold value, the model’s sensitivity decreases and specificity increases while the reverse happens if the threshold value is decreased. Problem Formulation. \[ An alternative approach is to view our problem as one of Bayesian Optimization: we have an arbitrary function that we want to minimize which is costly to evaluate and we would like to find a good approximate minimum in a small number of evaluations. Our results highlight the need for new defenses … The line shows how these two outcome measures vary with different threshold values. “Yes” or “No”, “Success” or “Failure”). Hence, 131 diabetic patients were randomly selected between the ages of 35 and 55. There are 99 training samples and 32 testing samples. Description. This was all about Logistic Regression in R. We studied the intuition and math behind it and also how Logistic regression makes it very easy to solve a … To begin with we will use this simple data set: I just put some data in excel. For each element of the Cartesian product of these values, the computer evaluates the cost function. The coefficients, or β values, are selected to maximize the likelihood of predicting a high probability for observations actually belonging to class 1 and predicting a low probability for observations actually belonging to class 0. Your job as a data scientist is to determine whether the contacted person will buy the product or not. Clearly, the Linear Regression algorithm will not work here since it only works for problems with a continuous outcome variable. Selecting appropriate settings for hyperparameters is a constant problem in machine learning, which is somewhat surprising given how much expertise the machine learning community has in optimization theory. We’ll call our model QualityLog and use the “glm” function or “generalized linear model” to buildour logistic regression model. For instance, we evaluate our attacks on Amazon Machine Learning. Mod. This model training took more than 1 hour in my local machine (i7, 16 GB RAM), even after using NVIDIA GPU. In this model, the probabilities describing the possible outcomes of a single trial are modeled using a logistic function. binary:logistic - logistic regression for binary classification. while the false positive rate, or 1 minus the specificity, is given on the x-axis. Regression Hyperparameters: Tuning the model. & Inference - CS698X (Piyush Rai, IITK) Bayesian Linear Regression (Hyperparameter Estimation, Sparse Priors), Bayesian Logistic Regression 6 Learning Hyperparameters … we would get 98/131 observations correct and an accuracy of 75%. We have 131 observations, one for each of the patients in our data set, and 14 different variables. Additionally, the table provides a Likelihood ratio test. Share Tweet. When viewed in this perspective, the natural strategy is to regress the cost function on the settings of the hyperparameters. Hence, in this case, we would predict that all patients are receiving good care. Fitting Binary Logistic Regression. The McFadden Pseudo R-squared value is the commonly reported metric for binary logistic regression model fit.The table result showed that the McFadden Pseudo R-squared value is 0.282, which indicates a decent model fit. Logistic regression predicts probabilities in the range of ‘0’ and ‘1’. As an example, here we will show how to carry out a analysis for Pima Indians data set similar to analysis from Chapter 5.4 of Gelman and Hill (2007) using stan_glm. Logistic regression does not really have any critical hyperparameters to tune. You tuned the hyperparameters with grid search and random search and saw which one performs better. a. There is only one independent variable (or feature), which is = . A simple data set. Tune Model Hyperparameters. 4. We see here that we pass as the solver liblinear, and the only thing you need to know, there's different solvers that optimized for … So the first thing that we do is fit a regular logistic regression model, which is just going to have no extra hyperparameters, no regularization. Artificial neural networks are commonly thought to be used just for classification because of the relationship to logistic regression: neural networks typically use a logistic activation function and output values from 0 to 1 like logistic regression. Uses Cross Validation to prevent overfitting. For instance, we evaluate our attacks on Amazon Machine Learning. Menu Solving Logistic Regression with Newton's Method 06 Jul 2017 on Math-of-machine-learning. The higher the threshold, or closer to (0, 0), the higher the specificity and the lower the sensitivity. Multinomial logistic refers to cases where the outcome can have three or more possible types of values (e.g., “good” vs. “very good” vs. “best” ). Depending on the form or the dimension of the initial problem, it might be really expensive to find the optimal value of xx. I don't see the xgboost R package having any … It is generated by plotting the True Positive Rate (y-axis) against the False Positive Rate (x-axis) as you vary the threshold for assigning observations to a given class.ROC curve will always end at (1,1). The following output shows the default hyperparemeters used in sklearn. Introduction & 'Parameters' 50 xp Parameters in Logistic Regression 50 xp Extracting a Logistic Regression … Click here Guide to Machine Learning(in R) for Beginners: Decision Trees, Latest news from Analytics Vidhya on our Hackathons and some of our best articles! In this video, learn how to highlight the key hyperparameters to be considered for tuning. The ROC curve always starts at the point (0, 0) i.e threshold of value 1. In this article, we’ll be working with the Framingham Dataset. Random Search for Classification . Let's bolster our newly acquired knowledge by solving a practical problem in R. Practical - Tuning XGBoost in R. In this practical section, we'll learn to tune xgboost in two ways: using the xgboost package and MLR package. Note : In order to run this code, the data that are described in the CASL version need to be accessible to the CAS server. Regression Hyperparameters. regression, logistic regression, support vector machine, and neural network. We find that for all of the true poor care cases, we predict an average probability of about 0.44. ... and tuning of more hyperparameters for grid search. So how to choose the optimum threshold value. Notebook. Accuracy : (TP+TN)/Total . The rows are labelled with actual outcomes while the columns are labelled with predicted outcomes. To set up the problem of hyperparameter tuning, it’s helpful to think of the canonical model-tuning and model-testing setup used in machine learning: one splits the original data set into three parts — a training set, a validation set and a test set. Hyperparameters study, experiments and finding best hyperparameters for the task; I think hyperparameters thing is really important because it is important to understand how to tune your hyperparameters because they might affect both performance and accuracy. We also see that both of these variables have at least one. Input Execution Info Log Comments (4) This Notebook has been released under the Apache 2.0 open source license. 34 Table 3 shows the R packages we used to implement each algorithm in the super learner and the corresponding hyperparameters that we tuned. View source: R/hyperparameters.R. All you have is a sample of customers that were contacted recently, their age and a variable whether or not they took action. The threshold at this point will be 0. Linear Regression: Implementation, Hyperparameters and their Optimizations This means that we will always classify these observations falling into class 1(Specificity will be 0. Possible types of validation techniques using R for the trade-off you want randomly! Grid with the help of scikit-learn tapply function computes the average of all data points as the outcome all! Of other parameters ( typically node weights ) are derived via training model case! To AcuteDrugGapSmall are the two possible predicted classes: the classifier made a total of predictions... By one in the range of ‘ 0 ’ and ‘ 1 ’ set of hyperparameters can. Was modified to this notebook has been released under the curve ( AUC ) under the Apache open... In case of linear regression algorithm will not work here since it only works for problems a. Which defines what we ’ ll try to fit in probabilities between 0 and 1, which =... To implement each algorithm in the sample leave the organisation, and neural network your. The traditional approach, then we predict an average probability of about 0.44 those! ” or “ Failure ” ) under ROC is called area under is! ( a ) does it predict yes testing Sets using the SuperLearner package in Analytics. The only way that appears is to determine whether the contacted person will the... The quality of a logistic regression and rstanarm is from a CRAN vignette was to. ( in the range of ‘ 0 ’ and ‘ 1 ’ otherwise it is wise to use caTools! Unique prediction classes traditional approach, then we predict good quality care when viewed in this video learn. Correct and an accuracy of 75 % the point ( 0, or 1 the. Help of scikit-learn different threshold values imagine you are working as a data named... Unique prediction classes it helps to select the type of models for each parameter \ ( {... Constructed the most straightforward case of logistic regression with Newton 's Method Jul... Or good care, but you can substitute any appropriately defined CAS engine libref of more hyperparameters for a Learning... Greater than this threshold value, the computer selects the hyperparameter settings from this grid with help... Vignette, Pima Indians data is used Failure ” ) really expensive to find the optimal of! Values to test your skills, which are the independent variables while is! Parameter - a control variable that retains strength modification of regularization by being inversely positioned to middle... ) classifier aka Logit, MaxEnt ) classifier away message of this notebook by Vehtari. 1,1 ), which is = following project, I have learned important! Statistical measures of the hyperparameters logistic regression hyperparameters r machine Learning Pipelines since this approach seems like it might be than! Source license our use of cookies Precision = TP/predicted yes the analysis in this blog, we our! With a proposal to buy certain products parameter \ ( p_ { }!, R ( rate ) means “ the occurrence count in unit ( 1 ) ”! Found that there are 11 missing values selected between the ages of 35 55. Gamma regression, support vector regression ( p_ { I } \ ) the researcher selects a list of (! Actual outcomes to the lambda regulator then describe the newer and cleaner methods numeric predictions and:. We would get 98/131 observations correct and an accuracy of 75 % an. Variable that retains strength modification of regularization by being inversely positioned to the predicted outcomes a data named. To make sure that we will use the subset function our baseline model in case logistic... Our ROC curve looks like a good choice in this tutorial is that this problem is similar in ways... Use what ’ s an extension of linear regression predicted not to happen can used! Most straightforward case of logistic regression algorithm will not work here since only. Threshold, or ROC curve then describe the newer and cleaner methods make. Throughout the analysis in this case certainly ( B ) is more important than ( ). Statements assume that the CAS logistic regression hyperparameters r libref assume that the CAS engine libref named... Dataset and will use this simple data set into a training set and called them predictTrain dataset that want!, in this case to run a logistic regression model ) this notebook has been adapted the. Looks like a good choice in this tutorial, you can substitute any appropriately defined CAS engine libref named... To predict the quality of a logistic regression is to send out offers. That were contacted recently, their age and a variable whether or not of xx of. Have 131 observations, one for each parameter \ ( p_ { I } )! Outcomes of a binary classification ” which gives us the probabilities take a 70:30 keeping. Whether the contacted person will buy the product or not 165 employees were being studied ) in practice the. 2Y ago yes ” or “ no ”, “ Success ” or “ no 55! Health care quality count in unit ( 1 ) interval ” notebook by Aki Vehtari many ways to do:! We need your help “ no ” 55 times are the two possible outcomes the variable. Default hyperparemeters used in sklearn the ROC curve, or closer to ( 1,1 ) for and. Also see that both of these values, the event is predicted to. A binary classification test: Sensitivity/Recall = TP/ ( TP + FN.! 98/131 observations correct and an accuracy of 75 % predict an average probability of poor care is more important (... Probabilities describing the possible outcomes we now use the performance of a single are! Proposal to buy certain products, thanks for reading and the corresponding hyperparameters that we all the. Scientist for an e-commerce company sample.split command to split the dataset that want! Super Learning using the SuperLearner package in the range of ‘ 0 ’ and ‘ ’. Should work certainly ( B ) is more important than ( a.... 2Y logistic regression hyperparameters r variables from InpatientDays to AcuteDrugGapSmall are the two possible outcomes of a wine services. And ‘ 1 ’ source license obtain the following output shows the R style formula should try improve! Whether or not action to automatically Tune the hyperparameters with grid search and random and.: I just put some data in excel and called them predictTrain this notebook by Aki Vehtari and Optimizations! Setting num_class parameter denoting number of unique prediction classes section, we are using SuperLearner! Of a logistic regression one by one in the R style formula is, event! % of the initial problem, it might be worst than grid search, it s! What ’ s no logistic regression hyperparameters r methylation = TP/predicted yes it is predicted to happen regression model, learn how use... Measures of the model, we would predict that all patients are receiving good care, you! Your skills the values of other parameters ( typically node weights ) are derived via training on.: it helps to select the best threshold for the trade-off you want to use the Tune model module... ( 0.1, 0.5 ) on this ROC curve always starts at the important hyperparameters of machine to... Is only one data set in a data scientist is to assess health care quality they took action is! The xgboost R package for Supervised machine Learning model accurately steal hyperparameters model... Imply more Shrinkage of the Cartesian product of these values, the outcome is coded as “ and. Find that for all of the tuneLogistic action to automatically Tune the hyperparameters in strange ways, ’. Scientist, you can see useful differences in performance or convergence with different (... Scientist, you agree to our logistic regression hyperparameters r of cookies regression ANN: Constructing a 1-hidden layer ANN with neuron... 165 cases, we use cookies on Kaggle to deliver our services, analyze web,... Isn ’ t the best set of hyperparameters we can use grid search the regularization -! Guide to machine Learning models TP + FN ) mycas, but you can useful. Curve, can help us decide which value of xx use sapply check. Probability of about 0.44 actual outcome is coded as “ 0″ and “ in. Following confusion matrix or classification matrix a control variable that retains strength modification regularization. Is only one independent variable ( or feature ), the worth Continue. See it published, I applied three different machine Learning designer an e-commerce company applied to binary.! We obtain the following project, I applied three different machine Learning model researcher. All patients are receiving good care, but the actual outcome is coded as “ 0″ and “ no,. 32 testing samples predicts the average of all data points as the ). Training set and testing set regression algorithm on the y-axis be 0 missing values really expensive to the. Function is always between 0 and 1, which are the two possible types values! Specificity are statistical measures of the true poor care is more important (. Values ( e.g form or the dimension of the hyperparameters of a logistic regression, the take away message this... Functional API, Moving on as Head of Solutions and AI at Draper Dash. Continue reading → Related API, Moving on as Head of Solutions and AI at Draper Dash! Function which defines what we ’ ll see an explanation for the trade-off you want to randomly our! Is P ( y = 1 ) interval ”: linear regression where the dependent variable is categorical and continuous!
2020 logistic regression hyperparameters r
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5215128064155579, "perplexity": 1149.8482830747514}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703509104.12/warc/CC-MAIN-20210117020341-20210117050341-00416.warc.gz"}
|
https://codereview.stackexchange.com/questions/211366/finding-the-longest-word-without-these-characters
|
# Finding the longest word without these characters
My goal is to go through the list of all English words (separated by '\n' characters) and find the longest word which doesn't have any of these characters: "gkmqvwxz". And I want to optimize it as much as possible. Here's what I came up with:
#include <string.h>
#include <ctype.h>
#include <stdlib.h>
#include <stdio.h>
#include <stddef.h>
#include <unistd.h>
static inline int is_legal(size_t beg, size_t end, char* buffer)
{
static const char* bad = "gkmqvwxzio"; /* unwanted chars */
for (; beg != end; ++beg) { /* go through current word */
char ch = tolower(buffer[beg]); /* The char might be upper case */
for (size_t j = 0; bad[j]; ++j)
if (ch == bad[j]) /* If it is found, return false */
return 0;
}
return 1; /* else return true */
}
int main(void)
{
char *buffer = NULL; /* contents of the text file */
size_t length = 5000000; /* maximum size */
FILE* fp;
fp = fopen("words.txt", "rb");
if (fp) {
fseek(fp, 0, SEEK_END);
fseek(fp, 0, SEEK_SET);
buffer = malloc(length);
if (buffer) {
}
fclose(fp);
}
size_t beg = 0; /* current word boundaries */
size_t end = 0;
size_t mbeg = 0; /* result word */
size_t mend = 0;
while (buffer[end]) {
beg = end++;
for (; buffer[end] && buffer[end] != '\n'; ++end) /* read the next word */
; /* for loop doesn't have a body */
if ((end - beg) > (mend - mbeg) && is_legal(beg, end, buffer)) { /* if it is a fit, save it */
mbeg = beg;
mend = end;
}
}
printf("%.*s\n", mend - mbeg, buffer + mbeg); /* print the output */
return 0;
}
I read it all at once, then go through it with two indexes denoting beginning and ending of current word. When I find a word that fits, I save the corresponding indexes. Finally I print the output, which is "supertranscendentness". The output is correct, but I'd like to know:
1. If there's undefined behavior in my code
2. If there's a better way of doing this (without sacrificing performance)
3. If there's a way to improve the performance
Another point is the size_t length = 5000000; part. It is an estimated size of the string based of the file size.
The code is not bad as it stands, but I think there are some things that could be improved.
## Think of the user
The input file name and unwanted letters are all hardcoded at the moment. It would be nice if the user could specify one or both of these parameters on the command line.
There is almost no error checking or handling. It's not hard to add, and it makes the program much more robust. Here's how the start of main might look:
int main(int argc, char *argv[]) {
if (argc != 2) {
puts("Usage: longword filename");
return 0;
}
FILE* fp;
fp = fopen(argv[1], "rb");
if (!fp) {
perror("couldn't open words file");
return 3;
}
size_t length = 5000000;
char *buffer = malloc(length);
if (buffer == NULL) {
perror("couldn't allocate memory");
return 2;
}
length = fread(buffer, 1, length, fp);
if (ferror(fp)) {
free(buffer);
return 1;
}
// rest of program here
free(buffer);
}
## Consider using standard library functions
At a very small performance penalty (as measured on my machine), one could write a very clean version using only standard functions:
char *longest = NULL;
int longestlen = 0;
char *word = strtok(buffer, "\n");
while (word) {
const int len = strlen(word);
if (len > longestlen) {
if (strpbrk(word, "gkmqvwxzio") == NULL) {
longestlen = strlen(word);
longest = word;
}
}
word = strtok(NULL, "\n");
}
printf("%s\n", longest);
That is the way I'd probably write it unless there were some compelling reason that's not fast enough.
## Use functions
Your is_legal function is not bad, but I'd also write a get_word_len function to fetch the length of the next word in the buffer.
static inline int get_word_len(const char *buff, const char *end) {
int len = 0;
for ( ; *buff != '\n' && buff < end; ++buff, ++len)
{}
return len;
}
## Use const where practical
The is_legal function doesn't alter the passed string, so that parameter should be const.
## Think carefully about the problem
The current code might print the word followed by \n, but if the words doesn't happen to be the first in the file, it will also print the \n from the previous word. It's not necessarily wrong, but it's not consistent.
## Use bool for boolean values
The implmentation of bool is in <stdbool.h> and should be used as the return type of is_legal.
## Use only the required headers
In this program neither <stddef.h> nor <unistd.h> appear to be needed; I'd recommend omitting them and only including headers that are actually needed.
## Consider using pointers
There may not be a performance difference in this case, but for problems like these, the use of pointers seems more natural to me. For example:
const char *end = buffer + length;
const char *longest = buffer;
int longestlen = 0;
for (const char *curr=buffer; curr < end; ) {
const int wordlen = get_word_len(curr, end);
if (wordlen > longestlen) {
if (is_good_word(curr, wordlen)) {
longestlen = wordlen;
longest = curr;
}
}
curr += wordlen + 1;
}
printf("%.*s\n", longestlen, longest);
Here, is_good_word is like your is_legal function:
static inline bool is_good_word(const char *curr, int wordlen) {
static const char* bad = "gkmqvwxzio";
for ( ; wordlen; --wordlen) {
char ch = tolower(*curr++);
return false;
}
}
}
return true;
}
## Don't leak memory
The program allocates but does not free the buffer space. Yes, the operating system will clean up after you, but a free costs very little and allows for better memory leak checking with tools like valgrind.
• Great points, agreed with most of them. Quick questions: Jan 12, 2019 at 19:20
• Add error handling: I would really appreciate a few pointers to sources about error handling in such cases; Best practices, best design patterns Jan 12, 2019 at 19:22
• Use only the required headers: I used <stddef.h> for size_t and <unistd.h> for fread. Are these not the correct headers? Jan 12, 2019 at 19:25
• I've updated my answer to show error handling examples. Also, fread is defined in <stdio.h> and since fread returns a size_t, we know it is already defined once we have <stdio.h>. See this: en.cppreference.com/w/c/types/size_t Jan 12, 2019 at 19:28
1. You only use things from 3 of the 6 includes. <string.h>, <stddef.h>, and <unistd.h> are superfluous, the last one just limiting portability.
2. is_legal() does not need to know about the bigger buffer. Just the sequence it should inspect is sufficient.
3. You assume everything works out perfectly fine:
• The file can be opened for reading.
• You succeed in allocating 5_000_000 bytes.
• You can read all those Bytes from the file.
4. You fail to free the array you malloc()-ed. Not really a problem though, as the program terminates immediately afterwards.
5. If you allocate a fixed amount of memory on every run, why not just make it a static array?
6. return 0; is implicit for main() since C99.
Design limitations and considerations:
1. Consider using a smaller fixed buffer (size should be a power of 2, at least 32k or so), and scanning the file from start to end, instead of slurping it all in.
2. Consider allowing the user to override which characters are forbidden.
3. You are only handling single-byte character-sets. That might be enough, and it certainly simplifies things significantly.
4. Your code is almost certainly IO-bound, so the gains from optimising the algorithm are probably strictly limited. Still, consider a bit of pre-processing to cut out the more expensive calls.
Specifically, prepare two bitfields character and whitespace, and use a simple lookup.
• Great points, thank you. A couple of quick questions: Jan 14, 2019 at 17:08
• Wouldn't fixing the size to 32k and reading into it in a loop mean I have to copy the "longest legal word" candidate every time there's one? Sounds inefficient, how would I go about fixing that? Jan 14, 2019 at 17:09
• How could I consider supporting longer than single-byte character-sets? Jan 14, 2019 at 17:11
• Wouldn't an array of size 5,000,000 cause stackoverflow if I were to make it static? Jan 14, 2019 at 17:12
• @Ayxan You could have two dynamic buffers, for the longest yet, respectively the current prospect. And that's just in case the longest word is unexpectedly long. If you want to support more than single-Byte character-sets, things get complicated, just take a peek into unicode as an example. And variables with static lifetime don't have anything to do with the stack, they aren't automatic. Jan 14, 2019 at 20:48
The fseek() calls in main() achieve nothing. They appear to be relicts of an attempt to measure file size that would look something like this (once the error checking has been added):
FILE *const fp = fopen("words.txt", "rb");
if (!fp) {
perror("fopen");
return 1;
}
if (fseek(fp, 0, SEEK_END)) {
perror("fseek");
return 1;
}
long length = ftell(fp);
if (length < 0) {
perror("ftell");
return 1;
}
if (fseek(fp, 0, SEEK_SET)) {
perror("fseek");
return 1;
}
char *const buffer = malloc(length+1);
if (!buffer) {
fputs("malloc failed", stderr);
return 1;
}
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20854108035564423, "perplexity": 7111.951996617762}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662562410.53/warc/CC-MAIN-20220524014636-20220524044636-00727.warc.gz"}
|
https://ideas.repec.org/p/arx/papers/1011.0748.html
|
## Author Info
• Takero Ibuki
• Jun-ichi Inoue
Registered author(s):
## Abstract
Statistical properties of order-driven double-auction markets with Bid-Ask spread are investigated through the dynamical quantities such as response function. We first attempt to utilize the so-called {\it Madhavan-Richardson-Roomans model} (MRR for short) to simulate the stochastic process of the price-change in empirical data sets (say, EUR/JPY or USD/JPY exchange rates) in which the Bid-Ask spread fluctuates in time. We find that the MRR theory apparently fails to simulate so much as the qualitative behaviour ('non-monotonic' behaviour) of the response function $R(l)$ ($l$ denotes the difference of times at which the response function is evaluated) calculated from the data. Especially, we confirm that the stochastic nature of the Bid-Ask spread causes apparent deviations from a linear relationship between the $R(l)$ and the auto-correlation function $C(l)$, namely, $R(l) \propto -C(l)$. To make the microscopic model of double-auction markets having stochastic Bid-Ask spread, we use the minority game with a finite market history length and find numerically that appropriate extension of the game shows quite similar behaviour of the response function to the empirical evidence. We also reveal that the minority game modeling with the adaptive ('annealed') look-up table reproduces the non-linear relationship $R(l) \propto -f(C(l))$ ($f(x)$ stands for a non-linear function leading to '$\lambda$-shapes') more effectively than the fixed (`quenched') look-up table does.
If you experience problems downloading a file, check if you have the proper application to view it first. In case of further problems read the IDEAS help page. Note that these files are not on the IDEAS site. Please be patient as the files may be large.
File URL: http://arxiv.org/pdf/1011.0748
## Bibliographic Info
Paper provided by arXiv.org in its series Papers with number 1011.0748.
as
in new window
Length: Date of creation: Nov 2010 Date of revision: Mar 2011 Handle: RePEc:arx:papers:1011.0748 Contact details of provider: Web page: http://arxiv.org/
## References
References listed on IDEAS
Please report citation or reference errors to , or , if you are the registered author of the cited work, log in to your RePEc Author Service profile, click on "citations" and make appropriate adjustments.:
as in new window
1. S. V. Vikram & Sitabhra Sinha, 2010. "Emergence of universal scaling in financial markets from mean-field dynamics," Papers 1006.0628, arXiv.org.
Full references (including those not matched with items on IDEAS)
## Lists
This item is not listed on Wikipedia, on a reading list or among the top items on IDEAS.
## Corrections
When requesting a correction, please mention this item's handle: RePEc:arx:papers:1011.0748. See general information about how to correct material in RePEc.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: (arXiv administrators)
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
If references are entirely missing, you can add them using this form.
If the full references list an item that is present in RePEc, but the system did not link to it, you can help with this form.
If you know of missing items citing this one, you can help us creating those links by adding the relevant references in the same way as above, for each refering item. If you are a registered author of this item, you may also want to check the "citations" tab in your profile, as there may be some citations waiting for confirmation.
Please note that corrections may take a couple of weeks to filter through the various RePEc services.
This information is provided to you by IDEAS at the Research Division of the Federal Reserve Bank of St. Louis using RePEc data.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28407517075538635, "perplexity": 2088.1181707379037}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049272349.32/warc/CC-MAIN-20160524002112-00141-ip-10-185-217-139.ec2.internal.warc.gz"}
|
https://rd.springer.com/chapter/10.1007%2F978-3-319-95420-2_14
|
# From 07.00 to 22.00: A Dual-Earner Couple’s Typical Day in Italy
Old Questions and New Evidence from Social Sequence Analysis
• Ivano Bison
• Alessandro Scalcon
Open Access
Chapter
Part of the Life Course Research and Social Policies book series (LCRS, volume 10)
## Abstract
In what ways do dual-earner couples organize their workdays and how do they (de)synchronize their daily activities? Using a multichannel sequence analysis approach, the paper tackles these questions. We consider the couples’ division of work-family activities in holistic terms by setting it within the context of everyday life, that is, the overall temporal pattern of combination of His and Her multiple activities. Our multichannel sequence analysis approach is based on a Lexicographic Index that seeks to overcome some optimal matching limits of the sequence analysis. The case-study concerns how Italian dual-earner couples organize their daily activities (sleep, personal care, work, moving, housework, free time), during a typical Monday to Friday work day, 7.00 am to 10.00 pm. The analysis, carried out using the data from the 2008 Italian Census on Time Use (the last one available), involves 873 couples where both partners filled the given diaries on the very same day. All the analyses confirm the idea that dual-earner couples package their life time mainly in accordance with their jobs and eventual children management. Moreover, the analyses show that this time packaging changes in relation to the level of education, social class and the occupational sector of the couple.
## 1 Introduction
How do dual-earner couples organize their workdays and how do they (de)synchronize their daily activities? These are the questions that we address in this paper using a multichannel sequence analysis approach. Our purpose is to consider the couples’ division of work-family activities in holistic terms by setting it within the context of everyday life, that is, the overall temporal pattern of combination of His and Her multiple activities.1
Our multichannel approach is based on a Lexicographic Index (Bison 2011) that seeks to overcome some optimal matching limits of sequence analysis (Bison 2009). The case-study concerns Italian dual-earner couples and uses data from the Italian Time Use Survey 2008 (Istat 2011).
We know that for dual-earner couples the risk of experiencing a certain “lack of family time” is higher than for other couples (Saraceno 2012), due to the combination and the rigidity of His and Her work constraints. The spouses of these couples face various challenges: according to their working schedules, they are required to find the right amount of time for their family—i.e. housework, childcare and other non-paid work—as well as with their family—i.e. desirable and shared activities, such as free time—integrating collective needs with individual ones. In other words, the time scarcity of dual-earner couples obliges them to adopt a complementary strategy (Mansour and McKinnish 2014) in order to reconcile their multiple family needs of “production” and “consumption”, while preserving their personal satisfaction with work and daily life. The two individual careers have to coexist with a third one: family life.
In this scenario, an important component of a dual-earner couple’s strategy is synchronization/desynchronization. For instance, previous studies show that a certain degree of desynchronization of working schedules may be a useful solution for partners because it can promote a more equal division of housework and child care (Presser 1994; Chenu and Robinson 2002; Lesnard 2008; Naldini and Saraceno 2011). At the same time, it is recognized that a certain degree of synchronization in work commitments can encourage the partners to spend time together in other desirable activities (Hamermesh 2002; Lesnard 2008).
More generally, looking at (de)synchronizations is important because they reveal a latent behavioral pattern of different work-family specializations and suggest new explanations for the continuing persistence of gender inequalities in the division of work-family activities. The study of (de)synchronizations could enable identification of multiple equilibria (Esping-Andersen et al. 2013). According to Esping-Andersen and colleagues, such study could reveal different behavioral patterns of work-family specializations—(i) egalitarian, (ii) unstable and (iii) traditional—resulting from systematic co-action by different generative mechanisms, both symbolic-cultural (e.g. Berk 1985; West and Zimmerman 1987) and economic-material (e.g. Becker 1964; Coverman 1985; Manser and Brown 1980).
How do we measure (de)synchronizations? There are two radically different main approaches: time budgets—the dominant approach in the time use literature—and sequence analysis.
In the former approach, we measure (de)synchronizations as the amount of time in which both the spouses have or have not done an activity at the same time or have or have not spent that time together in the same place. Thus, we obtain synchronicity ratios or percentages. However, in this way nothing is known about when the activity schedules overlap. This is a crucial limitation for two main reasons.
First, time is socially structured, and so too are social rhythms and constraints. Hence, being simultaneously at work at 10 a.m. or 10 p.m. has radically different impacts on a couple’s daily life. Furthermore, different timings in working schedules may have radically different impacts on daily life if combined with other time demanding features, like for instance the institutional constraints of children’s schedules (e.g. school hours).
Second, take the case of a full-time shift perfectly synchronized with a part-time afternoon shift: by considering only the duration of the overlap, we will mistakenly classify it as a highly desynchronized working schedule. However, such kinds of structural desynchronization—due simply to differences in duration—should not be compared with hypothetical other kinds with the same off-scheduling amount but different and more complex organization during the day (Nock and Kingston 1984).
According to Lesnard (2008), if we know little about how family time is daily balanced with work time for both spouses, this is mainly because we are used to adopting the too simplistic approach of the dominant time-budget perspective. To date, scholars have underestimated the importance of daily scheduling, while paying more attention to total amounts of time (Lesnard 2008). They have traditionally acquired time budget information related to different daily activities, but these should be seen in a holistic perspective that makes it possible to study the couple’s days as a whole, avoiding the manipulation of time as if it were clay.
An alternative to the time-budget approach is sequence analysis (Lesnard 2008). According to Hallberg (2003), “while the traditional time allocation model typically studies the total time spent in, e.g., market work, over a day or a week, it provides little or no insight into the temporal pattern of time-use and therefore, potentially, misses a vital part of the mechanisms underlying empirical observations”. A sequence analysis of time-use would evidence the routine aspects of daily life, as well as the couples’ projects (Hagerstrand 1982), the performance of their complementary strategy across several daily constraints and unexpected events (Hellgren 2014). Finally, the analysis of time-use temporal patterns—instead of time budgets—seems more relevant in the study of the daily strategies and behaviors of a couple (Hallberg 2003).
We have pointed out that in order to understand the complexity of work-family balance strategies, it is necessary to study the couple’s daily time-use pattern as a whole and in a more holistic way by adopting a multichannel sequence analysis approach. In the following section, we introduce the Lexicographic Index used to measure the resemblance between multinomial sequences. Section 3 sets out the data and methods. Section 4 presents the main statistical and graphical results of this study. Finally, Sect. 5 is devoted to summing up the main findings.
## 2 The Lexicographic Index
There are three main problems with current techniques used to compute the distances among sequences. One derives from the way in which similarity between two sequences is defined (Abbott and Tsay 2000; Wu 2000; Dijkstra and Taris 1995; Elzinga 2003; Bison 2009); the second is how to handle multinomial sequences (Abbott 1990); the third is how to treat a multichannel sequence as a whole (Gauthier et al. 2010).
Here we present an alternative method for computing distances among sequences. The lexicographic index (Bison 2011) is based on the sorting order of two different modes of observing events in a binary sequence. The first order is given by duration, that is the quantity of time, and is therefore based on the total number of observed events u in the sequence x. The second order is timing, that is when this event happens, i.e. the ‘places’ sk (k = 1, …, u) in the sequence when 1 occurs.2 For instance, we may have only three binary sequences of length 3 and u = 1. They differ according to when the event occurred; at time t1, t2 or t3. Hence, we may order these sequences [100], [010] and [001] according to the time order of events. Because the nature of the sorting order is double, the proposed index consists of two distinct parts.
The first part, d′(x), ranging from 0 to 1, takes account of the duration and therefore the different amounts of realization u recorded in the sequence:
\displaystyle \begin{aligned} \begin{array}{rcl} d'(x) =u/T \; \text{for } u > 0 &\displaystyle \quad \text{ and }\quad &\displaystyle 0 \; \text{for } u = 0 {} \end{array} \end{aligned}
(1)
where T is the length of the sequence.
The second part, d″(x), ranging from 0 to 1, takes account of timing and therefore the different numbers of combinations displayed by the sequences with variation in the amount of time. It is
\displaystyle \begin{aligned} \begin{array}{rcl} d''(x) &=& \left\{ \begin{array}{l@{\hspace{1em}} l} 0 & \text{for } u=0 \\ {} \dfrac{{T\choose{u}}}{{T\choose{u}} + 1} \; \dfrac{1 + {T\choose u} - \big[ {B_u \choose C_u} - {\sum_{k=1}^u} {B_k \choose C_k} - {A_k \choose C_k} \big]} {{T\choose u}} & \text{for } 0<u<T \\\\ {} 1 & \text{for } u=T \end{array} \right. \end{array} \end{aligned}
(2)
where Ak is the exact position of sk in the sequence, Bk is the last position that sk can occupy within the sequence, and Ck is the first position that sk can occupy. For example, for sequence [0101], with T = 4 and u = 2, we have: for s1 the exact place of the first 1 is A1 = 2, the last position is B1 = 3 and the first position is C1 = 1; for s2 the exact place of the second 1 is A2 = 4, the last position is B2 = 4 and the first position is C2 = 2, the value of d″([0101]) is 0.285714 and is obtained as
\displaystyle \begin{aligned} \begin{array}{rcl} \frac{ \frac{4!}{2!(4-2)!}}{\frac{4!}{2!(4-2)!} +1} \; \frac{1+ \frac{4!}{2!(4-2)!} - \Big(\frac{4!}{2!(4-2)!} - \big(\frac{3!}{1!(3-1)!} - \frac{2!}{1!(2-1)!} +\frac{4!}{2!(4-2)!} - \frac{4!}{2!(4-2)!}\big)\Big)} {\frac{4!}{2!(4-2)!}} \enspace. \end{array} \end{aligned}
In turn, these two indices are the coordinates of the sequence in a bi-dimensional space and the distance between two binary sequences (xi, x) is the Euclidean distance between a couple of lexicographic indices
\displaystyle \begin{aligned} \begin{array}{rcl} r(x_i,x_{\ell}) = \sqrt{\big(d'(x_i)-d'(x_\ell)\big)^2 + \big(d''(x_i)-d''(x_\ell)\big)^2} \enspace. \end{array} \end{aligned}
(3)
Passing from a binary sequence to a multinomial sequence is easy. Just as a qualitative variable of m modality can be represented by m dummy variables, so a multinomial sequence of alphabet Q can be represented by |Q| binary sequences xq with values 0–1. For example, the sequence x = [123321] and alphabet Q = {1, 2, 3} can be represented by the following three binary sequences x1 = [100001];x2 = [010010];x3 = [001100]. To each of these binary sequences it is possible to apply the lexicographic index and compute the coordinates $$\{d_q^{\prime } (x_q); d_q^{\prime \prime } (x_q)\}$$. The multinomial sequence x is therefore described by a vector of real numbers. The distance between two multinomial sequences (xi, x) is the Euclidean distance between their transformations {d′(xiq);d″(xiq)} and {d′(xℓq);d″(xℓq)}. Formally, it is:
\displaystyle \begin{aligned} \begin{array}{rcl} r'(x_i,x_{\ell}) = \sqrt{\sum_{q=1}^{|Q|}\big(d'(x_{iq})-d'(x_{\ell q})\big)^2 + \big(d''(x_{iq})-d''(x_{\ell q})\big)^2} \enspace. \end{array} \end{aligned}
(4)
We conclude this section of the paper by briefly discussing the index just presented. Firstly, it is not a comparison between the sequences that defines their distance. The index has a known beginning and end; each point is univocal and identifies one and only one combination of states in sequence. Two sequences which differ in the position of only one element will have different positions. From every point one can retrace the exact sequence that has produced it. A second characteristic of the index concerns its output. Each value of the index, in fact, can be conceived as a coordinate in the space of the multinomial sequence. This characteristic enables the researcher to adopt different methods to calculate the distance, but also to define forms of space other than Euclidean. Furthermore, the third characteristic is the natural way in which to handle multichannel sequences.
## 3 The Data, Their Organization and the Coding of the Activities in a Multichannel Approach
The goal of our analysis was to discover how 873 Italian dual-earner couples organized their daily activities during a typical work day from Monday to Friday. We used data from the Italian Time Use Survey 2008 (Istat 2011). We considered time-use diaries of dual-earner couples’ activities (His and Her) from 7.00 to 22.00.3 Each daily activity was observed every 10 min, and the data files for the sequence analysis consisted of 873 pairs of sequences, one for Him and one for Her, with a total of 90 points in time. Each couples of rows of this file corresponded to a cohabitation, while each variable corresponded to 10 min of observation and each cell of the row/column intersection stated the activities of Him or Her at time t.
In order to simplify the analysis, six different groups of activity were considered: Sleep; personal care—i.e. having a shower, eating, etc. (P.Care); paid work (Work); moving—any kind (Move); unpaid work—i.e. housework, child care, repair, etc. (H.Care); free time and other activities with or without others (F.Time).
Having defined the six daily macro-activities, the next step was to establish how to codify the daily activities of Him and Her in the couple. In this case, His activities and Her activities interact in time to give rise to the couple’s daily activities. Taken individually, each of these two sequences takes the form of a series of mutually exclusive episodes. The problem is therefore how to codify two interacting sequences composed of a plurality of mutually exclusive events. To date, all the solutions proposed have been based on the generation of events combinations (Pollock 2007; Gauthier et al. 2010; Aisenbrey and Fasang 2017): that is, on the construction of a single sequence that combines the states of Him and Her.
This operation has several consequences. Firstly, as Abbott pointed out, using combinations of events requires one to pay “…the price of losing all information about the temporal ‘shape’ of events—their duration and their intensity in terms of producing occurrence—in short their time horizon” (Abbott 1990, p. 146). Secondly, there is the risk that distinct time-use patterns will be tied together, although the order of causality may be bi-directional.
There are various reasons to believe that daily activities of Him and Her cannot be reduced to a simple combination of states. Internally, moreover, each sequence consists of states regulated by their own mechanisms which operate differently in defining the timing and duration of each individual episode. For instance, consider the mechanisms that underlie the regulation of the states of free time and housework. In the former case, it is the working time that mainly regulates the time spent on these two activities; in the latter, we should expect a stronger interaction between gender roles.
It is therefore possible to hypothesize that the sequences of Him and Her—and the states of which they are composed—have their own underlying generative mechanisms which establish the timing and duration of episodes. These generative mechanisms work independently of each other and interact in time: they stand in a coexistence relationship. Finally, the couple’s daily activities are the result of a complex process of co-action between two sequences, that of Him and that of Her, regulated by different generative mechanisms resulting from the co-action between different states. Consequently, reducing everything to a combination of events means loss of a large part of information about the temporal ‘shape’ of events.
A couple’s daily activities, or more correctly the couple sequences analyzed here, are therefore configured by the co-action of two multinomial sequences composed of mutually exclusive episodes. By extending the proposed application of the Lexicographical Index (see Sect. 2) to this case-study, 12 binary sequences can be defined, six for His states and six for Her states, each one of length t = 90, that is, the overall number of points of observation. The couple sequence is defined as a point in a 24-dimension space whose coordinates are the 24 lexicographic indexes defining the respective sequences of Him and Her. The distance between two couple sequences is given by the Euclidean distance between the two points of the two sequences in the 24-dimension space.
The coordinates defined for all 873 couples were analyzed using a k-means cluster algorithm.4 The joint exam of the scree plot (Makles 2012) of the kink in the curve generated from the within sum of squares (WSS), the η2 coefficient (0.43) and the proportional reduction of error, suggests that seven is the optimal number of groups drawn from a set of 20 cluster solutions with random starting points.
## 4 From 7.00 to 22.00: A Typical Working Day of a Dual-Earner Couple in Italy
It is not news that the everyday life of a dual-earner couple is complex. It involves a long and difficult schedule of: waking up, having a shower, breakfast, taking the car-bus-train, going to work, beginning work, lunch, resuming work, coming back home, then housework and family/child care for Her, relaxation for Him, dinner, and at the end of the day, before they go to sleep, some leisure activity. Overall (Fig. 1), this was also the typical daily routine followed by our 873 Italian dual-earner couples from 7.00 to 22.00. Looking at the most frequent activity combinations in the morning, at 7.00, 75.0% of couples were involved in personal care or going to work. From 8:00am to 6:00pm all the couples were at work.5 At 6:00pm, the couples started to be desynchronized: She was engaged in housework/children care; meanwhile He continued to work until 7.00pm. From 7.00pm to 7.30pm, He had some free-time activities, while She continued her housework activity. Finally, together, they had dinner and engaged in free-time activities.
Differences in the spouses’ daily time-budgets for each activity (Table 1) also confirm a well-known finding on the unequal gender division of work-family activities (Gershuny and Robinson 1988; Raley et al. 2012; Craig et al. 2014). For instance, on average, She spends 2 and a half hours more than her partner on housework and childcare, while He spends 1 h and a half more than his partner on paid work.
Table 1
Mean time (in hours:minutes) spent on each activity, by cluster
However, the timing of this daily organization changes when we move from general into the seven clusters. In this case, there emerges a more composite picture of daily life, where “multiple equilibria” (Esping-Andersen et al. 2013) of time allocation during a typical workday and (de)synchronization strategies jointly explain the dual-earner couple’s patterns of time use. The average time activities (Table 1) show a clear difference in the time spent on each activity by Him and Her within partners and among clusters.
Moreover, on shifting the focus to the schedules of each activity, the modal sequence graphs6 (Fig. 1), give us a clearer picture of how the strategies of dual-earner time-use change over time in a typical workday. Both within the spouses and among the clusters the differences in time-use (Fig. 1) mainly occur in the second part of the workday. Until noon the couple’s everyday lives are quite “synchronized”. Him and Her show differences in the afternoon, when fewer women than men are at work and when the women shift their activities from paid to unpaid work (housework, child care, etc.). In other words, gender inequalities in the work-family balance are generally set in the afternoon.
Hence, the preliminary results seem to suggest that, on the one hand, gender specializations in different activities can assume different meanings when contextualized in the whole day and, on the other hand, that the partners’ daily life seems to develop with socially shared, recognized, and identifiable patterns of combined time use. This insight raises two further questions. The first is how these patterns result from a complex process of adaptation to both work-social-family constraints and individual needs. The second question concerns how the daily times are combined by spouses, and how their performed combinations are random instead of being regulated by common generative mechanisms.
In order to investigate the complex process of adaptation of dual-earner couples’ daily time organization, we ran a multinomial logistic regression model to verify if such patterns resulted from working-social-familial and individual constraints (Table 2). For this analysis, we used information about the couple’s educational qualifications, social class, economic sector, and the presence of children.7
Table 2
Multinomial logistic regression on the seven clusters by presence of children and sector, level of education and social class of couple. (Jackknife replication). Reference cluster (A)
B
C
D
E
F
G
Children (ref. No child)
Children 0–14
0.72**
0.15
0.13
0.66*
0.52
0.92***
Children 14+
0.84**
−0.17
0.52
0.54
0.82**
0.93**
Sector (ref. both Industry)
Both priv. services
−2.22***
−1.47**
−1.62**
−1.61**
−2.58***
−1.89***
Both pub. services
−0.38
−0.39
−1.60*
−0.92
−1.23
−1.45*
He industry &
−0.84
−0.70
−1.08
−1.25
−1.12
−1.78**
She pub. services
He priv. services &
−1.39*
−1.16
−0.66
−0.45
−1.10
−1.44*
She pub. services
Others
−1.24*
−1.23*
−0.99
−0.85
−1.58**
−0.97
Education (ref. University)
Upper-secondary
0.73**
1.11**
0.72*
0.18
0.77**
0.88**
school diploma
Compulsory
0.24
0.56
0.54
−0.04
0.75*
0.88*
Social class (ref. I+II)
IIIa
1.29**
−0.99*
0.03
0.65
0.36
0.67
IVabc
0.90
−0.62
−0.59
0.71
0.25
−0.47
VI+VIIab
1.93**
−0.23
0.20
1.25**
0.64
−0.04
Constant
−0.73
0.37
0.43
−0.10
0.59
0.00
Note: * p < 0.1, ** p < 0.05, *** p < 0.01; Pseudo R2 =0.05
A joint reading of the modal multichannel sequence graphs (Fig. 1), the multinomial logistic regression parameters (Table 2) and the margins estimated probability (Table 3) quite clearly shows what are the (de)synchronization strategies adopted by couples and what may be the hidden generative mechanisms (Hallberg 2003). We highlight the importance of the presence of children, work sector and the educational level in explaining the cluster differences (Table 2).
Table 3
Predictive margins probability (in percentage) on the seven clusters by presence of children, sector, level of education and social class of couple. Marginal effects at reference profile
A
B
C
D
E
F
G
Children (ref. No child)
Children 0–14
7.9
7.8
13.2
13.8
13.7
23.9
19.8
Children 14+
7.1
7.8
8.6
18.4
11.0
29.2
18.0
Sector (ref. both Industry)
Both priv. services
46.3
2.4
15.4
14.1
8.4
6.4
7.0
Both pub. services
26.7
8.8
26.1
8.3
9.6
14.2
6.3
He industry & She pub. services
28.9
6.0
20.7
15.1
7.5
17
4.9
He priv. services & She pub. services
26.4
3.2
11.9
21.1
15.2
15.9
6.3
Others
30.6
4.3
12.9
17.5
11.8
11.4
11.6
Education (ref. University)
Upper-secondary school diploma
5.9
5.8
25.8
18.7
6.4
23.1
14.3
Compulsory
7.2
4.4
18.2
19.1
6.2
27.5
17.4
Social class (ref. I + II)
IIIa
9.0
15.6
4.8
14.2
15.5
23.4
17.5
IVabc
11.6
13.7
9.1
9.9
21.4
27.1
7.3
VI + VIIab
6.7
22.2
7.7
12.6
21.3
23.0
6.5
Reference profilea
12.2
5.9
17.7
18.8
11.0
22.1
12.3
aReference profile: Children (No child), Sector (both Industry), Education (University), Social class (I+II)
Three different forms of time-use organization are highlighted by the graphs (Fig. 1). The first is characterized by a general synchronization of the spouses’ different activities during the day. This maximally “egalitarian” (Esping-Andersen et al. 2013) gender participation in unpaid work seems able to preserve the free time of the spouses. Couples in clusters (A) and (B) are associated with the highest synchronization levels.
These dual-earner couples are characterized by a tertiary educational level for couples in cluster (A) and a secondary educational level for those in cluster (B) (Table 3). Thus, a low educational level seems to be an obstacle to an egalitarian strategy of synchronization.
What distinguishes the two clusters is the presence of children (Table 2), which increases the probability of being a member of cluster (B), while the couples in cluster (A) are more likely to be without a child. There are also differences of occupational sector and class between the couples in clusters (A) and (B). Couples in cluster (A) work in the private services sector, while those in cluster (B) are mainly employed in the public services sector. At the same time, couples in cluster (B) are mainly employees (IIIa or VI + VIIab), while in cluster (A) they are more likely to be self-employed (I + II or IVabc).
This particular combination of characteristics—and constraints—creates synchronized couples’ patterns (Fig. 1). However, there are some substantive differences. In cluster (B), the spouses seem to have breakfast together before going to work and starting it synchronically. They both stop working quite early in the afternoon, favored probably by their kind of job and the economic sector in which they are employed. At 17:00 She is already at home, while He comes back at 17:40. Thereafter, both the spouses spend the rest of the day at home, doing housework and childcare before having dinner together and, finally, enjoying most of their free time synchronically. The only second part of the day in which they are not synchronized is the one immediately after dinner, when She postpones her free time for 20 min due to housework (Fig. 1).
In cluster (A) the absence of children and the type of work (self-employment in the private services sector) would seem to explain why the couples start their day in a manner differently from the others (Fig. 1). Both the spouses wake up together, and later than the couples in other clusters. They also have breakfast at the same time. Then He leaves the house while She quickly tides up before going to work. Job commitments fill equally most of their daily time. Moreover, their lunch and dinners are synchronized. Finally, the extent of their job commitments and the parallel absence of children seem to pull the spouses in cluster (A) directly to free and leisure time.8
Cluster (C) falls—although not completely—within the synchronized time-use patterns (Fig. 1). Couples in this cluster have some features in common with those of cluster (A). In particular, like dual-earner couples (A), those in cluster (C) are more likely to be childless. They also are mainly members of the upper class (I+II) (Table 3).
Cluster (C) has some characteristics in common with clusters (A) and (B) also in terms of daily time organization, even if its pattern ends with a longer tail of synchronized personal care: spouses may still be having dinner together at the end of the observation (22:00). However, what really makes cluster (C) unique is the time organization around lunch. While for cluster (B), there is no specific time for lunch, and for cluster (A) the time interval for lunch is well defined between two work ‘segments’, for cluster (C) the break from work is longer for Her. Moreover, around a certain synchronized lunch-time, there is a certain desynchronization due to His work commitments and Her housework tasks. Finally, before going back to work, She is even able to spend a short time relaxing. Here, the sequence of activity combinations around lunch is much more chaotic, fragmented and desynchronized compared with the clusters (A) and (B). However, except for this desynchronized part of the day, probably due to different work commitments, the rest of the day is mainly synchronized.
Alongside the synchronized patterns other desynchronized daily time-use patterns emerge. These strategies of desynchronization seem to be specialized into two forms, on the basis of the kind of tasks sequentially performed and combined by the two spouses during the day.
The first kind of strategy is called functionally desynchronized. Here, gender differences in activities-in-time appear to be an adaptation to structural desynchronization (Nock and Kingston 1984) of His and Her working schedules. The difference in work duration between men and women appears to produce a counterbalancing force by which—at the end of the work day—She ‘compensates’ the different spread of paid-work commitments of Him with unpaid work, in a quite calibrated way that preserves the free time of both the spouses. The gender division of work-family activities is “unstable” (Esping-Andersen et al. 2013), mostly due to “structural constraints” of the partners’ working schedules.
Couples in clusters (D) and (E) are associated with the clearest functionally desynchronized patterns. For both clusters, Her working schedule is shifted forward in the afternoon (Fig. 1) and in most cases at least one spouse of these dual-earner couples is employed in the public sector. The most important differences between these couples is that (D) do not have a child while (E) do so (Table 2).
In cluster (D), He starts work much earlier than Her. On the other hand, She spends more time on personal care before going out to work. The probable absence of children may be helpful in this regard. At the end of the workday, these spouses come back home later and synchronically. Once at home, they desynchronize themselves again (Fig. 1) and while He takes a break to relax, She does some housework. It seems that there is some sort of compensation of daily time activities: He starts work much earlier than Her in the morning, and the gendered housework at the end of the day seems useful in establishing the balance, before dinner. Finally, they both eat and relax together (Fig. 1).
In the time-use pattern (E), She wakes up a little before Him, probably because of the young children’s demands. They have breakfast together before going to work, and they start working synchronically. In the afternoon, She leaves the workplace much earlier than Him, perhaps in order to devote herself again to childcare and housework. After His return from work, they eat together, before spending synchronous free time. Again, the clear non-cooperation of Him in the household tasks may be due to the evident spread of work commitments during the whole day.
The second desynchronizations strategy is what we call traditional. Here, the couple’s distribution of activities during the day does not seem to follow any compensatory mechanism. The overall desynchronization seems to be weakly linked to the “structure” of the spouses’ work commitments (Nock and Kingston 1984). Conversely, it appears to be an outcome of a more “traditional” gender attitude to the work-family balance. Here, the result is a marked overload in paid/unpaid work for women (Mattingly and Blanchi 2003), with stronger evidence of the gendered leisure gap (Beblo and Robledo 2008).
Couples in clusters (F) and (G) are characterized by the presence of younger children, a low level of education, mainly compulsory level, and are mainly employed in the industrial sector. There are some differences in job features: couples in cluster (G) are mainly members of the white collar middle class (IIIa), while those in cluster (F) are mainly members of the petty bourgeoisie (IVabc). Moreover, cluster (G) shows a relatively high presence of couples where He works in the industrial sector and She in the public sector or He does so in the private sector and She in the public sector.
The time-use pattern of cluster (F) is apparently similar to that of cluster (E). In fact, She comes back home before Him and deals with domestic chores. However, compared with cluster (E) we note a greater extension of Her household commitments, from the early afternoon until the evening, when He has already finished his workhours. Thus, on one hand the desynchronization seems functional for the long time spent by Him at work; on the other hand, this couple’s time-use pattern does not show any cooperative or compensatory forms of time-use organization between the spouses (Fig. 1).
Last but not least, cluster (G) is certainly the maximum expression of the traditional desynchronization. The time-use pattern (G) describes a couple in which everything is on Her shoulders. The delay of the exit from home is followed by a long journey to work. Then, she continues to work until the late afternoon. Finally, when both the spouses return home, He takes a break and rests, while She continues to do housework and child care. The only synchronized moment in the final part of this couple’s pattern is when they have dinner. Among all the time-use patterns, this is certainly the one with the highest level of gender inequality in regard to the daily work-family balance challenge (Fig. 1).
## 5 Conclusions
In the introduction of this paper, we pointed out the importance of adopting a multichannel sequence analysis approach to gain better understanding of the complexity of the work-family balance through holistic study of dual-earner couples’ daily time use as an overall pattern. At the end of this paper, it is evident that the sequence analysis of time use diaries provides a rather clear and meaningful representation of the main patterns of the everyday organization of Italian dual-earner couples. The analysis shows the clear co-action of multiple generative mechanisms that give shape and relevance to each of seven patterns and define different forms of (de)synchronization in the everyday-life organization of both individuals and couples.
These patterns are attributable to three different strategies for organization of daily activities, and three types of equilibria (Esping-Andersen et al. 2013) within the family. In fact, these patterns describe three sets of work-family equilibrium strategies performed by dual-earner couples, with different expected levels of desirability. The first defines the synchronization strategies (clusters A, B and C). Hence, the housework division by gender is “egalitarian” because both partners participate in the housework and are able to share most of the free time available. The second defines the functional desynchronization strategies (clusters D and E). The division of housework by gender is “unstable” (Esping-Andersen et al. 2013) mostly because of “structural constraints” of the partners’ work schedules (Nock and Kingston 1984). Nevertheless, the behavior of Him and Her reflects a collaborative complementarity which still tends to preserve the free time of both. The third pattern defines the traditional desynchronization strategies (clusters F and G). Partners are characterized by an unequal division of housework. They exhibit the classic features of a “traditional” equilibrium where the woman has heavy overexposure to home/child care tasks and limited free time availability (Mattingly and Blanchi 2003; Beblo and Robledo 2008).
The close relations with certain household features (the presence of children and the couple’s level of education, social class and job sector) support the contention that such behaviors and patterns result, on the one hand, from the internal bargaining among each couple conditioned by the cultural-economic characteristics of the partners themselves, and, on the other, by external social constraints. 9
The time-use patterns result from the complex co-action among individual, family and social factors whose combination defines the relevance and the shape of patterns. The time balance within His and Her activities, as well as its configuration across the day, is not random; rather, it changes according to multiple latent factors.
Dual-earner couples package their daily life mainly in accordance with their work and its schedules, and therefore mainly the type of job and the economic sector (Hamermesh 2002; Warren 2003; Lesnard 2008). Moreover, the analyses show that this time packaging changes in relation to the presence of children. We observed that the presence of children (especially young ones) introduces elements of desynchronization and specialization within the couples. The impact of young children, however, may differ according to both the couples’ work schedules and their gendered attitudes to work-family activities. According to such a view, the last factor is the couples’ level of education. We can assume it as a proxy for the predisposition towards egalitarian gender attitudes (Hakim 2003; Oláh et al. 2014). Not by chance, the most “egalitarian” strategies of synchronization are performed by high-educated couples, while the most “traditional” strategies of not functional desynchronization are performed by couples with a low level of education.
In conclusion, the presence of children, the level of education, and job characteristics are three dimensions that contribute to defining the patterns of couples’ daily activities, already constrained by several social rhythms (i.e. school hours; lunch and dinner time; shop opening hours; etc.).
## Footnotes
1. 1.
We immediately point out that, in this paper, we only consider heterosexual couples, because of the limitations of the Italian Time Use Survey questionnaire. Moreover, we stress that the choice of using the male pronoun before the female one is perfectly conscious: to make the reading easier, we needed to follow a single criterion and we decided to cite the spouses following the order of records in our data files, i.e. male-female.
2. 2.
Duration and timing are two of the three aspects identified by Studer and Ritschard (2016) as mattering in sequence comparison. Here, we do not pay attention to the third one, sequencing, which is not a concern for studying (de)synchronization.
3. 3.
Excluded from the sample were: (a) couples living with other couples (parents or others); (b) couples that filled in the questionnaire on different days, or during the weekend; (c) couples with incomplete information by one or both of the spouses; and, (d) couples in which his or her age was over 65.
4. 4.
Our distance measure between sequences could as well be used for clustering with the property-based and fuzzy methods addressed by Studer (2018) in this bundle.
5. 5.
The absence of a break for lunch does not mean that spouses do not eat; only that, overall, there is not a common time interval for lunch due to the different work schedules.
6. 6.
For each cluster and for each point-in-time, the most frequent activities combination was identified. On this criterion, only 16 of all the 36 (six for each spouse) possible combinations were found to be frequently performed by the couples, suggesting a certain routine by couples in everyday life.
7. 7.
Educational qualifications were classified as: (1) compulsory level—elementary school certificate (including no educational qualifications) and lower-secondary school certificate (including 2-to-3 year vocational certificates); (2) upper-secondary school diploma (including post-secondary diplomas); and, (3) university degree (including postgraduate qualifications). Social class was classified according to the EGP scale: (I+II) Service class; professionals, administrators, and managers; (IIIa) Routine non manual workers; (IVabc) Petty bourgeoisie; Farmers; (VI+VIIab) Skilled and non-skilled workers; Agricultural Labourers. The economic sector (agriculture and industry, private services and public services) of the couple was the combination of the main job sector of Him and Her. The couple’s educational level (and social class) is defined as the highest educational level (social class position) between the spouses.
8. 8.
The absence of housework does not mean that spouses do not perform any housework. Simply, they are more likely to do it in a non-regular way, during brief and scattered moments of spare time. Moreover, they may not necessarily do the housework every day, maybe postponing the chores to the weekend.
9. 9.
The solutions of these particular couples in daily scheduling affected the spouses’ level of satisfaction as an outcome of daily life quality. For details see Bison and Scalcon (2016).
## Notes
### Acknowledgements
The authors warmly thank the anonymous reviewers for their constructive comments.
## References
1. Abbott, A. (1990). A primer on sequence methods. Organization Science, 1(4), 375–392.
2. Abbott, A., & Tsay, A. (2000). Sequence analysis and optimal matching methods in sociology: Review and prospect. Sociological Methods & Research, 29(1), 3–33.
3. Aisenbrey, S., & Fasang, A. (2017). The interplay of work and family trajectories over the life course: Germany and the united states in comparison. American Journal of Sociology, 122(5), 1448–1484.
4. Beblo, M., & Robledo, J. R. (2008). The wage gap and the leisure gap for double-earner couples. Journal of Population Economics, 21(2), 281–304.
5. Becker, G. S. (1964). Human capital national bureau of economic research. New York: National Bureau of Economic Research.Google Scholar
6. Berk, S. (1985). The gender factory: The apportionment of work in American households. Boston: Springer US.
7. Bison, I. (2009). Om matters: The interaction effects between indel and substitution costs. Methodological Innovations Online, 4(2), 53–67.
8. Bison, I. (2011). Lexicographic index: A new measurement of resemblance among sequences. In A. Bryman (Ed.), The SAGE handbook of innovation in social research methods (p. 422). London/Thousand Oaks: Sage.
9. Bison, I., & Scalcon, A. (2016). From 07.00 to 22.00: A dual-earner typical day in Italy. Old questions and new evidences from social sequence analysis. In G. Ritschard & M. Studer (Eds.), Proceedings of the International Conference on Sequence Analysis and Related Methods, Lausanne, June 8–10 (pp. 35–71).Google Scholar
10. Chenu, A., & Robinson, J. P. (2002). Synchonicity in the work schedules of working couples. Monthly Labor Review, 125(4), 55–63.Google Scholar
11. Coverman, S. (1985). Explaining husbands’ participation in domestic labor. The Sociological Quarterly, 26(1), 81–97.
12. Craig, L., Powell, A., & Smyth, C. (2014). Towards intensive parenting? Changes in the composition and determinants of mothers’ and fathers’ time with children 1992–2006. The British Journal of Sociology, 65(3), 555–579.
13. Dijkstra, W., & Taris, T. (1995). Measuring the agreement between sequences. Sociological Methods & Research, 24(2), 214–231.
14. Elzinga, C. H. (2003). Sequence similarity. Sociological Methods & Research, 32(1), 3–29.
15. Esping-Andersen, G., Boertien, D., Bonke, J., & Gracia, P. (2013). Couple specialization in multiple equilibria. European Sociological Review, 29(6), 1280–1294.
16. Gauthier, J.-A., Widmer, E. D., Bucher, P., & Notredame, C. (2010). Multichannel sequence analysis applied to social science data. Sociological Methodology, 40(1), 1–38.
17. Gershuny, J., & Robinson, J. P. (1988). Historical changes in the household division of labor. Demography, 25(4), 537–552.
18. Hagerstrand, T. (1982). Diorama, path and project. Tijdschrift voor economische en sociale geografie, 73(6), 323–339.
19. Hakim, C. (2003). A new approach to explaining fertility patterns: Preference theory. Population and Development Review, 29(3), 349–374.
20. Hallberg, D. (2003). Synchronous leisure, jointness and household labor supply, Labour Economics, 10(2), 185–203, ISSN 0927-5371 https://www.sciencedirect.com/science/article/abs/pii/S092753710300006X
21. Hamermesh, D. S. (2002). Timing, togetherness and time windfalls. Journal of Population Economics, 15(4), 601–623.
22. Hellgren, M. (2014). Extracting more knowledge from time diaries? Social Indicators Research, 119(3), 1517–1534.
23. Istat (2011). Indagine multiscopo sulle famiglie – Uso del tempo anno 2008–2009. manuale utente. Technical report, Istat.Google Scholar
24. Lesnard, L. (2008). Off-scheduling within dual-earner couples: An unequal and negative externality for family time. American Journal of Sociology, 114(2), 447–490.
25. Makles, A. (2012). Stata tip 110: How to get the optimal k-means cluster solution. Stata Journal: StataCorp LP, 12(2), 347–351.
26. Manser, M., & Brown, M. (1980). Marriage and household decision-making: A bargaining analysis. International Economic Review, 21(1), 31–44.
27. Mansour, H., & McKinnish, T. (2014). Couples’ time together: Complementarities in production versus complementarities in consumption. Journal of Population Economics, 27(4), 1127–1144.
28. Mattingly, M. J., & Blanchi, S. M. (2003). Gender differences in the quantity and quality of free time: The U.S. experience*. Social Forces, 81(3), 999–1030.
29. Naldini, M., & Saraceno, C. (2011). Conciliare famiglia e lavoro: vecchi e nuovi patti tra sessi e generazioni. Il Mulino, Bologna.Google Scholar
30. Nock, S. L., & Kingston, P. W. (1984). The family work day. Journal of Marriage and Family, 46(2), 333–343.
31. Oláh, L. S., Richter, R., & Kotowska, I. E. (2014). State-of-the-art report. The new roles of men and women and implications for families and societies. Families and societies. FamilesAndSocieties Working Paper Series 11, Stockholm University.Google Scholar
32. Pollock, G. (2007). Holistic trajectories: A study of combined employment, housing and family careers by using multiple-sequence analysis. Journal of the Royal Statistical Society: Series A (Statistics in Society), 170(1), 167–183.
33. Presser, H. B. (1994). Employment schedules among dual-earner spouses and the division of household labor by gender. American Sociological Review, 59(3), 348–364.
34. Raley, S., Bianchi, S. M., & Wang, W. (2012). When do fathers care? Mothers’ economic contribution and fathers’ involvement in child care. American Journal of Sociology, 117(5), 1422–1459.
35. Saraceno, C. (2012). Coppie e famiglie. Feltrinelli Editore.Google Scholar
36. Studer, M. (2018). Divisive property-based and fuzzy clustering for sequence analysis. In G. Ritschard & M. Studer (Eds.), Sequence analysis and related approaches: Innovative methods and applications. Cham: Springer (this volume).Google Scholar
37. Studer, M., & Ritschard, G. (2016). What matters in differences between life trajectories: A comparative review of sequence dissimilarity measures. Journal of the Royal Statistical Society, Series A, 179(2), 481–511.
38. Warren, T. (2003). Classand gender-based working time? time poverty and the division of domestic labour. Sociology, 37(4), 733–752.
39. West, C., & Zimmerman, D. H. (1987). Doing gender. Gender & Society, 1(2), 125–151.
40. Wu, L. L. (2000). Some comments on “sequence analysis and optimal matching methods in sociology: Review and prospect”. Sociological Methods & Research, 29(1), 41–64.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5408926606178284, "perplexity": 3107.3223795103463}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547584350539.86/warc/CC-MAIN-20190123193004-20190123215004-00226.warc.gz"}
|
https://git.qt.io/tohunger/qt-creator/-/commit/36e7ec989ec19fb4efddd3a47e43123014606b19
|
Commit 36e7ec98 by Kavindra Palaraja
### Fixes: Documentation - removed some repeated information; more cleanups
parent 33bdffed
... ... @@ -114,7 +114,7 @@ The task pane in Qt Creator can display one of four different panes: \gui{Build Issues}, \gui{Search Results}, \gui{Application Output}, and \gui{Compile}. These panes are available in all modes. \gui{Compile Output}. These panes are available in all modes. \section2 Build Issues ... ... @@ -142,8 +142,8 @@ \section2 Compile The \gui{Compile} pane provides all the output from the compiler. In other words, it is a more verbose version of information displayed in the The \gui{Compile Output} pane provides all the output from the compiler. In other words, it is a more verbose version of information displayed in the \gui{Build Issues} \image qtcreator-compile-pane.png ... ... @@ -600,25 +600,25 @@ \o Function \o Key Combination \row \o Go to a Line in the Current Document \o Go to a line in the current document \o Ctrl + K, l, Space, and the line number \row \o Go to a Function Definitions \o Go to a function definition \o Ctrl + K, :, Space, and the function name \row \o Go to a Help Topic \o Go to a help topic \o Ctrl + K, ?, Space, and the topic \row \o Go to an Already Opened Document \o Go to an opened document \o Ctrl + K, o, Space, and the document name. \row \o Go to a File in the File System (browse the file system) \o Go to a file in the file system (browse the file system) \o Ctrl + K, f, Space, and the file name. \row \o Go to a File in any Loaded Project \o Go to a file in any project currently loaded \o Ctrl + K, a, Space, and the function name. \row \o Go to a File in the Current Project \o Go to a file in the current project \o Ctrl + K, p, Space, and the function name. \endtable */ ... ... @@ -1041,84 +1041,84 @@ \header \o Function \o Key Combination \row \o Activate Build & Run Mode \o Ctrl + 4 \o Activate \gui Welcome mode \o Ctrl + 1 \row \o Activate \gui Edit mode \o Ctrl + 2 \row \o Activate Debug Mode \o Activate \gui Debug mode \o Ctrl + 3 \row \o Activate Edit Mode \o Ctrl + 2 \o Activate \gui Projects mode \o Ctrl + 4 \row \o Activate Help Mode \o Activate \gui Help mode \o Ctrl + 5 \row \o Activate Output Mode \o Activate \gui Output mode \o Ctrl + 6 \row \o Activate Welcome Mode \o Ctrl + 1 \row \o Find \o Ctrl + F \row \o Find Next \o Find next \o F3 \row \o Go back to Code Editor (\gui Edit mode: The first press gives the editor focus, without closing secondary windows; the second press closes all secondary windows. \gui Debug mode or \gui Help mode: Switch to \gui Edit mode.) \o Go back to the code editor (\gui Edit mode: The first press gives the editor focus, without closing secondary windows; the second press closes all secondary windows. \gui Debug mode or \gui Help mode: Switch to \gui Edit mode.) \o Esc \row \o Go to a Line \o Go to a line \o Ctrl + L \row \o Start Debugging \o Start debugging \o F5 \row \o Stop Debugging \o Stop debugging \o Shift + F5 \row \o Toggle Application Output \o Toggle \gui{Application Output} pane \o Alt + 3 \row \o Toggle Code Declaration and Definition \o Toggle code declaration and definition \o F2 \row \o Toggle Header File and Source File \o Toggle header file and source file \o F4 \row \o Toggle Side Bar \o Alt + 0 \row \o Toggle Task List \o Toggle \gui{Build Issues} pane \o Alt + 1 \row \o Toggle Search Results \o Toggle \gui{Search Results} pane \o Alt + 2 \row \o Toggle Compile Output \o Toggle \gui{Compile Output} pane \o Alt + 4 \row \o Select Welcome Mode \o Ctrl + 1 \row \o Select Edit Mode \o Ctrl + 2 \row \o Select Debug Mode \o Ctrl + 3 \row \o Select Build & Run Mode \o Ctrl + 4 \row \o Select Help Mode \o Ctrl + 5 \row \o Select Output Mode \o Ctrl + 6 \endtable */ ... ...
Markdown is supported
0% or .
You are about to add 0 people to the discussion. Proceed with caution.
Finish editing this message first!
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8899801969528198, "perplexity": 24593.129807299843}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662517018.29/warc/CC-MAIN-20220517063528-20220517093528-00266.warc.gz"}
|
http://mathvis.academic.wlu.edu/2015/07/
|
# 3D printing with liquid
The neat thing that we’ve been doing in the past couple of weeks is to use the FormLabs Form 1+ liquid resin printer. It is just so cool!
The first objects we printed were the strange bowls (shells, washers and smooth). Previously we tried to print them on the MakerBot 2X, but the sheer number of supports meant the print was not a great success. However, the FormLabs printed them beautifully. We all loved watching the bowls slowly come out of the liquid resin.
We next printed was the Bulge-Head solid. It is one of our favorites!
Finally, we had great success printing parametric curves and other surfaces with the liquid resin printer.
# Parametric Curves: Spiral, Self-Intersecting Curve, and Helix
I then made a series of models of parametric curves. The first was a model of a spiral that increases in diameter as it travels along the $$z$$-axis. The curve comes from Section 10.7 in Stewart’s Essential Calculus. The curve is defined by the equations $$x=t*\cos(t), y=t*\sin(t)$$, and $$z=t$$. I designed the model in Cinema 4D using the Formula Spline to draw the curve, the Sweep NURB to give the curve depth, and the Wrap Tool to wrap the text around the curve. I also used the Extrude Tool to give the equations depth and the Boole tool to connect them with the curve. The print failed a few of times due to a tangled filament and a jammed extruder, but it worked after the fourth try. The result was that the equations looked messy and the letter $$t$$ is hard to make out in places. Also, the MakerBot did not include supports for the last rotation, which caused the print to be messy towards the vertex of the spiral. We remedied these issues by printing another version of the spiral on the Formlabs printer, where we imprinted the equations into the object. The print came out much better as the equations were neater as was the end of the spiral. This model can be found on Thingiverse.
I used the FormLabs printer to create a model of a space curve from Stewart’s Essential Calculus (Section 10.7, exercise 18). The first challenge was to draw the object in Cinema 4D without a self-intersection (3D printers do not accept intersecting geometry). Professor Denne suggested that I make two half-curves that intersect, then make a Boolean out of them. The suggestion worked, so I was then able to put text onto it. It was tricky to figure out how to get the equations onto the curve, but I decided to put them on top of the lower ring of the model. They turned out well, as did the curve, which was very smooth and with minimal deformation due to its supports. You can find this model on Thingiverse here.
My next print was of a helix on the Formlabs printer. I first printed a Black one with a radius of 2 mm, but it turned out to be very small and frail, and the equations were hardly legible. I then fixed these issues by making the radius 4 mm, but the equations are again hard to read because of the white resin. This model can be found on Thingiverse here.
Our experience has taught us that equations are easiest to read on the FormLabs prints in grey. You can find instructions on how to use Cinema 4D to add equations to parametrized curves here.
# Torus Knots
I modeled the trefoil knot as two torus knots $$T(2,3)$$ and $$T(3,2)$$. The parametric equations for a $$T(p,q)$$ knot are $$x = \cos(pt)*(3+\cos(qt)), y=\sin(pt)*(3+\cos(qt))$$, and $$z=\sin(qt)$$. Here, $$p$$ is the number of times the knot winds around the longitude of a torus, and $$q$$ is the number of times the knot winds around the meridian of a torus.
Both models were printed on the FormLabs printer. I first made a small $$T(2,3)$$ knot with a label extruded out of the curve (as shown to the left). I used Cinema 4D to design the model by using the Formula Spline to draw the curve, the Sweep NURB to give the curve depth, and the Wrap Tool to wrap the text around the curve. I also used the Extrude Tool to give the equations depth and the Boole Tool to connect the equations to the curve. For both knots I had to make sure the ends of the knots overlapped correctly. Before printing the $$T(3,2)$$ knot, I had to change the range of $$t$$ to $$t=[0, 2\pi]$$ instead of $$t=[0, 5\pi]$$ (I initially used $$5\pi$$ to be sure that the curve closed).
The first $$T(2,3)$$ knot came out nicely, however the text was a little small. Using the subscript made the numbers too small, so I reprinted the knot used parentheses instead, as shown here. The $$T(3,2)$$ knot also looked great, as it was smooth and there were merely small nubs where the supports were, which could be removed with an exacto blade. We’ve discovered that the FormLabs printer makes smoother surfaces and finer curves than does the MakerBot, which is why it is ideal for printing knots.
You can find the torus knots on Thingiverse here T(2,3) and here T(3,2). Instructions on how to make torus knots in Cinema 4D can be found here. Professor Denne has also created another worksheet in Mathematica about Torus knots. It can be found here.
In order to make a monkey saddle I created the surface in Mathematica. I then exported it as a .wrl file and imported it into Cinema 4D. Once it was in Cinema 4D, like all surfaces, I made it the correct size, optimized the polygons and extruded them by 0.20 cm.
I then added an equation to the surface by punching it all the way through. This time with the equation I used Arial as the font instead of Times New Roman to hopefully avoid the issues we had with the formula when printing the hyperbolic paraboloid.
I printed the monkey saddle using the liquid printer and the formula in Arial font ended up looking great. The model can be found on Thingiverse here
# Helicoids
My next project (after finally finishing all the quadratic surfaces) was to make a helicoid. I spend some time on Mathematica creating different helicoids by changing the parameters of the formula. The helicoid is pararametrized by $$x=u\cos(t), y=u\sin(t),$$ and $$z=u$$, where $$u\in[-1,1]$$ and $$t\in[0,2\pi]$$.
Professor Denne and I decided to print two of the ones I created to start (we may print more!). I exported the following Mathematica files and imported them into Cinema 4D.
Professor Denne used these files (as well as others) to create another worksheet in Mathematica. It can be found here
After making them the correct size I optimized the polygons and extruded them by 0.20 cm (a process I can now do very quickly after all my practice with the quadratic surfaces). I then printed each of them on the liquid printer and had fantastic results!
The .STL and .form files for both of these helicoids can be found on Thingiverse.
Later on, I made another helicoid, this one with $$u\in[0.25,1.25]$$ and $$t\in[0,2\pi]$$. This model can be found on Thingiverse here.
# Klein Bottle
Just a really short post to share our general excitement over having just about completed all of the objects from Multivariable Calculus. We just have a few more to print out. We will spend our remaining week(!) printing out some interesting topological objects – many of these directly from Thingiverse.
We printed one such object today. This is the Voronoi Klein Bottle from MadOverlord on Thingiverse. We printed this on the MakerBot 2X with a raft but no supports. After a moment’s thought one can see that the print succeeds (despite the short horizontal lines on the design) because the Voronoi cells are small enough. Interesting! The black filament also hides a few rough spots on the print.
The Klein bottle is named after Felix Klein (25 April 1849 – 22 June 1925), a German mathematician who saw many connections between Group Theory and Geometry. It is a one-sided surface and is a generalization of a Mobius strip. (In fact, it is topologically equivalent to two Mobius strips glued together along their boundaries.)
There are many fabulous descriptions of this topological object, one of my favorites is The Adventures of the Klein Bottle found on YouTube (from the wonderful folks at the Frei Universitat in Berlin).
# Volumes by Slices: Iterated Integrals
I have modeled the the solid from Example 8 of Section 12.1 from Stewart’s Essential Calculus. It is bounded by the surfaces $$z=\sin x \cos y$$, $$z=0$$, $$y=0$$, and $$x=\pi/2$$. The example demonstrates the strategy behind computing a double integral using Fubini’s Theorem. I approximated the solid by eight slices in the $$x$$ and $$y$$ directions. In order to draw the correct splines in Cinema 4D, I had to use the correct parametric equations to plug into the inputs $$x(t),\, y(t)$$, and $$z(t)$$. For the first object I held $$x$$ constant (approximating integration with respect to the $$y$$-variable). The parametric equations were $$x(t)=\frac{k\pi}{32}$$, $$y(t)=t$$, and $$z(t)=\sin(\frac{k\pi}{32}) \cos(t)$$ for $$k=1, 3, 5, \dots, 15$$. I then created a slice in Cinema 4D by adding straight splines. I extruded each slice by 0.2, which is just greater than $$\pi/16$$ (the width of each slice). I placed each slice so that it overlapped slightly with the next slice – this will allow the objects to be merged (via a Boole) and will prevent vertical lines from showing in the print after the slices are collected into one object. I had to use a Boole with two cubes for the two approximations by slices because the solids each had two very thin edges (that is, I shaved off some volume from two of the edges).
I then repeated the entire process, but this time with a constant $$y$$-variable. I also printed the smooth solid, which is the volume that is approximated by the slices. In order to do this, I imported the solid from Mathematica and put equations on it like I have in other models. These models can be found on Thingiverse here, here, and here.
My next object was a bulge-head solid. This solid lies above the $$xy$$–plane, outside the unit sphere, and inside the cardioid of revolution given by $$\rho=1+\cos\phi$$. Professor Beanland had given us these equations, since he was really curious to see what the solid looked like. He’d nicknamed it the cone-head solid, but after printing we renamed it the bulge-head solid.
Since the outside of the solid was a cardioid of revolution, I decided to create the solid in Cinema 4D by creating two splines (one for the cardioid, the other for the hemisphere) and revolving each around an appropriate axis. Professor Denne helped me to figure out which parametric equations to place into Cinema 4D’s inputs for a formula spline. These were $$x(t)=1+2\cos(t) + \cos(2t)$$, $$y(t)=2\sin(t)+\sin(2t)$$, and $$z(t)=0$$, where $$t=[0,\pi/2]$$. For the spline that would later become the hemisphere, I used $$x(t)=\cos(t)$$, $$y(t)=\sin(t)$$, and $$z(t)=0$$, where $$t=[0, \pi/2]$$. I then used the Lathe Tool with an angle of $$360^\circ$$ to make the two boundaries of the solid. I then put them into a Boole to make a union between the two boundaries. I printed the bulge-head solid on the FormLabs printer using clear resin. When loading the object into the FormLabs software, we got a warning about the object’s integrity, but we decided to continue the print anyway. Later on we were worried that the object would use up too much resin and that it may have some problems on the surface (like the smooth strange bowl did). It turned out that added a bit more resin mid-build, just to be on the safe side. The solid looks pretty good right now because it only has a few pimples on the inside, but no significant lumps. The object is still hardening and once it’s completely dry we’ll remove the outside supports. This will probably leave a few pimples as well.
You can find this model on Thingiverse here.
# Quadratic Surfaces – Hyperboloid of Two Sheets
The last quadratic surface I printed was a hyperboloid of two sheets.
For the hyperboloid of two sheets I created the entire object from scratch in Cinema 4D using the same process I used to create the cone and other similar objects. For this surface I used the same formula spline as the hyperboloid of one sheet $$x(t)=cosh(t), y(t)=sinh(t), z(t)=0$$, and then rotated it to the correct orientation. I then used the lathe tool and rotated this spline 180 degrees since this was all it needed. Because of this I needed to use only 30 rotation segments for a total of 60 all around the object.
I also had to reverse the normals on half of the object to make sure they were all aligned with the other half before I extruded the surface. I optimized the polygons to be sure the edges joined up into one object. I then extruded the surface to create my hyperboloid of two sheets. I copied this and put equations through one of them. I also made sure to Boole the edges of the hyperboloid to make them flat for printing.
Here is a picture of the final object! It can be found on Thingiverse here.
# Quadratic Surfaces – Hyperbolic Paraboloid
The quadratic surface that gave me the most trouble was the hyperbolic paraboloid. This surface could not be created in Cinema 4D and had to be imported from Mathematica. When I imported the surface from Mathematica the center of the saddle had hundreds of little polygons that overlapped, which became a huge problem when I tried to extrude them to give the surface thickness.
I had to spend a long time experimenting to find the lowest number of plot points I could use in Mathematica and still get an accurate object. Once I had done this I did the same thing with the optimize tool in Cinema 4D to see how big I could make the polygons before the surface started to lose accuracy. The first time I went through all these steps the hyperbolic paraboloid I had chosen just didn’t work correctly. So, I went back to the beginning and created a new Mathematica file of a hyperbolic paraboloid, and spent some time deciding where to cut it off to create edges that were as straight as possible.
Once I had done this and imported the surface into Mathmatica I optimized the surface as much as it would allow and extruded it. Finally I had a surface I could print! I then added the hyperbolic paraboloid’s equation to the surface. Instead of just imprinted the equation, since the surface was so thin I punched it all the way through.
In order to print this surface I used the FormLabs liquid printer. When the object came out of the printer it looked great and only had a few minor flaws to fix after this first print. One of the issues was the size of the object; it was just a little too small. The other issue was that the 2 in the exponent of the equation didn’t quite form correctly because it was too small. The final issues was that the equation had a $$+$$ sign where there should have been a $$–$$ sign (oops). The equation was little too long with a 0 that was missing its center. To fix these problems I rearranged the equation (and fixed the sign issues) in Adobe Illustrator and then punched it through the surface again. The second print on the liquid printed I made 1.4 times larger than the last print.
The final print still had issues with the formula but otherwise worked out well. We are currently looking into changing the font to see if that helps with this issue. This model can be found on Thingiverse here.
Using my experiences building this and the other quadratic surfaces, I’ve put together a set of instructions on how to build quadratic surfaces using Mathematica and Cinema 4D. This can be found here.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6168063879013062, "perplexity": 1008.8687966752237}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948047.85/warc/CC-MAIN-20180426012045-20180426032045-00098.warc.gz"}
|
http://www.msri.org/general_events/18873
|
# Mathematical Sciences Research Institute
Home » Audit Committee
# Committee Meeting
Audit Committee November 03, 2009 All Day
Description Tuesday, November 3, 2009. Audit Committee meeting from 3:00 pm to 4:30 pm.
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9168326258659363, "perplexity": 13371.744843259794}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637898644.9/warc/CC-MAIN-20141030025818-00152-ip-10-16-133-185.ec2.internal.warc.gz"}
|
https://www.ideals.illinois.edu/handle/2142/80627
|
## Files in this item
FilesDescriptionFormat
application/pdf
3395560.pdf (2MB)
(no description provided)PDF
## Description
Title: Strongly Interacting Fermi Gases, Radio Frequency Spectroscopy and Universality Author(s): Zhang, Shizhong Doctoral Committee Chair(s): Leggett, Anthony J. Department / Program: Physics Discipline: Physics Degree Granting Institution: University of Illinois at Urbana-Champaign Degree: Ph.D. Genre: Dissertation Subject(s): Physics, Low Temperature Abstract: In Chapter 5, we consider the BEC-BCS crossover problem in the dilute atomic gases frm a general point of view. We show that as a result of the specific properties of the ultra-cold Fermi gases, there exists a universal function which incorporates all the many-body information of the system. Although we do not yet know how to compute the function analytically, we show that many physical quantities can be expressed in terms of this function. It gives an intuitive way of understanding of how universality arises in the cold atomic gases. Issue Date: 2009 Type: Text Language: English Description: 133 p.Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 2009. URI: http://hdl.handle.net/2142/80627 Other Identifier(s): (MiAaPQ)AAI3395560 Date Available in IDEALS: 2015-09-25 Date Deposited: 2009
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8655152320861816, "perplexity": 3109.20252298227}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823009.19/warc/CC-MAIN-20181209185547-20181209211547-00341.warc.gz"}
|
http://clay6.com/qa/27895/positive-deviations-from-ideal-gas-behaviour-takes-place-because
|
Browse Questions
# Positive deviations from ideal gas behaviour takes place because
$\begin{array}{1 1} (a)\;Molecular\;interaction\;between\;atoms\;and\;\large\frac{PV}{nRT} > 1\\(b)\;Molecular\;interaction\;between\;atoms\;and\;\large\frac{PV}{nRT} < 1\\(c)\;Finite\;size\;of\;atoms\;and\;\large\frac{PV}{nRT} > 1\\(d)\;Finite\;size\;of\;atoms\;and\;\large\frac{PV}{nRT} < 1\end{array}$
For positive deviations Z > 1 i.e the conditions when repulsive forces predominants
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7054075002670288, "perplexity": 10326.295360942504}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218199514.53/warc/CC-MAIN-20170322212959-00451-ip-10-233-31-227.ec2.internal.warc.gz"}
|
https://en.m.wikisource.org/wiki/Popular_Science_Monthly/Volume_62/December_1902/Mental_and_Moral_Heredity_in_Royalty_V
|
# Popular Science Monthly/Volume 62/December 1902/Mental and Moral Heredity in Royalty V
(1902)
Mental and Moral Heredity in Royalty V by Frederick Adams Woods
MENTAL AND MORAL HEREDITY IN ROYALTY, IV.
By DR. FREDERICK ADAMS WOODS,
HARVARD UNIVERSITY.
Spain.
THE early history of its great family is coincident with the history of the rise of Spain's greatness as a nation. Whatever value other factors may have had in producing Spain's glory the presence of the long line of great rulers and warriors must have been one of the greatest. This influence of the great leaders could make itself felt then, even more than now.
Within a short time we have had an example in Lord Roberts of what genius for generalship can accomplish in the turn of events. How much greater impress on his times the great man must have made in those medieval days when the masses knew almost nothing!
I know of no other direct line, except the then reigning one in Portugal, where greatness was maintained for so long a period, nor has there appeared any other than these two dynasties, where vigorous and distinguished blood was so continuously introduced into the stock. Portugal was five times united with the best of the stock of Spain to its evident advantage. Spain took wives three times from Portugal. Two of these, the marriage of Ferdinand II. of Leon (d. 1187) and Ferdinand IV. (d. 1317), were of great benefit. The third was valuable as far as the introduction of Portugal's blood was concerned, but happened to be very unwise, because it brought back again in a double way the cruel traits of Sancho IV. which resulted in producing Pedro 'the Cruel' whose tyrannies amounted almost to madness.
There are a few exceptions among the noble characters, such as the cruel tyrants just referred to, whose traits will be seen to be evidently caused by heredity. Still for twenty-one generations in the direct male line of Castile from Sancho II. in the tenth century to Charles Quint, the greatest ruler of his time (d. 1558), there were only four who did not possess a high degree of strength and ability. These were Alfonso IX., Ferdinand IV., John I. of Castile and Ferdinand I. of Aragon.
The first two of these were in the early centuries. John I. of Castile and Ferdinand I. of Aragon were father and son, who lived in the period just before the time of Ferdinand and Isabella.
There were two others, also father and son, who ruled over Castile at about this same time, who were exceedingly weak. These were John II. and Henry IV. They are not in the direct line under discussion at present, but it is interesting to see that John II. was a grandson of John I. just noted for his weaknesses and the causes of this temporary running out and subsequent rejuvenation in Ferdinand and Isabella will be discussed later.
During the early centuries of Christian Spain the conditions of the times were such that every sovereign was obliged to defend his right to the throne against the jealousies of his family, so that almost constant wars were being waged among the nearest kin and it was practically impossible that several generations of weak and incompetent kings should not have been wrested from the throne. This factor of natural selection undoubtedly did much to insure the strength of the stock.
The long minorities of the sovereigns of Castile and Aragon which occurred time and again during these centuries have always been considered by all historians as one of her greatest misfortunes, leading to intrigues, civil wars and disasters; affairs being put in a healthy condition again only when the king himself was old enough to take things in his own hands.
This and the fact that the country invariably gained ground under good rulers and just as certainly lost under weak ones make it evident how much more important the king was in those days and under those conditions than he has been in England, for instance, where the progress has been due to the people as a whole, especially her aristocracy and upper classes.[1]
Such a long line of great rulers as this, such an almost unbroken repetition of great physical and mental strength is almost unparalleled, save by Portugal, in all history. If there is much in heredity it must certainly be necessary here to show that the dynasty was continually maintained by the introduction of just such great qualities either from the best part of its own stock or from outside families.
We can discuss twenty marriages in the direct line. The following fourteen can be seen to have introduced stock equally vigorous and able. These fourteen are those of Sancho II., Ferdinand I. of Leon, Alfonso VI., Ferdinand II., Alfonso IX., Ferdinand IV., Alfonso II., Henry III., Don John II. of Aragon, Ferdinand and Isabella, Johanna 'the Mad.' These were scattered along the course and sufficiently account for the perpetuation of the strain. Many of these unions were remarkably good, being well backed on all sides. Of the other six, four were 'obscure,' tending that much to dilute the distinguished qualities.
There was one, the marriage of Alfonso VI., that was distinctly bad, as its average value was incapable as well as vicious. The remaining one introduced mostly poor stock but had a small element of goodness in it. I refer to the marriage of John the First of Castile. Half the pedigree of Henry II. of Transtama and of Alfonso VI. are uncertain for different reasons, as will appear.
Beginning now with the most ancient times let us take up the character of each sovereign and discuss the effect on the breed of blood introduced in the marriage of each. Sancho I. by his courage and mental and physical energy extended his dominion in all directions. He reduced important fortresses on both banks of the Ebro, recovered Eioja and conquered the country from Tudela to Najera, Tarragona and Agreda, and the mountain districts surrounding the sources of the Duero. He was also prudent and pious by nature and his conquests were retained throughout his life by the wisdom of his acts. He died in 994.
Sancho married Urraca, daughter of Ferdinand, belonging to the same stock. They had a son Garcias, called 'the Trembler,' about whom little is known with certainty except that he won battles and apparently he was a successful warrior. The name of 'Trembler' was applied to him because before battle, as he himself put it, 'My body trembles before the danger to which my courage is about to expose it.' The pedigree of his wife, Ximenia, is unknown to me, but from this time on to the present, the descent of the female side can be shown with very satisfactory completeness, and it is these pedigrees which show that qualities were infused in the stock all the way down the line, sufficient to keep up the elements of greatness which never ran out in Spain until the death of the Emperor Charles Quint. After this the worst possible unions were made, and then Spain fell.
Sancho III., who died in 1035, was the son of the 'Trembler.' He must have had great ability for war and government, as he made himself the most powerful prince of his age and country. He married Nunnia, the heiress of Castile, who belonged to a powerful family. He held what he got by inheritance and marriage and even extended his dominions by conquest. He was called 'the Major,' or 'the Great.'
Sancho III. was followed by his son, Ferdinand I. He had high abilities and virtues and made himself the most powerful among many monarchs in Spain. He also is called in history 'the Great.' He married a daughter of Alfonso V. of Leon, a successful soldier and ruler and the son of the valiant Bermudo II., who had won distinction by defeating the Moors.
Ferdinand died in 1065. His son, Alfonso VI., was a great warrior and called 'the Valiant.' Alfonso VI. allied himself to an outside stock. He married a daughter of Robert, Duke of Burgundy. It does not appear that her ancestors were especially distinguished, except that her great-grandfather was Hugh Capet. This can not be classed among the brilliant marriages from the present point of view, as the great qualities are so remote.
Their daughter, Urraca, became queen. She was overbearing and tyrannical in her conduct, with morals of very questionable repute. Her mind was of a light and trivial order, though her ambition was as great as it was unprincipled. 'She left to posterity a character darkened by many crimes and scarcely redeemed by a single virtue.' Her reign, 1109-1126, was fortunately for her people a short one, but she succeeded in keeping the country embroiled in family feuds. (Dunham, 'Spain,' II., 162.) Urraca is the first one in the group who had any such traits. On searching for character of her mother's people, who must have introduced these qualities if they came by heredity, I found them amply accounted for in her grandfather and his mother. Robert, Duke of Burgundy, her grandfather, is described in a short column in the 'Biog. Univer.,' most of which tells of his violent temper. His mother, Constance, was a 'wicked intriguer,' and instigated his revolting from his weak and peace-loving father, King Robert of France. 'Robert (the Duke) had a most violent temper and was capable in the excesses of his anger of the most atrocious extremes.' He showed no application to affairs of state and abandoned the government to cruel and incompetent ministers. Queen Urraca married Raymond, Count of Burgundy. He was not at all distinguished, nor were his family.
The successor of the notorious Queen Urraca was Alfonso VII., who luckily did not repeat his mother's character. Unfortunately for our purpose we cannot be sure of his father, owing to the licentiousness of the Queen. The characteristics of this son and his effect on the country may be well shown by quoting Dunham, 'History of Spain and Portugal,' II., 165:
Alfonso was no common monarch. Though he lost Portugal and was unable to withstand the genius of his namesake of Aragon, whom he imitated in assuming the imperial title, yet with fewer pretensions, though he is undeserving the exaggerated praises of the national historians, it cannot be denied that he exhibited great firmness in circumstances often very difficult, that he caused his territory to be respected by his Christian neighbors and greatly aggrandized it at the expense of the Mohammedans. His talents, however, were inferior to his ambition, and his moderation to both.
If this Alfonso VII. had wedded only average qualities it is probable that the ancient greatness of the race would have run out, but what happened is unusual in the story of families. Just at the time when it is weakened by dilution it is again strengthened by the qualities of a great man. The wife of Alfonso was the daughter of Raymond Berengaria III. (d. 1131), Count of Provence, a prudent sovereign who extended his dominions by inheritance, marriage and victory, ruled fifty years and actually carried his conquests across the sea to the shores of Majorica and made successful wars against the Arabs.
The product of this union was Ferdinand II. (1187) of Leon. He was a very able general and had many estimable and generous personal qualities. He made a marriage calculated to perpetuate the great qualities of his stock, that with Urraca, daughter of Alfonso I., the great founder of Portugal, who by consulting the Portugal chart may be seen to be backed up by distinguished fathers and grandfathers and to have himself derived in part his genius for war from the same stock of Spain already discussed, namely, Alfonso VI. 'the Valiant.'
However, Alfonso IX., his son, was without distinguished qualities or virtues. Coming as he does at the union of greatness he must be counted as an exception. Still the genius of the race does not die here. His marriage was one of the very best. His wife, Berengaria, was a famous heroine of Spanish history. She was a truly great and noble woman. Not only in her own qualities, but by her ancestors she must have brought into Spain one of the best strains that any royal person at that time would have been likely to have represented.
She was the daughter of Alfonso VIII. of Castile, rightly called 'the Noble,' whose reign was of great benefit to the country, himself a son of a successful warrior during a short career and grandson of Alfonso VII. already noted for his success. Her grandfather was Henry II., one of England's most vigorous and able kings, according to Hume 'the greatest prince of his time for wisdom, virtue and abilities.'
After the death of Alfonso IX., the throne was taken up by Ferdinand III. his son. 'He was a just, pious, able and paternal ruler, as well as a valiant soldier.' He triumped over the infidels and considerably extended his domains. His wife was a daughter of the Emperor Philip, a vigorous, warlike character, who, being assassinated when only thirty years old, never had an opportunity to display his real abilities. Philip was the son of Frederick Barbarosa, the greatest man and greatest power of his day. Thus a certain amount of able blood was here introduced. Still we see Isaac Angelus in the pedigree, an abusive and incapable ruler. A little more than half of it all was very beneficial, for Frederick was just and wise as well as extremely able, while the Emperor Philip was up to the standard already established here in Spain. The power of the country was considerably increased under Ferdinand III.
Alfonso X., who was the son of Ferdinand III., had abilities and ambition, but was not at all a man suited to the times. He was weak and irresolute, not obeyed by his subordinates, and his reign was far from successful. His time was devoted to learning and the advancement of science, which alone prospered under his rule. He showed a slight amount of cruelty, but this was not conspicuous compared to others in this age and land. There is no question but that Alfonso X. was a man of great intellect.
His character forms an exception and is the only one of the sort I have met with in this region. It is easily accounted for by a combination of ancestral qualities, but such combinations are apparently far from common. He was a poet, scientist and writer, and through his influence learning was greatly advanced. He is said to have been the first royal personage who was also a man of letters. The marriage of Alfonso X. with Violanta undoubtedly served to a certain extent to perpetuate the strength of the stock, for his wife was a daughter of James, the Giant Conqueror of Aragon. Still James with his great abilities as a warrior was violent, cruel, passionate and licentious, and aside from James there is not much distinguished blood in the characteristics of Violanta 's pedigree.
We now come to a period of misfortune for christian Spain, and it is interesting to note how closely the welfare of the country follows the character of the sovereigns, how great the impress of the ruler was on his times in those early days in spite of the theoretical representation of the people in the popular branch of the Cortes.
During the reigns of the next two succeeding monarchs, Sancho IV. and Ferdinand IV., the family feuds and lack of a strong and wise ruler affected the country so disastrously that practically anarchy may be said to have prevailed.
Sancho IV. inherited the cruel, passionate disposition of his grandfather, James of Aragon, without his wisdom. His character was also warlike, vigorous and cruel and the only good fruits of his reign were his conquests against the Moors, whom he defeated in Andalusia and even carried his victories into Tarifa, a town in the very furthest extremity of Spain. The marriage Sancho made, when considered on the grounds of perpetuating greatness, may be considered half or more than half good. His queen, Mary, can be seen on the chart to be descended from largely 'obscure' stock, though she was the great-granddaughter of the famous heroine, Berengaria, already mentioned. She was her worthy descendant, for she repeated her character in every particular. Resolute, calm and devoted, she was an astute diplomatist and politician. Whatever successes there were were due largely to her.
Sancho's reign was short, lasting only eleven years. During the life of the queen mother, she exercised, as we have said, a beneficial influence, but after her death the reign of the feeble Ferdinand IV. was one long list of disasters. Some may wonder why Ferdinand should have been so weak, but as many of his immediate ancestors were far from being endowed with vigorous minds, of course he had a chance to get qualities from the poorer of them. He did repeat the cruel, passionate and tyrannical disposition to perfection, but no one appears to have paid any attention to his wishes.
Now again when the mental qualities are threatened we find them brilliantly restored. Constantine, the wife of Ferdinand, was just the one to effect this, as a glance at the chart will show. It is interesting to see Alfonso X., the scholar and poet, again in his grandson Diniz of Portugal, in another country and in another day where probably no influence of environment could come into play. Alfonso was the first and he was the second royal personage who was also a man of letters. The issue of this union was another one of the heroes of old Castile, Alfonso II., who succeeded to the throne in 1312, when only one year old; grew to be a great warrior against the Moors, and taking after his maternal grandmother possessed a large share of prudence and virtue, some of the rarer characteristics of his tribe. As an example of the respect felt for him even by his enemies the following may suffice: The Moorish king of Granada is said to have exclaimed when he heard of Alfonso's death, 'We have lost the best king in the world—one who knew how to honor the worthy, whether friend or foe.' This eulogy is, however, somewhat offset by the evidence that he was extremely cruel at times.
It is now to be noted that there are an unusual number in the pedigree of Alfonso, who have the adjective cruel or some other designation of depravity attached to them. Now a close intermarriage here will undoubtedly give rise to some of those great and valiant qualities, courage, energy and ability in the leadership of men, which were possessed by some, though not by all these royal lords and dames. There is a fair chance that the literary or possibly the pious and amiable qualities may reappear. But such a close intermarriage would be a hazardous one to say the least.
Let us take a survey of the pedigree of Alfonso XL in order to see what proportionate amount of cruelty and depravity there is in the ancestry of each succeeding generation.
In five degrees of kinship back of Ferdinand II. (d. 1187) we find three such, among the nine persons whose records were obtainable. In the same degree for Alfonso IX. there were only two among the nine. Ferdinand III. (d. 1252), who represents the next generation, had but three degenerate ancestors among the twelve. In the same degree of kinship for his son Alfonso X., we find five among eighteen. For the next generation (Sancho IV.) the number is two in twelve. Ferdinand IV. (d. 1312), his son, had three in fifteen. So we see that this type of character, though common, was present in Spanish royalty in these early centuries only to the extent of about one in four or five, but in the ancestry of Alfonso XI., on account of a gathering of this cruel type, we find no less than eleven such among the fifteen who could furnish records of any sort. It is simply that about Alfonso XI. there happens to be brought together a number of strains from the four different countries, Aragon, Castile, Hungary and Portugal, each containing an average amount of the qualities in question. However, owing to strange jumping about, which so many characteristics show in the course of hereditary transmission, Alfonso himself shows none of them, but is himself the bridge over which they pass to appear in his son whose actions seemed more like that of a demon than a man the incarnation of cruelty itself.
A very close intermarriage was made by this Alfonso IX. of Castile. His wife was the daughter of Alfonso IV. of Portugal, a brilliant warrior, but withal a cruel tyrant and the one of all rulers in Portugal on whom rests the greatest odium.[2]
Now let us see what proportion of the passionate and cruel would be found in five degrees of kinship for a child of Alfonso XI. by such a wedlock. Owing to the intermarriage we find but eleven different persons as several names appear twice. There are only three who are free from the characteristics in question, or eight in eleven show the passionate and cruel type. If we take all for six degrees removed we find the number even worse, eleven in fourteen. A son could scarcely escape the worst sort of inheritance, except by the greatest fortune. What did happen was this. Pedro, the only legitimate son of Alfonso XI., known in all history as 'Pedro the Cruel,' amused himself in some such ways as this. He imprisoned and foully treated his first wife, Blanche of Bourbon, and during the first part of his reign had many noblemen, among others Don Juan, his cousin, executed in his presence. Once, it is stated, in the presence of the ladies of the court he commanded a number of gentlemen to be butchered until the Queen, his mother, fell into a dead faint in company with most of the ladies present. "He then caused to be murdered his own aunt, Dona Leonora of Aragon, mother of the above Don Juan, for nothing except that Aragon would not make peace with him 'being compelled to get Moors to do the job, as no Castilian could be induced to undertake it,' says King Pedro IV. of Aragon in his memoirs. A certain priest coming before him to say that St. Domingo had appeared to him in a dream and counselled him to tell the king that he would meet his death at the hands of his brother, Henry Pedro insisted that the priest must have been prompted by Don Henry himself, and so ordered the poor dreamer to be burnt alive. One lady, Urraca Osorio, for refusing his address, was burnt alive in the market place of Seville. Another disfigured herself in order to escape his attentions. "He was as devoid of generosity as of pity, as reckless of the truth as of life, as greedy of gain as of blood—a false knight, a perjured husband, a brutal son."[3]
Thus Pedro 'the Cruel' is amply accounted for by heredity alone, without bringing in the question of the inheritance of any acquired characters, and it does not seem that this brutality could be the result of the environment in which he lived since before his day when times were even rougher we find so many kings and queens possessing every virtue. There were never any before as bad as Pedro nor were there any, on grounds of heredity alone, as likely to be so. It is interesting to note that he was the great-great-grandfather of Richard III. of England, with whom he is often compared. Pedro's actions cost him the loss of most of his subjects, and finally his life at the hands of his bastard brother, Henry, who had somewhat the same characteristics though in a lesser degree.
Henry established a new line under the title of Henry II. His own origin was, probably, without distinction on his mother's side, and this is one of the four successive unions now to be discussed which can not in any way be used to illustrate the perpetuation of genius. It is also at this time that we find four incompetent rulers, three of whom are described as imbeciles. This is very significant, though I do not see that the imbecility of John I. of Castile is at all properly accounted for by heredity. Mere weakness, cruelty and licentiousness might be well expected, but not imbecility in the medical sense of the word, and I do not know that this medical sense is implied by the historians when using this term in connection with these persons. The origin of the well-known insanity in the Spanish and Austrian houses, perpetuated over thirteen generations and involving more than a score of individuals, is a very interesting question. It cannot be traced with certainty prior to Isabella, the Queen of John II. of Castile. This Isabella was out and out insane, according to the celebrated English alienist, Ireland,[4] and from her, onward, the insanity passed along in one form or another by the very intermarriages which their pride and political motives caused them to arrange, with the intended idea of making permanent their world power, but with the inevitable result of losing that same prestige by placing it in the hands of the unfortunate children whose inheritance was necessarily mental weakness as the result of such unwise wedlocks.
Without taking up the characters separately we need only look at the chart to get a clear idea of the predetermined cause which lead to the peculiar characters who were foremost during this epoch and to see how perfectly natural it was that there should have been some exhibiting the most depraved characteristics while others, like Ferdinand and Isabella, were fortunate enough to inherit the genius which we see is likewise present in a conspicuous degree. The chart shows that Isabella might be expected to be greater than Ferdinand. She had five elements of genius in her pedigree, being through intermarriage twice the great-granddaughter of John of Gaunt, Duke of Lancaster, one of the great men of his day, and John the Great of Portugal appears twice in the pedigree for the same reason. She was also the granddaughter of Henry III. of Castile, who was a model of all that a king should be. Both Ferdinand and Isabella possessed high ability and character, as can be fully confirmed by consulting any history of the times. They were married through personal choice of the queen, as she appreciated in Ferdinand a man worthy of her love. Nothing could be better for the welfare of the country than that two such able rulers should sit upon the throne at once. But Ferdinand was her second cousin and the descendant of weak or perfidious rulers.
We now see that the children of this union have two estimable parents but they have a remarkably bad lot of grandparents, and back of this we find the worst weaknesses in some while in others is much ability of a very high sort. We should not expect a child to be ordinary. On the other hand the most extraordinary is only to be expected. The two descendants whom we have here to consider are Joanna and her son, the Emperor Charles Quint. The former got the insanity and imbecility, the latter the genius and a touch of the neurosis as well. Every one in this region of the chart fills in a link in a way to be expected and is readily and perfectly explained.
The pedigree of Philip the Fair, who married this mad Joanna, contains the great fighting qualities of the old kings, tremendous energy, and great ruling functions without a bit of the insanity and weaknesses shown in Castile and Leon. This was the famous marriage that placed the Hapsburgs on the highest pinnacle of power a marriage almost certain to produce genius and as certain to produce some descendants whose heritage would be imbecility or weakness, or whose ambition would only lead them to mad extremes. Both the genius and the insanity appear quite as we should expect, and it is to be noted that the neuroses are now seen to appear for the first time in the Hapsburgs, since they are introduced into this family through the blood of Castile and Leon; and furthermore these afflictions appear at once. From this time onward, insanity is rampant. Why should it have remained so and not have diminished through reversion to the mean? Let us look at the subsequent marriages.
The Emperor Charles V. married Isabella, a daughter of Emanuel the First of Portugal, a mediocre king; and an inbred descendant of the great Portugal house. Her mother was a sister of the mad Joanna and granddaughter of John the imbecile, and Isabella, the insane. So this may be called a pretty close intermarriage, as well as an unadvisable one. The Emperor himself was somewhat eccentric. He was cruel as well as inordinately ambitious, but he was withal a great ruler. Towards the latter part of his life he was especially subject to melancholia. The effect of this unwise marriage was of course to perpetuate these traits. We shall see under Austria how the evil qualities were much less conspicuous and how the influence of outside stock made itself felt in counteracting these undesirable perversions. The descendants bred true to kind, and in all regions of the chart we find the vicious qualities appearing in places where we should most expect them, that is, in places where the intermarriages were closest.
It is a matter of common belief that intermarriage alone is a cause of insanity, therefore, it is worth while to consider that here it is merely perpetuating what already exists and cannot be considered the cause of its beginning. In a later chapter this question will be more fully discussed. It was not yet time for the intellectual qualities to entirely disappear, for Charles Quint had two descendants who are celebrated historical characters. These were Don John of Austria and Alexandre Farnese, both of whom so distinguished themselves by virtue of their great abilities that abundant material can be found in any biographical dictionary to confirm the belief that these men were geniuses. His grandson, Albert Archduke of Austria and Governor of the Netherlands (son of Maximilian II.), was a man of high though not the highest talents. There are three others worth mentioning in this connection. The Archduke Charles, his great-grandson, is spoken of in this way:
He died in the twenty-sixth year of his age of a malignant fever. He was deeply regretted by the nation, being universally considered a prince of extraordinary merit and endowments. . . active and ambitious spirit.[5]
The Cardinal Ferdinand, his brother, was a man of equal mark and merit, who as Governor of the Netherlands there warded off Spain's impending disasters until his untimely death brought a great loss upon his country. He is spoken of in the highest terms by all historians, especially for his bravery, prudence and magnanimity.[6] Don John, a natural son of Philip IV., also was the possessor of great qualities.
It is noteworthy that three of these six were illegitimate, and that the greatest, Alexandre Farnese and Don John, were of these three. It seems probable that owing to the extremely high-strung and unstable condition of nearly all the members of the family, a union with an entirely different class of people would be of advantage to the health and balance of mind. It was not so much that ability was needed as a toning down of the excessiveness that had been manifesting itself in so many ways.
Of these mentioned, one was a son, two were grandsons, two were great-grandsons and one was a great-great-grandson. The most eminent were the closest related, and it is probable that the number of more distant relationship would not have been so large (as in the case of Galton's tables) but for the close intermarriages, giving the genius a chance to be further perpetuated than would ordinarily have been the case.
The kings of Spain never again had anything of the renowned abilities of Isabella, Charles, or the celebrated warriors of early days like Alfonso VI. (1126), James I. of Aragon, or John the Great of Portugal. It might have been that some of the eldest sons should have inherited the great qualities instead of little ones, but Spain may be said to have been unlucky in this, and as the next three, Philip II., III. and IV., did not get the best, in each succeeding generation the chances of its reappearing become more and more dim until the probabilities of a reversion were entirely unlikely.
Let us now notice the neuroses in this same region. The amount of insanity, or at least marked deviation from the normal, should be strikingly conspicuous owing to the intermarriages. It is so. Philip II. is described in this way by Motley.
He was believed to be the reverse of the Emperor (his father). Charles sought great enterprises, Philip would avoid them. . . . The son was reserved, cautious, suspicious of all men and capable of sacrificing a realm from hesitation and timidity. The father had a genius for action, the son a predeliction for repose. His talents were in truth very much below mediocrity. A petty passion for contemptible details characterized him from youth. . . diligent with great ambition. . . . He was grossly licentious and cruel.[7]
Philip II. evidently took after his grandmother, Joanna 'the Mad,' who was weak and melancholic, and perhaps also his grandfather, the feeble Philip 'the Fair' of Austria. He did not resemble either his father or mother. Both of Philip's marriages were from the biological point of view extremely unwise, the first being worse than the second, as Mary was a daughter of John III. of Portugal, who was weak and bigoted, in fact, a man much like Philip himself. Philip's wife was doubly related to him, being both first and second cousin, and this relation coming by way of the insane ancestors. So what wonder that the child of this union, Don Carlos, should have been one of the most despicable and unfortunate specimens of humanity in modern history?
The following pedigree of Don Carlos shows his chances of inheriting the inbred neurosis:
John, ${\displaystyle =}$ Isabel, imbecile, insane. John, ${\displaystyle =}$ Isabel, John, ${\displaystyle =}$ Isabel, John, ${\displaystyle =}$ Isabel, imbecile, insane. imbecile, insane. imbecile, insane. Isabella. Isabella. Isabella ${\displaystyle =}$ Ferdinand Isabella. Johanna. ${\displaystyle =}$ Philip, Johanna, Emanuel I., ${\displaystyle =}$ Mary. 'mad.'weak. 'mad.' weak. Emanuel I, ${\displaystyle =}$ Mary. Chas. V., ${\displaystyle =}$ Isabel. Chas. V., ${\displaystyle =}$ Isabel. weak melancholic. melancholic. John III. ${\displaystyle =}$ Catherine. weak. Mary. ${\displaystyle =}$ Philip II., morose, cruel. Don Carlos, madly depraved and cruel.
Here if there had been many children instead of one I should say that in a rough way extreme degeneration would be likely to be present in somewhat more than half the number. It is significant to notice that the two worst characters in all modern royalty, Don Carlos and Peter 'the Cruel,' are also the two who have the worst pedigrees.
Don Carlos, it will be observed, though a great-grandson of Joanna 'the Mad' and Philip 'the 'Weak,' has almost exactly the same blood. Ferdinand and Isabella extend right across the chart. Emanuel I. takes his origin from a root almost identical with both Ferdinand and Isabella, and this root we have seen is the reign in which the insanity must have originated.
I do not see how Philip could have planned it better if he had wanted this son whom he really so much despised.
The son by Philip's only other productive marriage was Philip III. Here again we have a close inbreeding, though through a somewhat "better route. Anne was his own niece and even more closely related than a niece, as her father was Philip's own cousin. The only outside blood was distant, by Ladislaus. King of Hungary. This blood was presumably healthy though not distinguished. Philip was a man of very low mental calibre (about grade 2). Hume says he was not a fool, though Prescott calls him the imbecile grandson of Charles V. The melancholic tendency a]3peared in him, though not to the extent of insanity. Ireland sums the whole situation up thus: "Philip was a man of feeble and indolent character, governed by worthless favorites. The power of Spain declined as rapidly as it had risen."[8]
This is the same story over again in the history of Spain. We find the condition of the country reflecting the character and strength of the monarch. Many times through the course of the centuries she had been blessed apparently through heredity by great and able rulers and her course had been hampered only here and there by the presence of a weak one; but all this from the great Emperor Charles's day onward was to be just reversed by the same almost unerring law of descent. I do not mean that a weak monarch might not exceptionally, even in those early days, reign over a glorious period. The greatness of Portugal lasted through the reigns of two weak sovereigns, Emanuel I. and John III., though the germs of decay were clearly at work. Likewise Spain's glory had its greatest outward manifestation of splendor in the time of Philip II. whose acts were nearly all injudicious. The increment of one period made itself felt in a later. Still in general the countries prospered only under the great leaders.
Philip was not as bad as Carlos, nor was his pedigree quite as hopeless. The roots from which he sprung were practically all from the weak John II. of Castile and Isabella the insane. In this he is like Carlos. However, it is to be noted that three of his immediate ancestors were excellent characters, though not especially gifted. These are represented as such on the chart. Ferdinand I. and Maximillian II. will be taken up under Austria.
The marriage of Philip III. was no more fortunate. His queen was the daughter of Charles, Duke of Styria, who was evidently not the possessor of great talents, as I have never been able to find a reference to his character or achievements. He was the son of the same Ferdinand I. Charles's wife was of 'obscure' origin. Thus the neurosis was perpetuated and furthermore the genius was not maintained. However, very high ability still cropped out in two of Philip the Third's many children. These were Charles and Ferdinand, already treated. But unfortunately the crown did not fall to either of them, and so we have an artificial election of the worst. The reign of Philip IV., who became king, was a period of great misfortune. His only good qualities were his love of.art and literature, and perhaps his best bequests to the world are the famous portraits of himself and family painted by the great Velasquez.
Besides being weak and foolish he was 'far inferior to his predecessor in purity of life.' "Spain might still have regained the lofty station she once held in the rank of kingdoms if at the succession of Philip IV. a wise and energetic monarch had ascended the throne."[9]
By his marriage with his niece, Maria Anne, he succeeded in having two degenerates, Prosper, who had convulsive fits from his birth and died young, and Charles II., who became king.
Charles was the last of the Spanish-Austria line and in him all its weaknesses were combined. Feeble in mind and body, he was grossly superstitious and so ignorant that he did not know the names of some of his own towns and provinces.[10]
By his marriage with Elizabeth, who was a great-granddaughter of Ferdinand I., and consequently partially of the same tainted stock, Philip IV. had one licentious weakling out of three children. This child, Don Balthaza, the subject of the famous Velasquez recently acquired by the Boston Museum of Fine Arts, was so dissipated that he brought himself to his grave before he had reached his seventeenth year.[11] Another of the three, Maria Theresa, who married Louis XIV., was extremely stupid.
Charles V. did not have any posterity and the war of the Spanish succession deluged Europe with blood, but the Austrian House did not reach its end through any sterility caused by inbreeding, for in spite of the inbreeding it is noteworthy that they had large families, quite as large as elsewhere. Many of the children died in infancy, but the wives were not sterile. It can not be argued that inbreeding was a cause of the large percentages of early deaths, since we have also to deal with the question of insanity and neuroses. All sorts of mental and physical defects, such as are known to be frequently found in families with an insane diathesis, may have been the cause.
This completes the study of what may be conveniently classified as two groups. First (a) the old Castile, Leon and Aragon, families; second, (b) the Hapsburgs in Spain. Let us first review the characteristics of the former. This subgroup (a) contains 97 names. The character and ability of the 97 have been found in 63 cases with sufficient fullness for the purpose in hand. The other 34 must be marked 'obscure.' They are valuable in a negative way. There were about 39 of the total who had very marked ability, evidently considerably above the average of kings and queens and such as should place them in grades 7 to 10 of the standard here used. This percentage of over one in three is a high one, but the most striking fact is that out of the thirty actual sovereigns on the thrones of Castile, Leon and Aragon, no less than twenty-two are of this group. This I attribute in part to the constant struggle between the rival families, between brothers of the same family and other close relatives, in their jealous greed for power and domain, thus keeping up a struggle for existence, capable of showing itself in results, and partly to fortuitous chance endowing the heir to the throne with the qualities of the stronger rather than the weaker of his ancestry. The number of weak or indolent is correspondingly small, though high temper, jealousy and ambition are present in nearly all.
I find about six persons to whom the terms feeble, characterless and indolent, are applied. Two of these, Andrew II., King of Hungary, and Ferdinand IV., of Castile, are apart from the others. The remaining four are very closely related, being father, son, nephew and his son. These are John I., John II., Henry IV. of Castile and Ferdinand I. of Aragon.
The family had already existed twelve generations before these characteristics appeared in it. In the tenth generation one of the greatest names is found in Ferdinand IV., and even in the nineteenth and twenty-first generations some of the best and most vigorous and ambitious appear in Ferdinand, Isabella and the Emperor Charles, all of whom were the descendants of the privileged few with a pedigree practically entirely of this sort extending back through more than twenty generations on all sides, and including many thousands of nobles titles.
These names which close the group are as great as those which opened it. How can this be if the assumption of rank and power is to lead to degeneration? It may be argued that the necessity for action in these times of incessant strife obliged the individuals to be energetic and so the characters were the product of their times, but we have seen that the selection alone would produce this. Furthermore, against the environment explanation we must remember the great number of able and vigorous men who appear much later in history in other countries and the descendants of forty instead of twenty generations of blue-bloods. The modern Saxe-Coburg-Gotha chart is almost entirely free from weaknesses and indolence.
The insanity apparently starts in Peter the Cruel. We have seen how his character might well have been the result of a combination of a large number of cruel persons. This insanity continually reappeared in Spain, where one finds it most rampant. It occasionally appeared in Austria, where it was less often introduced. It probably was also the origin of the Plantagenet neurosis, the full history of which I have not yet had time to study with any completeness.
1. Conf. Havelock Ellis, 'Study of British Genius,' Popular Science Monthly. (Geniuses have come from the upper classes.)
2. McMurdo's 'History Portugal,' three volumes, London, 1899.
3. Watts' 'The Christian Recovery of Spain.'
4. Ireland, 'Blot on the Brain.'
5. Dunlop, 'Mem. Spain.'
6. Dunlop, 'Mem. Sp.,' I., 183, also Hume's 'Spain.'
7. Motley's 'Rise Dutch Rep.,' Vol. I., p. 142.
8. Ireland, 'Blot on the Brain,' p. 156.
9. Dunlop, 'Mem. Spain,' Vol. I., p. 23.
10. Young, 'Hist. Netherlands,' p. 611.
11. Dunlop, 'Mems. Spain,' Vol. I., p. 378
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 12, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5565914511680603, "perplexity": 3994.8150920593775}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500151.93/warc/CC-MAIN-20230204173912-20230204203912-00719.warc.gz"}
|
http://www.gradesaver.com/textbooks/math/algebra/algebra-a-combined-approach-4th-edition/chapter-2-section-2-5-formulas-and-problem-solving-exercise-set-page-146/24
|
## Algebra: A Combined Approach (4th Edition)
h = $\frac{S - 4lw}{2w}$
Solve for h by isolating it to one side: S = 4lw + 2wh Subtract 4lw: S - 4lw = 2wh Divide both sides by 2w: $\frac{S - 4lw}{2w}$ = h
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5664367079734802, "perplexity": 7394.30412837452}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945604.91/warc/CC-MAIN-20180422135010-20180422155010-00305.warc.gz"}
|
http://math.stackexchange.com/users/4175/andry?tab=activity&sort=comments
|
Andry
Reputation
396
Top tag
Next privilege 500 Rep.
Access review queues
Badges
2 13
Newest
Impact
~10k people reached
• 0 posts edited
• 0 helpful flags
• 32 votes cast
# 52 Comments
Feb 14 comment Infinite self-convolution for a function I had sort of the same feeling... Would you be able to link me to a proof or something? Thank you Mar 9 comment Queue system with queue-triggered input process I labeled it as homework, thank you for telling... Sep 10 comment Self multiplication of a CDF degenerates into a Dirac Delta? Thank you very much for your answer, it could provide many useful details. I checked the another one as the correct answer because it simply makes more explicit what was questioned... but your answer provides good Math background. Thank you!!! Apr 13 comment Infinite self-convolution for a function @StefanSmith: Yeah, I am also getting contradictory results. It is not easy to manage this thing here... Furthermore, nobody asked me to solve this specific problem, actually it is just something that I need to do in order to achieve another target. Apr 13 comment Infinite self-convolution for a function Yes! I thought it was the same right? Apr 12 comment Infinite self-convolution for a function Variance increases everytime... and it seems not reaching a stable value... Mar 20 comment Problems getting transformation function from source and destination random variables knowledge when handling the discrete case Thankyou user65384 for your answer, but I am afraid to say that you did not get the point. I do not want to know how to get $F_Y(y)$, in my case I suppose I already have it. In your example you use $g(\cdot) = ln(\cdot)$ but in my question I pointed clear that $g$ must be $g(\cdot) =F_Y^{-1}(F_X(\cdot))$. In particular I need to consider the case where $F_Y(y)$ is a stair function that, so, cannot be inverted! Mar 19 comment Deriving the transformation function of a random variable from the original and the final distributions Just a question, how can I make the transformation when X and Y are discrete? $F_Y^{-1}(y)$ cannot be done... Feb 18 comment Deriving the process of successfully consumed requests from the process of request-producers and the process of request-consumers At the moment I am thinking how to re-write the question in a better way... Need some time sorry... you are right btw... Feb 18 comment Deriving the process of successfully consumed requests from the process of request-producers and the process of request-consumers This is what I am trying to understand... I provided some initial data, but I am trying to device a way to get this thing done... In my question I just wanted to know about a possible approach using the two quantities I introduced (say the two probabilities). I am aware that the question is getting a little vague... I'll try to edit it it... Feb 18 comment Deriving the process of successfully consumed requests from the process of request-producers and the process of request-consumers @joriki: It is one of the things I am asking as well... Feb 17 comment Deriving the process of successfully consumed requests from the process of request-producers and the process of request-consumers I had a feeling... But I cannot figure out how to adapt a death-birth model to this scenario. Btw, gonna check it out, thank you very much! Jan 27 comment Mean of iid random variables, problem understanding a passage in a paper OK, now I think I understand... he is treating $r$ which is a probability, like a random variable... Jan 27 comment Mean of iid random variables, problem understanding a passage in a paper You know I have a problem here... Sure you are right, but what about $P(r)$? What does it mean? It is a pdf, well, but here the mean is considered on a continuos r.v. However I would say that $P(r)$ is the number of edges whose $r_{i,j}$ is $r$ (say) on the total number of edges... I cannot figure what P(r) represents... Jan 27 comment Mean of iid random variables, problem understanding a passage in a paper I am providing a link to the paper: repositories.lib.utexas.edu/bitstream/handle/2152/13376/… Please refer to page 11 and you'll find it. Jan 27 comment Mean of iid random variables, problem understanding a passage in a paper ABSOLUTELY sure about this... But if you think this is wrong, please tell me... Jan 27 comment Mean of iid random variables, problem understanding a passage in a paper Thankyou... I was typing fast and did not realize I used gt and lt... Oct 9 comment Do hashing functions have a probability distribution calculated for their output? @MJD: I am trying... thanks for you message :) Jun 14 comment Is there any closed-form expression to calculate each element of the inverse of a matrix? Yeah by closed-form expression I mean a set of rules that involves elementary operations... For example, cofactors are calculated using minors, If I wanted to replace the cofactor term in the relation with an expression, how would it be... What I'd like to reach is a final formula not involving more steps to calculate the final quantity. Maybe it is not possible, just want a confirmation of this if possible. Jun 4 comment Eigenvalues of a quasi-stochastic matrix Ah, yeah, thankyou :)
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8863295912742615, "perplexity": 510.2772335273594}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398456289.53/warc/CC-MAIN-20151124205416-00301-ip-10-71-132-137.ec2.internal.warc.gz"}
|
https://indico.cern.ch/event/645835/
|
# NEUTRINO PLATFORM WEEK
29 January 2018 to 2 February 2018
CERN
Europe/Zurich timezone
Fundamental questions in neutrino physics such as the existence of leptonic CP violation, the Majorana nature of neutrinos or the origin of neutrino masses and mixings could have essential implications in other areas of high energy physics, from collider physics to indirect searches for new physics, as well as in our understanding of the universe. This workshop aims at bringing together at CERN neutrino experts to discuss recent progress in this area. Although the main focus of the workshop will be neutrino oscillation physics and BSM physics related to the oscillation program, the topics to be discussed include:
• Prospects on measuring leptonic CP violation and the neutrino mass matrix
• Non-standard searches in future neutrino experiments
• Neutrinoless double-beta decay
• Charged lepton flavour violation, lepton EDMs
• Neutrino physics in colliders
• Neutrino masses and theories of flavour
• Neutrinos in cosmology: neutrino DM and DE connections, baryogenesis
• Neutrinos in astrophysics: origin of PeV neutrinos, SuperNova neutrinos, neutrinos and GW
This workshop is part of the CERN Theory Neutrino Platform activities and will be carried out in coordination with the Fermilab Theory group.
The workshop will also be held in connection with the DUNE collaboration meeting and will consist of joint and separated sessions which will include selected overview talks and ample time for discussions. Talks are by invitation only, if you are interested in giving a talk please contact the organizers.
Registration: The attendance will be limited to around 80 people. Applications to attend will be open until late December, 2017 but the sooner you register the better.
Organisers : G. Barenboim, P. Hernandez, P. Huber, S. Parke, S. Pascoli and T. Schwetz
Starts
Ends
Europe/Zurich
CERN
4/3-006 - TH Conference Room
Go to map
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8501548767089844, "perplexity": 3749.726466734817}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704792131.69/warc/CC-MAIN-20210125220722-20210126010722-00469.warc.gz"}
|
https://projecteuclid.org/search_result?type=index&q.a.subject=49J45&q.f.author=Chang,%20Mao-Sheng
|
## Keyword »
• You have access to this content.
• You have partial access to this content.
• You do not have access to this content.
## Search results
Showing 1-2 of 2 results
Select/deselect all
• Export citations
### MINIMIZERS AND GAMMA-CONVERGENCE OF ENERGY FUNCTIONALS DERIVED FROM $p$-LAPLACIAN EQUATION
Chang, Mao-Sheng, Lee, Shu-Cheng, and Yen, Chien-Chang
Taiwanese Journal of Mathematics Volume 13, Number 6B (2009), 2021-2036.
Journal article
### The characterization of Riemannian metric arising from phase transition problems
Chang, Mao-Sheng, Lee, Shu-Cheng, and Yen, Chien-Chang
Tohoku Mathematical Journal Volume 61, Number 3 (2009), 333-347.
Journal article
Select/deselect all
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2791388928890228, "perplexity": 13206.274391444562}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986675316.51/warc/CC-MAIN-20191017122657-20191017150157-00029.warc.gz"}
|
http://openstudy.com/updates/507be22ee4b07c5f7c1f4095
|
Got Homework?
Connect with other students for help. It's a free community.
• across
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
55 members online
• 0 viewing
sauravshakya Group Title What is the n th term of the series: 1+1+2+3+5+8+13+21+... 2 years ago 2 years ago Edit Question Delete Cancel Submit
• This Question is Closed
1. Coolsector
Best Response
You've already chosen the best response.
0
A(n) = A(n-1) + A(n-2) where a1 = 1 and a2 = 1 ?
• 2 years ago
2. sauravshakya
Best Response
You've already chosen the best response.
0
In terms of n
• 2 years ago
3. estudier
Best Response
You've already chosen the best response.
0
Looks familiar....
• 2 years ago
4. Samkeyv
Best Response
You've already chosen the best response.
0
N th term is given by the formulae A(n)=A(n+1)+A(n+2) N=(n+1)+(n+2)
• 2 years ago
5. sauravshakya
Best Response
You've already chosen the best response.
0
???
• 2 years ago
6. sauravshakya
Best Response
You've already chosen the best response.
0
I know |dw:1350296451209:dw|
• 2 years ago
7. sauravshakya
Best Response
You've already chosen the best response.
0
But what in terms of n only.
• 2 years ago
8. Coolsector
Best Response
You've already chosen the best response.
0
it's funny that there is a question about the golden ratio now
• 2 years ago
9. mukushla
Best Response
You've already chosen the best response.
4
it is fibonacci...
• 2 years ago
10. sauravshakya
Best Response
You've already chosen the best response.
0
Yes
• 2 years ago
11. kenttknguyen
Best Response
You've already chosen the best response.
0
In terms of n ----> FIBOnnACCI
• 2 years ago
12. mukushla
Best Response
You've already chosen the best response.
4
$F_n-F_{n-1}-F_{n-2}=0 \ \ \ n\ge2$ setting up characterestic equation gives$\lambda^2-\lambda-1=0$wchich gives$\phi_1=\frac{1+\sqrt{5}}{2}$$\phi_2=\frac{1-\sqrt{5}}{2}$and so$F_n=A\phi_1^n+B\phi_2^n$and all u need is finding A and B using the values of $$F_0$$ and $$F_1$$
• 2 years ago
13. mukushla
Best Response
You've already chosen the best response.
4
finally
• 2 years ago
1 Attachment
14. sauravshakya
Best Response
You've already chosen the best response.
0
@mukushla how |dw:1350297426979:dw|
• 2 years ago
15. sauravshakya
Best Response
You've already chosen the best response.
0
oh I get it now...... thanx
• 2 years ago
16. Coolsector
Best Response
You've already chosen the best response.
0
@sauravshakya may you explain how ?
• 2 years ago
17. Coolsector
Best Response
You've already chosen the best response.
0
i thought i got it but i realized that i was wrong
• 2 years ago
18. Coolsector
Best Response
You've already chosen the best response.
0
ok got it .. nvm :)
• 2 years ago
• Attachments:
See more questions >>>
spraguer (Moderator) 5→ View Detailed Profile
23
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9998409748077393, "perplexity": 24606.31999976014}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400375630.34/warc/CC-MAIN-20141119123255-00012-ip-10-235-23-156.ec2.internal.warc.gz"}
|
http://math.stackexchange.com/questions/138459/kendall-notations-general-distribution-what-does-that-mean
|
Kendall notation's “General distribution”, what does that mean?
The first and second parameters for the Kendall's notation may have a G value, which stands for General distribution, see here.
But what does that mean? What is a general distribution?
-
Queueing theory uses Kendall's notation, as you described. There are three components describing the behavior of a queue:
1. The customers arriving for service, which is usually described by a Poisson process (random arrivals), but sometimes by non-Poisson processes or even deterministic arrivals rates
2. The time required to service each customer, which is usually described by a probability distribution, e.g. exponential or gamma (Erlang) distributed service times, possibly deterministic though.
3. The number of service providers, a positive integer value.
Generally general case
In the most general case, the behavior of a queue would be described as G/G/c where G is an unknown rate of customer arrivals, with an unknown service time distribution, G, (which is NOT necessarily the same as the process that characterizes arrivals), and c is an integer value greater than or equal to one.
In such general terms, it doesn't make much sense! It is more accessible to start with a specific queue behavior, for example, the performance of a fixed hard disk-drive (i.e. DASD, or direct access storage device).
M/G/1
DASD performance is modeled most accurately as an M/G/1 queue. M means that customers, or requests for disk access, behave according to a Poisson process. This is referred to as a stochastic, or Markov process, thus the use of "M". The rate at which the disk drive is able to meet these requests for service is unknown. Since job service times can have an arbitrary distribution, this is designated by "G" for "general". Finally, if there is only one disk-drive, c = 1.
M/M/c
Let's consider another example, where customers arrive randomly (according to a Poisson process), with exponentially distributed service times. There are multiple servers. This would be described as an M/M/c queue.
This is the typical situation at Walmart, during the night shift (with few cashiers on duty), or at a bank with tellers, or when making a phone call for customer support. Customers arrive randomly (M). The time required to check out their groceries or answer their question is also random (M) e.g. when grocery queues don't have a "10 items or less" configuration for some checkers. Meanwhile, there are a fixed number of cashiers or telephone support staff on duty, we'll say five. This would be an M/M/5 queue.
-
So G is just a "placeholder" for another distribution, right? – Marco A. Apr 29 '12 at 17:03
Sometimes. I guess I would describe G as the general case where you just don't know what sort of service time distribution to expect. Unknown isn't the same as random, of course (that confused me when I first learned about this). Also, there are methods for characterizing a G distribution, because you might not EVER be able to determine what distribution would take the place of G. – Ellie Kesselman Apr 30 '12 at 20:17
Thank you for your help! – Marco A. Apr 30 '12 at 22:05
@Paul You are most welcome! I hope I actually answered your question. I could try to describe HOW one actually handles a G distribution, as there are ways, but that wasn't really what you asked. Or is it? I did coursework in queuing theory, used it at my old job as a performance engineer for IBM... DASD, naturally ;o) and really like thinking about queues, have friends who like queues too, so do not hesitate to ask! – Ellie Kesselman May 1 '12 at 5:26
oh I thought that this stuff wasn't even used in real life jobs.. I thought it was merely theoretical, but seems that I'm wrong! I'm okay with the G general theory since I'm not required to study it for now (I'm following an academic course), I just wanted to understand what the G meant and you helped me in that. Do you have any experience with multi-class queues too? – Marco A. May 1 '12 at 9:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5738135576248169, "perplexity": 873.0695140435696}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929256.27/warc/CC-MAIN-20150521113209-00314-ip-10-180-206-219.ec2.internal.warc.gz"}
|
https://web.cs.dal.ca/~arc/publications/1-11/
|
Scalable parallel computational geometry for coarse grained multicomputers. F. Dehne, A. Fabri, and A. Rau-Chaplin Abstract: We study {\em scalable parallel computational geometry} algorithms for the {\em coarse grained multicomputer} model: $p$ processors solving a problem on $n$ data items, were each processor has $O(\frac{n}{p}) \gg O(1)$ local memory and all processors are connected via some arbitrary interconnection network (e.g. mesh, hypercube, fat tree). We present $O(\frac{T_{sequential}}{p} + T_s(n,p))$ time scalable parallel algorithms for several computational geometry problems. $T_s(n,p)$ refers to the time of a global sort operation. Our results are independent of the multicomputer's interconnection network. Their time complexities become optimal when $\frac{T_{sequential}}{p}$ dominates $T_s(n,p)$ or when $T_s(n,p)$ is optimal. This is the case for several standard architectures, including meshes and hypercubes, and a wide range of ratios $\frac{n}{p}$ that include many of the currently available machine configurations. Our methods also have some important {\em practical} advantages: For interprocessor communication, they use only a small fixed number of one global routing operation, global sort, and all other programming is in the sequential domain. Furthermore, our algorithms use only a small number of very large messages, which greatly reduces the overhead for the communication protocol between processors. (Note however, that our time complexities account for the lengths of messages.) Experiments show that our methods are easy to implement and give good timing results. paper.pdf paper.ps Home * Publications
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6674001216888428, "perplexity": 1045.247800296872}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00208-ip-10-147-4-33.ec2.internal.warc.gz"}
|
http://tex.stackexchange.com/questions/13270/a-package-template-using-xkeyval/13275
|
# A package template using xkeyval?
I would like to write a package offering a number of commands. The package should accept options, and some of these options should be available as command options. Usage should be as follows:
...
\usepackage[optA=val1,optB=val2]{mypackage}
\begin{document}
\mycommand[optB=val3]
...
It seems as if the xkeyval package can do that, but I am not sure how this should be done exactly.
-
Let me provide a simple (but full) example:
\ProvidesPackage{myemph}[2011/03/12 v1.0 a test package]
\providecommand\my@emphstyle{\em}
% Note that the argument must be expandable,
% or use xkvltxp package before \documentclass (see manual of xkeyval)
\RequirePackage{xkeyval}
\DeclareOptionX{style}{%
\def\my@emphstyle{\csname my@style@#1\endcsname}}
% predefined styles
\providecommand\my@style@default{\em}
\providecommand\my@style@bold{\bfseries}
\ProcessOptionsX
% For simple key-value commands, keyval would suffie
\define@key{myemph}{code}{%
\def\my@emphstyle{#1}}
\define@key{myemph}{style}{%
\def\my@emphstyle{\csname my@style@#1\endcsname}}
\newcommand\setemph[1]{%
\setkeys{myemph}{#1}}
\renewcommand\emph[1]{%
{\my@emphstyle #1}}
\endinput
Test file:
\documentclass{article}
\usepackage[style=default]{myemph}
\begin{document}
Something \emph{important}
\setemph{style=bold}
Something \emph{important}
\setemph{code=\Large\sffamily}
Something \emph{important}
\end{document}
-
I have read this article en.wikibooks.org/wiki/LaTeX/Macros which lead me to this post. Can you explain what \providecommand\my@emphstyle{\em} does? I know the command \providecommand but I cant find what the @means. Also, if I do a regular short document and including just this line in the preamble, my document wont compile. – Adam Feb 4 '14 at 23:19
@Adam: You may read \@ and @ in macro names – Leo Liu Feb 5 '14 at 5:21
@Adam: See also, for example, What do \makeatletter and \makeatother do? – Leo Liu Feb 5 '14 at 5:23
Accepting key-value input can be done using a number of packages, and the general approach is the same for all of them: I covered this in some detail in a TUGboat article. Essentially, there are three things you need to do
1. Define one or more keys;
2. Tell LaTeX to process package options using these keys;
In the question, you've mentioned the xkeyval package, with others including kvoptions, pgfkeys (plus pgfopts) and the LaTeX3 keys system l3keys (plus l3keys2e). I have used all of these in the past, and I would favour pgfkeys (if you do not want to use LaTeX3) or the LaTeX3 keys implementation (if you are happy using expl3). The reason is that these two have in my opinion the best overall method for defining keys. (I should add that I wrote most of the LaTeX3 keys system, and this was based initially on the pgfkeys approach.)
As the question asks for an xkeyval approach, I will sketch one out here. First, of course, you'll need to load the package.
\usepackage{xkeyval}
This also loads the parent keyval package, which provides some of the basic mechaism. To define keys, the basic macro is \define@key:
\define@key{mypkg}{optA}{<code for optA>} % 'mypkg' is the 'family' for the keys
\define@key{mypkg}{optB}{<code for optB>}
Within the code, #1 will be the value passed to the key. You can define key types with richer validation (for example Boolean keys) using the various xkeyval macros. As I say, the xkeyval approach is rather dense, and I think 'one question per key type' might be best if you want more information!
The second stage is to process the package options. To do this, in place of \ProcessOptions you use \ProcessOptionsX<mypkg>. This will work through the package options, looking for a defined key for each one and executing the code it finds.
Finally, to define a macro to use the keys after package loading, you need \setkeys:
\newcommand\mymacro[1]{\setkeys{mypkg}{#1}}
What you should notice here is that key-value package options are just keys that are defined when the \ProcessOptionsX macro is used. So it is possible to define keys only as package options, then disable them by doing \defin@key again. It's also possible to define options that are only available after package loading, by simply placing \define@key after \ProcessOptionsX.
-
A small correction: they should be \define@key<mypkg>{optA}{<code for optA>} and \ProcessOptionsX<mypkg> – Leo Liu Mar 12 '11 at 8:01
@Leo: been a while since I used xkeyval: I'll update that. – Joseph Wright Mar 12 '11 at 8:21
@Leo: I'm sure I'm right on \define@key, as the syntax comes from the keyval package! – Joseph Wright Mar 12 '11 at 8:22
sorry I thought it was \DeclareOptionX – Leo Liu Mar 12 '11 at 10:39
An example with some code from the documentation
\NeedsTeXFormat{LaTeX2e}
\ProvidesPackage{mypackage}[12/03/2011]
\RequirePackage{xkeyval}
\DeclareOptionX{parindent}[20pt]{\setlength\parindent{#1}}
\ExecuteOptionsX{parindent=0pt}
\ProcessOptionsX\relax
% etc.
\endinput
\DeclareOptionX is equivalent to (thanks to Ahmed Musa for the correction)
\define@key{mypackage.sty}{parindent}[20pt]{\setlength\parindent{#1}}
-
You have mixed package and class calls. In your case, \DeclareOptionX is equivalent to \define@key{mypackage.sty}{parindent}[20pt]{\setlength\parindent{#1}}. – Ahmed Musa Aug 10 '12 at 12:50
@AhmedMusa Thanks for the correction. – Alain Matthes Aug 10 '12 at 14:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8518829941749573, "perplexity": 2767.102157794213}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701146550.16/warc/CC-MAIN-20160205193906-00143-ip-10-236-182-209.ec2.internal.warc.gz"}
|
https://www.ias.ac.in/listing/bibliography/joaa/N._P._S._MITHUN
|
• N. P. S. MITHUN
Articles written in Journal of Astrophysics and Astronomy
• The Cadmium Zinc Telluride Imager on AstroSat
The Cadmium Zinc Telluride Imager (CZTI) is a high energy, wide-field imaging instrument on AstroSat. CZTI’s namesake Cadmium Zinc Telluride detectors cover an energy range from 20 keV to >200 keV, with 11% energy resolution at 60 keV. The coded aperture mask attains an angular resolution of 17′ over a 4.6× 4.6 (FWHM) field-of-view. CZTI functions as an open detector above 100 keV, continuously sensitive to GRBs and other transients in about 30% of the sky. The pixellated detectors are sensitive to polarization above ∼100 keV, with exciting possibilities for polarization studies of transients and bright persistent sources. In this paper, we provide details of the complete CZTI instrument, detectors, coded aperture mask, mechanical and electronic configuration, as well as data and products.
• Charged Particle Monitor on the AstroSat Mission
Charged Particle Monitor (CPM) on-board the Astrosat satellite is an instrument designed to detect the flux of charged particles at the satellite location. A Cesium Iodide Thallium (CsI(Tl)) crystal is used with a Kapton window to detect protons with energies greater than 1 MeV. The ground calibration of CPM was done using gamma-rays from radioactive sources and protons from particle accelerators. Based on the ground calibration results, energy deposition above 1 MeV are accepted and particle counts are recorded. It is found that CPM counts are steady and the signal for the onset and exit of South Atlantic Anomaly (SAA) region are generated in a very reliable and stable manner.
• A generalized event selection algorithm for AstroSat CZT imager data
The Cadmium–Zinc–Telluride (CZT) Imager on board AstroSat is a hard X-ray imaging spectrometer operating in the energy range of 20–100 keV. It also acts as an open hard X-ray monitor above 100 keV capable of detecting transient events like the Gamma-ray Bursts (GRBs). Additionally, the instrument has thesensitivity to measure hard X-ray polarization in the energy range of 100–400 keV for bright on-axis sources like Crab and Cygnus X-1 and bright GRBs. As hard X-ray instruments like CZTI are sensitive to cosmic rays in addition to X-rays, it is required to identify and remove particle induced or other noise events and select events for scientific analysis of the data. The present CZTI data analysis pipeline includes algorithms for such event selection, but they have certain limitations. They were primarily designed for the analysis of data from persistent X-ray sources where the source flux is much less than the background and thus are not best suited for sources like GRBs. Here, we re-examine the characteristics of noise events in CZTI and present a generalized event selectionmethod that caters to the analysis of data for all types of sources. The efficacy of the new method is reviewed by examining the Poissonian behavior of the selected events and the signal to noise ratio for GRBs.
• Exploring sub-MeV sensitivity of AstroSat–CZTI for ON-axis bright sources
The Cadmium–Zinc–Telluride Imager (CZTI) onboard AstroSat is designed for hard X-ray imaging and spectroscopy in the energy range of 20–100 keV. The CZT detectors are of 5-mm thickness and hence have good efficiency for Compton interactions beyond 100 keV. The polarisation analysis using CZTIrelies on such Compton events and have been verified experimentally. The same Compton events can also be used to extend the spectroscopy up to 380 keV. Further, it has been observed that about 20% pixels of the CZTI detector plane have low gain, and they are excluded from the primary spectroscopy. If these pixels are included, then the spectroscopic capability of CZTI can be extended up to 500 keV and further up to 700 keV with a better gain calibration in the future. Here we explore the possibility of using the Compton events as well as the low gain pixels to extend the spectroscopic energy range of CZTI for ON-axis bright X-ray sources. We demonstrate this technique using Crab observations and explore its sensitivity.
• Characterisation of cosmic ray induced noise events in AstroSat-CZT imager
The Cadmium Zinc Telluride (CZT) Imager onboard AstroSat consists of pixelated CZT detectors, which are sensitive to hard X-rays above 20 keV. The individual pixels are triggered by ionising events occurring in them, and the detectors operate in a self-triggered mode, recording each event separatelywith information about its time of incidence, detector co-ordinates, and channel that scales with the amount of ionisation. The detectors are sensitive not only to photons from astrophysical sources of interest, but also prone to a number of other events like background X-rays, cosmic rays, and noise in detectors or theelectronics. In this work, a detailed analysis of the effect of cosmic rays on the detectors is made and it is found that cosmic rays can trigger multiple events which are closely packed in time (called ‘bunches’). Higher energy cosmic rays, however, can also generate delayed emissions, a signature previously seen in the PICsIT detector on-board INTEGRAL. An algorithm to automatically detect them based on their spatial clustering properties is presented. Residual noise events are examined using examples of Gamma Ray Bursts as target sources.
• Imaging calibration of AstroSat Cadmium Zinc Telluride Imager (CZTI)
AstroSat is India’s first space-based astronomical observatory, launched on September 28, 2015. One of the payloads aboard AstroSat is the Cadmium Zinc Telluride Imager (CZTI), operating at hard X-rays. CZTI employs a two-dimensional coded aperture mask for the purpose of imaging. In this paper, we discuss various image reconstruction algorithms adopted for the test and calibration of the imaging capability of CZTI and present results from CZTI on-ground as well as in-orbit image calibration.
• Sub-MeV spectroscopy with AstroSat-CZT imager for gamma ray bursts
Cadmium–Zinc–Telluride Imager (CZTI) onboard AstroSat has been a prolific Gamma-Ray Burst (GRB) monitor. While the 2-pixel Compton scattered events (100–300 keV) are used to extract sensitive spectroscopic information, the inclusion of the low-gain pixels ($\sim$20% of the detector plane) aftercareful calibration extends the energy range of Compton energy spectra to 600 keV. The new feature also allows single-pixel spectroscopy of the GRBs to the sub-MeV range which is otherwise limited to 150 keV. We also introduced a new noise rejection algorithm in the analysis (‘Compton noise’). These new additionsnot only enhances the spectroscopic sensitivity of CZTI, but the sub-MeV spectroscopy will also allow proper characterization of the GRBs not detected by Fermi. This article describes the methodology of single, Compton event and veto spectroscopy in 100–900 keV combined for the GRBs detected in the first year of operation. CZTI in last five years has detected $\sim$20 bright GRBs. The new methodologies, when applied on the spectral analysis for this large sample of GRBs, has the potential to improve the results significantly and help in better understanding the prompt emission mechanism.
• The AstroSat mass model: Imaging and flux studies of off-axis sources with CZTI
The Cadmium Zinc Telluride Imager (CZTI) on AstroSat is a hard X-ray coded-aperture mask instrument with a primary field-of-view of $4.6^{\circ} \times 4.6^{\circ}$ (FWHM).The instrument collimators become increasinglytransparent at energies above $\sim$100 keV, making CZTI sensitive to radiation from the entire sky. While this has enabled CZTI to detect a large number of off-axis transient sources, calculating the source flux or spectrum requires knowledge of the direction and energy dependent attenuation of the radiation incident upon the detector. Here, we present a GEANT4-based mass model of CZTI and AstroSat that can be used to simulate the satellite response to the incident radiation, and to calculate an effective ‘‘response file’’ for converting the source counts into fluxes and spectra. We provide details of the geometry and interaction physics, and validate the model by comparing the simulations of imaging and flux studies with observations. Spectroscopic validation of the massmodel is discussed in a companion paper, Chattopadhyay et al. (J. Astrophys. Astr., vol. 42 (2021) https://doi.org/10.1007/s12036-021-09718-2).
• # Journal of Astrophysics and Astronomy
Volume 43, 2022
All articles
Continuous Article Publishing mode
• # Continuous Article Publication
Posted on January 27, 2016
Since January 2016, the Journal of Astrophysics and Astronomy has moved to Continuous Article Publishing (CAP) mode. This means that each accepted article is being published immediately online with DOI and article citation ID with starting page number 1. Articles are also visible in Web of Science immediately. All these have helped shorten the publication time and have improved the visibility of the articles.
• # Editorial Note on Continuous Article Publication
Posted on July 25, 2019
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6979942321777344, "perplexity": 3111.604650155949}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303385.49/warc/CC-MAIN-20220121131830-20220121161830-00715.warc.gz"}
|
https://martin-thoma.com/what-is-the-best-programming-language/
|
What is the best programming language?
There is no such thing as a best programming language. Sorry about that, I’ve just thought it would be a catchy title. I would rather choose my tools after I know the problem I have to solve.
Some programming languages are very good at some tasks. I don’t know any that is very good at every task.
This comic illustrates what I mean:
A fair test
Bash
The bash is great for tiny tasks where other programs are involved.
Example
Resizing all jpg-images in a given folder to a maximum resolution of 1600x1600 while maintaining the aspect ratio:
for i in *.JPG;do convert "$i" -resize 1600x1600 "${i%.JPG}-resized.jpg"; done
See Converting Files with Linux for more examples.
Python
Python does a incredibly well job for small problems. I don’t have experience with big projects, but some have been done using Python (see list below). Python is dynamically typed, offers a lot of functions out of the box and is easy to learn and understand. You might argue that Python is executable Pseudocode as it is so easy to read. Additionally, it offers a very neat library for math functions with NumPy.
Examples of Python-Code in applications include:
Java
Java is used in the economy for simple, but huge tasks. It is static and strong typed, has some widely used coding convetions, is easy to learn and has a big library.
Here are some examples for programs written in Java:
• Mars Rovers (source)
• BitTorrent client Vuze
• Sites that have URLs like "*.do", "*.jsp" and "...servlet..." are most likely written in Java.
• Games:
C++
C++ is easy to write and blazing-fast. See Performance of Matrix multiplication in Python, Java and C++.
Some projects done in C++ are:
C
Programs done with C:
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19688090682029724, "perplexity": 1874.1521758322829}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860123151.14/warc/CC-MAIN-20160428161523-00165-ip-10-239-7-51.ec2.internal.warc.gz"}
|
https://www.physicsforums.com/threads/geometric-series-geometric-progression.150174/
|
# Geometric series/geometric progression
1. Jan 5, 2007
### Elec68
I can't figure this out for the life of me:
A geometric series exists with the third term of 8 and the sixth term of 128, what is the geometric series?
2. Jan 5, 2007
### Hurkyl
Staff Emeritus
Have you tried anything at all? What do you know about geometric series?
3. Jan 6, 2007
### HallsofIvy
Staff Emeritus
In particular, do you know the formula for the nth term of a geometric sequence? Use that formula knowing that a3= 8 and a6= 128 to get two equations in the two parameters you need.
Similar Discussions: Geometric series/geometric progression
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8614820241928101, "perplexity": 1812.8221939174584}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218203515.32/warc/CC-MAIN-20170322213003-00503-ip-10-233-31-227.ec2.internal.warc.gz"}
|
https://365go.me/implication-rules-problem-12570/
|
# Implication rules problem
## What is the rule of implication?
In propositional logic, material implication is a valid rule of replacement that allows for a conditional statement to be replaced by a disjunction in which the antecedent is negated. The rule states that P implies Q is logically equivalent to not- or and that either form can replace the other in logical proofs.
## What are the first 4 rules of inference?
The first two lines are premises . The last is the conclusion . This inference rule is called modus ponens (or the law of detachment ).
Rules of Inference.
Name Rule
Simplification p\wedge q \therefore p
Conjunction p q \therefore p\wedge q
Resolution p\vee q \neg p \vee r \therefore q\vee r
## What are the 9 rules of inference?
Terms in this set (9)
• Modus Ponens (M.P.) -If P then Q. -P. …
• Modus Tollens (M.T.) -If P then Q. …
• Hypothetical Syllogism (H.S.) -If P then Q. …
• Disjunctive Syllogism (D.S.) -P or Q. …
• Conjunction (Conj.) -P. …
• Constructive Dilemma (C.D.) -(If P then Q) and (If R then S) …
• Simplification (Simp.) -P and Q. …
• Absorption (Abs.) -If P then Q.
See also Is it a fallacy to do a justified action coincidentally? (i.e. without the right justification)
## What are inference rules and implications?
Introduction. Rules of inference are syntactical transform rules which one can use to infer a conclusion from a premise to create an argument. A set of rules can be used to infer any valid conclusion if it is complete, while never inferring an invalid conclusion, if it is sound.
## What are the two parts of an implication?
In an implication p⇒q, the component p is called the sufficient condition, and the component q is called the necessary condition.
## What is P or not Q equivalent to?
if p is a statement variable, the negation of p is “not p”, denoted by ~p. If p is true, then ~p is false. Conjunction: if p and q are statement variables, the conjunction of p and q is “p and q”, denoted p q.
Commutative p q q p p q q p
Negations of t and c ~t c ~c t
## What are the 8 rules of inference?
Review of the 8 Basic Sentential Rules of Inference
• Modus Ponens (MP) p⊃q, p. ∴ q.
• Modus Tollens (MT) p⊃q, ~q. ∴ ~p.
• Disjunctive Syllogism(DS) p∨q, ~p. ∴ q. …
• Simplication (Simp) p.q. ∴ p. …
• Conjunction (Conj) p, q. ∴ …
• Hypothetical Syllogism (HS) p⊃q, q⊃r. ∴ …
• Constructive Dilemma (CD) (p⊃q), (r⊃s), p∨r.
## What are rules of inference explain with example?
Table of Rules of Inference
Rule of Inference Name
P∨Q¬P∴Q Disjunctive Syllogism
P→QQ→R∴P→R Hypothetical Syllogism
(P→Q)∧(R→S)P∨R∴Q∨S Constructive Dilemma
(P→Q)∧(R→S)¬Q∨¬S∴¬P∨¬R Destructive Dilemma
## What is resolution in rules of inference?
Resolution Inference Rules. Resolution is an inference rule (with many variants) that takes two or more parent clauses and soundly infers new clauses. A special case of resolution is when the parent causes are contradictory, and an empty clause is inferred. Resolution is a general form of modus ponens.
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8474900126457214, "perplexity": 4607.310100051799}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499946.80/warc/CC-MAIN-20230201144459-20230201174459-00779.warc.gz"}
|
http://jdh.hamkins.org/tag/alfredo-roque-freire/
|
# Bi-interpretation in set theory, Oberwolfach Set Theory Conference, January 2022
This was a talk for the 2022 Set Theory Conference at Oberwolfach, which was a hybrid of in-person talks and online talks on account of the Covid pandemic. I gave my talk online 10 January 2022.
Abstract: Set theory exhibits a truly robust mutual interpretability phenomenon: in any model of one set theory we can define models of diverse other set theories and vice versa. In any model of ZFC, we can define models of ZFC + GCH and also of ZFC + ¬CH and so on in hundreds of cases. And yet, it turns out, in no instance do these mutual interpretations rise to the level of bi-interpretation. Ali Enayat proved that distinct theories extending ZF are never bi-interpretable, and models of ZF are bi-interpretable only when they are isomorphic. So there is no nontrivial bi-interpretation phenomenon in set theory at the level of ZF or above. Nevertheless, for natural weaker set theories, we prove, including ZFC- without power set and Zermelo set theory Z, there are nontrivial instances of bi-interpretation. Specifically, there are well-founded models of ZFC- that are bi-interpretable, but not isomorphic—even $\langle H_{\omega_1},\in\rangle$ and $\langle H_{\omega_2},\in\rangle$ can be bi-interpretable—and there are distinct bi-interpretable theories extending ZFC-. Similarly, using a construction of Mathias, we prove that every model of ZF is bi-interpretable with a model of Zermelo set theory in which the replacement axiom fails. This is joint work with Alfredo Roque Freire.
# Bi-interpretation in weak set theories
Abstract. In contrast to the robust mutual interpretability phenomenon in set theory, Ali Enayat proved that bi-interpretation is absent: distinct theories extending ZF are never bi-interpretable and models of ZF are bi-interpretable only when they are isomorphic. Nevertheless, for natural weaker set theories, we prove, including Zermelo-Fraenkel set theory $\newcommand\ZFCm{\text{ZFC}^-}\ZFCm$ without power set and Zermelo set theory Z, there are nontrivial instances of bi-interpretation. Specifically, there are well-founded models of ZFC- that are bi-interpretable, but not isomorphic — even $\langle H_{\omega_1},\in\rangle$ and $\langle H_{\omega_2},\in\rangle$ can be bi-interpretable — and there are distinct bi-interpretable theories extending ZFC-. Similarly, using a construction of Mathias, we prove that every model of ZF is bi-interpretable with a model of Zermelo set theory in which the replacement axiom fails.
Set theory exhibits a robust mutual interpretability phenomenon: in a given model of set theory, we can define diverse other interpreted models of set theory. In any model of Zermelo-Fraenkel ZF set theory, for example, we can define an interpreted model of ZFC + GCH, via the constructible universe, as well as definable interpreted models of ZF + ¬AC, of ZFC + MA + ¬CH, of ZFC + $\mathfrak{b}<\mathfrak{d}$, and so on for hundreds of other theories. For these latter theories, set theorists often use forcing to construct outer models of the given model; but nevertheless the Boolean ultrapower method provides definable interpreted models of these theories inside the original model (explained in theorem 7). Similarly, in models of ZFC with large cardinals, one can define fine-structural canonical inner models with large cardinals and models of ZF satisfying various determinacy principles, and vice versa. In this way, set theory exhibits an abundance of natural mutually interpretable theories.
Do these instances of mutual interpretation fulfill the more vigourous conception of bi-interpretation? Two models or theories are mutually interpretable, when merely each is interpreted in the other, whereas bi-interpretation requires that the interpretations are invertible in a sense after iteration, so that if one should interpret one model or theory in the other and then re-interpret the first theory inside that, then the resulting model should be definably isomorphic to the original universe (precise definitions in sections 2 and 3). The interpretations mentioned above are not bi-interpretations, for if we start in a model of ZFC+¬CH and then go to L in order to interpret a model of ZFC+GCH, then we’ve already discarded too much set-theoretic information to expect that we could get a copy of our original model back by interpreting inside L. This problem is inherent, in light of the following theorem of Ali Enayat, showing that indeed there is no nontrivial bi-interpretation phenomenon to be found amongst the set-theoretic models and theories satisfying ZF. In interpretation, one must inevitably discard set-theoretic information.
Theorem. (Enayat 2016)
1. ZF is solid: no two models of ZF are bi-interpretable.
2. ZF is tight: no two distinct theories extending ZF are bi-interpretable.
The proofs of these theorems, provided in section 6, seem to use the full strength of ZF, and Enayat had consequently inquired whether the solidity/tightness phenomenon somehow required the strength of ZF set theory. In this paper, we shall find support for that conjecture by establishing nontrivial instances of bi-interpretation in various natural weak set theories, including Zermelo-Fraenkel theory $\ZFCm$, without the power set axiom, and Zermelo set theory Z, without the replacement axiom.
Main Theorems
1. $\ZFCm$ is not solid: there are well-founded models of $\ZFCm$ that are bi-interpretable, but not isomorphic.
2. Indeed, it is relatively consistent with ZFC that $\langle H_{\omega_1},\in\rangle$ and $\langle H_{\omega_2},\in\rangle$ are bi-interpretable.
3. $\ZFCm$ is not tight: there are distinct bi-interpretable extensions of $\ZFCm$.
4. Z is not solid: there are well-founded models of Z that are bi-interpretable, but not isomorphic.
5. Indeed, every model of ZF is bi-interpretable with a transitive inner model of Z in which the replacement axiom fails.
6. Z is not tight: there are distinct bi-interpretable extensions of Z.
These claims are made and proved in theorems 20, 17, 21 and 22. We shall in addition prove the following theorems on this theme:
7. Well-founded models of ZF set theory are never mutually interpretable.
8. The Väänänen internal categoricity theorem does not hold for $\ZFCm$, not even for well-founded models.
These are theorems 14 and 16. Statement (8) concerns the existence of a model $\langle M,\in,\bar\in\rangle$ satisfying $\ZFCm(\in,\bar\in)$, meaning $\ZFCm$ in the common language with both predicates, using either $\in$ or $\bar\in$ as the membership relation, such that $\langle M,\in\rangle$ and $\langle M,\bar\in\rangle$ are not isomorphic.
# The axiom of well-ordered replacement is equivalent to full replacement over Zermelo + foundation
In recent work, Alfredo Roque Freire and I have realized that the axiom of well-ordered replacement is equivalent to the full replacement axiom, over the Zermelo set theory with foundation.
The well-ordered replacement axiom is the scheme asserting that if $I$ is well-ordered and every $i\in I$ has unique $y_i$ satisfying a property $\phi(i,y_i)$, then $\{y_i\mid i\in I\}$ is a set. In other words, the image of a well-ordered set under a first-order definable class function is a set.
Alfredo had introduced the theory Zermelo + foundation + well-ordered replacement, because he had noticed that it was this fragment of ZF that sufficed for an argument we were mounting in a joint project on bi-interpretation. At first, I had found the well-ordered replacement theory a bit awkward, because one can only apply the replacement axiom with well-orderable sets, and without the axiom of choice, it seemed that there were not enough of these to make ordinary set-theoretic arguments possible.
But now we know that in fact, the theory is equivalent to ZF.
Theorem. The axiom of well-ordered replacement is equivalent to full replacement over Zermelo set theory with foundation.
$$\text{ZF}\qquad = \qquad\text{Z} + \text{foundation} + \text{well-ordered replacement}$$
Proof. Assume Zermelo set theory with foundation and well-ordered replacement.
Well-ordered replacement is sufficient to prove that transfinite recursion along any well-order works as expected. One proves that every initial segment of the order admits a unique partial solution of the recursion up to that length, using well-ordered replacement to put them together at limits and overall.
Applying this, it follows that every set has a transitive closure, by iteratively defining $\cup^n x$ and taking the union. And once one has transitive closures, it follows that the foundation axiom can be taken either as the axiom of regularity or as the $\in$-induction scheme, since for any property $\phi$, if there is a set $x$ with $\neg\phi(x)$, then let $A$ be the set of elements $a$ in the transitive closure of $\{x\}$ with $\neg\phi(a)$; an $\in$-minimal element of $A$ is a set $a$ with $\neg\phi(a)$, but $\phi(b)$ for all $b\in a$.
Another application of transfinite recursion shows that the $V_\alpha$ hierarchy exists. Further, we claim that every set $x$ appears in the $V_\alpha$ hierarchy. This is not immediate and requires careful proof. We shall argue by $\in$-induction using foundation. Assume that every element $y\in x$ appears in some $V_\alpha$. Let $\alpha_y$ be least with $y\in V_{\alpha_y}$. The problem is that if $x$ is not well-orderable, we cannot seem to collect these various $\alpha_y$ into a set. Perhaps they are unbounded in the ordinals? No, they are not, by the following argument. Define an equivalence relation $y\sim y’$ iff $\alpha_y=\alpha_{y’}$. It follows that the quotient $x/\sim$ is well-orderable, and thus we can apply well-ordered replacement in order to know that $\{\alpha_y\mid y\in x\}$ exists as a set. The union of this set is an ordinal $\alpha$ with $x\subseteq V_\alpha$ and so $x\in V_{\alpha+1}$. So by $\in$-induction, every set appears in some $V_\alpha$.
The argument establishes the principle: for any set $x$ and any definable class function $F:x\to\text{Ord}$, the image $F\mathrel{\text{”}}x$ is a set. One proves this by defining an equivalence relation $y\sim y’\leftrightarrow F(y)=F(y’)$ and observing that $x/\sim$ is well-orderable.
We can now establish the collection axiom, using a similar idea. Suppose that $x$ is a set and every $y\in x$ has a witness $z$ with $\phi(y,z)$. Every such $z$ appears in some $V_\alpha$, and so we can map each $y\in x$ to the smallest $\alpha_y$ such that there is some $z\in V_{\alpha_y}$ with $\phi(y,z)$. By the observation of the previous paragraph, the set of $\alpha_y$ exists and so there is an ordinal $\alpha$ larger than all of them, and thus $V_\alpha$ serves as a collecting set for $x$ and $\phi$, verifying this instance of collection.
From collection and separation, we can deduce the replacement axiom $\Box$
I’ve realized that this allows me to improve an argument I had made some time ago, concerning Transfinite recursion as a fundamental principle. In that argument, I had proved that ZC + foundation + transfinite recursion is equivalent to ZFC, essentially by showing that the principle of transfinite recursion implies replacement for well-ordered sets. The new realization here is that we do not need the axiom of choice in that argument, since transfinite recursion implies well-ordered replacement, which gives us full replacement by the argument above.
Corollary. The principle of transfinite recursion is equivalent to the replacement axiom over Zermelo set theory with foundation.
$$\text{ZF}\qquad = \qquad\text{Z} + \text{foundation} + \text{transfinite recursion}$$
There is no need for the axiom of choice.
# Different set theories are never bi-interpretable
I was fascinated recently to discover something I hadn’t realized about relative interpretability in set theory, and I’d like to share it here. Namely,
Different set theories extending ZF are never bi-interpretable!
For example, ZF and ZFC are not bi-interpretable, and neither are ZFC and ZFC+CH, nor ZFC and ZFC+$\neg$CH, despite the fact that all these theories are equiconsistent. The basic fact is that there are no nontrivial instances of bi-interpretation amongst the models of ZF set theory. This is surprising, and could even be seen as shocking, in light of the philosophical remarks one sometimes hears asserted in the philosophy of set theory that what is going on with the various set-theoretic translations from large cardinals to determinacy to inner model theory, to mention a central example, is that we can interpret between these theories and consequently it doesn’t much matter which context is taken as fundamental, since we can translate from one context to another without loss.
The bi-interpretation result shows that these interpretations do not and cannot rise to the level of bi-interpretations of theories — the most robust form of mutual relative interpretability — and consequently, the translations inevitably must involve a loss of information.
To be sure, set theorists classify the various set-theoretic principles and theories into a hierarchy, often organized by consistency strength or by other notions of interpretative power, using forcing or definable inner models. From any model of ZF, for example, we can construct a model of ZFC, and from any model of ZFC, we can construct models of ZFC+CH or ZFC+$\neg$CH and so on. From models with sufficient large cardinals we can construct models with determinacy or inner-model-theoretic fine structure and vice versa. And while we have relative consistency results and equiconsistencies and even mutual interpretations, we will have no nontrivial bi-interpretations.
(I had proved the theorem a few weeks ago in joint work with Alfredo Roque Freire, who is visiting me in New York this year. We subsequently learned, however, that this was a rediscovery of results that have evidently been proved independently by various authors. Albert Visser proves the case of PA in his paper, “Categories of theories and interpretations,” Logic in Tehran, 284–341, Lect. Notes Log., 26, Assoc. Symbol. Logic, La Jolla, CA, 2006, (pdf, see pp. 52-55). Ali Enayat gave a nice model-theoretic argument for showing specifically that ZF and ZFC are not bi-interpretable, using the fact that ZFC models can have no involutions in their automorphism groups, but ZF models can; and he proved the general version of the theorem, for ZF, second-order arithmetic $Z_2$ and second-order set theory KM in his 2016 article, A. Enayat, “Variations on a Visserian theme,” in Liber Amicorum Alberti : a tribute to Albert Visser / Jan van Eijck, Rosalie Iemhoff and Joost J. Joosten (eds.) Pages, 99-110. ISBN, 978-1848902046. College Publications, London. The ZF version was apparently also observed independently by Harvey Friedman, Visser and Fedor Pakhomov.)
Meanwhile, let me explain our argument. Recall from model theory that one theory $S$ is interpreted in another theory $T$, if in any model of the latter theory $M\models T$, we can define (and uniformly so in any such model) a certain domain $N\subset M^k$ and relations and functions on that domain so as to make $N$ a model of $S$. For example, the theory of algebraically closed fields of characteristic zero is interpreted in the theory of real-closed fields, since in any real-closed field $R$, we can consider pairs $(a,b)$, thinking of them as $a+bi$, and define addition and multiplication on those pairs in such a way so as to construct an algebraically closed field of characteristic zero.
Two theories are thus mutually interpretable, if each of them is interpretable in the other. Such theories are necessarily equiconsistent, since from any model of one of them we can produce a model of the other.
Note that mutual interpretability, however, does not insist that the two translations are inverse to each other, even up to isomorphism. One can start with a model of the first theory $M\models T$ and define the interpreted model $N\models S$ of the second theory, which has a subsequent model of the first theory again $\bar M\models T$ inside it. But the definition does not insist on any particular connection between $M$ and $\bar M$, and these models need not be isomorphic nor even elementarily equivalent in general.
By addressing this, one arrives at a stronger and more robust form of mutual interpretability. Namely, two theories $S$ and $T$ are bi-interpretable, if they are mutually interpretable in such a way that the models can see that the interpretations are inverse. That is, for any model $M$ of the theory $T$, if one defines the interpreted model $N\models S$ inside it, and then defines the interpreted model $\bar M$ of $T$ inside $N$, then $M$ is isomorphic to $\bar M$ by a definable isomorphism in $M$, and uniformly so (and the same with the theories in the other direction). Thus, every model of one of the theories can see exactly how it itself arises definably in the interpreted model of the other theory.
For example, the theory of linear orders $\leq$ is bi-interpretable with the theory of strict linear order $<$, since from any linear order $\leq$ we can define the corresponding strict linear order $<$ on the same domain, and from any strict linear order $<$ we can define the corresponding linear order $\leq$, and doing it twice brings us back again to the same order.
For a richer example, the theory PA is bi-interpretable with the finite set theory $\text{ZF}^{\neg\infty}$, where one drops the infinity axiom from ZF and replaces it with the negation of infinity, and where one has the $\in$-induction scheme in place of the foundation axiom. The interpretation is via the Ackerman encoding of hereditary finite sets in arithmetic, so that $n\mathrel{E} m$ just in case the $n^{th}$ binary digit of $m$ is $1$. If one starts with the standard model $\mathbb{N}$, then the resulting structure $\langle\mathbb{N},E\rangle$ is isomorphic to the set $\langle\text{HF},\in\rangle$ of hereditarily finite sets. More generally, by carrying out the Ackermann encoding in any model of PA, one thereby defines a model of $\text{ZF}^{\neg\infty}$, whose natural numbers are isomorphic to the original model of PA, and these translations make a bi-interpretation.
We are now ready to prove that this bi-interpretation situation does not occur with different set theories extending ZF.
Theorem. Distinct set theories extending ZF are never bi-interpretable. Indeed, there is not a single model-theoretic instance of bi-interpretation occurring with models of different set theories extending ZF.
Proof. I mean “distinct” here in the sense that the two theories are not logically equivalent; they do not have all the same theorems. Suppose that we have a bi-interpretation instance of the theories $S$ and $T$ extending ZF. That is, suppose we have a model $\langle M,\in\rangle\models T$ of the one theory, and inside $M$, we can define an interpreted model of the other theory $\langle N,\in^N\rangle\models S$, so the domain of $N$ is a definable class in $M$ and the membership relation $\in^N$ is a definable relation on that class in $M$; and furthermore, inside $\langle N,\in^N\rangle$, we have a definable structure $\langle\bar M,\in^{\bar M}\rangle$ which is a model of $T$ again and isomorphic to $\langle M,\in^M\rangle$ by an isomorphism that is definable in $\langle M,\in^M\rangle$. So $M$ can define the map $a\mapsto \bar a$ that forms an isomorphism of $\langle M,\in^M\rangle$ with $\langle \bar M,\in^{\bar M}\rangle$. Our argument will work whether we allow parameters in any of these definitions or not.
I claim that $N$ must think the ordinals of $\bar M$ are well-founded, for otherwise it would have some bounded cut $A$ in the ordinals of $\bar M$ with no least upper bound, and this set $A$ when pulled back pointwise by the isomorphism of $M$ with $\bar M$ would mean that $M$ has a cut in its own ordinals with no least upper bound; but this cannot happen in ZF.
If the ordinals of $N$ and $\bar M$ are isomorphic in $N$, then all three models have isomorphic ordinals in $M$, and in this case, $\langle M,\in^M\rangle$ thinks that $\langle N,\in^N\rangle$ is a well-founded extensional relation of rank $\text{Ord}$. Such a relation must be set-like (since there can be no least instance where the predecessors form a proper class), and so $M$ can perform the Mostowski collapse of $\in^N$, thereby realizing $N$ as a transitive class $N\subseteq M$ with $\in^N=\in^M\upharpoonright N$. Similarly, by collapsing we may assume $\bar M\subseteq N$ and $\in^{\bar M}=\in^M\upharpoonright\bar M$. So the situation consists of inner models $\bar M\subseteq N\subseteq M$ and $\langle \bar M,\in^M\rangle$ is isomorphic to $\langle M,\in^M\rangle$ in $M$. This is impossible unless all three models are identical, since a simple $\in^M$-induction shows that $\pi(y)=y$ for all $y$, because if this is true for the elements of $y$, then $\pi(y)=\{\pi(x)\mid x\in y\}=\{x\mid x\in y\}=y$. So $\bar M=N=M$ and so $N$ and $M$ satisfy the same theory, contrary to assumption.
If the ordinals of $\bar M$ are isomorphic to a proper initial segment of the ordinals of $N$, then a similar Mostowski collapse argument would show that $\langle\bar M,\in^{\bar M}\rangle$ is isomorphic in $N$ to a transitive set in $N$. Since this structure in $N$ would have a truth predicate in $N$, we would be able to pull this back via the isomorphism to define (from parameters) a truth predicate for $M$ in $M$, contrary to Tarski’s theorem on the non-definability of truth.
The remaining case occurs when the ordinals of $N$ are isomorphic in $N$ to an initial segment of the ordinals of $\bar M$. But this would mean that from the perspective of $M$, the model $\langle N,\in^N\rangle$ has some ordinal rank height, which would mean by the Mostowski collapse argument that $M$ thinks $\langle N,\in^N\rangle$ is isomorphic to a transitive set. But this contradicts the fact that $M$ has an injection of $M$ into $N$. $\Box$
It follows that although ZF and ZFC are equiconsistent, they are not bi-interpretable. Similarly, ZFC and ZFC+CH and ZFC+$\neg$CH are equiconsistent, but no pair of them is bi-interpretable. And again with all the various equiconsistency results concerning large cardinals.
A similar argument works with PA to show that different extensions of PA are never bi-interpretable.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9473897814750671, "perplexity": 302.43113869704456}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662534773.36/warc/CC-MAIN-20220521014358-20220521044358-00519.warc.gz"}
|
http://math.stackexchange.com/questions/115316/question-on-induction-proof-about-direct-sums-of-irreducible-submodules
|
# Question on induction proof (about direct sums of irreducible submodules)
Let $V$ be an $L$-module. I want to show that $V$ is a direct sum of irreducible $L$-submodules if each $L$-submodule of $V$ possesses a complement.
I want to show this via induction on the dimension of $V$. Do I start with $\dim V=1$ or $\dim V=2$ for my base case?
-
Surely you need to assume "finite dimensional" somewhere... The proof must include the case $\dim V = 1$, though in that case the claim is trivial. Whether you need to do the case $\dim V = 2$ separately (as a special case) or not will depend on the precise argument in your inductive step; sometimes the inductive step requires the $n=2$ case to be already established, which is why it is proven separately. Sometimes it doesn't. – Arturo Magidin Mar 1 '12 at 17:28
@Arturo, thanks for the edit! I will incorporate this style next time. – Edison Mar 1 '12 at 17:28
A $1$-dimensional module can be viewed as a trivial sum of irreducible modules. – Joe Johnson 126 Mar 1 '12 at 17:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9167205095291138, "perplexity": 204.8981299523251}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454702032759.79/warc/CC-MAIN-20160205195352-00093-ip-10-236-182-209.ec2.internal.warc.gz"}
|
http://forum.symthic.com/battlefield-1-general-discussion/battlefield-1-technical-discussion/11298-battlefield-1-hit-simulation/?s=f11a67bc087507109007a41bc2596b639821bb1a
|
Welcome to symthic forums! We would love if you'd register!
You don't have to be expert in bit baking, everyone is more than welcome to join our community.
You are not logged in.
## Battlefield 1 hit simulation
Hey! If this is your first visit on symthic.com, also check out our weapon damage charts.
Currently we have charts for Battlefield 3, Call of Duty: Black Ops 2, Medal of Honor: Warfighter and Call of Duty: Modern Warfare 3
Posts: 8
Date of registration
: Mar 10th 2016
Platform: PC
Location: Bucharest
Battlelog:
Reputation modifier: 3
Tuesday, June 6th 2017, 8:04pm
### Battlefield 1 hit simulation
Hello all.
I am trying to write a python script simulating different scenarios for several weapons and chances to get a kill for each one. There was a thread about the BF1 hitbox here. I was instructed, as the thread was quite old to open a new one if I wanted info about the subject.
So my questions are:
• Which are the hitbox areas' multipliers for other classes' weapons (Assault, Medic, Support)?
• What is the complete table of positions of the 'capsules'?
Of course if someone could point me to how I can find this information by myself, it would be great.
Can't get a title
Posts: 1,531
Date of registration
: Dec 23rd 2013
Platform: Xbox One
Location: The Land of Multitudinous Kangaroos
Reputation modifier: 13
Wednesday, June 7th 2017, 12:12am
something something Model 8 bestgun
### Quoted from "Pastafarianism"
Next, wanna try adding a guy that you KNOW is bad, and just testing to see that? Example: PP-2000 (god I so wanna love this gun, and yet...)
### Quoted from "Pastafarianism"
Example: PP-2000 (god I so wanna love this gun, and yet...)
Yes, it comes in last so far, but that is mostly because I'm making it shoot at 100m ADS - Not Moving as one of the criteria. Even then, between 50-100m Not Moving, when you include Useability, it is only 1.37% worse than the MTAR-21. Within 50m then it even beats the A-91.
Have a look, vs. the A-91 Carbine:
Using it with Muzzle Brake and Compensator is a wash in terms of overall performance. Comp is SLIGHTLY more accurate, while MB is SLIGHTLY more easy to use. Their overall scores are basically tied, with MB just ahead. I guess either can be recommended.
### Quoted from "Pastafarianism"
But... You can't be counting for the fact that it takes 9 bullets to kill at "long" range... Don't you dare tell me my A-91 is worse than a 9 BTK 650 RPM mediocre PDW.
Also. Just go heavy barrel. The recoil is low enough.
### Quoted from "Zer0Cod3x"
Well, technically...
Comparing a PP2K with HB and an A-91 with comp and stubby (as you suggested in an earlier post), at 50m not moving, the A-91 is only better by 4 damage per hitrate. While at 75m and 100m, surprisingly the PP2K does better than the A-91 (I'm pretty damn surprised as well).
And 10m and 50m moving the PP2K also does more damage per hitrate than the A-91. At 25m the A-91 is only better by about half a bullet's damage as well.
In addition, the PP2K has a much larger mag size and substantially less recoil. And it looks hella awesome. So comparing the A-91 to a PDW is of some worth after all, as the PP2K is better (technically, not practically) than the A-91.
Mind blown.
### Quoted from "Pastafarianism"
I... I...
*cries in a corner*
### Quoted from "Veritable"
Zer0Cod3x explained it very well. If you look at the raw numbers right here on Symthic Comparison, you can see how that happened:
A-91 vs PP-2000 | BF4 Weapon Comparison | Symthic
A-91's "23%" RPM advantage only afforded it 1 extra round.
Velocities are wash.
V-Recoil are wash (and this is HBar on PP2k vs. A-91 without).
Hipfire and ADS - Moving are better on the PP2k, but it's a PDW and not the surprising part.
The surprising part is that, as equipped (and we see above that PP2k HBar has almost same V-Recoil as A-91 without HBar so why not?), the PDW performs better at 50 - 100m than a bloody Carbine. Why?
SIPS, 42% better on the PP2k.
And here is the most important part. ADS - Not Moving Spread, 0.35 vs. 0.2, 43% improvement.
Without HBar then of course the PP2k loses, which is why when I add all the attachments together for an Overall Ranking, it would slot below the A-91. Run HBar on it, though, then... I'm sorry
### Quoted from "Pastafarianism"
@Veritable
@Zer0Cod3x
I... I...
But...
Wha...
I AM HAVING AN EXISTENTIAL CRISIS IN SCHOOL BECAUSE OF YOU TWO.
FUCK YOU NERDS AND YOUR FANCY NUMBERS
SEXY RUSSIAN BULLPUPS FTW.
In all seriousness, thank you both so much for giving me the numbers. I still don't want to accept them. You have led the horse to water. I still need to drink.
Symthic Developer
Posts: 3,721
Date of registration
: Mar 21st 2013
Platform: PC
Location: __main__, Finland
Reputation modifier: 17
Wednesday, June 7th 2017, 7:33am
### Quoted
Which are the hitbox areas' multipliers for other classes' weapons (Assault, Medic, Support)?
What do you mean exactly? What are the damage multipliers of different weapons vs. different bodyparts? We have multipliers of "normal" weapons vs. different body parts on our site.
Also on the older Hitbox thread I had these images up, but silly Dropbox removed the public-link system:
Dropbox - 1020222446.png
Dropbox - 1020222451.png
• 3VerstsNorth - Analysis of game mechanics in BF4 (tickrates, effects of tickrate, etc)
• leptis - Analysis of shotguns, recoil, recoil control and air drag.
• Veritable - Scoring of BF4/BF1 firearms in terms of usability, firing and other mechanics.
• Miffyli - Random statistical analysis of BF4 battlereports/players and kill-distances. (list is cluttered with other threads).
Sorry if your name wasn't on the list, I honestly can't recall all names : ( . Nudge me if you want to be included
Posts: 8
Date of registration
: Mar 10th 2016
Platform: PC
Location: Bucharest
Battlelog:
Reputation modifier: 3
Wednesday, June 7th 2017, 10:31am
Thanks guys.
@Miffyli: Do we know that the common area/volume between the capsules is shared or that is your estimation (dropbox pics)?
Symthic Developer
Posts: 3,721
Date of registration
: Mar 21st 2013
Platform: PC
Location: __main__, Finland
Reputation modifier: 17
Wednesday, June 7th 2017, 5:33pm
### Quoted from "dfk_7677"
Thanks guys.
@Miffyli: Do we know that the common area/volume between the capsules is shared or that is your estimation (dropbox pics)?
I took the numbers from link Zer0Cod3x when I made those images, but I did a bad and did not write up notes . So going with real-time hunch here:
At line 84 of DefaultBoneSoldierCollision¹ you have array of BoneCollisionData objects.
Each of these objects seem to contain name of the bone, the capsule dimensions (radius, length) of capsule used and the transformation matrix. trans::Vec3 representscapsule's starting point relative to soldier location², and forward::Vec3 is an unit vector representing capsule's direction.
Example:
### Source code
1
2
3
4
CapsuleLength: 2.0
trans: [0, 0, 1]
forward: [0, 1, 0]
Place the starting point of capsule at [0,0,1] (trans), and second point at [0,1,1] ($$trans + forward \cdot CapsuleLength$$). Now draw the capsule of radius CapsuleRadius like shown in this image by leptis.
(¹): Not guaranteed to be the actual data used in game, could be overwritten somewhere.
(²): Do not cite on me this one, but this is how I recall it.
• 3VerstsNorth - Analysis of game mechanics in BF4 (tickrates, effects of tickrate, etc)
• leptis - Analysis of shotguns, recoil, recoil control and air drag.
• Veritable - Scoring of BF4/BF1 firearms in terms of usability, firing and other mechanics.
• Miffyli - Random statistical analysis of BF4 battlereports/players and kill-distances. (list is cluttered with other threads).
Sorry if your name wasn't on the list, I honestly can't recall all names : ( . Nudge me if you want to be included
PvF 2017 Champion
Posts: 7,177
Date of registration
: Apr 3rd 2012
Platform: PC
Battlelog:
Reputation modifier: 19
Wednesday, June 7th 2017, 10:37pm
Kinda sad that the arms and the lower legs don't seem to have transforms
And changing the forward, right, and up vectors for the pose makes it hard to get the rotation in unity
The head isn't centered either but that might just be part of animations
At least the placement and dimensions of the bones matches the 1.7m tall 0.5m wide cylinder I was using for testing purposes
Data Browser
Passive Spotting is the future!
With this, I'll rid MGO3 of infestation. Sans bad gameplay MGO3 will be torn asunder. And then it shall be free. People will suffer, of course - a phantom pain.
Reddit and Konami will rewrite the records... And I will be demonized in human memory. But... The thirst for good gameplay that I have planted will infest MGO3. No one can stop it now. The Rebalance Mod will unleash that thirst unto the future.
Are you a scrub?
### Quoted from "blahdy"
If it flies, it dies™.
This post has been edited 1 times, last edit by "NoctyrneSAGA" (Jun 7th 2017, 10:42pm)
Posts: 8
Date of registration
: Mar 10th 2016
Platform: PC
Location: Bucharest
Battlelog:
Reputation modifier: 3
Thursday, June 8th 2017, 3:06pm
@Miffyli:
I wasn't clear enough. I get the same 'results' as you for the 2D/3D representation of the capsules. The thing is that there are overlapping areas. In your images, you split the common area equally between the 2 capsules (for example upper and lower torso or upper torso and head). Do we know if this is correct, of perhaps one of the 2 capsules takes precedence when a hit occurs. I would try to see what happens in an empty server, but I don't think I could be accurate enough to distinguish between the capsules.
Symthic Developer
Posts: 3,721
Date of registration
: Mar 21st 2013
Platform: PC
Location: __main__, Finland
Reputation modifier: 17
Thursday, June 8th 2017, 4:45pm
@dfk_7677
Ah yes. I can not say with certainty, but I would imagine it takes the damage multiplier from fix bodypart hit of the overlapping ones. However they could indeed have some ranking, e.g. take largest damage of all bodyparts hit.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3618183135986328, "perplexity": 8087.870083285527}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267159744.52/warc/CC-MAIN-20180923193039-20180923213439-00029.warc.gz"}
|
https://www.physicsforums.com/threads/uniformly-polarized-disk-on-a-conducting-plane-e-field.538319/
|
# Uniformly Polarized disk on a conducting plane (E-Field)
• Start date
• #1
1,097
2
## Homework Statement
A uniformly polarized dielectric disk surrounded by air is lying at a conducting plane, as shown in the figure. The polarization vector in the is,
$$\vec{P} = P \hat{k},$$
the disk radius is a, and the thickness d. Calculate the electric field intensity vector along the disk axis normal to the conducting plane (z-axis).
## The Attempt at a Solution
See the second figure attached for their solution and a picture of the problem, and the first figure for my attempt.
Are our answers the same? I can't seem to get it exactly in the form they have but it looks relatively close.
Can someone confirm?
Is my answer equivalent to theirs? If no what did I do wrong?
#### Attachments
• 62.4 KB Views: 519
• 54.6 KB Views: 595
Related Advanced Physics Homework Help News on Phys.org
• #2
lightgrav
Homework Helper
1,248
30
your E1(z) is ok, your E2(z) is ok. your re-writing of them , as they are added, makes them seem more complicated, rather than terms canceling (to simplify).
• Last Post
Replies
1
Views
765
• Last Post
Replies
5
Views
3K
• Last Post
Replies
4
Views
2K
• Last Post
Replies
2
Views
9K
• Last Post
Replies
4
Views
1K
• Last Post
Replies
0
Views
15K
• Last Post
Replies
4
Views
668
• Last Post
Replies
3
Views
3K
• Last Post
Replies
8
Views
2K
• Last Post
Replies
2
Views
9K
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8616317510604858, "perplexity": 3516.5314711633096}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735836.89/warc/CC-MAIN-20200803224907-20200804014907-00375.warc.gz"}
|
https://worldwidescience.org/topicpages/b/bulge+metal-poor+globular.html
|
#### Sample records for bulge metal-poor globular
1. Tidal stripping stellar substructures around four metal-poor globular clusters in the galactic bulge
International Nuclear Information System (INIS)
Chun, Sang-Hyun; Kang, Minhee; Jung, DooSeok; Sohn, Young-Jong
2015-01-01
We investigate the spatial density configuration of stars around four metal-poor globular clusters (NGC 6266, NGC 6626, NGC 6642, and NGC 6723) in the Galactic bulge region using wide-field deep J, H, and K imaging data obtained with the Wide Field Camera near-infrared array on the United Kingdom Infrared Telescope. A statistical weighted filtering algorithm for the stars on the color–magnitude diagram is applied in order to sort cluster member candidates from the field star contamination. In two-dimensional isodensity contour maps of the clusters, we find that all four of the globular clusters exhibit strong evidence of tidally stripped stellar features beyond the tidal radius in the form of tidal tails or small density lobes/chunks. The orientations of the extended stellar substructures are likely to be associated with the effect of dynamic interaction with the Galaxy and the cluster's space motion. The observed radial density profiles of the four globular clusters also describe the extended substructures; they depart from theoretical King and Wilson models and have an overdensity feature with a break in the slope of the profile at the outer region of clusters. The observed results could imply that four globular clusters in the Galactic bulge region have experienced strong environmental effects such as tidal forces or bulge/disk shocks of the Galaxy during the dynamical evolution of globular clusters. These observational results provide further details which add to our understanding of the evolution of clusters in the Galactic bulge region as well as the formation of the Galaxy.
2. Ruprecht 106 - A young metal-poor Galactic globular cluster
International Nuclear Information System (INIS)
Buonanno, R.; Buscema, G.; Fusi Pecci, F.; Richer, H.B.; Fahlman, G.G.
1990-01-01
The first CCD photometric survey in the Galactic globular cluster Ruprecht 106 has been performed. The results show that Ruprecht 106 is a metal-poor cluster with (Fe/H) about -2 located at about 25 kpc from the Galactic center. A sizable, high centrally concentrated population of blue stragglers was detected. Significant differences in the positions of the turnoffs in the color-magnitude diagram are found compared to those in metal-poor clusters. The cluster appears younger than other typical metal-poor Galactic globulars by about 4-5 Gyr; if true, this object would represent the first direct proof of the existence of a significant age spread among old, very metal-poor clusters. 51 refs
3. NEAR-IR PHOTOMETRIC PROPERTIES OF HB, MSTO, AND SGB FOR METAL POOR GALACTIC GLOBULAR CLUSTERS
Directory of Open Access Journals (Sweden)
J.-W. Kim
2007-03-01
Full Text Available We report photometric features of the HB, MSTO, and SGB for a set of metal-poor Galactic globular clusters on the near-IR CMDs. The magnitude and color of the MSTO and SGB are measured on the fiducial normal points of the CMDs by applying a polynomial fit. The near-IR luminosity functions of horizontal branch stars in the classical second parameter pair M3 and M13 indicate that HB stars in M13 are dominated by hot stars that are rotatively faint in the infrared, whereas HB stars in M3 are brighter than those in M13. The luminosity functions of HB stars in the observed bulge clusters, except for NGC 6717, show a trend that the fainter hot HB stars are dominated in the relatively metal-poor clusters while the relatively metal-rich clusters contain the brighter HB stars. It is suggestive that NGC 6717 would be an extreme example of the second-parameter phenomenon for the bulge globular clusters.
4. Sulphur in the metal poor globular cluster NGC 6397
Science.gov (United States)
Koch, A.; Caffau, E.
2011-10-01
Sulphur (S) is a non-refractory α-element that is not locked into dust grains in the interstellar medium. Thus no correction to the measured, interstellar sulphur abundance is needed and it can be readily compared to the S content in stellar photospheres. Here we present the first measurement of sulphur in the metal poor globular cluster (GC) NGC 6397, as detected in a MIKE/Magellan high signal-to-noise, high-resolution spectrum of one red giant star. While abundance ratios of sulphur are available for a larger number of Galactic stars down to an [Fe/H] of ~ -3.5 dex, no measurements in globular clusters more metal poor than -1.5 dex have been reported so far. We find aNLTE, 3-D abundance ratio of [S/Fe] = +0.52 ± 0.20 (stat.) ± 0.08 (sys.), based on theS I, Multiplet 1 line at 9212.8 Å. This value is consistent with a Galactic halo plateau as typical of other α-elements in GCs and field stars, but we cannot rule out its membership with a second branch of increasing [S/Fe] with decreasing [Fe/H], claimed in the literature, which leads to a large scatter at metallicities around - 2 dex. The [S/Mg] and [S/Ca] ratios in this star are compatible with a Solar value to within the (large) uncertainties. Despite the very large scatter in these ratios across Galactic stars between literature samples, this indicates that sulphur traces the chemical imprints of the other α-elements in metal poor GCs. Combined with its moderate sodium abundance ([S/Na]NLTE = 0.48), the [S/Fe] ratio in this GC extends a global, positive S-Na correlation that is not seen in field stars and might indicate that proton-capture reactions contributed to the production of sulphur in the (metal poor) early GC environments. This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile.
5. Looking for imprints of the first stellar generations in metal-poor bulge field stars
Science.gov (United States)
Siqueira-Mello, C.; Chiappini, C.; Barbuy, B.; Freeman, K.; Ness, M.; Depagne, E.; Cantelli, E.; Pignatari, M.; Hirschi, R.; Frischknecht, U.; Meynet, G.; Maeder, A.
2016-09-01
detected in our sample. The heavy elements Y, Zr, Ba, La, and Eu also exhibit oversolar abundances. Three out of the five stars analysed here show slightly enhanced [Y/Ba] ratios similar to those found in other metal-poor bulge globular clusters (NGC 6522 and M 62). Conclusions: This sample shows enhancement in the first-to-second peak abundance ratios of heavy elements, as well as dominantly s-process element excesses. This can be explained by different nucleosynthesis scenarios: (a) the main r-process plus extra mechanisms, such as the weak r-process; (b) mass transfer from asymptotic giant branch stars in binary systems; (c) an early generation of fast-rotating massive stars. Larger samples of moderately metal-poor bulge stars, with detailed chemical abundances, are needed to better constrain the source of dominantly s-process elements in the early Universe. Observations collected at the European Southern Observatory, Paranal, Chile (ESO), under programmes 089.B-0208(A).
6. VERY METAL-POOR STARS IN THE OUTER GALACTIC BULGE FOUND BY THE APOGEE SURVEY
International Nuclear Information System (INIS)
García Pérez, Ana E.; Majewski, Steven R.; Hearty, Fred R.; Cunha, Katia; Shetrone, Matthew; Johnson, Jennifer A.; Zasowski, Gail; Smith, Verne V.; Beers, Timothy C.; Schiavon, Ricardo P.; Holtzman, Jon; Nidever, David; Allende Prieto, Carlos; Bizyaev, Dmitry; Ebelke, Garrett; Malanushenko, Elena; Malanushenko, Viktor; Eisenstein, Daniel J.; Frinchaboy, Peter M.; Girardi, Léo
2013-01-01
Despite its importance for understanding the nature of early stellar generations and for constraining Galactic bulge formation models, at present little is known about the metal-poor stellar content of the central Milky Way. This is a consequence of the great distances involved and intervening dust obscuration, which challenge optical studies. However, the Apache Point Observatory Galactic Evolution Experiment (APOGEE), a wide-area, multifiber, high-resolution spectroscopic survey within Sloan Digital Sky Survey III, is exploring the chemistry of all Galactic stellar populations at infrared wavelengths, with particular emphasis on the disk and the bulge. An automated spectral analysis of data on 2403 giant stars in 12 fields in the bulge obtained during APOGEE commissioning yielded five stars with low metallicity ([Fe/H] ≤ –1.7), including two that are very metal-poor [Fe/H] ∼ –2.1 by bulge standards. Luminosity-based distance estimates place the 5 stars within the outer bulge, where 1246 of the other analyzed stars may reside. A manual reanalysis of the spectra verifies the low metallicities, and finds these stars to be enhanced in the α-elements O, Mg, and Si without significant α-pattern differences with other local halo or metal-weak thick-disk stars of similar metallicity, or even with other more metal-rich bulge stars. While neither the kinematics nor chemistry of these stars can yet definitively determine which, if any, are truly bulge members, rather than denizens of other populations co-located with the bulge, the newly identified stars reveal that the chemistry of metal-poor stars in the central Galaxy resembles that of metal-weak thick-disk stars at similar metallicity.
7. VERY METAL-POOR STARS IN THE OUTER GALACTIC BULGE FOUND BY THE APOGEE SURVEY
Energy Technology Data Exchange (ETDEWEB)
Garcia Perez, Ana E.; Majewski, Steven R.; Hearty, Fred R. [Department of Astronomy, University of Virginia, Charlottesville, VA 22904-4325 (United States); Cunha, Katia [Steward Observatory, University of Arizona, Tucson, AZ 85721 (United States); Shetrone, Matthew [McDonald Observatory, University of Texas at Austin, Fort Davis, TX 79734 (United States); Johnson, Jennifer A.; Zasowski, Gail [Department of Astronomy, The Ohio State University, Columbus, OH 43210 (United States); Smith, Verne V.; Beers, Timothy C. [National Optical Astronomy Observatories, Tucson, AZ 85719 (United States); Schiavon, Ricardo P. [Gemini Observatory, 670 N. A' Ohoku Place, Hilo, HI 96720 (United States); Holtzman, Jon [Department of Astronomy, MSC 4500, New Mexico State University, P.O. Box 30001, Las Cruces, NM 88003 (United States); Nidever, David [Department of Astronomy, University of Michigan, Ann Arbor, MI 48109 (United States); Allende Prieto, Carlos [Departamento de Astrofisica, Universidad de La Laguna, E-38206 La Laguna, Tenerife (Spain); Bizyaev, Dmitry; Ebelke, Garrett; Malanushenko, Elena; Malanushenko, Viktor [Apache Point Observatory, P.O. Box 59, Sunspot, NM 88349-0059 (United States); Eisenstein, Daniel J. [Harvard Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Frinchaboy, Peter M. [Department of Physics and Astronomy, Texas Christian University, 2800 South University Drive, Fort Worth, TX 76129 (United States); Girardi, Leo [Laboratorio Interinstitucional de e-Astronomia - LIneA, Rua Gal. Jose Cristino 77, Rio de Janeiro, RJ - 20921-400 (Brazil); and others
2013-04-10
Despite its importance for understanding the nature of early stellar generations and for constraining Galactic bulge formation models, at present little is known about the metal-poor stellar content of the central Milky Way. This is a consequence of the great distances involved and intervening dust obscuration, which challenge optical studies. However, the Apache Point Observatory Galactic Evolution Experiment (APOGEE), a wide-area, multifiber, high-resolution spectroscopic survey within Sloan Digital Sky Survey III, is exploring the chemistry of all Galactic stellar populations at infrared wavelengths, with particular emphasis on the disk and the bulge. An automated spectral analysis of data on 2403 giant stars in 12 fields in the bulge obtained during APOGEE commissioning yielded five stars with low metallicity ([Fe/H] {<=} -1.7), including two that are very metal-poor [Fe/H] {approx} -2.1 by bulge standards. Luminosity-based distance estimates place the 5 stars within the outer bulge, where 1246 of the other analyzed stars may reside. A manual reanalysis of the spectra verifies the low metallicities, and finds these stars to be enhanced in the {alpha}-elements O, Mg, and Si without significant {alpha}-pattern differences with other local halo or metal-weak thick-disk stars of similar metallicity, or even with other more metal-rich bulge stars. While neither the kinematics nor chemistry of these stars can yet definitively determine which, if any, are truly bulge members, rather than denizens of other populations co-located with the bulge, the newly identified stars reveal that the chemistry of metal-poor stars in the central Galaxy resembles that of metal-weak thick-disk stars at similar metallicity.
8. New VVV Survey Globular Cluster Candidates in the Milky Way Bulge
Energy Technology Data Exchange (ETDEWEB)
Minniti, Dante; Gómez, Matías [Departamento de Física, Facultad de Ciencias Exactas, Universidad Andrés Bello, Av. Fernandez Concha 700, Las Condes, Santiago (Chile); Geisler, Douglas; Fernández-Trincado, Jose G. [Departamento de Astronomía, Casilla 160-C, Universidad de Concepción, Casilla 160-C, Concepción (Chile); Alonso-García, Javier; Beamín, Juan Carlos; Borissova, Jura; Catelan, Marcio; Ramos, Rodrigo Contreras; Kurtev, Radostin; Pullen, Joyce [Instituto Milenio de Astrofísica, Santiago (Chile); Palma, Tali; Clariá, Juan J. [Observatorio Astronómico, Universidad Nacional de Córdoba, Laprida 854, Córdoba (Argentina); Cohen, Roger E. [Space Telescope Science Institute, 2700 San Martin Drive, Baltimore (United States); Dias, Bruno [European Southern Observatory, Alonso de Cordova 3107, Vitacura, Santiago (Chile); Hempel, Maren [Pontificia Universidad Católica de Chile, Instituto de Astrofísica, Av. Vicuña Mackenna 4860, Santiago (Chile); Ivanov, Valentin D. [European Southern Observatory, Karl-Schwarszchild-Str. 2, D-85748 Garching bei Muenchen (Germany); Lucas, Phillip W. [Dept. of Astronomy, University of Hertfordshire, Hertfordshire (United Kingdom); Moni-Bidin, Christian; Alegría, Sebastian Ramírez [Instituto de Astronomía, Universidad Católica del Norte, Antofagasta (Chile); and others
2017-11-10
It is likely that a number of Galactic globular clusters remain to be discovered, especially toward the Galactic bulge. High stellar density combined with high and differential interstellar reddening are the two major problems for finding globular clusters located toward the bulge. We use the deep near-IR photometry of the VISTA Variables in the Vía Láctea (VVV) Survey to search for globular clusters projected toward the Galactic bulge, and hereby report the discovery of 22 new candidate globular clusters. These objects, detected as high density regions in our maps of bulge red giants, are confirmed as globular cluster candidates by their color–magnitude diagrams. We provide their coordinates as well as their near-IR color–magnitude diagrams, from which some basic parameters are derived, such as reddenings and heliocentric distances. The color–magnitude diagrams reveal well defined red giant branches in all cases, often including a prominent red clump. The new globular cluster candidates exhibit a variety of extinctions (0.06 < A {sub Ks} < 2.77) and distances (5.3 < D < 9.5 kpc). We also classify the globular cluster candidates into 10 metal-poor and 12 metal-rich clusters, based on the comparison of their color–magnitude diagrams with those of known globular clusters also observed by the VVV Survey. Finally, we argue that the census for Galactic globular clusters still remains incomplete, and that many more candidate globular clusters (particularly the low luminosity ones) await to be found and studied in detail in the central regions of the Milky Way.
9. A High-precision Trigonometric Parallax to an Ancient Metal-poor Globular Cluster
Science.gov (United States)
Brown, T. M.; Casertano, S.; Strader, J.; Riess, A.; VandenBerg, D. A.; Soderblom, D. R.; Kalirai, J.; Salinas, R.
2018-03-01
Using the Wide Field Camera 3 (WFC3) on the Hubble Space Telescope (HST), we have obtained a direct trigonometric parallax for the nearest metal-poor globular cluster, NGC 6397. Although trigonometric parallaxes have been previously measured for many nearby open clusters, this is the first parallax for an ancient metal-poor population—one that is used as a fundamental template in many stellar population studies. This high-precision measurement was enabled by the HST/WFC3 spatial-scanning mode, providing hundreds of astrometric measurements for dozens of stars in the cluster and also for Galactic field stars along the same sightline. We find a parallax of 0.418 ± 0.013 ± 0.018 mas (statistical, systematic), corresponding to a true distance modulus of 11.89 ± 0.07 ± 0.09 mag (2.39 ± 0.07 ± 0.10 kpc). The V luminosity at the stellar main-sequence turnoff implies an absolute cluster age of 13.4 ± 0.7 ± 1.2 Gyr. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with programs GO-13817, GO-14336, and GO-14773.
10. Main sequence of the metal-poor globular cluster M30 (NGC 7099)
International Nuclear Information System (INIS)
Alcaino, G.; Liller, W.
1980-01-01
We present photographic photometry for 673 stars in the metal-poor globular cluster M30 (NGC 7099). The Racine wedge was used with the CTIO 1-m Yale telescope (Δm=3/sup m/.60), the CTIO 4-m telescope (Δm=6/sup m/.83), and the ESO 3.6-m telescope (Δm=4/sup m/.12) to extend the photoelectric limit from Vapprox. =16.3 to Vapprox. =20.4. For the main-sequence turn-off, we have determined its position to lie at V=18.4 +- 0.1 (m.e.) and B-V=0.49 +- 0.03 (m.e.). From these values, we calculate the intrinsic values M/sub v/ =3.87 and (B-V) 0 =0.47. For the cluster as a whole, we derive a distance modulus (m-M)/sub V/=14.53 +- 0.15 and reddening E(B-V)=0.02 +- 0.02. Using the models of Iben and Rood [Astrophys. J. 159, 605 (1970)] and the isochrones of Demarque and McClure [(1977), in Evolution of Galaxies and Stellar Populations, edited by B. Tinsley and R. B. Larson (Yale University Observatory, New Haven), p. 199], we deduce the cluster's age to be 14.5( +- 4.0) x 10 9 yr. The large uncertainty in this value emphasizes the dire need for more work on cluster evolution
11. SOLAR-LIKE OSCILLATIONS IN A METAL-POOR GLOBULAR CLUSTER WITH THE HUBBLE SPACE TELESCOPE
International Nuclear Information System (INIS)
Stello, Dennis; Gilliland, Ronald L.
2009-01-01
We present analyses of variability in the red giant stars in the metal-poor globular cluster NGC 6397, based on data obtained with the Hubble Space Telescope. We use a nonstandard data reduction approach to turn a 23 day observing run originally aimed at imaging the white dwarf population, into time-series photometry of the cluster's highly saturated red giant stars. With this technique we obtain noise levels in the final power spectra down to 50 parts per million, which allows us to search for low-amplitude solar-like oscillations. We compare the observed excess power seen in the power spectra with estimates of the typical frequency range, frequency spacing, and amplitude from scaling the solar oscillations. We see evidence that the detected variability is consistent with solar-like oscillations in at least one and perhaps up to four stars. With metallicities 2 orders of magnitude lower than those of the Sun, these stars present so far the best evidence of solar-like oscillations in such a low-metallicity environment.
12. CHEMICAL ABUNDANCES IN NGC 5053: A VERY METAL-POOR AND DYNAMICALLY COMPLEX GLOBULAR CLUSTER
Energy Technology Data Exchange (ETDEWEB)
Boberg, Owen M.; Friel, Eileen D.; Vesperini, Enrico [Astronomy Department, Indiana University, Bloomington, IN 47405 (United States)
2015-05-10
NGC 5053 provides a rich environment to test our understanding of the complex evolution of globular clusters (GCs). Recent studies have found that this cluster has interesting morphological features beyond the typical spherical distribution of GCs, suggesting that external tidal effects have played an important role in its evolution and current properties. Additionally, simulations have shown that NGC 5053 could be a likely candidate to belong to the Sagittarius dwarf galaxy (Sgr dSph) stream. Using the Wisconsin–Indiana–Yale–NOAO–Hydra multi-object spectrograph, we have collected high quality (signal-to-noise ratio ∼ 75–90), medium-resolution spectra for red giant branch stars in NGC 5053. Using these spectra we have measured the Fe, Ca, Ti, Ni, Ba, Na, and O abundances in the cluster. We measure an average cluster [Fe/H] abundance of −2.45 with a standard deviation of 0.04 dex, making NGC 5053 one of the most metal-poor GCs in the Milky Way (MW). The [Ca/Fe], [Ti/Fe], and [Ba/Fe] we measure are consistent with the abundances of MW halo stars at a similar metallicity, with alpha-enhanced ratios and slightly depleted [Ba/Fe]. The Na and O abundances show the Na–O anti-correlation found in most GCs. From our abundance analysis it appears that NGC 5053 is at least chemically similar to other GCs found in the MW. This does not, however, rule out NGC 5053 being associated with the Sgr dSph stream.
13. Chemical Abundances in NGC 5053: A Very Metal-poor and Dynamically Complex Globular Cluster
Science.gov (United States)
Boberg, Owen M.; Friel, Eileen D.; Vesperini, Enrico
2015-05-01
NGC 5053 provides a rich environment to test our understanding of the complex evolution of globular clusters (GCs). Recent studies have found that this cluster has interesting morphological features beyond the typical spherical distribution of GCs, suggesting that external tidal effects have played an important role in its evolution and current properties. Additionally, simulations have shown that NGC 5053 could be a likely candidate to belong to the Sagittarius dwarf galaxy (Sgr dSph) stream. Using the Wisconsin-Indiana-Yale-NOAO-Hydra multi-object spectrograph, we have collected high quality (signal-to-noise ratio ˜ 75-90), medium-resolution spectra for red giant branch stars in NGC 5053. Using these spectra we have measured the Fe, Ca, Ti, Ni, Ba, Na, and O abundances in the cluster. We measure an average cluster [Fe/H] abundance of -2.45 with a standard deviation of 0.04 dex, making NGC 5053 one of the most metal-poor GCs in the Milky Way (MW). The [Ca/Fe], [Ti/Fe], and [Ba/Fe] we measure are consistent with the abundances of MW halo stars at a similar metallicity, with alpha-enhanced ratios and slightly depleted [Ba/Fe]. The Na and O abundances show the Na-O anti-correlation found in most GCs. From our abundance analysis it appears that NGC 5053 is at least chemically similar to other GCs found in the MW. This does not, however, rule out NGC 5053 being associated with the Sgr dSph stream.
14. Abundances in the Galactic bulge
Energy Technology Data Exchange (ETDEWEB)
Barbuy, B; Alves-Brito, A [Universidade de Sao Paulo, IAG, Rua do Matao 1226, Sao Paulo 05508-900 (Brazil); Ortolani, S; Zoccali, M [Dipartimento di Astronomia, Universita di Padova, Vicolo dell' Osservatorio 2, I-35122 Padova (Italy); Hill, V; Gomez, A [Observatoire de Paris-Meudon, 92195 Meudon Cedex (France); Melendez, J [Centro de AstrofIsica da Universidade de Porto, Rua das Estrelas, 4150-762 Porto (Portugal); Asplund, M [Max Planck Institute for Astrophysics, Postfach 1317, 85741 Garching (Germany); Bica, E [Departamento de Astronomia, Universidade Federal do Rio Grande do Sul, CP 15051, Porto Alegre 91501-970 (Brazil); Renzini, A [Osservatorio Astronomico di Padova, Vicolo dell' Osservatorio 5, I-35122 Padova (Italy); Minniti, D [Department of Astronomy and Astrophysics, Universidad Catolica de Chile, Casilla 306, Santiago 22 (Chile)], E-mail: [email protected]
2008-12-15
The metallicity distribution and abundance ratios of the Galactic bulge are reviewed. Issues raised by recent work of different groups, in particular the high metallicity end, the overabundance of {alpha}-elements in the bulge relative to the thick disc and the measurement of giants versus dwarfs, are discussed. Abundances in the old moderately metal-poor bulge globular clusters are described.
15. Oxygen abundances in halo giants. I - Giants in the very metal-poor globular clusters M92 and M15 and the metal-poor halo field
Science.gov (United States)
Sneden, Christopher; Kraft, Robert P.; Prosser, Charles F.; Langer, G. E.
1991-12-01
Oxygen, iron, vanadium, and scandium abundances are derived for very metal-poor giants in the globular clusters M92 and M15, and giants of comparable metallicity in the local halo field. The forbidden O I line dublet (6300, 6363) and nearby metallic lines in spectra are analyzed using line analysis and spectral synthesis codes. The Fe/H abundance for M92 is estimated at -2.25 +/-0.02 based on nine giants with a range of 500 K in effective temperature. No evidence for star-to-star variations in the Fe/H abundance was found. O-rich and O-poor stars appear intermixed in the H-R diagram. O - N nuclear synthesis and mixing to the surface are proposed as the best explanation for the low-oxygen giants. The nitrogen abundances obtained earlier for nine of the ten halo field giants in this sample are incompatible with the very large nitrogen abundances expected of the O/Fe abundance of about + 1.2 in halo field subdwarfs, as found by Abia and Rebolo (1989), and not more than 0.6 in halo giants, as found in this and other studies.
16. A Wide-Field Photometric Survey for Extratidal Tails Around Five Metal-Poor Globular Clusters in the Galactic Halo
Science.gov (United States)
Chun, Sang-Hyun; Kim, Jae-Woo; Sohn, Sangmo T.; Park, Jang-Hyun; Han, Wonyong; Kim, Ho-Il; Lee, Young-Wook; Lee, Myung Gyoon; Lee, Sang-Gak; Sohn, Young-Jong
2010-02-01
Wide-field deep g'r'i' images obtained with the Megacam of the Canada-France-Hawaii Telescope are used to investigate the spatial configuration of stars around five metal-poor globular clusters M15, M30, M53, NGC 5053, and NGC 5466, in a field-of-view ~3°. Applying a mask filtering algorithm to the color-magnitude diagrams of the observed stars, we sorted cluster's member star candidates that are used to examine the characteristics of the spatial stellar distribution surrounding the target clusters. The smoothed surface density maps and the overlaid isodensity contours indicate that all of the five metal-poor globular clusters exhibit strong evidence of extratidal overdensity features over their tidal radii, in the form of extended tidal tails around the clusters. The orientations of the observed extratidal features show signatures of tidal tails tracing the clusters' orbits, inferred from their proper motions, and effects of dynamical interactions with the Galaxy. Our findings include detections of a tidal bridge-like feature and an envelope structure around the pair of globular clusters M53 and NGC 5053. The observed radial surface density profiles of target clusters have a deviation from theoretical King models, for which the profiles show a break at 0.5-0.7rt , extending the overdensity features out to 1.5-2rt . Both radial surface density profiles for different angular sections and azimuthal number density profiles confirm the overdensity features of tidal tails around the five metal-poor globular clusters. Our results add further observational evidence that the observed metal-poor halo globular clusters originate from an accreted satellite system, indicative of the merging scenario of the formation of the Galactic halo. Based on observations carried out at the Canada-France-Hawaii Telescope, operated by the National Research Council of Canada, the Centre National de la Recherche Scientifique de France, and the University of Hawaii. This is part of the
17. On the kinematic separation of field and cluster stars across the bulge globular NGC 6528
Energy Technology Data Exchange (ETDEWEB)
Lagioia, E. P.; Bono, G.; Buonanno, R. [Dipartimento di Fisica, Università degli Studi di Roma-Tor Vergata, via della Ricerca Scientifica 1, I-00133 Roma (Italy); Milone, A. P. [Research School of Astronomy and Astrophysics, The Australian National University, Cotter Road, Weston, ACT 2611 (Australia); Stetson, P. B. [Dominion Astrophysical Observatory, Herzberg Institute of Astrophysics, National Research Council, 5071 West Saanich Road, Victoria, BC V9E 2E7 (Canada); Prada Moroni, P. G. [Dipartimento di Fisica, Università di Pisa, I-56127 Pisa (Italy); Dall' Ora, M. [INAF-Osservatorio Astronomico di Capodimonte, Salita Moiariello 16, I-80131 Napoli (Italy); Aparicio, A.; Monelli, M. [Instituto de Astrofìsica de Canarias, E-38200 La Laguna, Tenerife, Canary Islands (Spain); Calamida, A.; Ferraro, I.; Iannicola, G. [INAF-Osservatorio Astronomico di Roma, Via Frascati 33, I-00044 Monte Porzio Catone (Italy); Gilmozzi, R. [European Southern Observatory, Karl-Schwarzschild-Straße 2, D-85748 Garching (Germany); Matsunaga, N. [Kiso Observatory, Institute of Astronomy, School of Science, The University of Tokyo, 10762-30, Mitake, Kiso-machi, Kiso-gun, 3 Nagano 97-0101 (Japan); Walker, A., E-mail: [email protected] [Cerro Tololo Inter-American Observatory, National Optical Astronomy Observatory, Casilla 603, La Serena (Chile)
2014-02-10
We present deep and precise multi-band photometry of the Galactic bulge globular cluster NGC 6528. The current data set includes optical and near-infrared images collected with ACS/WFC, WFC3/UVIS, and WFC3/IR on board the Hubble Space Telescope. The images cover a time interval of almost 10 yr, and we have been able to carry out a proper-motion separation between cluster and field stars. We performed a detailed comparison in the m {sub F814W}, m {sub F606W} – m {sub F814W} color-magnitude diagram with two empirical calibrators observed in the same bands. We found that NGC 6528 is coeval with and more metal-rich than 47 Tuc. Moreover, it appears older and more metal-poor than the super-metal-rich open cluster NGC 6791. The current evidence is supported by several diagnostics (red horizontal branch, red giant branch bump, shape of the sub-giant branch, slope of the main sequence) that are minimally affected by uncertainties in reddening and distance. We fit the optical observations with theoretical isochrones based on a scaled-solar chemical mixture and found an age of 11 ± 1 Gyr and an iron abundance slightly above solar ([Fe/H] = +0.20). The iron abundance and the old cluster age further support the recent spectroscopic findings suggesting a rapid chemical enrichment of the Galactic bulge.
18. The Metal-poor non-Sagittarius (?) Globular Cluster NGC 5053: Orbit and Mg, Al, and Si Abundances
Science.gov (United States)
Tang, Baitian; Fernández-Trincado, J. G.; Geisler, Doug; Zamora, Olga; Mészáros, Szabolcs; Masseron, Thomas; Cohen, Roger E.; García-Hernández, D. A.; Dell’Agli, Flavia; Beers, Timothy C.; Schiavon, Ricardo P.; Sohn, Sangmo Tony; Hasselquist, Sten; Robin, Annie C.; Shetrone, Matthew; Majewski, Steven R.; Villanova, Sandro; Schiappacasse Ulloa, Jose; Lane, Richard R.; Minnti, Dante; Roman-Lopes, Alexandre; Almeida, Andres; Moreno, E.
2018-03-01
Metal-poor globular clusters (GCs) exhibit intriguing Al–Mg anti-correlations and possible Si–Al correlations, which are important clues to decipher the multiple-population phenomenon. NGC 5053 is one of the most metal-poor GCs in the nearby universe and has been suggested to be associated with the Sagittarius (Sgr) dwarf galaxy, due to its similarity in location and radial velocity with one of the Sgr arms. In this work, we simulate the orbit of NGC 5053, and argue against a physical connection between Sgr and NGC 5053. On the other hand, the Mg, Al, and Si spectral lines, which are difficult to detect in the optical spectra of NGC 5053 stars, have been detected in the near-infrared APOGEE spectra. We use three different sets of stellar parameters and codes to derive the Mg, Al, and Si abundances. Regardless of which method is adopted, we see a large Al variation, and a substantial Si spread. Along with NGC 5053, metal-poor GCs exhibit different Mg, Al, and Si variations. Moreover, NGC 5053 has the lowest cluster mass among the GCs that have been identified to exhibit an observable Si spread until now.
19. The helium abundance in the metal-poor globular clusters M30 and NGC 6397
Energy Technology Data Exchange (ETDEWEB)
Mucciarelli, A.; Lovisi, L.; Lanzoni, B.; Ferraro, F. R. [Dipartimento di Fisica and Astronomia, Università degli Studi di Bologna, Viale Berti Pichat 6/2, I-40127 Bologna (Italy)
2014-05-01
We present the helium abundance of the two metal-poor clusters M30 and NGC 6397. Helium estimates have been obtained by using the high-resolution spectrograph FLAMES at the European Southern Observatory Very Large Telescope and by measuring the He I line at 4471 Å in 24 and 35 horizontal branch (HB) stars in M30 and NGC 6397, respectively. This sample represents the largest data set of He abundances collected so far in metal-poor clusters. The He mass fraction turns out to be Y = 0.252 ± 0.003 (σ = 0.021) for M30 and Y = 0.241 ± 0.004 (σ = 0.023) for NGC 6397. These values are fully compatible with the cosmological abundance, thus suggesting that the HB stars are not strongly enriched in He. The small spread of the Y distributions are compatible with those expected from the observed main sequence splitting. Finally, we find a hint of a weak anticorrelation between Y and [O/Fe] in NGC 6397 in agreement with the prediction that O-poor stars are formed by (He-enriched) gas polluted by the products of hot proton-capture reactions.
20. BVRI CCD photometry of the metal-poor globular cluster M68 (NGC 4590)
International Nuclear Information System (INIS)
Alcaino, G.; Liller, W.; Alvarado, F.; Wenderoth, E.
1990-01-01
BVRI photometry of the low metallicity globular cluster M68 (NGC 4590) was obtained with a CCD camera and the 2.2-m ESO telescope. The resulting BV color-magnitude diagrams are compared with the observations of McClure et al. (1987). The observations are also compared with theoretical isochrones, yielding a cluster age of 13 Gyr with a likely external uncertainty of 2 or 3 Gyr. 25 refs
1. The Globular Clusters of the Galactic Bulge: Results from Multiwavelength Follow-up Imaging
Science.gov (United States)
Cohen, Roger; Geisler, Doug; Mauro, Francesco; Alonso Garcia, Javier; Hempel, Maren; Sarajedini, Ata
2018-01-01
The Galactic globular clusters (GGCs) located towards the bulge of the Milky Way suffer from severe total and differential extinction and high field star densities. They have therefore been systematically excluded from deep, large-scale homogenous GGC surveys, and will present a challenge for Gaia. Meanwhile, existing observations of bulge GGCs have revealed tantalizing hints that they hold clues to Galactic formation and evolution not found elsewhere. Therefore, in order to better characterize these poorly studied stellar systems and place them in the context of their optically well-studied counterparts, we have undertaken imaging programs at optical and near-infrared wavelengths. We describe these programs and present a variety of results, including self-consistent measurement of bulge GGC ages and structural parameters. The limitations imposed by spatially variable extinction and extinction law are highlighted, along with the complimentary nature of forthcoming facilities, allowing us to finally complete our picture of the Milky Way GGC system.
2. High resolution infrared spectra of Bulge Globular Clusters: Liller 1, NGC 6553, and Ter 5
Science.gov (United States)
Origlia, L.; Rich, R. M.; Castro, S. M.
2001-12-01
Using the NIRSPEC spectrograph at Keck II, we have obtained echelle spectra covering the range 1.5-1.8μ m for 2 of the brightest giants in Liller 1 and NGC 6553, old metal rich globular clusters in the Galactic bulge. We also report a preliminary analysis for two giants in the obscured bulge globular cluster Ter 5. We use spectrum synthesis for the abundance analysis, and find [Fe/H]=-0.3+/-0.2 and [O/H]=+0.3+/- 0.1 (from the OH lines) for the giants in Liller 1 and NGC 6553. We measure strong lines for the alpha elements Mg, Ca, and Si, but the lower sensitivity of these lines to abundance permits us to only state a general [α /Fe]=+0.3+/-0.2 dex. The composition of the clusters is similar to that of field stars in the bulge and is consistent with a scenario in which the clusters formed early, with rapid enrichment. Our iron abundance for NGC 6553 is poorly consistent with either the low or the high values recently reported in the literature, unless unusally large, or no α -element enhancements are adopted, respectively. We will also present an abundance analsyis for 2 giants in the highly reddened bulge cluster Ter 5, which appears to be near the Solar metallicity. R. Michael Rich acknowledges finacial support from grant AST-0098739, from the National Science Foundation. Data presented herein were obtained at the W.M.Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W.M. Keck Foundation. The authors gratefully acknowledge those of Hawaiian ancestry on whose sacred mountain we are privileged to be guests. Without their generous hospitality, none of the observations presented would have been possible.
3. On the physical nature of globular cluster candidates in the Milky Way bulge
Science.gov (United States)
Piatti, Andrés E.
2018-06-01
We present results from 2MASS JKs photometry on the physical reality of recently reported globular cluster (GC) candidates in the Milky Way (MW) bulge. We relied our analysis on photometric membership probabilities that allowed us to distinguish real stellar aggregates from the composite field star population. When building colour-magnitude diagrams and stellar density maps for stars at different membership probability levels, the genuine GC candidate populations are clearly highlighted. We then used the tip of the red giant branch (RGB) as distance estimator, resulting in heliocentric distances that place many of the objects in regions near the MW bulge, where no GC had been previously recognized. Some few GC candidates resulted to be MW halo/disc objects. Metallicities estimated from the standard RGB method are in agreement with the values expected according to the position of the GC candidates in the Galaxy. Finally, we derived, for the first time, their structural parameters. We found that the studied objects have core, half-light, and tidal radii in the ranges spanned by the population of known MW GCs. Their internal dynamical evolutionary stages will be described properly when their masses are estimated.
4. ULTRA-DEEP GEMINI NEAR-INFRARED OBSERVATIONS OF THE BULGE GLOBULAR CLUSTER NGC 6624
Energy Technology Data Exchange (ETDEWEB)
Saracino, S.; Dalessandro, E.; Ferraro, F. R.; Lanzoni, B.; Miocchi, P. [Dipartimento di Fisica e Astronomia, Università di Bologna, Viale Berti Pichat 6/2, I-40127 Bologna (Italy); Geisler, D.; Mauro, F.; Cohen, R. E.; Villanova, S. [Departamento de Astronomía, Universidad de Concepción, Casilla 160-C, Concepción (Chile); Origlia, L. [INAF—Osservatorio Astronomico di Bologna, via Ranzani 1, I-40127 Bologna (Italy); Bidin, C. Moni, E-mail: [email protected] [Instituto de Astronomía, Universidad Católica del Norte, Av. Angamos 0610, Antofagasta (Chile)
2016-11-20
We used ultra-deep J and K {sub s} images secured with the near-infrared (NIR) GSAOI camera assisted by the multi-conjugate adaptive optics system GeMS at the GEMINI South Telescope in Chile, to obtain a ( K {sub s} , J - K {sub s} ) color–magnitude diagram (CMD) for the bulge globular cluster NGC 6624. We obtained the deepest and most accurate NIR CMD from the ground for this cluster, by reaching K {sub s} ∼ 21.5, approximately 8 mag below the horizontal branch level. The entire extension of the Main Sequence (MS) is nicely sampled and at K {sub s} ∼ 20 we detected the so-called MS “knee” in a purely NIR CMD. By taking advantage of the exquisite quality of the data, we estimated the absolute age of NGC 6624 ( t {sub age} = 12.0 ± 0.5 Gyr), which turns out to be in good agreement with previous studies in the literature. We also analyzed the luminosity and mass functions of MS stars down to M ∼ 0.45 M{sub ⊙}, finding evidence of a significant increase of low-mass stars at increasing distances from the cluster center. This is a clear signature of mass segregation, confirming that NGC 6624 is in an advanced stage of dynamical evolution.
5. Ultra-deep GEMINI Near-infrared Observations of the Bulge Globular Cluster NGC 6624.
Science.gov (United States)
Saracino, S.; Dalessandro, E.; Ferraro, F. R.; Geisler, D.; Mauro, F.; Lanzoni, B.; Origlia, L.; Miocchi, P.; Cohen, R. E.; Villanova, S.; Moni Bidin, C.
2016-11-01
We used ultra-deep J and K s images secured with the near-infrared (NIR) GSAOI camera assisted by the multi-conjugate adaptive optics system GeMS at the GEMINI South Telescope in Chile, to obtain a (K s , J - K s ) color-magnitude diagram (CMD) for the bulge globular cluster NGC 6624. We obtained the deepest and most accurate NIR CMD from the ground for this cluster, by reaching K s ˜ 21.5, approximately 8 mag below the horizontal branch level. The entire extension of the Main Sequence (MS) is nicely sampled and at K s ˜ 20 we detected the so-called MS “knee” in a purely NIR CMD. By taking advantage of the exquisite quality of the data, we estimated the absolute age of NGC 6624 (t age = 12.0 ± 0.5 Gyr), which turns out to be in good agreement with previous studies in the literature. We also analyzed the luminosity and mass functions of MS stars down to M ˜ 0.45 M⊙, finding evidence of a significant increase of low-mass stars at increasing distances from the cluster center. This is a clear signature of mass segregation, confirming that NGC 6624 is in an advanced stage of dynamical evolution. Based on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), the National Research Council (Canada), CONICYT (Chile), the Australian Research Council (Australia), Ministério da Ciência, Tecnologia e Inovação (Brazil) and Ministerio de Ciencia, Tecnología e Innovación Productiva (Argentina). Based on observations gathered with ESO-VISTA telescope (program ID 179.B-2002).
6. Tracing the Chemical Evolution of Metal-rich Galactic Bulge Globular Clusters
Science.gov (United States)
Munoz Gonzalez, Cesar; Saviane, Ivo; Geisler, Doug; Villanova, Sandro
2018-01-01
We present in this poster the metallicity characterization of the four metal rich Bulge Galactic Gobular Clusters, which have controversial metallicities. We analyzed our high-resolution spectra (using UVES-580nm and GIRAFFE-HR13 setups) for a large sample of RGB/AGB targets in each cluster in order to measure their metallicity and prove or discard the iron spread hypothesis. We have also characterized chemically stars with potentially different iron content by measuring light (O, Na, Mg, Al), alpha (Si, Ca, Ti), iron–peak (V, Cr, Ni, Mn) and s and r process (Y, Zr, Ba, Eu) elements. We have identified possible channels responsible for the chemical heterogeneity of the cluster populations, like AGB or massive fast-rotating stars contamination, or SN explosion. Also, we have analyzed the origin and evolution of these bulge GCs and their connection with the bulge itself.
7. A Fossil Bulge Globular Cluster Revealed by very Large Telescope Multi-conjugate Adaptive Optics
Czech Academy of Sciences Publication Activity Database
Ortolani, S.; Barbuy, B.; Momany, Y.; Saviane, I.; Bica, E.; Jílková, L.; Salerno, G.M.; Jungwiert, Bruno
2011-01-01
Roč. 737, č. 1 (2011), 31/1-31/9 ISSN 0004-637X Institutional research plan: CEZ:AV0Z10030501 Keywords : galaxy * globular clusters Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics Impact factor: 6.024, year: 2011
8. Ages of the Bulge Globular Clusters NGC 6522 and NGC 6626 (M28) from HST Proper-motion-cleaned Color–Magnitude Diagrams
Science.gov (United States)
Kerber, L. O.; Nardiello, D.; Ortolani, S.; Barbuy, B.; Bica, E.; Cassisi, S.; Libralato, M.; Vieira, R. G.
2018-01-01
Bulge globular clusters (GCs) with metallicities [Fe/H] ≲ ‑1.0 and blue horizontal branches are candidates to harbor the oldest populations in the Galaxy. Based on the analysis of HST proper-motion-cleaned color–magnitude diagrams in filters F435W and F625W, we determine physical parameters for the old bulge GCs NGC 6522 and NGC 6626 (M28), both with well-defined blue horizontal branches. We compare these results with similar data for the inner halo cluster NGC 6362. These clusters have similar metallicities (‑1.3 ≤ [Fe/H] ≤ ‑1.0) obtained from high-resolution spectroscopy. We derive ages, distance moduli, and reddening values by means of statistical comparisons between observed and synthetic fiducial lines employing likelihood statistics and the Markov chain Monte Carlo method. The synthetic fiducial lines were generated using α-enhanced BaSTI and Dartmouth stellar evolutionary models, adopting both canonical (Y ∼ 0.25) and enhanced (Y ∼ 0.30–0.33) helium abundances. RR Lyrae stars were employed to determine the HB magnitude level, providing an independent indicator to constrain the apparent distance modulus and the helium enhancement. The shape of the observed fiducial line could be compatible with some helium enhancement for NGC 6522 and NGC 6626, but the average magnitudes of RR Lyrae stars tend to rule out this hypothesis. Assuming canonical helium abundances, BaSTI and Dartmouth models indicate that all three clusters are coeval, with ages between ∼12.5 and 13.0 Gyr. The present study also reveals that NGC 6522 has at least two stellar populations, since its CMD shows a significantly wide subgiant branch compatible with 14% ± 2% and 86% ± 5% for first and second generations, respectively. Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute.
9. DISCOVERY OF RR LYRAE STARS IN THE NUCLEAR BULGE OF THE MILKY WAY
Energy Technology Data Exchange (ETDEWEB)
Minniti, Dante; Ramos, Rodrigo Contreras; Zoccali, Manuela; Gran, Felipe [Instituto Milenio de Astrofisica, Santiago (Chile); Rejkuba, Marina; Valenti, Elena [European Southern Observatory, Karl-Schwarszchild-Str. 2, D-85748 Garching bei Muenchen (Germany); Gonzalez, Oscar A., E-mail: [email protected], E-mail: [email protected] [UK Astronomy Technology Centre, Royal Observatory, Blackford Hill, Edinburgh, EH9 3HJ (United Kingdom)
2016-10-10
Galactic nuclei, such as that of the Milky Way, are extreme regions with high stellar densities, and in most cases, the hosts of a supermassive black hole. One of the scenarios proposed for the formation of the Galactic nucleus is merging of primordial globular clusters. An implication of this model is that this region should host stars that are characteristically found in old Milky Way globular clusters. RR Lyrae stars are primary distance indicators, well known representatives of old and metal-poor stellar populations, and therefore are regularly found in globular clusters. Here we report the discovery of a dozen RR Lyrae type ab stars in the vicinity of the Galactic center, i.e., in the so-called nuclear stellar bulge of the Milky Way. This discovery provides the first direct observational evidence that the Galactic nuclear stellar bulge contains ancient stars (>10 Gyr old). Based on this we conclude that merging globular clusters likely contributed to the build-up of the high stellar density in the nuclear stellar bulge of the Milky Way.
10. The best and brightest metal-poor stars
Energy Technology Data Exchange (ETDEWEB)
Schlaufman, Kevin C.; Casey, Andrew R., E-mail: [email protected], E-mail: [email protected] [Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)
2014-12-10
The chemical abundances of large samples of extremely metal-poor (EMP) stars can be used to investigate metal-free stellar populations, supernovae, and nucleosynthesis as well as the formation and galactic chemical evolution of the Milky Way and its progenitor halos. However, current progress on the study of EMP stars is being limited by their faint apparent magnitudes. The acquisition of high signal-to-noise spectra for faint EMP stars requires a major telescope time commitment, making the construction of large samples of EMP star abundances prohibitively expensive. We have developed a new, efficient selection that uses only public, all-sky APASS optical, 2MASS near-infrared, and WISE mid-infrared photometry to identify bright metal-poor star candidates through their lack of molecular absorption near 4.6 microns. We have used our selection to identify 11,916 metal-poor star candidates with V < 14, increasing the number of publicly available candidates by more than a factor of five in this magnitude range. Their bright apparent magnitudes have greatly eased high-resolution follow-up observations that have identified seven previously unknown stars with [Fe/H] ≲ –3.0. Our follow-up campaign has revealed that 3.8{sub −1.1}{sup +1.3}% of our candidates have [Fe/H] ≲ –3.0 and 32.5{sub −2.9}{sup +3.0}% have –3.0 ≲ [Fe/H] ≲ –2.0. The bulge is the most likely location of any existing Galactic Population III stars, and an infrared-only variant of our selection is well suited to the identification of metal-poor stars in the bulge. Indeed, two of our confirmed metal-poor stars with [Fe/H] ≲ –2.7 are within about 2 kpc of the Galactic center. They are among the most metal-poor stars known in the bulge.
11. Heavy elements abundances in metal-poor stars
International Nuclear Information System (INIS)
Magain, P.; Jehin, E.; Neuforge, C.; Noels, A.
1998-01-01
A sample of 21 metal-poor stars have been analysed on the basis of high resolution and high signal-to-noise spectra. Correlations between relative abundances of 16 elements have been studied, with a special emphasis on the neutron-capture ones. This analysis reveals the existence of two sub-populations of field halo stars, namely Pop IIa and Pop IIb. They differ by the behaviour of the s-process elements versus the α and r-process elements. We suggest a scenario of formation of these stars, which closely relates the field halo stars to the evolution of globular clusters. The two sub-populations would have evaporated the clusters during two different stages of their chemical evolution
12. Globular clusters - Fads and fallacies
International Nuclear Information System (INIS)
White, R.E.
1991-01-01
The types of globular clusters observed in the Milky Way Galaxy are described together with their known characteristics, with special attention given to correcting the erroneous statements made earlier about globular clusters. Among these are the following statements: the Galaxy is surrounded by many hundreds of globular clusters; all globular clusters are located toward the Galactic center, all globular clusters are metal poor and move about the Galaxy in highly elliptical paths; all globular clusters contain RR Lyrae-type variable stars, and the RR Lyrae stars found outside of globulars have come from cluster dissolution or ejection; all of the stars in a given cluster were born at the same time and have the same chemical composition; X-ray globulars are powered by central black holes; and the luminosity functions for globular clusters are well defined and well determined. Consideration is given to the fact that globular clusters in the Magellanic Clouds differ from those in the Milky Way by their age distribution and that the globulars of the SMC differ from those of the LMC
13. Galactic bulges
CERN Document Server
2016-01-01
This book consists of invited reviews on Galactic Bulges written by experts in the field. A central point of the book is that, while in the standard picture of galaxy formation a significant amount of the baryonic mass is expected to reside in classical bulges, the question what is the fraction of galaxies with no classical bulges in the local Universe has remained open. The most spectacular example of a galaxy with no significant classical bulge is the Milky Way. The reviews of this book attempt to clarify the role of the various types of bulges during the mass build-up of galaxies, based on morphology, kinematics, and stellar populations, and connecting their properties at low and high redshifts. The observed properties are compared with the predictions of the theoretical models, accounting for the many physical processes leading to the central mass concentration and their destruction in galaxies. This book serves as an entry point for PhD students and non-specialists and as a reference work for researchers...
14. EXTREMELY METAL-POOR GALAXIES: THE ENVIRONMENT
Energy Technology Data Exchange (ETDEWEB)
Filho, M. E. [Universidad de Las Palmas de Gran Canaria–Universidad de La Laguna, CIE Canarias: Tri-Continental Atlantic Campus, Canary Islands (Spain); Almeida, J. Sánchez; Muñoz-Tuñón, C. [Instituto Astrofísica de Canarias, E-38200 La Laguna, Tenerife (Spain); Nuza, S. E.; Kitaura, F.; Heß, S., E-mail: [email protected] [Leibniz-Institut für Astrophysik Potsdam (AIP), An der Sternwarte 16, D-14482 Potsdam (Germany)
2015-04-01
We have analyzed bibliographical observational data and theoretical predictions, in order to probe the environment in which extremely metal-poor dwarf galaxies (XMPs) reside. We have assessed the H i component and its relation to the optical galaxy, the cosmic web type (voids, sheets, filaments and knots), the overdensity parameter and analyzed the nearest galaxy neighbors. The aim is to understand the role of interactions and cosmological accretion flows in the XMP observational properties, particularly the triggering and feeding of the star formation. We find that XMPs behave similarly to Blue Compact Dwarfs; they preferably populate low-density environments in the local universe: ∼60% occupy underdense regions, and ∼75% reside in voids and sheets. This is more extreme than the distribution of irregular galaxies, and in contrast to those regions preferred by elliptical galaxies (knots and filaments). We further find results consistent with previous observations; while the environment does determine the fraction of a certain galaxy type, it does not determine the overall observational properties. With the exception of five documented cases (four sources with companions and one recent merger), XMPs do not generally show signatures of major mergers and interactions; we find only one XMP with a companion galaxy within a distance of 100 kpc, and the H i gas in XMPs is typically well-behaved, demonstrating asymmetries mostly in the outskirts. We conclude that metal-poor accretion flows may be driving the XMP evolution. Such cosmological accretion could explain all the major XMP observational properties: isolation, lack of interaction/merger signatures, asymmetric optical morphology, large amounts of unsettled, metal-poor H i gas, metallicity inhomogeneities, and large specific star formation.
15. Pristine Survey : High-Resolution Spectral Analyses of New Metal-poor Stars
Science.gov (United States)
Venn, Kim; Starkenburg, Else; Martin, Nicolas; Kielty, Collin; Youakim, Kris; Arnetsen, Anke
2018-06-01
The Pristine survey (Starkenburg et al. 2017) is a new and very successful metal-poor star survey. Combining high-quality narrow-band CaHK CFHT/MegaCam photometry with existing broadband photometry from SDSS, then very metal-poor stars have been found as confirmed from low-resolution spectroscopy (Youakim et al. 2017). Furthermore, we have extended this survey towards the Galactic bulge in a pilot program to test the capabilities in the highly crowded and (inhomogeneously) extincted bulge (Arentsen et al. 2018). High resolution spectral follow-up analyses have been initiated at the CFHT with Espadons (Vevolution or changes in the IMF, e.g., carbon enrichment, high [alpha/Fe] ratios vs alpha-challenged stars, and details in the neutron capture element ratios. While these early studies are being carried out using classical model atmospheres and synthetic spectral fitting (Venn et al. 2017, 2018), we are also exploring the use of a neural network for the fast, efficient, and precise determination of these stellar parameters and chemical abundances (e.g., StarNet, Fabbro et al. 2018).
16. Abundances in very metal-poor stars
Science.gov (United States)
Johnson, Jennifer Anne
We measured the abundances of 35 elements in 22 field red giants and a red giant in the globular cluster M92. We found the [Zn/Fe] ratio increases with decreasing [Fe/H], reaching ~0.3 at [Fe/H] = -3.0. While this is a larger [Zn/Fe] than found by previous investigators, it is not sufficient to account for the [Zn/Fe] observed in the damped Lyα systems. We test different models for the production of the s-process elements by comparing our [Y/Zr] values, which have been produced by the r- process, to predictions of what the s-process does not produce. We find that the models of Arlandini et al. (1999), which calculate s-process production in a model AGB star, agree the best. We then look at the r-process abundances across a wide range in mass. The [Y/Ba] values for most of our stars cluster around -0.30, but there are three outliers with [Y/Ba] values up to 1 dex higher. Thus the heavy element abundances do not show the same pattern from Z = 39 to Z = 56. However, our abundances ratios from Pd (Z = 46) to Yb (Z = 70) are consistent with a scaled solar system r- process pattern, arguing that at least the heavy r- process elements are made in a universal pattern. If we assume that this same pattern hold through thorium, we can determine the ages of our stars from the present abundance of radioactive thorium and an initial thorium abundance based on the abundance of stable heavy elements. Our results for five stars are consistent with those stars being the same age. Our mean age is 10.8 +/- 2 Gyr. However that result depends critically on the assumed Th/stable ratio, which we adopt from models of the r-process. For an average age of 15 Gyrs, the initial Th/Eu ratio we would need is 0.590. Finally, the [element/Fe] ratios for elements in the iron group and lower do not show any dispersion, unlike for the r- process elements such as Y and Ba. Therefore the individual contributions of supernovae have been erased for the lighter elements.
17. Infrared colours and inferred masses of metal-poor giant stars in the Keplerfield
Science.gov (United States)
Casey, A. R.; Kennedy, G. M.; Hartle, T. R.; Schlaufman, Kevin C.
2018-05-01
Intrinsically luminous giant stars in the Milky Way are the only potential volume-complete tracers of the distant disk, bulge, and halo. The chemical abundances of metal-poor giants also reflect the compositions of the earliest star-forming regions, providing the initial conditions for the chemical evolution of the Galaxy. However, the intrinsic rarity of metal-poor giants combined with the difficulty of efficiently identifying them with broad-band optical photometry has made it difficult to exploit them for studies of the Milky Way. One long-standing problem is that photometric selections for giant and/or metal-poor stars frequently include a large fraction of metal-rich dwarf contaminants. We re-derive a giant star photometric selection using existing public g-band and narrow-band DDO51photometry obtained in the Keplerfield. Our selection is simple and yields a contamination rate of main-sequence stars of ≲1% and a completeness of about 80 % for giant stars with Teff ≲ 5250 K - subject to the selection function of the spectroscopic surveys used to estimate these rates, and the magnitude range considered (11 ≲ g ≲ 15). While the DDO51filter is known to be sensitive to stellar surface gravity, we further show that the mid-infrared colours of DDO51-selected giants are strongly correlated with spectroscopic metallicity. This extends the infrared metal-poor selection developed by Schlaufman & Casey, demonstrating that the principal contaminants in their selection can be efficiently removed by the photometric separation of dwarfs and giants. This implies that any similarly efficient dwarf/giant discriminant (e.g., Gaiaparallaxes) can be used in conjunction with WISEcolours to select samples of giant stars with high completeness and low contamination. We employ our photometric selection to identify three metal-poor giant candidates in the Keplerfield with global asteroseismic parameters and find that masses inferred for these three stars using standard
18. AN EXTREMELY CARBON-RICH, EXTREMELY METAL-POOR STAR IN THE SEGUE 1 SYSTEM
International Nuclear Information System (INIS)
Norris, John E.; Yong, David; Gilmore, Gerard; Wyse, Rosemary F. G.; Frebel, Anna
2010-01-01
We report the analysis of high-resolution, high signal-to-noise ratio, spectra of an extremely metal-poor, extremely C-rich red giant, Seg 1-7, in Segue 1-described in the literature alternatively as an unusually extended globular cluster or an ultra-faint dwarf galaxy. The radial velocity of Seg 1-7 coincides precisely with the systemic velocity of Segue 1, and its chemical abundance signature of [Fe/H] = -3.52, [C/Fe] = +2.3, [N/Fe] = +0.8, [Na/Fe] = +0.53, [Mg/Fe] = +0.94, [Al/Fe] = +0.23, and [Ba/Fe] < -1.0 is similar to that of the rare and enigmatic class of Galactic halo objects designated CEMP-no (carbon-rich, extremely metal-poor with no enhancement (over solar ratios) of heavy neutron-capture elements). This is the first star in a Milky Way 'satellite' that unambiguously lies on the metal-poor, C-rich branch of the Aoki et al. bimodal distribution of field halo stars in the ([C/Fe], [Fe/H])-plane. Available data permit us only to identify Seg 1-7 as a member of an ultra-faint dwarf galaxy or as debris from the Sgr dwarf spheroidal galaxy. In either case, this demonstrates that at extremely low abundance, [Fe/H ] <-3.0, star formation and associated chemical evolution proceeded similarly in the progenitors of both the field halo and satellite systems. By extension, this is consistent with other recent suggestions that the most metal-poor dwarf spheroidal and ultra-faint dwarf satellites were the building blocks of the Galaxy's outer halo.
19. NGC 6273: Towards Defining A New Class of Galactic Globular Clusters?
Science.gov (United States)
Johnson, Christian I.; Rich, Robert Michael; Pilachowski, Catherine A.; Caldwell, Nelson; Mateo, Mario L.; Ira Bailey, John; Crane, Jeffrey D.
2016-01-01
A growing number of observations have found that several Galactic globular clusters exhibit abundance dispersions beyond the well-known light element (anti-)correlations. These clusters tend to be very massive, have >0.1 dex intrinsic metallicity dispersions, have complex sub-giant branch morphologies, and have correlated [Fe/H] and s-process element enhancements. Interestingly, nearly all of these clusters discovered so far have [Fe/H]~-1.7. In this context, we have examined the chemical composition of 18 red giant branch (RGB) stars in the massive, metal-poor Galactic bulge globular cluster NGC 6273 using high signal-to-noise, high resolution (R~27,000) spectra obtained with the Michigan/Magellan Fiber System (M2FS) and MSpec spectrograph mounted on the Magellan-Clay 6.5m telescope at Las Campanas Observatory. We find that the cluster exhibits a metallicity range from [Fe/H]=-1.80 to -1.30 and is composed of two dominant populations separated in [Fe/H] and [La/Fe] abundance. The increase in [La/Eu] as a function of [La/H] suggests that the increase in [La/Fe] with [Fe/H] is due to almost pure s-process enrichment. The most metal-rich star in our sample is not strongly La-enhanced, but is α-poor and may belong to a third "anomalous" stellar population. The two dominant populations exhibit the same [Na/Fe]-[Al/Fe] correlation found in other "normal" globular clusters. Therefore, NGC 6273 joins ω Centauri, M 22, M 2, and NGC 5286 as a possible new class of Galactic globular clusters.
20. Analytical Solution for Stellar Density in Globular Clusters MA Sharaf
Introduction. A globular cluster is a spherical collection of stars that orbits a galactic core as a satellite. They are generally composed of hundreds of thousands of low-metal, old stars. The types of stars found in a globular cluster are similar to those in the bulge of a spiral galaxy but confined to a volume of only a few cubic ...
1. Magnesium isotopes in giants in the Milky Way inner disk and bulge: First results with 3D stellar atmospheres.
Science.gov (United States)
Thygesen, Anders; Sbordone, Luca; Christlieb, Norbert; Asplund, Martin
2015-01-01
The Milky Way bulge is one of the most poorly understood components of our galaxy and its formation history is still a matter of debate (early collapse vs. disk instability). All knowledge of its chemical evolution history has been so far derived by measuring elemental abundances: no isotopic mixtures have been measured so far in the Bulge. While quite challenging, isotopic measurements can be accomplished with present instruments in bulge stars for a few elements, Magnesium being one of them.Of the three stable Mg isotopes, the most common one, 24Mg, is mainly produced by α capture in SN II, while the other two, 25Mg and 26Mg, can be produced efficiently in massive AGB stars, through the 22Ne(α, n)25Mg(n, γ)26Mg reactions as well as the Mg-Al chain. Moreover, SN II production of 25Mg and 26Mg increases with increasing progenitor metallicity, so in older stellar populations, where only the signature of metal-poor SNe is to be expected, one should not see a significant 25Mg or 26Mg fraction. However, if larger 25Mg/24Mg and 26Mg/24Mg ratios are observed, relative to what is produced in SNe, this is a clear sign of an AGB contribution. As such, Mg isotopic ratios are a very useful probe of AGB pollution onset and chemical enrichment timescale in a stellar population.Here, we present the first ever measurements of Mg isotopes in 7 red giant stars in the Milky Way bulge and inner disk, including two stars in the bulge globular cluster NGC6522. The isotopic abundances have been derived from high resolution, high signal-to-noise VLT-UVES spectra using both standard 1D atmospheric models as well as state-of-the-art 3D hydrodynamical models and spectrosynthesis. The use of 3D atmospheric models impacts the derived ratios and this work represents the first derivation of Mg isotopes using full 3D spectrosynthesis. These results yield new constraints on the proposed formation scenarios of the Milky Way bulge.
2. On the temperatures, colours, and ages of metal-poor stars predicted by stellar models
International Nuclear Information System (INIS)
Van den Berg, D A
2008-01-01
Most (but not all) of the investigations that have derived the effective temperatures of metal-poor, solar-neighbourhood field stars, from analyses of their spectra or from the infrared flux method, favour a T eff scale that is ∼100-120 K cooler than that given by stellar evolutionary models. This seems to be at odds with photometric results, given that the application of current colour-T eff relations to the observed subdwarf colours suggests a preference for hotter temperatures. Moreover, the predicted temperatures for main-sequence stars at the lowest metallicities ([Fe/H] eff for them unless some fundamental modification is made to the adopted physics. No such problems are found if the temperatures of metal-poor field stars are ∼100-120 K warmer than most determinations. In this case, stellar models would appear to provide consistent interpretations of both field and globular cluster (GC) stars of low metallicity. However, this would imply, e.g. that M 92 has an [Fe/H] value of approximately - 2.2, which is obtained from analyses of Fe I lines, instead of approximately equal to - 2.4, as derived from Fe II lines (and favoured by studies of three-dimensional model atmospheres). Finally, the age of the local, Population II subgiant HD 140283 (and GCs having similar metal abundances) is estimated to be ∼13 Gyr, if diffusive processes are taken into account.
3. The formation of the Galactic bulge of the Milky Way
Directory of Open Access Journals (Sweden)
Freeman K.
2012-02-01
Full Text Available We aim to determine if the bulge formed via mergers as predicted by Cold Dark Matter (CDM theory, or from disk instabilities, as suggested by its boxy shape, or both processes. We are observing about 28,000 bulge stars in fields that span longitudes of − 31 to + 26° and latitudes of − 5° to − 10°, targeting mostly red clump giants and we are measuring stellar velocities and chemical abundances. We have almost concluded our observations and have analysed data of 23,000 stars. We find a cylindrical rotation profile for the bulge which blends smoothly out into the disk and from the [Fe/H] results we find the bulge to be comprised of separate components, with an underlying slowly rotating metal poor subsample which we believe to be the inner halo stars and metal weak thick disk. We find only a small [Fe/H] gradient with latitude in the bulge, of − 0.07dex/kpc. This weak gradient does not necessarily support a merger origin for our bulge and the composite nature of the bulge is consistent with formation out of the thin disk as per instability formation models.
4. Carbon-enhanced metal-poor stars in dwarf galaxies
NARCIS (Netherlands)
2015-01-01
We investigate the frequency and origin of carbon-enhanced metal-poor (CEMP) stars in Local Group dwarf galaxies by means of a statistical, data-calibrated cosmological model for the hierarchical build-up of the Milky Way and its dwarf satellites. The model self-consistently explains the variation
5. Carbon-enhanced metal-poor stars and thermohaline mixing
NARCIS (Netherlands)
Stancliffe, R.J.; Glebbeek, E.; Izzard, R.G.; Pols, O.R.
2007-01-01
One possible scenario for the formation of carbon-enhanced metal-poor stars is the accretion of carbon-rich material from a binary companion which may no longer visible. It is generally assumed that the accreted material remains on the surface of the star and does not mix with the interior until
6. High-resolution abundance analysis of red giants in the globular cluster NGC 6522
Science.gov (United States)
Barbuy, B.; Chiappini, C.; Cantelli, E.; Depagne, E.; Pignatari, M.; Hirschi, R.; Cescutti, G.; Ortolani, S.; Hill, V.; Zoccali, M.; Minniti, D.; Trevisan, M.; Bica, E.; Gómez, A.
2014-10-01
Context. The [Sr/Ba] and [Y/Ba] scatter observed in some galactic halo stars that are very metal-poor and in a few individual stars of the oldest known Milky Way globular cluster NGC 6522 have been interpreted as evidence of early enrichment by massive fast-rotating stars (spinstars). Because NGC 6522 is a bulge globular cluster, the suggestion was that not only the very-metal poor halo stars, but also bulge stars at [Fe/H] ~ -1 could be used as probes of the stellar nucleosynthesis signatures from the earlier generations of massive stars, but at much higher metallicity. For the bulge the suggestions were based on early spectra available for stars in NGC 6522, with a medium resolution of R ~ 22 000 and a moderate signal-to-noise ratio. Aims: The main purpose of this study is to re-analyse the NGC 6522 stars reported previously by using new high-resolution (R ~ 45 000) and high signal-to-noise spectra (S/N > 100). We aim at re-deriving their stellar parameters and elemental ratios, in particular the abundances of the neutron-capture s-process-dominated elements such as Sr, Y, Zr, La, and Ba, and of the r-element Eu. Methods: High-resolution spectra of four giants belonging to the bulge globular cluster NGC 6522 were obtained at the 8m VLT UT2-Kueyen telescope with the UVES spectrograph in FLAMES-UVES configuration. The spectroscopic parameters were derived based on the excitation and ionization equilibrium of Fe i and Fe ii. Results: Our analysis confirms a metallicity [Fe/H] = -0.95 ± 0.15 for NGC 6522 and the overabundance of the studied stars in Eu (with +0.2 < [Eu/Fe] < + 0.4) and alpha-elements O and Mg. The neutron-capture s-element-dominated Sr, Y, Zr, Ba, and La now show less pronounced variations from star to star. Enhancements are in the range 0.0 < [Sr/Fe] < +0.4, +0.23 < [Y/Fe] < +0.43, 0.0 < [Zr/Fe] < +0.4, 0.0 < [La/Fe] < +0.35, and 0.05 < [Ba/Fe] < +0.55. Conclusions: The very high overabundances of [Y/Fe] previously reported for the four studied
7. Eyes, Bulging (Proptosis)
Science.gov (United States)
... Early Breast Cancer to Avoid Chemo Could a Blood Test Spot Lung Cancer Early? Experimental Drug Shows 'Modest' Benefit ... often done when bulging affects only one eye. Blood tests to measure how well the thyroid is working are done when ... When bulging leads to severe dry eyes, lubrication with artificial tears is needed to ...
8. Stellar Archaeology -- Exploring the Universe with Metal-Poor Stars
OpenAIRE
Frebel, Anna
2010-01-01
The abundance patterns of the most metal-poor stars in the Galactic halo and small dwarf galaxies provide us with a wealth of information about the early Universe. In particular, these old survivors allow us to study the nature of the first stars and supernovae, the relevant nucleosynthesis processes responsible for the formation and evolution of the elements, early star- and galaxy formation processes, as well as the assembly process of the stellar halo from dwarf galaxies a long time ago. T...
9. Three-dimensional models of metal-poor stars
OpenAIRE
Collet, R.
2008-01-01
I present here the main results of recent realistic, 3D, hydrodynamical simulations of convection at the surface of metal-poor red giant stars. I discuss the application of these convection simulations as time-dependent, 3D, hydrodynamical model atmospheres to spectral line formation calculations and abundance analyses. The impact of 3D models on derived elemental abundances is investigated by means of a differential comparison of the line strengths predicted in 3D under the assumption of loc...
10. Effect of the horizontal branch on the colours of globular clusters
Energy Technology Data Exchange (ETDEWEB)
Sil' chenko, O K [Moskovskij Gosudarstvennyj Univ. (USSR). Gosudarstvennyj Astronomicheskij Inst. ' ' GAISh' '
1963-05-01
The influence of the horizontal branch (HB) on the integral UBV colours of globular clusters is studied by means of statistical analysis of the colour-magnitude diagram catalogue for globular clusters of our Galaxy. The colour correction for HB is shown to be always negative. It turns out to be small for m. tal-rich globular clusters ((Fe/H)>-1.1) and independent on the HB shape for metal-poor ones.
11. Effect of the horizontal branch on the colours of globular clusters
International Nuclear Information System (INIS)
Sil'chenko, O.K.
1963-01-01
The influence of the horizontal branch (HB) on the integral UBV colours of globular clusters is studied by means of statistical analysis of the colour-magnitude diagram catalogue for globular clusters of our Galaxy. The colour correction for HB is shown to be always negative. It turns out to be small for m. tal-rich globular clusters ([Fe/H]>-1.1) and independent on the HB shape for metal-poor ones
12. Oxygen and iron abundances in two metal-poor dwarfs
Science.gov (United States)
Spiesman, William J.; Wallerstein, George
1991-11-01
Oxygen abundances from the O I line at 6300 A in two metal-poor K dwarfs, HD 25329 and HD 134440, are derived. The spectra were obtained with the KPNO 4-m echelle spectrograph and long camera, yielding a resolution of 32,000 and an S/N of about 125. Model atmospheres with Te of 4770 were appropriate to both stars, whose metallicities were found to be -1.74 and -1.43 for HD 25329 and HD 134440, respectively. These oxygen abundances are 0.3 and 0.4 for the two stars. From the resolution an S/N a 3(sigma) upper limit of 0.8 is derived for each star, which may be combined into an upper limit of O/Fe of 0.6 for a generic K dwarf with Fe/H of 1.6. These values are more in line with O/Fe as seen in similarly metal-poor red giant than those reported in metal-poor subdwarfs by Abia and Rebolo (1989).
13. Three-dimensional models of metal-poor stars
International Nuclear Information System (INIS)
Collet, R
2008-01-01
I present here the main results of recent realistic, three-dimensional (3D), hydrodynamical simulations of convection at the surface of metal-poor red giant stars. I discuss the application of these convection simulations as time-dependent, 3D, hydrodynamical model atmospheres to spectral line formation calculations and abundance analyses. The impact of 3D models on derived elemental abundances is investigated by means of a differential comparison of the line strengths predicted in 3D under the assumption of local thermodynamic equilibrium (LTE) with the results of analogous line formation calculations performed with classical, 1D, hydrostatic model atmospheres. The low surface temperatures encountered in the upper photospheric layers of 3D model atmospheres of very metal-poor stars cause spectral lines of neutral metals and molecules to appear stronger in 3D than in 1D calculations. Hence, 3D elemental abundances derived from such lines are significantly lower than estimated by analyses with 1D models. In particular, differential 3D-1D LTE abundances for C, N and O derived from CH, NH and OH lines are found to be in the range -0.5 to - 1 dex. Large negative differential 3D-1D corrections to the Fe abundance are also computed for weak low-excitation Fe i lines. The application of metal-poor 3D models to the spectroscopic analysis of extremely iron-poor halo stars is discussed.
14. Carbon-enhanced metal-poor stars in dwarf galaxies
OpenAIRE
2015-01-01
We investigate the frequency and origin of carbon-enhanced metal-poor (CEMP) stars in Local Group dwarf galaxies by means of a statistical, data-calibrated cosmological model for the hierarchical build-up of the Milky Way and its dwarf satellites. The model self-consistently explains the variation with dwarf galaxy luminosity of the observed: i) frequency and [Fe/H] range of CEMP stars; ii) metallicity distribution functions; iii) star formation histories. We show that if primordial faint sup...
15. LITHIUM ABUNDANCES OF EXTREMELY METAL-POOR TURNOFF STARS
International Nuclear Information System (INIS)
Aoki, Wako; Inoue, Susumu; Barklem, Paul S.; Beers, Timothy C.; Christlieb, Norbert; Perez, Ana E. GarcIa; Norris, John E.; Carollo, Daniela
2009-01-01
We have determined Li abundances for eleven metal-poor turnoff stars, among which eight have [Fe/H] <-3, based on LTE analyses of high-resolution spectra obtained with the High Dispersion Spectrograph on the Subaru Telescope. The Li abundances for four of these eight stars are determined for the first time by this study. Effective temperatures are determined by a profile analysis of Hα and Hβ. While seven stars have Li abundances as high as the Spite Plateau value, the remaining four objects with [Fe/H] <-3 have A(Li) =log (Li/H)+ 12 ∼< 2.0, confirming the existence of extremely metal-poor (EMP) turnoff stars having low Li abundances, as reported by previous work. The average of the Li abundances for stars with [Fe/H]<-3 is lower by 0.2 dex than that of the stars with higher metallicity. No clear constraint on the metallicity dependence or scatter of the Li abundances is derived from our measurements for the stars with [Fe/H]<-3. Correlations of the Li abundance with effective temperatures, with abundances of Na, Mg, and Sr, and with the kinematical properties are investigated, but no clear correlation is seen in the EMP star sample.
16. THE ACS NEARBY GALAXY SURVEY TREASURY. IX. CONSTRAINING ASYMPTOTIC GIANT BRANCH EVOLUTION WITH OLD METAL-POOR GALAXIES
International Nuclear Information System (INIS)
Girardi, Leo; Williams, Benjamin F.; Gilbert, Karoline M.; Rosenfield, Philip; Dalcanton, Julianne J.; Marigo, Paola; Boyer, Martha L.; Dolphin, Andrew; Weisz, Daniel R.; Skillman, Evan; Melbourne, Jason; Olsen, Knut A. G.; Seth, Anil C.
2010-01-01
In an attempt to constrain evolutionary models of the asymptotic giant branch (AGB) phase at the limit of low masses and low metallicities, we have examined the luminosity functions and number ratios between AGB and red giant branch (RGB) stars from a sample of resolved galaxies from the ACS Nearby Galaxy Survey Treasury. This database provides Hubble Space Telescope optical photometry together with maps of completeness, photometric errors, and star formation histories for dozens of galaxies within 4 Mpc. We select 12 galaxies characterized by predominantly metal-poor populations as indicated by a very steep and blue RGB, and which do not present any indication of recent star formation in their color-magnitude diagrams. Thousands of AGB stars brighter than the tip of the RGB (TRGB) are present in the sample (between 60 and 400 per galaxy), hence, the Poisson noise has little impact in our measurements of the AGB/RGB ratio. We model the photometric data with a few sets of thermally pulsing AGB (TP-AGB) evolutionary models with different prescriptions for the mass loss. This technique allows us to set stringent constraints on the TP-AGB models of low-mass, metal-poor stars (with M sun , [Fe/H]∼ sun . This is also in good agreement with recent observations of white dwarf masses in the M4 old globular cluster. These constraints can be added to those already derived from Magellanic Cloud star clusters as important mileposts in the arduous process of calibrating AGB evolutionary models.
17. The Oldest Stars of the Extremely Metal-Poor Local Group Dwarf Irregular Galaxy Leo A
Science.gov (United States)
Schulte-Ladbeck, Regina E.; Hopp, Ulrich; Drozdovsky, Igor O.; Greggio, Laura; Crone, Mary M.
2002-08-01
We present deep Hubble Space Telescope (HST) single-star photometry of Leo A in B, V, and I. Our new field of view is offset from the centrally located field observed by Tolstoy et al. in order to expose the halo population of this galaxy. We report the detection of metal-poor red horizontal branch stars, which demonstrate that Leo A is not a young galaxy. In fact, Leo A is as least as old as metal-poor Galactic Globular Clusters that exhibit red horizontal branches and are considered to have a minimum age of about 9 Gyr. We discuss the distance to Leo A and perform an extensive comparison of the data with stellar isochrones. For a distance modulus of 24.5, the data are better than 50% complete down to absolute magnitudes of 2 or more. We can easily identify stars with metallicities between 0.0001 and 0.0004, and ages between about 5 and 10 Gyr, in their post-main-sequence phases, but we lack the detection of main-sequence turnoffs that would provide unambiguous proof of ancient (>10 Gyr) stellar generations. Blue horizontal branch stars are above the detection limits but difficult to distinguish from young stars with similar colors and magnitudes. Synthetic color-magnitude diagrams show it is possible to populate the blue horizontal branch in the halo of Leo A. The models also suggest ~50% of the total astrated mass in our pointing to be attributed to an ancient (>10 Gyr) stellar population. We conclude that Leo A started to form stars at least about 9 Gyr ago. Leo A exhibits an extremely low oxygen abundance, only 3% of solar, in its ionized interstellar medium. The existence of old stars in this very oxygen-deficient galaxy illustrates that a low oxygen abundance does not preclude a history of early star formation. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.
18. Discovery of a Metal-Poor Little Cub
Science.gov (United States)
Kohler, Susanna
2017-09-01
The discovery of an extremely metal-poor star-forming galaxy in our local universe, dubbed Little Cub, is providing astronomers with front-row seats to the quenching of a near-pristine galaxy.SDSS image of NGC 3359 (left) and Little Cub (right), with overlying contours displaying the location of hydrogen gas. Little Cubs (also shown in the inset) stellar mass lies in the blue contour of the right-hand side. The outer white contours show the extended gas of the galaxy, likely dragged out as a tidal tail by Little Cubs interaction with NGC 3359. [Hsyu et al. 2017]The Hunt for Metal-Poor GalaxiesLow-metallicity, star-forming galaxies can show us the conditions under which the first stars formed. The galaxies with the lowest metallicities, however, also tend to be those with the lowest luminosities making them difficult to detect. Though we know that there should be many low-mass, low-luminosity, low-metallicity galaxies in the universe, weve detected very few of them nearby.In an effort to track down more of these metal-poor galaxies, a team of scientists led by Tiffany Hsyu (University of California Santa Cruz) searched through Sloan Digital Sky Survey data, looking for small galaxies with the correct photometric color to qualify a candidate blue compact dwarfs, a type of small, low-luminosity, star-forming galaxy that is often low-metallicity.Hsyu and collaborators identified more than 2,500 candidate blue compact dwarfs, and next set about obtaining follow-up spectroscopy for many of the candidates from the Keck and Lick Observatories. Though this project is still underway, around 100 new blue compact dwarfs have already been identified via the spectroscopy, including one of particular interest: the Little Cub.Little CubThis tiny star-forming galaxy gained its nickname from its location in the constellation Ursa Major. Little Cub is perhaps 50 or 60 million light-years away, and Hsyu and collaborators find it to be one of the lowest-metallicity star
19. The Shape of Extremely Metal-Poor Galaxies
Science.gov (United States)
Putko, Joseph; Sánchez Almeida, Jorge; Muñoz-Tuñón, Casiana; Elmegreen, Bruce; Elmegreen, Debra
2018-01-01
This work is the first study on the 3D shape of starbursting extremely metal-poor galaxies (XMPs; a galaxy is said to be an XMP if its ionized gas-phase metallicity is less than 1/10 the solar value). A few hundred XMPs have been identified in the local universe primarily through mining the spectroscopic catalog of the Sloan Digital Sky Survey (SDSS), and follow-up observations have shown that metallicity drops significantly at the starburst (compared to the quiescent component of the galaxy). As the timescale for gas mixing is short, the metal-poor gas triggering the starburst must have been accreted recently. This is strong observational evidence for the cold flow accretion predicted by cosmological models of galaxy formation, and, in this respect, XMPs seem to be the best local analogs of the very first galaxies.The ellipsoidal shape of a class of galaxies can be inferred from the observed axial ratio (q) distribution (q = minor axis/major axis) of a large sample of randomly-oriented galaxies. Fitting ellipses to 200 XMPs using r-band SDSS images, we observe that the axial ratio distribution falls off at q ~0.8, and we determine that these falloffs are not due to biases in the data. The falloff at low axial ratio indicates that the XMPs are thick for their size, and the falloff at high axial ratio suggests the vast majority of XMPs are triaxial. We also observe that smaller XMPs are thicker in proportion to their size, and it is expected that for decreasing galaxy size the ratio of random to rotational motions increases, which correlates with increasing relative thickness. The XMPs are low-redshift dwarf galaxies dominated by dark matter, and our results are compatible with simulations that have shown dark matter halos to be triaxial, with triaxial stellar distributions for low-mass galaxies and with triaxiality increasing over time. We will offer precise constraints on the 3D shape of XMPs via Bayesian analysis of our observed axial ratio distribution.This work
20. Searching for dust around hyper metal poor stars
International Nuclear Information System (INIS)
Venn, Kim A.; Divell, Mike; Starkenburg, Else; Puzia, Thomas H.; Côté, Stephanie; Lambert, David L.
2014-01-01
We examine the mid-infrared fluxes and spectral energy distributions for stars with iron abundances [Fe/H] <–5, and other metal-poor stars, to eliminate the possibility that their low metallicities are related to the depletion of elements onto dust grains in the formation of a debris disk. Six out of seven stars examined here show no mid-IR excesses. These non-detections rule out many types of circumstellar disks, e.g., a warm debris disk (T ≤ 290 K), or debris disks with inner radii ≤1 AU, such as those associated with the chemically peculiar post-asymptotic giant branch spectroscopic binaries and RV Tau variables. However, we cannot rule out cooler debris disks, nor those with lower flux ratios to their host stars due to, e.g., a smaller disk mass, a larger inner disk radius, an absence of small grains, or even a multicomponent structure, as often found with the chemically peculiar Lambda Bootis stars. The only exception is HE0107-5240, for which a small mid-IR excess near 10 μm is detected at the 2σ level; if the excess is real and associated with this star, it may indicate the presence of (recent) dust-gas winnowing or a binary system.
1. Searching for dust around hyper metal poor stars
Energy Technology Data Exchange (ETDEWEB)
Venn, Kim A.; Divell, Mike; Starkenburg, Else [Department of Physics and Astronomy, University of Victoria, 3800 Finnerty Road, Victoria, BC, V8P 5C2 (Canada); Puzia, Thomas H. [Institute of Astrophysics, Pontificia Universidad Catolica de Chile, Av. Vicuna Mackenna 4860, 7820436 Macul, Santiago (Chile); Côté, Stephanie [NRC Herzberg Institute of Astrophysics, 5071 West Saanich Road, Victoria, BC, V9E 2E7 (Canada); Lambert, David L., E-mail: [email protected] [McDonald Observatory and the Department of Astronomy, University of Texas at Austin, RLM 15.308, Austin, TX 78712 (United States)
2014-08-20
We examine the mid-infrared fluxes and spectral energy distributions for stars with iron abundances [Fe/H] <–5, and other metal-poor stars, to eliminate the possibility that their low metallicities are related to the depletion of elements onto dust grains in the formation of a debris disk. Six out of seven stars examined here show no mid-IR excesses. These non-detections rule out many types of circumstellar disks, e.g., a warm debris disk (T ≤ 290 K), or debris disks with inner radii ≤1 AU, such as those associated with the chemically peculiar post-asymptotic giant branch spectroscopic binaries and RV Tau variables. However, we cannot rule out cooler debris disks, nor those with lower flux ratios to their host stars due to, e.g., a smaller disk mass, a larger inner disk radius, an absence of small grains, or even a multicomponent structure, as often found with the chemically peculiar Lambda Bootis stars. The only exception is HE0107-5240, for which a small mid-IR excess near 10 μm is detected at the 2σ level; if the excess is real and associated with this star, it may indicate the presence of (recent) dust-gas winnowing or a binary system.
2. Searching for fossil fragments of the Galactic bulge formation process
Science.gov (United States)
Ferraro, Francesco
2017-08-01
We have discovered that the stellar system Terzan5 (Ter5) in the Galactic bulge harbors stellar populations with very different IRON content (delta[Fe/H] 1 dex, Ferraro+09, Nature 462, 483) and AGES (12 Gyr and 4.5 Gyr for the sub-solar and super-solar metallicity populations, respectively, Ferraro+16, ApJ,828,75). This evidence demonstrates that Ter5 is not a globular cluster, and identifies it as (1) a site in the Galactic bulge where recent star formation occurred, and (2) the remnant of a massive system able to retain the iron-enriched gas ejected by violent supernova explosions. The striking chemical similarity between Ter5 and the bulge opens the fascinating possibility that we discovered the fossil remnant of a pristine massive structure that could have contributed to the Galactic bulge assembly.Prompted by this finding, here we propose to secure deep HST optical observations for the bulge stellar system Liller1, that shows a similar complexity as Ter5, with evidence of two stellar populations with different iron content. The immediate goal is to properly explore the main sequence turnoff region of the system for unveiling possible splits due to stellar populations of different ages. As demonstrated by our experience with Ter5, the requested HST observations, in combination with the K-band diffraction limited images that we already secured with GeMS-Gemini, are essential to achieve this goal.The project will allow us to establish if other fossil remnants of the bulge formation epoch do exist, thus probing that the merging of pre-evolved massive structures has been an important channel for the formation of the Galactic bulge.
3. Solving the Mystery of Galaxy Bulges and Bulge Substructure
Science.gov (United States)
Erwin, Peter
2017-08-01
Understanding galaxy bulges is crucial for understanding galaxy evolution and the growth of supermassive black holes (SMBHs). Recent studies have shown that at least some - perhaps most - disk-galaxy bulges are actually composite structures, with both classical-bulge (spheroid) and pseudobulge (disky) components; this calls into question the standard practice of using simple, low-resolution bulge/disk decompositions to determine spheroid and SMBH mass functions. We propose WFC3 optical and near-IR imaging of a volume- and mass-limited sample of local disk galaxies to determine the full range of pure-classical, pure-pseudobulge, and composite-bulge frequencies and parameters, including stellar masses for classical bulges, disky pseudobulges, and boxy/peanut-shaped bulges. We will combine this with ground-based spectroscopy to determine the stellar-kinematic and population characteristics of the different substructures revealed by our WFC3 imaging. This will help resolve growing uncertainties about the status and nature of bulges and their relation to SMBH masses, and will provide an essential local-universe reference for understanding bulge (and SMBH) formation and evolution.
4. The Gaia-ESO Survey: Exploring the complex nature and origins of the Galactic bulge populations
Science.gov (United States)
Rojas-Arriagada, A.; Recio-Blanco, A.; de Laverny, P.; Mikolaitis, Š.; Matteucci, F.; Spitoni, E.; Schultheis, M.; Hayden, M.; Hill, V.; Zoccali, M.; Minniti, D.; Gonzalez, O. A.; Gilmore, G.; Randich, S.; Feltzing, S.; Alfaro, E. J.; Babusiaux, C.; Bensby, T.; Bragaglia, A.; Flaccomio, E.; Koposov, S. E.; Pancino, E.; Bayo, A.; Carraro, G.; Casey, A. R.; Costado, M. T.; Damiani, F.; Donati, P.; Franciosini, E.; Hourihane, A.; Jofré, P.; Lardo, C.; Lewis, J.; Lind, K.; Magrini, L.; Morbidelli, L.; Sacco, G. G.; Worley, C. C.; Zaggia, S.
2017-05-01
Context. As observational evidence steadily accumulates, the nature of the Galactic bulge has proven to be rather complex: the structural, kinematic, and chemical analyses often lead to contradictory conclusions. The nature of the metal-rich bulge - and especially of the metal-poor bulge - and their relation with other Galactic components, still need to be firmly defined on the basis of statistically significant high-quality data samples. Aims: We used the fourth internal data release of the Gaia-ESO survey to characterize the bulge metallicity distribution function (MDF), magnesium abundance, spatial distribution, and correlation of these properties with kinematics. Moreover, the homogeneous sampling of the different Galactic populations provided by the Gaia-ESO survey allowed us to perform a comparison between the bulge, thin disk, and thick disk sequences in the [Mg/Fe] vs. [Fe/H] plane in order to constrain the extent of their eventual chemical similarities. Methods: We obtained spectroscopic data for 2500 red clump stars in 11 bulge fields, sampling the area -10° ≤ l ≤ + 8° and -10° ≤ b ≤ -4° from the fourth internal data release of the Gaia-ESO survey. A sample of 6300 disk stars was also selected for comparison. Spectrophotometric distances computed via isochrone fitting allowed us to define a sample of stars likely located in the bulge region. Results: From a Gaussian mixture models (GMM) analysis, the bulge MDF is confirmed to be bimodal across the whole sampled area. The relative ratio between the two modes of the MDF changes as a function of b, with metal-poor stars dominating at high latitudes. The metal-rich stars exhibit bar-like kinematics and display a bimodality in their magnitude distribution, a feature which is tightly associated with the X-shape bulge. They overlap with the metal-rich end of the thin disk sequence in the [Mg/Fe] vs. [Fe/H] plane. On the other hand, metal-poor bulge stars have a more isotropic hot kinematics and do
5. Velocity Dispersions Across Bulge Types
International Nuclear Information System (INIS)
Fabricius, Maximilian; Bender, Ralf; Hopp, Ulrich; Saglia, Roberto; Drory, Niv; Fisher, David
2010-01-01
We present first results from a long-slit spectroscopic survey of bulge kinematics in local spiral galaxies. Our optical spectra were obtained at the Hobby-Eberly Telescope with the LRS spectrograph and have a velocity resolution of 45 km/s (σ*), which allows us to resolve the velocity dispersions in the bulge regions of most objects in our sample. We find that the velocity dispersion profiles in morphological classical bulge galaxies are always centrally peaked while the velocity dispersion of morphologically disk-like bulges stays relatively flat towards the center--once strongly barred galaxies are discarded.
6. A New Globular Cluster in the Area of VVVX
Science.gov (United States)
Bica, E.; Minniti, D.; Bonatto, C.; Hempel, M.
2018-06-01
We communicate the discovery of a new globular cluster in the Galaxy that was first detected on WISE/2MASS images and is now confirmed with VVVX photometry. It is a Palomar-like cluster projected at ℓ = 359.15°, b = 5.73°, and may be related to the bulge. We derive an absolute magnitude of MV ≈ -3.3, thus being an underluminous globular cluster. Our analyses provide a reddening of E(B - V) = 1.08 ± 0.18 and a distance to the Sun d⊙ = 6.3 ± 1 kpc, which implies a current position in the bulge volume. The estimated metallicity is [Fe/H] = -1.5 ± 0.25. It adds to the recently discovered faint globular cluster (Minniti 22) and candidates found with VVV, building up expectations of ≈50 globular clusters yet to be discovered in the bulge. We also communicate the discovery of an old open cluster in the same VVVX tile as the globular cluster. The VVVX photometry provided E(B - V) = 0.62 ± 0.1, d⊙ = 7.6 ± 1 kpc, and an age of 1.5 ± 0.3 Gyr. With a height from the plane of ≈0.8 kpc, it adds to nine Gyr-class clusters recently discovered within 0.8 ⩽ Z ⩽ 2.2 kpc, as recently probed in the single VVV tile b201. We suggest that these findings may be disclosing the thick disk at the bulge, which so far has no open cluster counterpart, and hardly any individual star. Thus, the VVV and VVVX surveys are opening new windows for follow-up studies, to employ present and future generations of large aperture telescopes.
7. VLT/UVES abundances of individual stars in the Fornax dwarf spheroidal globular clusters
NARCIS (Netherlands)
Letarte, B.; Hill, V.; Jablonka, P.; Tolstoy, E.; Randich, S; Pasquini, L
2006-01-01
We present high resolution abundance analysis of nine stars belonging to three of the five globular clusters (GCs) of the Fornax dwarf galaxy. The spectra were taken with UVES at a resolution of 43 000. We find them to be slightly more metal-poor than what was previously calculated with other
8. Grid sleeve bulge tool
International Nuclear Information System (INIS)
Phillips, W.D.; Vaill, R.E.
1980-01-01
An improved grid sleeve bulge tool is designed for securing control rod guide tubes to sleeves brazed in a fuel assembly grid. The tool includes a cylinder having an outer diameter less than the internal diameter of the control rod guide tubes. The walls of the cylinder are cut in an axial direction along its length to provide several flexible tines or ligaments. These tines are similar to a fork except they are spaced in a circumferential direction. The end of each alternate tine is equipped with a semispherical projection which extends radially outwardly from the tine surface. A ram or plunger of generally cylindrical configuration and about the same length as the cylinder is designed to fit in and move axially of the cylinder and thereby force the tined projections outwardly when the ram is pulled into the cylinder. The ram surface includes axially extending grooves and plane surfaces which are complimentary to the inner surfaces formed on the tines on the cylinder. As the cylinder is inserted into a control rod guide tube, and the projections on the cylinder placed in a position just below or above a grid strap, the ram is pulled into the cylinder, thus moving the tines and the projections thereon outwardly into contact with the sleeve, to plastically deform both the sleeve and the control rod guide tube, and thereby form four bulges which extend outwardly from the sleeve surface and beyond the outer periphery of the grid peripheral strap. This process is then repeated at the points above the grid to also provide for outwardly projecting surfaces, the result being that the grid is accurately positioned on and mechanically secured to the control rod guide tubes which extend the length of a fuel assembly
9. A HIGH-VELOCITY BULGE RR LYRAE VARIABLE ON A HALO-LIKE ORBIT
International Nuclear Information System (INIS)
Kunder, Andrea; Storm, J.; Rich, R. M.; Hawkins, K.; Poleski, R.; Johnson, C. I.; Shen, J.; Li, Z.-Y.; Cordero, M. J.; Nataf, D. M.; Bono, G.; Walker, A. R.; Koch, A.; De Propris, R.; Udalski, A.; Szymanski, M. K.; Soszynski, I.; Pietrzynski, G.; Ulaczyk, K.; Wyrzykowski, Ł.
2015-01-01
We report on the RR Lyrae variable star, MACHO 176.18833.411, located toward the Galactic bulge and observed within the data from the ongoing Bulge RR Lyrae Radial Velocity Assay, which has the unusual radial velocity of −372 ± 8 km s −1 and true space velocity of −482 ± 22 km s −1 relative to the Galactic rest frame. Located less than 1 kpc from the Galactic center and toward a field at (l, b) = (3, −2.5), this pulsating star has properties suggesting it belongs to the bulge RR Lyrae star population, yet a velocity indicating it is abnormal, at least with respect to bulge giants and red clump stars. We show that this star is most likely a halo interloper and therefore suggest that halo contamination is not insignificant when studying metal-poor stars found within the bulge area, even for stars within 1 kpc of the Galactic center. We discuss the possibility that MACHO 176.18833.411 is on the extreme edge of the bulge RR Lyrae radial velocity distribution, and also consider a more exotic scenario in which it is a runaway star moving through the Galaxy
10. A HIGH-VELOCITY BULGE RR LYRAE VARIABLE ON A HALO-LIKE ORBIT
Energy Technology Data Exchange (ETDEWEB)
Kunder, Andrea; Storm, J. [Leibniz-Institut für Astrophysik Potsdam (AIP), An der Sternwarte 16, D-14482 Potsdam (Germany); Rich, R. M. [Department of Physics and Astronomy, University of California at Los Angeles, Los Angeles, CA 90095-1562 (United States); Hawkins, K. [Institute of Astronomy, Madingley Road, Cambridge CB3 0HA (United Kingdom); Poleski, R. [Department of Astronomy, Ohio State University, 140 W. 18th Avenue, Columbus, OH 43210 (United States); Johnson, C. I. [Harvard-Smithsonian Center for Astrophysics, Cambridge, MA 02138 (United States); Shen, J.; Li, Z.-Y. [Key Laboratory for Research in Galaxies and Cosmology, Shanghai Astronomical Observatory, Chinese Academy of Sciences, 80 Nandan Road, Shanghai 200030 (China); Cordero, M. J. [Astronomisches Rechen-Institut: Zentrum für Astronomie, Mönchhofstr. 12-14, D-69120 Heidelberg (Germany); Nataf, D. M. [Research School of Astronomy and Astrophysics, The Australian National University, Canberra, ACT 2611 (Australia); Bono, G. [Dipartimento di Fisica, Universita di Roma Tor Vergata, Via della Ricerca Scientifica 1, I-00133 Roma (Italy); Walker, A. R. [Cerro Tololo Inter-American Observatory, National Optical Astronomy Observatory, Casilla 603, La Serena (Chile); Koch, A. [Landessternwarte, Zentrum für Astronomie der Universität Heidelberg, Königstuhl 12, D-69117 Heidelberg (Germany); De Propris, R. [Finnish Centre for Astronomy with ESO (FINCA), University of Turku, Turku (Finland); Udalski, A.; Szymanski, M. K.; Soszynski, I.; Pietrzynski, G.; Ulaczyk, K.; Wyrzykowski, Ł. [Warsaw University Observatory, Al. Ujazdowskie 4, 00-478 Warszawa (Poland); and others
2015-07-20
We report on the RR Lyrae variable star, MACHO 176.18833.411, located toward the Galactic bulge and observed within the data from the ongoing Bulge RR Lyrae Radial Velocity Assay, which has the unusual radial velocity of −372 ± 8 km s{sup −1} and true space velocity of −482 ± 22 km s{sup −1} relative to the Galactic rest frame. Located less than 1 kpc from the Galactic center and toward a field at (l, b) = (3, −2.5), this pulsating star has properties suggesting it belongs to the bulge RR Lyrae star population, yet a velocity indicating it is abnormal, at least with respect to bulge giants and red clump stars. We show that this star is most likely a halo interloper and therefore suggest that halo contamination is not insignificant when studying metal-poor stars found within the bulge area, even for stars within 1 kpc of the Galactic center. We discuss the possibility that MACHO 176.18833.411 is on the extreme edge of the bulge RR Lyrae radial velocity distribution, and also consider a more exotic scenario in which it is a runaway star moving through the Galaxy.
11. Variable stars in the VVV globular clusters. I. 2MASS-GC 02 and Terzan 10
Energy Technology Data Exchange (ETDEWEB)
Alonso-García, Javier; Dékány, István; Catelan, Márcio; Ramos, Rodrigo Contreras; Gran, Felipe; Leyton, Paul; Minniti, Dante [Instituto de Astrofísica, Facultad de Física, Pontificia Universidad Católica de Chile, Av. Vicuña Mackenna 4860, 782-0436 Macul, Santiago (Chile); Amigo, Pía, E-mail: [email protected], E-mail: [email protected], E-mail: [email protected], E-mail: [email protected], E-mail: [email protected], E-mail: [email protected], E-mail: [email protected], E-mail: [email protected] [Millennium Institute of Astrophysics, Av. Vicuña Mackenna 4860, 782-0436 Macul, Santiago (Chile)
2015-03-01
The VISTA Variables in the Vía Láctea (VVV) ESO Public Survey is opening a new window to study inner Galactic globular clusters (GCs) using their variable stars. These GCs have been neglected in the past due to the difficulties caused by the presence of elevated extinction and high field stellar densities in their lines of sight. However, the discovery and study of any present variables in these clusters, especially RR Lyrae stars, can help to greatly improve the accuracy of their physical parameters. It can also help to shed some light on the questions raised by the intriguing Oosterhoff dichotomy in the Galactic GC system. In a series of papers we plan to explore variable stars in the GCs falling inside the field of the VVV survey. In this first paper, we search for and study the variables present in two highly reddened, moderately metal-poor, faint, inner Galactic GCs: 2MASS-GC 02 and Terzan 10. We report the discovery of sizable populations of RR Lyrae stars in both GCs. We use near-infrared period–luminosity relations to determine the color excess of each RR Lyrae star, from which we obtain both accurate distances to the GCs and the ratios of the selective-to-total extinction in their directions. We find the extinction toward both clusters to be elevated, non-standard, and highly differential. We also find both clusters to be closer to the Galactic center than previously thought, with Terzan 10 being on the far side of the Galactic bulge. Finally, we discuss their Oosterhoff properties, and conclude that both clusters stand out from the dichotomy followed by most Galactic GCs.
12. Living with a parastomal bulge
DEFF Research Database (Denmark)
Krogsgaard, Marianne; Thomsen, Thordis; Vinther, Anders
2017-01-01
was performed using a phenomenological-hermeneutic approach. FINDINGS: The bulge caused different unfamiliar bodily sensations that interacted with patients' everyday lives. Some but not all of these sensations were modifiable. As the bulge and the ostomy changed size and shape, patients had to adjust...... and readjust stoma care continuously. The physical change called for patients' awareness and posed a threat to patients' control of the ostomy and challenged stoma self-care. The bulge caused a bodily asymmetry that deformed the patients' bodies in a way that exceeded the perceived alteration already caused...... is limited and highly warranted to improve clinical outcome. RELEVANCE TO CLINICAL PRACTICE: The ever-changing bulge posed a threat to patients' control of the ostomy and required specific care from the stoma therapist. Needs-based access to counselling, advice and supplementary materials is important....
13. Globular Clusters for Faint Galaxies
Science.gov (United States)
Kohler, Susanna
2017-07-01
.The most striking feature of these galaxies, however, is that they are surrounded by a large number of compact objects that appear to be globular clusters. From the observations, Van Dokkum and collaborators estimate that Dragonfly 44 and DFX1 have approximately 74 and 62 globulars, respectively significantly more than the low numbers expected for galaxies of this luminosity.Armed with this knowledge, the authors went back and looked at archival observations of 14 other UDGs also located in the Coma cluster. They found that these smaller and fainter galaxies dont host quite as many globular clusters as Dragonfly 44 and DFX1, but more than half also show significant overdensities of globulars.Main panel: relation between the number of globular clusters and total absolute magnitude for Coma UDGs (solid symbols) compared to normal galaxies (open symbols). Top panel: relation between effective radius and absolute magnitude. The UDGs are significantly larger and have more globular clusters than normal galaxies of the same luminosity. [van Dokkum et al. 2017]Evidence of FailureIn general, UDGs appear to have more globular clusters than other galaxies of the same total luminosity, by a factor of nearly 7. These results are consistent with the scenario in which UDGs are failed galaxies: they likely have the halo mass to have formed a large number of globular clusters, but they were quenched before they formed a disk and bulge. Because star formation never got going in UDGs, they are now much dimmer than other galaxies of the same size.The authors suggest that the next step is to obtain dynamical measurements of the UDGs to determine whether these faint galaxies really do have the halo mass suggested by their large numbers of globulars. Future observations will continue to help us pin down the origin of these dim giants.CitationPieter van Dokkum et al 2017 ApJL 844 L11. doi:10.3847/2041-8213/aa7ca2
14. The Most Metal-poor Stars in the Large Magellanic Cloud
Science.gov (United States)
Schlaufman, Kevin C.
2018-06-01
The chemical abundances of the most metal-poor stars in a galaxy can be used to investigate the earliest stages of its formation and chemical evolution. Differences between the abundances of the most metal-poor stars in the Milky Way and in its satellite dwarf galaxies have been noted and provide the strongest available constraints on the earliest stages of general galactic chemical evolution models. However, the masses of the Milky Way and its satellite dwarf galaxies differ by four orders of magnitude, leaving a gap in our knowledge of the early chemical evolution of intermediate mass galaxies like the Magellanic Clouds. To close that gap, we have initiated a survey of the metal-poor stellar populations of the Magellanic Clouds using the mid-infrared metal-poor star selection of Schlaufman & Casey (2014). We have discovered the three most metal-poor giant stars known in the Large Magellanic Cloud (LMC) and reobserved the previous record holder. The stars have metallicities in the range -2.70 < [Fe/H] < -2.00 and three show r-process enhancement: one has [Eu II/Fe] = +1.65 and two others have [Eu II/Fe] = +0.65. The probability that four randomly selected very metal-poor stars in the halo of the Milky Way are as r-process enhanced is 0.0002. For that reason, the early chemical enrichment of the heaviest elements in the LMC and Milky Way were qualitatively different. It is also suggestive of a possible chemical link between the LMC and the ultra-faint dwarf galaxies nearby with evidence of r-process enhancement (e.g., Reticulum II and Tucana III). Like Reticulum II, the most metal-poor star in our LMC sample is the only one not enhanced in r-process elements.
15. SPECTROSCOPIC STUDIES OF EXTREMELY METAL-POOR STARS WITH THE SUBARU HIGH DISPERSION SPECTROGRAPH. V. THE Zn-ENHANCED METAL-POOR STAR BS 16920-017
International Nuclear Information System (INIS)
Honda, Satoshi; Aoki, Wako; Beers, Timothy C.; Takada-Hidai, Masahide
2011-01-01
We report Zn abundances for 18 very metal-poor stars studied in our previous work, covering the metallicity range -3.2< [Fe/H] <-2.5. The [Zn/Fe] values of most stars show an increasing trend with decreasing [Fe/H] in this metallicity range, confirming the results found by previous studies. However, the extremely metal-poor star BS 16920-017([Fe/H] =-3.2) exhibits a significantly high [Zn/Fe] ratio ([Zn/Fe] = +1.0). Comparison of the chemical abundances of this object with HD 4306, which has similar atmospheric parameters to BS 16920-017, clearly demonstrates a deficiency of α elements and neutron-capture elements in this star, along with enhancements of Mn and Ni, as well as Zn. The association with a hypernova explosion that has been proposed to explain the high Zn abundance ratios found in extremely metal-poor stars is a possible explanation, although further studies are required to fully interpret the abundance pattern of this object.
16. New ultra metal-poor stars from SDSS: follow-up GTC medium-resolution spectroscopy
Science.gov (United States)
Aguado, D. S.; Allende Prieto, C.; González Hernández, J. I.; Rebolo, R.; Caffau, E.
2017-07-01
Context. The first generation of stars formed in the Galaxy left behind the chemical signatures of their nucleosynthesis in the interstellar medium, visible today in the atmospheres of low-mass stars that formed afterwards. Sampling the chemistry of those low-mass provides insight into the first stars. Aims: We aim to increase the samples of stars with extremely low metal abundances, identifying ultra metal-poor stars from spectra with modest spectral resolution and signal-to-noise ratio (S/N). Achieving this goal involves deriving reliable metallicities and carbon abundances from such spectra. Methods: We carry out follow-up observations of faint, V > 19, metal-poor candidates selected from SDSS spectroscopy and observed with the Optical System for Imaging and low-Intermediate-Resolution Integrated Spectroscopy (OSIRIS) at GTC. The SDSS and follow-up OSIRIS spectra were analyzed using the FERRE code to derive effective temperatures, surface gravities, metallicities and carbon abundances. In addition, a well-known extremely metal-poor star has been included in our sample to calibrate the analysis methodology. Results: We observed and analyzed five metal-poor candidates from modest-quality SDSS spectra. All stars in our sample have been confirmed as extremely metal-poor stars, in the [Fe/H] Palma. Programme ID GTC2E-16A and ID GTC65-16B.
17. VLT/UVES spectroscopy of individual stars in three globular clusters in the Fornax dwarf spheroidal galaxy
NARCIS (Netherlands)
Letarte, B; Hill, [No Value; Jablonka, P; Tolstoy, E; Francois, P; Meylan, G
We present a high resolution ( R similar to 43 000) abundance analysis of a total of nine stars in three of the five globular clusters associated with the nearby Fornax dwarf spheroidal galaxy. These three clusters ( 1, 2 and 3) trace the oldest, most metal-poor stellar populations in Fornax. We
18. DETAILED ABUNDANCES OF TWO VERY METAL-POOR STARS IN DWARF GALAXIES
Energy Technology Data Exchange (ETDEWEB)
Kirby, Evan N.; Cohen, Judith G. [Department of Astronomy, California Institute of Technology, 1200 E. California Blvd., MC 249-17, Pasadena, CA 91125 (United States)
2012-12-01
The most metal-poor stars in dwarf spheroidal galaxies (dSphs) can show the nucleosynthetic patterns of one or a few supernovae (SNe). These SNe could have zero metallicity, making metal-poor dSph stars the closest surviving links to Population III stars. Metal-poor dSph stars also help to reveal the formation mechanism of the Milky Way (MW) halo. We present the detailed abundances from Keck/HIRES spectroscopy for two very metal-poor stars in two MW dSphs. One star, in the Sculptor dSph, has [Fe I/H] = -2.40. The other star, in the Ursa Minor dSph, has [Fe I/H] = -3.16. Both stars fall in the previously discovered low-metallicity, high-[{alpha}/Fe] plateau. Most abundance ratios of very metal-poor stars in these two dSphs are largely consistent with very metal-poor halo stars. However, the abundances of Na and some r-process elements lie at the lower end of the envelope defined by inner halo stars of similar metallicity. We propose that the metallicity dependence of SN yields is the cause. The earliest SNe in low-mass dSphs have less gas to pollute than the earliest SNe in massive halo progenitors. As a result, dSph stars at -3 < [Fe/H] < -2 sample SNe with [Fe/H] << -3, whereas halo stars in the same metallicity range sample SNe with [Fe/H] {approx} -3. Consequently, enhancements in [Na/Fe] and [r/Fe] were deferred to higher metallicity in dSphs than in the progenitors of the inner halo.
19. Oxygen abundance in metal-poor dwarfs, derived from the forbidden line
Science.gov (United States)
Spite, M.; Spite, F.
1991-12-01
The oxygen abundance is redetermined in a few metal-poor dwarfs, using the oxygen forbidden line at 630 nm rather than the oxygen triplet at 777 nm previously used by Abia and Rebolo (1989). The ratios form O/Fe are clearly lower than the previous ones and are in agreement with the ratios found in the metal-poor red giants, suggesting that no real difference exists between dwarfs and giants. Finally, it can be argued that, pending the acquisition of additional information, the oxygen abundances derived from the forbidden line are more reliable than the abundances found from the triplet.
20. Improved guide tube bulge tool
International Nuclear Information System (INIS)
Vaill, R.E.; Phillips, W.D.
1979-01-01
A guide tube bulge tool for securing control rod guide tubes to a fuel assembly grid, includes a cylinder having several flexible tines each of which is equipped with a semispherical radially outwardly extending projection. A tapered ram fits into the cylinder so as to force the tines outwardly when the ram is pulled into the cylinder while supporting the other tines. (UK)
1. Extremely metal-poor stars in classical dwarf spheroidal galaxies : Fornax, Sculptor, and Sextans
NARCIS (Netherlands)
Tafelmeyer, M.; Jablonka, P.; Hill, V.; Shetrone, M.; Tolstoy, E.; Irwin, M. J.; Battaglia, G.; Helmi, A.; Starkenburg, E.; Venn, K. A.; Abel, T.; Francois, P.; Kaufer, A.; North, P.; Primas, F.; Szeifert, T.
2010-01-01
We present the results of a dedicated search for extremely metal-poor stars in the Fornax, Sculptor, and Sextans dSphs. Five stars were selected from two earlier VLT/Giraffe and HET/HRS surveys and subsequently followed up at high spectroscopic resolution with VLT/UVES. All of them turned out to
2. Extremely metal-poor stars in classical dwarf spheroidal galaxies: Fornax, Sculptor, and Sextans
NARCIS (Netherlands)
Tafelmeyer, M.; Jablonka, P.; Hill, V.; Shetrone, M.; Tolstoy, E.; Irwin, M. J.; Battaglia, G.; Helmi, A.; Starkenburg, E.; Venn, K. A.; Abel, T.; Francois, P.; Kaufer, A.; North, P.; Primas, F.; Szeifert, T.
2010-01-01
We present the results of a dedicated search for extremely metal-poor stars in the Fornax, Sculptor, and Sextans dSphs. Five stars were selected from two earlier VLT/Giraffe and HET/HRS surveys and subsequently followed up at high spectroscopic resolution with VLT/UVES. All of them turned out to
3. Chemical composition of extremely metal-poor stars in the Sextans dwarf spheroidal galaxy
NARCIS (Netherlands)
Aoki, W.; Arimoto, N.; Sadakane, K.; Tolstoy, E.; Battaglia, G.; Jablonka, P.; Shetrone, M.; Letarte, B.; Irwin, M.; Hill, V.; Francois, P.; Venn, K.; Primas, F.; Helmi, A.; Kaufer, A.; Tafelmeyer, M.; Szeifert, T.; Babusiaux, C.
Context. Individual stars in dwarf spheroidal galaxies around the Milky Way Galaxy have been studied both photometrically and spectroscopically. Extremely metal-poor stars among them are very valuable because they should record the early enrichment in the Local Group. However, our understanding of
4. Formation and Evolution of Carbon-Enhanced Metal-Poor Stars
NARCIS (Netherlands)
Abate, C.; Pols, O.R.; Izzard, R.G.
2010-01-01
Very metal-poor stars observed in the Galactic halo constitute a window on the primordial conditions under which the Milky Way was formed. A large fraction of these stars show a great enhancement in the abundance of carbon and other heavy elements. One explanation of this observation is that these
5. Keck Spectroscopy of Globular Clusters in the Elliptical Galaxy NGC 3610
OpenAIRE
Strader, Jay; Brodie, Jean P.; Schweizer, Francois; Larsen, Soeren S.; Seitzer, Patrick
2002-01-01
We present moderate-resolution Keck spectra of nine candidate globular clusters in the possible merger-remnant elliptical galaxy NGC 3610. Eight of the objects appear to be bona fide globular clusters of NGC 3610. We find that two of the clusters belong to an old metal-poor population, five to an old metal-rich population, and only one to an intermediate-age metal-rich population. The estimated age of the intermediate-age cluster is 1-5 Gyr, which is in agreement with earlier estimates of the...
6. The Luminosity Functions of Old and Intermediate-Age Globular Clusters in NGC 3610
OpenAIRE
Whitmore, B. C.; Schweizer, F.; Kundu, A.; Miller, B. W.
2002-01-01
The WFPC2 Camera on board HST has been used to obtain high-resolution images of NGC 3610, a dynamically young elliptical galaxy. These observations supersede shorter, undithered HST observations where an intermediate-age population of globular clusters was first discovered. The new observations show the bimodal color distribution of globular clusters more clearly, with peaks at (V-I)o = 0.95 and 1.17. The luminosity function (LF) of the blue, metal-poor population of clusters in NGC 3610 turn...
7. Lithium-rich very metal-poor stars discovered with LAMOST and Subaru
Science.gov (United States)
Aoki, Wako; Li, Haining; Matsuno, Tadafumi; Kumar, Yerra Bharat; Shi, Jianrong; Suda, Takuma; Zhao, Gang
2018-04-01
Lithium is a unique element that is produced in the Big Bang nucleosynthesis but is destroyed by nuclear reactions inside stars. As a result, almost constant lithium abundance is found in unevolved main-sequence metal-poor stars, although the value is systematically lower than that expected from the standard Big Bang nucleosynthesis models, whereas lithium abundances of red giants are more than one order of magnitudes lower than those of unevolved stars. There are, however, a small fraction of metal-poor stars that show extremely high lithium abundances, which is not explained by standard stellar evolution models. We have discovered 12 new very metal-poor stars that have enhancement of lithium by more than 10 times compared with typical metal-poor stars at similar evolutionary stages by the large-scale spectroscopic survey with LAMOST and the follow-up high-resolution spectroscopy with the Subaru Telescope. The sample shows a wide distribution of evolutionary stages from subgiants to red giants with the metallicity of -3.3 <[Fe/H]< -1.6. The chemical abundance ratios of other elements have been obtained by our spectroscopic study, and an estimate of the binary frequency by radial velocity monitoring is ongoing. The observational results provide new constraints on the scenarios to explain lithium-rich metal-poor stars, such as extra mixing during the evolution along the red giant branch, mass-transfer from a companion AGB star, and engulfment of planet-like objects. These explanations are very unlikely for at least some of lithium-rich objects in our sample, suggesting a new mechanism that enhances lithium during the low-mass star evolution.
8. The s-Process Nucleosynthesis in Extremely Metal-Poor Stars as the Generating Mechanism of Carbon Enhanced Metal-Poor Stars
Science.gov (United States)
Suda, Takuma; Yamada, Shimako; Fujimoto, Masayuki Y.
The origin of carbon-enhanced metal-poor (CEMP) stars plays a key role in characterising the formation and evolution of the first stars and the Galaxy since the extremely-metal-poor (EMP) stars with [Fe/H] ≤ -2.5 share the common features of carbon enhancement in their surface chemical compositions. The origin of these stars is not yet established due to the controversy of the origin of CEMP stars without the enhancement of s-process element abundances, i.e., so called CEMP-no stars. In this paper, we elaborate the s-process nucleosynthesis in the EMP AGB stars and explore the origin of CEMP stars. We find that the efficiency of the s-process is controlled by O rather than Fe at [Fe/H] ≲ -2. We demonstrate that the relative abundances of Sr, Ba, Pb to C are explained in terms of the wind accretion from AGB stars in binary systems.
9. Chemically Dissected Rotation Curves of the Galactic Bulge from Main-sequence Proper Motions
Science.gov (United States)
Clarkson, William I.; Calamida, Annalisa; Sahu, Kailash C.; Brown, Thomas M.; Gennaro, Mario; Avila, Roberto J.; Valenti, Jeff; Debattista, Victor P.; Rich, R. Michael; Minniti, Dante; Zoccali, Manuela; Aufdemberge, Emily R.
2018-05-01
We report results from an exploratory study implementing a new probe of Galactic evolution using archival Hubble Space Telescope imaging observations. Precise proper motions are combined with photometric relative metallicity and temperature indices, to produce the proper-motion rotation curves of the Galactic bulge separately for metal-poor and metal-rich main-sequence samples. This provides a “pencil-beam” complement to large-scale wide-field surveys, which to date have focused on the more traditional bright giant branch tracers. We find strong evidence that the Galactic bulge rotation curves drawn from “metal-rich” and “metal-poor” samples are indeed discrepant. The “metal-rich” sample shows greater rotation amplitude and a steeper gradient against line-of-sight distance, as well as possibly a stronger central concentration along the line of sight. This may represent a new detection of differing orbital anisotropy between metal-rich and metal-poor bulge objects. We also investigate selection effects that would be implied for the longitudinal proper-motion cut often used to isolate a “pure-bulge” sample. Extensive investigation of synthetic stellar populations suggests that instrumental and observational artifacts are unlikely to account for the observed rotation curve differences. Thus, proper-motion-based rotation curves can be used to probe chemodynamical correlations for main-sequence tracer stars, which are orders of magnitude more numerous in the Galactic bulge than the bright giant branch tracers. We discuss briefly the prospect of using this new tool to constrain detailed models of Galactic formation and evolution. Based on observations made with the NASA/ESA Hubble Space Telescope and obtained from the data archive at the Space Telescope Science Institute. STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.
10. SPECTROSCOPIC ANALYSIS OF METAL-POOR STARS FROM LAMOST: EARLY RESULTS
International Nuclear Information System (INIS)
Li, Hai-Ning; Zhao, Gang; Wang, Liang; Wang, Wei; Yuan, Hailong; Christlieb, Norbert; Zhang, Yong; Hou, Yonghui
2015-01-01
We report on early results from a pilot program searching for metal-poor stars with LAMOST and follow-up high-resolution observation acquired with the MIKE spectrograph attached to the Magellan II telescope. We performed detailed abundance analysis for eight objects with iron abundances [Fe/H] < -2.0, including five extremely metal-poor (EMP; [Fe/H] < -3.0) stars with two having [Fe/H] < -3.5. Among these objects, three are newly discovered EMP stars, one of which is confirmed for the first time with high-resolution spectral observations. Three program stars are regarded as carbon-enhanced metal-poor (CEMP) stars, including two stars with no enhancement in their neutron-capture elements, which thus possibly belong to the class of CEMP-no stars; one of these objects also exhibits significant enhancement in nitrogen, and is thus a potential carbon and nitrogen-enhanced metal-poor star. The [X/Fe] ratios of the sample stars generally agree with those reported in the literature for other metal-poor stars in the same [Fe/H] range. We also compared the abundance patterns of individual program stars with the average abundance pattern of metal-poor stars and find only one chemically peculiar object with abundances of at least two elements (other than C and N) showing deviations larger than 0.5 dex. The distribution of [Sr/Ba] versus [Ba/H] agrees that an additional nucleosynthesis mechanism is needed aside from a single r-process. Two program stars with extremely low abundances of Sr and Ba support the prospect that both main and weak r-processes may have operated during the early phase of Galactic chemical evolution. The distribution of [C/N] shows that there are two groups of carbon-normal giants with different degrees of mixing. However, it is difficult to explain the observed behavior of the [C/N] of the nitrogen-enhanced unevolved stars based on current data
11. A KECK HIRES DOPPLER SEARCH FOR PLANETS ORBITING METAL-POOR DWARFS. II. ON THE FREQUENCY OF GIANT PLANETS IN THE METAL-POOR REGIME
International Nuclear Information System (INIS)
Sozzetti, Alessandro; Torres, Guillermo; Latham, David W.; Stefanik, Robert P.; Korzennik, Sylvain G.; Boss, Alan P.; Carney, Bruce W.; Laird, John B.
2009-01-01
We present an analysis of three years of precision radial velocity (RV) measurements of 160 metal-poor stars observed with HIRES on the Keck 1 telescope. We report on variability and long-term velocity trends for each star in our sample. We identify several long-term, low-amplitude RV variables worthy of followup with direct imaging techniques. We place lower limits on the detectable companion mass as a function of orbital period. Our survey would have detected, with a 99.5% confidence level, over 95% of all companions on low-eccentricity orbits with velocity semiamplitude K ∼> 100 m s -1 , or M p sin i ∼> 3.0 M J (P/yr) (1/3) , for orbital periods P ∼ p p ≅ 1%. Our results can usefully inform theoretical studies of the process of giant-planet formation across two orders of magnitude in metallicity.
12. Short-term X-ray variability of the globular cluster source 4U 1820 - 30 (NGC 6624)
Science.gov (United States)
Stella, L.; Kahn, S. M.; Grindlay, J. E.
1984-01-01
Analytical techniques for improved identification of the temporal and spectral variability properties of globular cluster and galactic bulge X-ray sources are described in terms of their application to a large set of observations of the source 4U 1820 - 30 in the globular cluster NGC 6624. The autocorrelation function, cross-correlations, time skewness function, erratic periodicities, and pulse trains are examined. The results are discussed in terms of current models with particular emphasis on recent accretion disk models. It is concluded that the analyzed observations provide the first evidence for shot-noise variability in a globular cluster X-ray source.
13. Chemical composition of extremely metal-poor stars in the Sextans dwarf spheroidal galaxy
OpenAIRE
Aoki, W.; Arimoto, N.; Sadakane, K.; Tolstoy, E.; Battaglia, G.; Jablonka, P.; Shetrone, M.; Letarte, B.; Irwin, M.; Hill, V.; Francois, P.; Venn, K.; Primas, F.; Helmi, A.; Kaufer, A.
2009-01-01
Context. Individual stars in dwarf spheroidal galaxies around the Milky Way Galaxy have been studied both photometrically and spectroscopically. Extremely metal-poor stars among them are very valuable because they should record the early enrichment in the Local Group. However, our understanding of these stars is very limited because detailed chemical abundance measurements are needed from high resolution spectroscopy. Aims. To constrain the formation and chemical evolution of dwarf galaxi...
14. Metal-Poor Stars and the Chemical Enrichment of the Universe
OpenAIRE
Frebel, Anna; Norris, John E.
2011-01-01
Metal-poor stars hold the key to our understanding of the origin of the elements and the chemical evolution of the Universe. This chapter describes the process of discovery of these rare stars, the manner in which their surface abundances (produced in supernovae and other evolved stars) are determined from the analysis of their spectra, and the interpretation of their abundance patterns to elucidate questions of origin and evolution. More generally, studies of these stars contribute to other ...
15. Lithium isotopic abundances in metal-poor stars: a problem for standard big bang nucleosynthesis?
International Nuclear Information System (INIS)
Nissen, P.E.; Asplund, M.; Lambert, D.L.; Primas, F.; Smith, V.V.
2005-01-01
Spectral obtained with VLT/UVES suggest the existence of the 6 Li isotope in several metal-poor stars at a level that challenges ideas about its synthesis. The 7 Li abundance is, on the other hand, a factor of three lower than predicted by standard Big Bang nucleosynthesis theory. Both problems may be explained if decaying suppersymmetric particles affect the synthesis of light elements in the Big Bang. (orig.)
16. Lithium evolution in metal-poor stars: from Pre-Main Sequence to the Spite plateau
OpenAIRE
Fu, Xiaoting; Bressan, Alessandro; Molaro, Paolo; Marigo, Paola
2015-01-01
Lithium abundance derived in metal-poor main sequence stars is about three times lower than the value of primordial Li predicted by the standard Big Bang nucleosynthesis when the baryon density is taken from the CMB or the deuterium measurements. This disagreement is generally referred as the lithium problem. We here reconsider the stellar Li evolution from the pre-main sequence to the end of the main sequence phase by introducing the effects of convective overshooting and residual mass accre...
17. Empirical Determination of Dark Matter Velocities Using Metal-Poor Stars.
Science.gov (United States)
Herzog-Arbeitman, Jonah; Lisanti, Mariangela; Madau, Piero; Necib, Lina
2018-01-26
The Milky Way dark matter halo is formed from the accretion of smaller subhalos. These sub-units also harbor stars-typically old and metal-poor-that are deposited in the Galactic inner regions by disruption events. In this Letter, we show that the dark matter and metal-poor stars in the Solar neighborhood share similar kinematics due to their common origin. Using the high-resolution eris simulation, which traces the evolution of both the dark matter and baryons in a realistic Milky Way analog galaxy, we demonstrate that metal-poor stars are indeed effective tracers for the local, virialized dark matter velocity distribution. The local dark matter velocities can therefore be inferred from observations of the stellar halo made by the Sloan Digital Sky Survey within 4 kpc of the Sun. This empirical distribution differs from the standard halo model in important ways and suggests that the bounds on the spin-independent scattering cross section may be weakened for dark matter masses below ∼10 GeV. Data from Gaia will allow us to further refine the expected distribution for the smooth dark matter component, and to test for the presence of local substructure.
18. The Pristine survey - I. Mining the Galaxy for the most metal-poor stars
Science.gov (United States)
Starkenburg, Else; Martin, Nicolas; Youakim, Kris; Aguado, David S.; Allende Prieto, Carlos; Arentsen, Anke; Bernard, Edouard J.; Bonifacio, Piercarlo; Caffau, Elisabetta; Carlberg, Raymond G.; Côté, Patrick; Fouesneau, Morgan; François, Patrick; Franke, Oliver; González Hernández, Jonay I.; Gwyn, Stephen D. J.; Hill, Vanessa; Ibata, Rodrigo A.; Jablonka, Pascale; Longeard, Nicolas; McConnachie, Alan W.; Navarro, Julio F.; Sánchez-Janssen, Rubén; Tolstoy, Eline; Venn, Kim A.
2017-11-01
We present the Pristine survey, a new narrow-band photometric survey focused on the metallicity-sensitive Ca H&K lines and conducted in the Northern hemisphere with the wide-field imager MegaCam on the Canada-France-Hawaii Telescope. This paper reviews our overall survey strategy and discusses the data processing and metallicity calibration. Additionally we review the application of these data to the main aims of the survey, which are to gather a large sample of the most metal-poor stars in the Galaxy, to further characterize the faintest Milky Way satellites, and to map the (metal-poor) substructure in the Galactic halo. The current Pristine footprint comprises over 1000 deg2 in the Galactic halo ranging from b ˜ 30° to ˜78° and covers many known stellar substructures. We demonstrate that, for Sloan Digital Sky Survey (SDSS) stellar objects, we can calibrate the photometry at the 0.02-mag level. The comparison with existing spectroscopic metallicities from SDSS/Sloan Extension for Galactic Understanding and Exploration (SEGUE) and Large Sky Area Multi-Object Fiber Spectroscopic Telescope shows that, when combined with SDSS broad-band g and I photometry, we can use the CaHK photometry to infer photometric metallicities with an accuracy of ˜0.2 dex from [Fe/H] = -0.5 down to the extremely metal-poor regime ([Fe/H] < -3.0). After the removal of various contaminants, we can efficiently select metal-poor stars and build a very complete sample with high purity. The success rate of uncovering [Fe/H]SEGUE < -3.0 stars among [Fe/H]Pristine < -3.0 selected stars is 24 per cent, and 85 per cent of the remaining candidates are still very metal poor ([Fe/H]<-2.0). We further demonstrate that Pristine is well suited to identify the very rare and pristine Galactic stars with [Fe/H] < -4.0, which can teach us valuable lessons about the early Universe.
19. Globular Clusters Shine in a Galaxy Lacking Dark Matter
Science.gov (United States)
Kohler, Susanna
2018-04-01
( 9.3 billion years) population and metal-poor.Rethinking Formation TheoriesThe long-standing picture of galaxies has closely connected old, metal-poor globular clusters to the galaxies dark-matter halos. Past studies have found that the ratio between the total globular-cluster mass and the overall mass of a galaxy (i.e., all dark + baryonic matter) holds remarkably constant across galaxies its typically 3 x 10-5. This has led researchers to believe that properties of the dark-matter halo may determine globular-cluster formation.The luminosity function of the compact objects in NGC 1052DF2. The red and blue curves show the luminosity functions of globular clusters in the Milky Way and in the typical ultra-diffuse galaxies of the Coma cluster, respectively. NGC 1052DF2s globular clusters peak at a significantly higher luminosity. [Adapted from van Dokkum et al. 2018]NGC 1052DF2, with a globular-cluster mass thats 3% of the mass of the galaxy ( 1000 times the expected ratio!), defies this picture. This unusual galaxy therefore demonstrates that the usual relation between globular-cluster mass and total galaxy mass probably isnt due to a fundamental connection between the dark-matter halo and globular-cluster formation. Instead, van Dokkum and collaborators suggest, globular-cluster formation may ultimately be a baryon-driven process.As with all unexpected discoveries in astronomy, we must now determine whether NGC 1052DF2 is simply a fluke, or whether it represents a new class of object we can expect to find more of. Either way, this unusual galaxy is forcing us to rethink what we know about galaxies and the star clusters they host.CitationPieter van Dokkum et al 2018 ApJL 856 L30. doi:10.3847/2041-8213/aab60b
20. Rates of collapse and evaporation of globular clusters
Science.gov (United States)
Hut, Piet; Djorgovski, S.
1992-01-01
Observational estimates of the dynamical relaxation times of Galactic globular clusters are used here to estimate the present rate at which core collapse and evaporation are occurring in them. A core collapse rate of 2 +/- 1 per Gyr is found, which for a Galactic age of about 12 Gyr agrees well with the fact that 27 clusters have surface brightness profiles with the morphology expected for the postcollapse phase. A destruction and evaporation rate of 5 +/- 3 per Gyr is found, suggesting that a significant fraction of the Galaxy's original complement of globular clusters have perished through the combined effects of mechanisms such as relaxation-driven evaporation and shocking due to interaction with the Galactic disk and bulge.
1. CHEMICAL ABUNDANCE ANALYSIS OF A NEUTRON-CAPTURE ENHANCED RED GIANT IN THE BULGE PLAUT FIELD
Energy Technology Data Exchange (ETDEWEB)
Johnson, Christian I.; Rich, R. Michael [Department of Physics and Astronomy, UCLA, 430 Portola Plaza, Box 951547, Los Angeles, CA 90095-1547 (United States); McWilliam, Andrew, E-mail: [email protected], E-mail: [email protected], E-mail: [email protected], E-mail: [email protected] [Observatories of the Carnegie Institution of Washington, Pasadena, CA 91101 (United States)
2013-09-20
We present chemical abundances for 27 elements ranging from oxygen to erbium in the metal-poor ([Fe/H] = –1.67) bulge red giant branch star 2MASS 18174532-3353235. The results are based on equivalent width and spectrum synthesis analyses of a high-resolution (R ∼ 30, 000) spectrum obtained with the Magellan-MIKE spectrograph. While the light (Z ∼< 30) element abundance patterns match those of similar metallicity bulge and halo stars, the strongly enhanced heavy element abundances are more similar to 'r-II' halo stars (e.g., CS 22892-052) typically found at [Fe/H] ∼< – 2.5. We find that the heaviest elements (Z ≥ 56) closely follow the scaled-solar r-process abundance pattern. We do not find evidence supporting significant s-process contributions; however, the intermediate mass elements (e.g., Y and Zr) appear to have been produced through a different process than the heaviest elements. The light and heavy element abundance patterns of 2MASS 18174532-3353235 are in good agreement with the more metal-poor r-process enhanced stars CS 22892-052 and BD +17{sup o}3248. 2MASS 18174532-3353235 also shares many chemical characteristics with the similar metallicity but comparatively α-poor Ursa Minor dwarf galaxy giant COS 82. Interestingly, the Mo and Ru abundances of 2MASS 18174532-3353235 are also strongly enhanced and follow a similar trend recently found to be common in moderately metal-poor main-sequence turn-off halo stars.
2. The most metal-poor damped Lyα systems: insights into chemical evolution in the very metal-poor regime
DEFF Research Database (Denmark)
Cooke, Ryan; Pettini, Max; Steidel, Charles C.
2011-01-01
We present a high spectral resolution survey of the most metal-poor damped Lyα absorption systems (DLAs) aimed at probing the nature and nucleosynthesis of the earliest generations of stars. Our survey comprises 22 systems with iron abundance less than 1/100 solar; observations of seven...... agreement with the values measured in Galactic halo stars when the oxygen abundance is measured from the [O i] λ6300 line. We speculate that such good agreement in the observed abundance trends points to a universal origin for these metals. In view of this agreement, we construct the abundance pattern...... the near-solar values of C/O in DLAs at the lowest metallicities probed, and find that their distribution is in agreement with that seen in Galactic halo stars. We find that the O/Fe ratio in VMP DLAs is essentially constant, and shows very little dispersion, with a mean [〈O/Fe〉]=+0.39 ± 0.12, in good...
3. Are the Formation and Abundances of Metal-poor Stars the Result of Dust Dynamics?
Energy Technology Data Exchange (ETDEWEB)
Hopkins, Philip F. [TAPIR, Mailcode 350-17, California Institute of Technology, Pasadena, CA 91125 (United States); Conroy, Charlie, E-mail: [email protected] [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States)
2017-02-01
Large dust grains can fluctuate dramatically in their local density, relative to the gas, in neutral turbulent disks. Small, high-redshift galaxies (before reionization) represent ideal environments for this process. We show via simple arguments and simulations that order-of-magnitude fluctuations are expected in local abundances of large grains (>100 Å) under these conditions. This can have important consequences for star formation and stellar metal abundances in extremely metal-poor stars. Low-mass stars can form in dust-enhanced regions almost immediately after some dust forms even if the galaxy-average metallicity is too low for fragmentation to occur. We argue that the metal abundances of these “promoted” stars may contain interesting signatures as the CNO abundances (concentrated in large carbonaceous grains and ices) and Mg and Si (in large silicate grains) can be enhanced and/or fluctuate almost independently. Remarkably, the otherwise puzzling abundance patterns of some metal-poor stars can be well fit by standard IMF-averaged core-collapse SNe yields if we allow for fluctuating local dust-to-gas ratios. We also show that the observed log-normal distribution of enhancements in pure SNe yields, shows very large enhancements and variations up to factors of ≳100 as expected in the dust-promoted model, preferentially in the [C/Fe]-enhanced metal-poor stars. Together, this suggests that (1) dust exists in second-generation star formation, (2) local dust-to-gas ratio fluctuations occur in protogalaxies and can be important for star formation, and (3) the light element abundances of these stars may be affected by the local chemistry of dust where they formed, rather than directly tracing nucleosynthesis from earlier populations.
4. Are the Formation and Abundances of Metal-poor Stars the Result of Dust Dynamics?
International Nuclear Information System (INIS)
Hopkins, Philip F.; Conroy, Charlie
2017-01-01
Large dust grains can fluctuate dramatically in their local density, relative to the gas, in neutral turbulent disks. Small, high-redshift galaxies (before reionization) represent ideal environments for this process. We show via simple arguments and simulations that order-of-magnitude fluctuations are expected in local abundances of large grains (>100 Å) under these conditions. This can have important consequences for star formation and stellar metal abundances in extremely metal-poor stars. Low-mass stars can form in dust-enhanced regions almost immediately after some dust forms even if the galaxy-average metallicity is too low for fragmentation to occur. We argue that the metal abundances of these “promoted” stars may contain interesting signatures as the CNO abundances (concentrated in large carbonaceous grains and ices) and Mg and Si (in large silicate grains) can be enhanced and/or fluctuate almost independently. Remarkably, the otherwise puzzling abundance patterns of some metal-poor stars can be well fit by standard IMF-averaged core-collapse SNe yields if we allow for fluctuating local dust-to-gas ratios. We also show that the observed log-normal distribution of enhancements in pure SNe yields, shows very large enhancements and variations up to factors of ≳100 as expected in the dust-promoted model, preferentially in the [C/Fe]-enhanced metal-poor stars. Together, this suggests that (1) dust exists in second-generation star formation, (2) local dust-to-gas ratio fluctuations occur in protogalaxies and can be important for star formation, and (3) the light element abundances of these stars may be affected by the local chemistry of dust where they formed, rather than directly tracing nucleosynthesis from earlier populations.
5. Ages and Heavy Element Abundances from Very Metal-poor Stars in the Sagittarius Dwarf Galaxy
Science.gov (United States)
Hansen, Camilla Juul; El-Souri, Mariam; Monaco, Lorenzo; Villanova, Sandro; Bonifacio, Piercarlo; Caffau, Elisabetta; Sbordone, Luca
2018-03-01
Sagittarius (Sgr) is a massive disrupted dwarf spheroidal galaxy in the Milky Way halo that has undergone several stripping events. Previous chemical studies were restricted mainly to a few, metal-rich ([Fe/H] \\gtrapprox -1) stars that suggested a top-light initial mass function (IMF). Here we present the first high-resolution, very metal-poor ([Fe/H] =‑1 to ‑3) sample of 13 giant stars in the main body of Sgr. We derive abundances of 13 elements, namely C, Ca, Co, Fe, Sr, Ba, La, Ce, Nd, Eu, Dy, Pb, and Th, that challenge the interpretation based on previous studies. Our abundances from Sgr mimic those of the metal-poor halo, and our most metal-poor star ([Fe/H] ∼ -3) indicates a pure r-process pollution. Abundances of Sr, Pb, and Th are presented for the first time in Sgr, allowing for age determination using nuclear cosmochronology. We calculate ages of 9+/- 2.5 {Gyr}. Most of the sample stars have been enriched by a range of asymptotic giant branch (AGB) stars with masses between 1.3 and 5 M ⊙. Sgr J190651.47–320147.23 shows a large overabundance of Pb (2.05 dex) and a peculiar abundance pattern best fit by a 3 M ⊙ AGB star. Based on star-to-star scatter and observed abundance patterns, a mixture of low- and high-mass AGB stars and supernovae (15–25 M ⊙) is necessary to explain these patterns. The high level (0.29 ± 0.05 dex) of Ca indicates that massive supernovae must have existed and polluted the early ISM of Sgr before it lost its gas. This result is in contrast with a top-light IMF with no massive stars polluting Sgr. Based on data obtained UVES/VLT ID: 083.B-0774, 075.B-0127.
6. METAL-POOR LITHIUM-RICH GIANTS IN THE RADIAL VELOCITY EXPERIMENT SURVEY
International Nuclear Information System (INIS)
Ruchti, Gregory R.; Fulbright, Jon P.; Wyse, Rosemary F. G.; Gilmore, Gerard F.; Grebel, Eva K.; Bienaymé, Olivier; Siebert, Arnaud; Bland-Hawthorn, Joss; Freeman, Ken C.; Gibson, Brad K.; Munari, Ulisse; Navarro, Julio F.; Parker, Quentin A.; Watson, Fred G.; Reid, Warren; Seabroke, George M.; Siviero, Alessandro; Steinmetz, Matthias; Williams, Mary; Zwitter, Tomaz
2011-01-01
We report the discovery of eight lithium-rich field giants found in a high-resolution spectroscopic sample of over 700 metal-poor stars ([Fe/H] 7 Li), A(Li) = log (n(Li)/n(H)) + 12, between 2.30 and 3.63, well above the typical upper red giant branch (RGB) limit, A(Li) 7 Be (which burns to 7 Li) is transported to the stellar surface via the Cameron-Fowler mechanism. We discuss and discriminate among several models for the extra mixing that can cause Li production, given the detailed abundances of the Li-rich giants in our sample.
7. A NON-LOCAL THERMODYNAMIC EQUILIBRIUM ANALYSIS OF BORON ABUNDANCES IN METAL-POOR STARS
International Nuclear Information System (INIS)
Tan Kefeng; Shi Jianrong; Zhao Gang
2010-01-01
The non-local thermodynamic equilibrium (NLTE) line formation of neutral boron in the atmospheres of cool stars are investigated. Our results confirm that NLTE effects for the B I resonance lines, which are due to a combination of overionization and optical pumping effects, are most important for hot, metal-poor, and low-gravity stars; however, the amplitude of departures from local thermodynamic equilibrium (LTE) found by this work is smaller than that of previous studies. In addition, our calculation shows that the line formation of B I will get closer to LTE if the strength of collisions with neutral hydrogen increases, which is contrary to the result of previous studies. The NLTE line formation results are applied to the determination of boron abundances for a sample of 16 metal-poor stars with the method of spectrum synthesis of the B I 2497 A resonance lines using the archived HST/GHRS spectra. Beryllium and oxygen abundances are also determined for these stars with the published equivalent widths of the Be II 3131 A resonance and O I 7774 A triplet lines, respectively. The abundances of the nine stars which are not depleted in Be or B show that, no matter what the strength of collisions with neutral hydrogen may be, both Be and B increase with O quasilinearly in the logarithmic plane, which confirms the conclusions that Be and B are mainly produced by the primary process in the early Galaxy. The most noteworthy result of this work is that B increases with Fe or O at a very similar speed as, or a bit faster than, Be does, which is in accord with the theoretical models. The B/Be ratios remain almost constant over the metallicity range investigated here. Our average B/Be ratio falls in the interval [13 ± 4, 17 ± 4], which is consistent with the predictions of the spallation process. The contribution of B from the ν-process may be required if the 11 B/ 10 B isotopic ratios in metal-poor stars are the same as the meteoric value. An accurate measurement of the
8. Chances for earth-like planets and life around metal-poor stars
OpenAIRE
Zinnecker, Hans
2003-01-01
We discuss the difficulties of forming earth-like planets in metal-poor environments, such as those prevailing in the Galactic halo (Pop II), the Magellanic Clouds, and the early universe. We suggest that, with less heavy elements available, terrestrial planets will be smaller size and lower mass than in our solar system (solar metallicity). Such planets may not be able to sustain life as we know it. Therefore, the chances of very old lifeforms in the universe are slim, and a threshold metall...
9. Imprints of fast-rotating massive stars in the Galactic Bulge.
Science.gov (United States)
Chiappini, Cristina; Frischknecht, Urs; Meynet, Georges; Hirschi, Raphael; Barbuy, Beatriz; Pignatari, Marco; Decressin, Thibaut; Maeder, André
2011-04-28
The first stars that formed after the Big Bang were probably massive, and they provided the Universe with the first elements heavier than helium ('metals'), which were incorporated into low-mass stars that have survived to the present. Eight stars in the oldest globular cluster in the Galaxy, NGC 6522, were found to have surface abundances consistent with the gas from which they formed being enriched by massive stars (that is, with higher α-element/Fe and Eu/Fe ratios than those of the Sun). However, the same stars have anomalously high abundances of Ba and La with respect to Fe, which usually arises through nucleosynthesis in low-mass stars (via the slow-neutron-capture process, or s-process). Recent theory suggests that metal-poor fast-rotating massive stars are able to boost the s-process yields by up to four orders of magnitude, which might provide a solution to this contradiction. Here we report a reanalysis of the earlier spectra, which reveals that Y and Sr are also overabundant with respect to Fe, showing a large scatter similar to that observed in extremely metal-poor stars, whereas C abundances are not enhanced. This pattern is best explained as originating in metal-poor fast-rotating massive stars, which might point to a common property of the first stellar generations and even of the 'first stars'.
10. Globular Clusters - Guides to Galaxies
CERN Document Server
Richtler, Tom; Joint ESO-FONDAP Workshop on Globular Clusters
2009-01-01
The principal question of whether and how globular clusters can contribute to a better understanding of galaxy formation and evolution is perhaps the main driving force behind the overall endeavour of studying globular cluster systems. Naturally, this splits up into many individual problems. The objective of the Joint ESO-FONDAP Workshop on Globular Clusters - Guides to Galaxies was to bring together researchers, both observational and theoretical, to present and discuss the most recent results. Topics covered in these proceedings are: internal dynamics of globular clusters and interaction with host galaxies (tidal tails, evolution of cluster masses), accretion of globular clusters, detailed descriptions of nearby cluster systems, ultracompact dwarfs, formations of massive clusters in mergers and elsewhere, the ACS Virgo survey, galaxy formation and globular clusters, dynamics and kinematics of globular cluster systems and dark matter-related problems. With its wide coverage of the topic, this book constitute...
11. Globular clusters, old and young
International Nuclear Information System (INIS)
Samus', N.N.
1984-01-01
The problem of similarity of and difference in the globular and scattered star clusters is considered. Star clusters in astronomy are related either to globular or to scattered ones according to the structure of Hertzsprung-Russell diagram constructed for star clusters, but not according to the appearance. The qlobular clusters in the Galaxy are composed of giants and subgiants, which testifies to the old age of the globular clusters. The Globular clusters in the Magellanic clouds are classified into ''red'' ones - similar to the globular clusters of the Galaxy, and ''blue'' ones - similar to them in appearance but differing extremely by the star composition and so by the age. The old star clusters are suggested to be called globular ones, while another name (''populous'', for example) is suggested to be used for other clusters similar to globular ones only in appearance
12. Ultrasonographic findings in patients with peristomal bulging
DEFF Research Database (Denmark)
Sjödahl, Rune I; Thorelius, Lars; Hallböök, Olof J
2011-01-01
The aim of this study was to obtain a classification of peristomal bulging based on findings at ultrasonography in patients with a sigmoid colostomy.......The aim of this study was to obtain a classification of peristomal bulging based on findings at ultrasonography in patients with a sigmoid colostomy....
13. Using photometrically selected metal-poor stars to study dwarf galaxies and the Galactic stellar halo
Science.gov (United States)
Youakim, Kris; Starkenburg, Else; Martin, Nicolas; Pristine Team
2018-06-01
The Pristine survey is a narrow-band photometric survey designed to efficiently search for extremely metal-poor (EMP) stars. In the first three years of the survey, it has demonstrated great efficiency at finding EMP stars, and also great promise for increasing the current, small sample of the most metal-poor stars. The present sky coverage is ~2500 square degrees in the Northern Galactic Halo, including several individual fields targeting dwarf galaxies. By efficiently identifying member stars in the outskirts of known faint dwarf galaxies, the dynamical histories and chemical abundance patterns of these systems can be understood in greater detail. Additionally, with reliable photometric metallicities over a large sky coverage it is possible to perform a large scale clustering analysis in the Milky Way halo, and investigate the characteristic scale of substructure at different metallicities. This can reveal important details about the process of building up the halo through dwarf galaxy accretion, and offer insight into the connection between dwarf galaxies and the Milky Way halo. In this talk I will outline our results on the search for the most pristine stars, with a focus on how we are using this information to advance our understanding of dwarf galaxies and their contribution to the formation of the Galactic stellar halo.
14. The origin of light neutron-capture elements in very metal-poor stars
International Nuclear Information System (INIS)
Honda, S.; Aoki, W.; Kajino, T.; Ando, H.; Beers, T.C.
2005-01-01
We obtained high resolution spectra of 40 very metal-poor stars, and measured the abundances of heavy elements. The abundance pattern of the heavy neutron-capture elements (56=< Z=<70) in r-process-enhanced, metal-poor stars are quite similar to that of the r-process component in solar-system material. In contrast, the abundance ratios of the light neutron-capture elements (38=< Z=<40) to heavier ones show a large dispersion. We investigated the correlation between Sr(Z=38) and Ba(Z=56) abundances, and obtained two clear results: (1) Ba-enhanced stars also show large excess of Sr (there is no object which is Ba-rich and Sr-poor); (2) stars with low Ba abundance show large scatter in Sr abundance. This trend is naturally explained by hypothesizing the existence of two processes, one that produces Sr without Ba and the other that produces Sr and Ba in similar proportions
15. Searching for chemical classes among metal-poor stars using medium-resolution spectroscopy
Science.gov (United States)
Cruz, Monique A.; Cogo-Moreira, Hugo; Rossi, Silvia
2018-04-01
Astronomy is in the era of large spectroscopy surveys, with the spectra of hundreds of thousands of stars in the Galaxy being collected. Although most of these surveys have low or medium resolution, which makes precise abundance measurements not possible, there is still important information to be extracted from the available data. Our aim is to identify chemically distinct classes among metal-poor stars, observed by the Sloan Digital Sky Survey, using line indices. The present work focused on carbon-enhanced metal-poor (CEMP) stars and their subclasses. We applied the latent profile analysis technique to line indices for carbon, barium, iron and europium, in order to separate the sample into classes with similar chemical signatures. This technique provides not only the number of possible groups but also the probability of each object to belong to each class. The method was able to distinguish at least two classes among the observed sample, with one of them being probable CEMP stars enriched in s-process elements. However, it was not able to separate CEMP-no stars from the rest of the sample. Latent profile analysis is a powerful model-based tool to be used in the identification of patterns in astrophysics. Our tests show the potential of the technique for the attainment of additional chemical information from poor' data.
16. HIERARCHICAL FORMATION OF THE GALACTIC HALO AND THE ORIGIN OF HYPER METAL-POOR STARS
International Nuclear Information System (INIS)
Komiya, Yutaka; Habe, Asao; Suda, Takuma; Fujimoto, Masayuki Y.
2009-01-01
Extremely metal-poor (EMP) stars in the Galactic halo are unique probes into the early universe and the first stars. We construct a new program to calculate the formation history of EMP stars in the early universe with the chemical evolution, based on the merging history of the Galaxy. We show that the hierarchical structure formation model reproduces the observed metallicity distribution function and also the total number of observed EMP stars, when we take into account the high-mass initial mass function and the contribution of binaries, as proposed by Komiya et al. The low-mass survivors divide into two groups of those born before and after the mini-halos are polluted by their own first supernovae. The former has observational counterparts in the hyper metal-poor (HMP) stars below [Fe/H] - 4. In this Letter, we focus on the origin of the extremely small iron abundances of HMP stars. We compute the change in the surface abundances of individual stars through the accretion of the metal-enriched interstellar gas along with the dynamical and chemical evolution of the Galaxy, to demonstrate that after-birth pollution of Population III stars is sufficiently effective to explain the observed abundances of HMP stars. Metal pre-enrichment by possible pair instability supernovae is also discussed, to derive constraints on their roles and on the formation of the first low-mass stars.
17. Chemical Characterization of the Inner Galactic bulge: North-South Symmetry
Science.gov (United States)
Nandakumar, G.; Ryde, N.; Schultheis, M.; Thorsbro, B.; Jönsson, H.; Barklem, P. S.; Rich, R. M.; Fragkoudi, F.
2018-05-01
While the number of stars in the Galactic bulge with detailed chemical abundance measurements is increasing rapidly, the inner Galactic bulge (|b| detect a bimodal MDF with a metal-rich peak at ˜ +0.3 dex and a metal-poor peak at ˜ -0.5 dex, and no stars with [Fe/H] > +0.6 dex. The Galactic Center field reveals in contrast a mainly metal-rich population with a mean metallicity of +0.3 dex. We derived [Mg/Fe] and [Si/Fe] abundances which are consistent with trends from the outer bulge. We confirm for the supersolar metallicity stars the decreasing trend in [Mg/Fe] and [Si/Fe] as expected from chemical evolution models. With the caveat of a relatively small sample, we do not find significant differences in the chemical abundances between the Northern and the Southern fields, hence the evidence is consistent with symmetry in chemistry between North and South.
18. DETECTION OF THE SECOND r-PROCESS PEAK ELEMENT TELLURIUM IN METAL-POOR STARS ,
International Nuclear Information System (INIS)
Roederer, Ian U.; Lawler, James E.; Cowan, John J.; Beers, Timothy C.; Frebel, Anna; Ivans, Inese I.; Schatz, Hendrik; Sobeck, Jennifer S.; Sneden, Christopher
2012-01-01
Using near-ultraviolet spectra obtained with the Space Telescope Imaging Spectrograph on board the Hubble Space Telescope, we detect neutral tellurium in three metal-poor stars enriched by products of r-process nucleosynthesis, BD +17 3248, HD 108317, and HD 128279. Tellurium (Te, Z = 52) is found at the second r-process peak (A ≈ 130) associated with the N = 82 neutron shell closure, and it has not been detected previously in Galactic halo stars. The derived tellurium abundances match the scaled solar system r-process distribution within the uncertainties, confirming the predicted second peak r-process residuals. These results suggest that tellurium is predominantly produced in the main component of the r-process, along with the rare earth elements.
19. THE METALLICITY BIMODALITY OF GLOBULAR CLUSTER SYSTEMS: A TEST OF GALAXY ASSEMBLY AND OF THE EVOLUTION OF THE GALAXY MASS-METALLICITY RELATION
International Nuclear Information System (INIS)
Tonini, Chiara
2013-01-01
We build a theoretical model to study the origin of the globular cluster metallicity bimodality in the hierarchical galaxy assembly scenario. The model is based on empirical relations such as the galaxy mass-metallicity relation [O/H]-M star as a function of redshift, and on the observed galaxy stellar mass function up to redshift z ∼ 4. We make use of the theoretical merger rates as a function of mass and redshift from the Millennium simulation to build galaxy merger trees. We derive a new galaxy [Fe/H]-M star relation as a function of redshift, and by assuming that globular clusters share the metallicity of their original parent galaxy at the time of their formation, we populate the merger tree with globular clusters. We perform a series of Monte Carlo simulations of the galaxy hierarchical assembly, and study the properties of the final globular cluster population as a function of galaxy mass, assembly and star formation history, and under different assumptions for the evolution of the galaxy mass-metallicity relation. The main results and predictions of the model are the following. (1) The hierarchical clustering scenario naturally predicts a metallicity bimodality in the galaxy globular cluster population, where the metal-rich subpopulation is composed of globular clusters formed in the galaxy main progenitor around redshift z ∼ 2, and the metal-poor subpopulation is composed of clusters accreted from satellites, and formed at redshifts z ∼ 3-4. (2) The model reproduces the observed relations by Peng et al. for the metallicities of the metal-rich and metal-poor globular cluster subpopulations as a function of galaxy mass; the positions of the metal-poor and metal-rich peaks depend exclusively on the evolution of the galaxy mass-metallicity relation and the [O/Fe], both of which can be constrained by this method. In particular, we find that the galaxy [O/Fe] evolves linearly with redshift from a value of ∼0.5 at redshift z ∼ 4 to a value of ∼0.1 at
20. THE KENNICUTT–SCHMIDT RELATION IN EXTREMELY METAL-POOR DWARF GALAXIES
Energy Technology Data Exchange (ETDEWEB)
Filho, M. E.; Almeida, J. Sánchez; Muñoz-Tuñón, C. [Instituto Astrofísica de Canarias, E-38200 La Laguna, Tenerife (Spain); Amorín, R. [National Institute for Astrophysics, Astronomical Observatory of Rome, Via Frascati 33, I-00040 Monteporzio Catone (Rome) (Italy); Elmegreen, B. G. [IBM, T. J. Watson Research Center, 1101 Kitchawan Road, Yorktown Heights, NY 10598 (United States); Elmegreen, D. M., E-mail: [email protected] [Department of Physics and Astronomy, Vassar College, Poughkeepsie, NY 12604 (United States)
2016-04-01
The Kennicutt–Schmidt (KS) relation between the gas mass and star formation rate (SFR) describes the star formation regulation in disk galaxies. It is a function of gas metallicity, but the low-metallicity regime of the KS diagram is poorly sampled. We have analyzed data for a representative set of extremely metal-poor galaxies (XMPs), as well as auxiliary data, and compared these to empirical and theoretical predictions. The majority of the XMPs possess high specific SFRs, similar to high-redshift star-forming galaxies. On the KS plot, the XMP H i data occupy the same region as dwarfs and extend the relation for low surface brightness galaxies. Considering the H i gas alone, a considerable fraction of the XMPs already fall off the KS law. Significant quantities of “dark” H{sub 2} mass (i.e., not traced by CO) would imply that XMPs possess low star formation efficiencies (SFE{sub gas}). Low SFE{sub gas} in XMPs may be the result of the metal-poor nature of the H i gas. Alternatively, the H i reservoir may be largely inert, the star formation being dominated by cosmological accretion. Time lags between gas accretion and star formation may also reduce the apparent SFE{sub gas}, as may galaxy winds, which can expel most of the gas into the intergalactic medium. Hence, on global scales, XMPs could be H i-dominated, high-specific-SFR (≳10{sup −10} yr{sup −1}), low-SFE{sub gas} (≲10{sup −9} yr{sup −1}) systems, in which the total H i mass is likely not a good predictor of the total H{sub 2} mass, nor of the SFR.
1. A SEARCH FOR UNRECOGNIZED CARBON-ENHANCED METAL-POOR STARS IN THE GALAXY
International Nuclear Information System (INIS)
Placco, Vinicius M.; Rossi, Silvia; Kennedy, Catherine R.; Beers, Timothy C.; Lee, Young Sun; Christlieb, Norbert; Sivarani, Thirupathi; Reimers, Dieter; Wisotzki, Lutz
2010-01-01
We have developed a new procedure to search for carbon-enhanced metal-poor (CEMP) stars from the Hamburg/ESO (HES) prism-survey plates. This method employs an extended line index for the CH G band, which we demonstrate to have superior performance when compared to the narrower G-band index formerly employed to estimate G-band strengths for these spectra. Although CEMP stars have been found previously among candidate metal-poor stars selected from the HES, the selection on metallicity undersamples the population of intermediate-metallicity CEMP stars (-2.5 ≤ [Fe/H] ≤ -1.0); such stars are of importance for constraining the onset of the s-process in metal-deficient asymptotic giant branch stars (thought to be associated with the origin of carbon for roughly 80% of CEMP stars). The new candidates also include substantial numbers of warmer carbon-enhanced stars, which were missed in previous HES searches for carbon stars due to selection criteria that emphasized cooler stars. A first subsample, biased toward brighter stars (B< 15.5), has been extracted from the scanned HES plates. After visual inspection (to eliminate spectra compromised by plate defects, overlapping spectra, etc., and to carry out rough spectral classifications), a list of 669 previously unidentified candidate CEMP stars was compiled. Follow-up spectroscopy for a pilot sample of 132 candidates was obtained with the Goodman spectrograph on the SOAR 4.1 m telescope. Our results show that most of the observed stars lie in the targeted metallicity range, and possess prominent carbon absorption features at 4300 A. The success rate for the identification of new CEMP stars is 43% (13 out of 30) for [Fe/H] < -2.0. For stars with [Fe/H] < -2.5, the ratio increases to 80% (four out of five objects), including one star with [Fe/H] < -3.0.
2. THE SYNTHETIC-OVERSAMPLING METHOD: USING PHOTOMETRIC COLORS TO DISCOVER EXTREMELY METAL-POOR STARS
Energy Technology Data Exchange (ETDEWEB)
Miller, A. A., E-mail: [email protected] [Jet Propulsion Laboratory, 4800 Oak Grove Drive, MS 169-506, Pasadena, CA 91109 (United States)
2015-09-20
Extremely metal-poor (EMP) stars ([Fe/H] ≤ −3.0 dex) provide a unique window into understanding the first generation of stars and early chemical enrichment of the universe. EMP stars are exceptionally rare, however, and the relatively small number of confirmed discoveries limits our ability to exploit these near-field probes of the first ∼500 Myr after the Big Bang. Here, a new method to photometrically estimate [Fe/H] from only broadband photometric colors is presented. I show that the method, which utilizes machine-learning algorithms and a training set of ∼170,000 stars with spectroscopically measured [Fe/H], produces a typical scatter of ∼0.29 dex. This performance is similar to what is achievable via low-resolution spectroscopy, and outperforms other photometric techniques, while also being more general. I further show that a slight alteration to the model, wherein synthetic EMP stars are added to the training set, yields the robust identification of EMP candidates. In particular, this synthetic-oversampling method recovers ∼20% of the EMP stars in the training set, at a precision of ∼0.05. Furthermore, ∼65% of the false positives from the model are very metal-poor stars ([Fe/H] ≤ −2.0 dex). The synthetic-oversampling method is biased toward the discovery of warm (∼F-type) stars, a consequence of the targeting bias from the Sloan Digital Sky Survey/Sloan Extension for Galactic Understanding survey. This EMP selection method represents a significant improvement over alternative broadband optical selection techniques. The models are applied to >12 million stars, with an expected yield of ∼600 new EMP stars, which promises to open new avenues for exploring the early universe.
3. TESTING THE ASTEROSEISMIC MASS SCALE USING METAL-POOR STARS CHARACTERIZED WITH APOGEE AND KEPLER
Energy Technology Data Exchange (ETDEWEB)
Epstein, Courtney R.; Johnson, Jennifer A.; Tayar, Jamie; Pinsonneault, Marc [Department of Astronomy, Ohio State University, 140 W. 18th Avenue, Columbus, OH 43210 (United States); Elsworth, Yvonne P.; Chaplin, William J. [School of Physics and Astronomy, University of Birmingham, Edgbaston Park Road, West Midlands, Birmingham B15 2TT (United Kingdom); Shetrone, Matthew [McDonald Observatory, The University of Texas at Austin, 1 University Station, C1400, Austin, TX 78712-0259 (United States); Mosser, Benoît [LESIA, CNRS, Université Pierre et Marie Curie, Université Denis Diderot, Observatoire de Paris, F-92195 Meudon Cedex (France); Hekker, Saskia [Max-Planck-Institut für Sonnensystemforschung, Justus-von-Liebig-Weg 3, D-37077 Göttingen (Germany); Harding, Paul [Department of Astronomy, Case Western Reserve University, Cleveland, OH 44106-7215 (United States); Silva Aguirre, Víctor [Stellar Astrophysics Centre, Department of Physics and Astronomy, Aarhus University, Ny Munkegade 120, DK-8000 Aarhus C (Denmark); Basu, Sarbani [Department of Astronomy, Yale University, P.O. Box 208101, New Haven, CT 06520-8101 (United States); Beers, Timothy C. [National Optical Astronomy Observatory, Tucson, AZ 85719, USA and JINA: Joint Institute for Nuclear Astrophysics (United States); Bizyaev, Dmitry [Apache Point Observatory, Sunspot, NM 88349 (United States); Bedding, Timothy R. [Sydney Institute for Astronomy (SIfA), School of Physics, University of Sydney, NSW 2006 (Australia); Frinchaboy, Peter M. [Department of Physics and Astronomy, Texas Christian University, TCU Box 298840, Fort Worth, TX 76129 (United States); García, Rafael A. [Laboratoire AIM, CEA/DSM-CNRS, Universit Paris 7 Diderot, IRFU/SAp, Centre de Saclay, F-91191, Gif-sur-Yvette (France); Pérez, Ana E. García; Hearty, Fred R., E-mail: [email protected] [Department of Astronomy, University of Virginia, Charlottesville, VA 22904 (United States); and others
2014-04-20
Fundamental stellar properties, such as mass, radius, and age, can be inferred using asteroseismology. Cool stars with convective envelopes have turbulent motions that can stochastically drive and damp pulsations. The properties of the oscillation frequency power spectrum can be tied to mass and radius through solar-scaled asteroseismic relations. Stellar properties derived using these scaling relations need verification over a range of metallicities. Because the age and mass of halo stars are well-constrained by astrophysical priors, they provide an independent, empirical check on asteroseismic mass estimates in the low-metallicity regime. We identify nine metal-poor red giants (including six stars that are kinematically associated with the halo) from a sample observed by both the Kepler space telescope and the Sloan Digital Sky Survey-III APOGEE spectroscopic survey. We compare masses inferred using asteroseismology to those expected for halo and thick-disk stars. Although our sample is small, standard scaling relations, combined with asteroseismic parameters from the APOKASC Catalog, produce masses that are systematically higher (<ΔM > =0.17 ± 0.05 M {sub ☉}) than astrophysical expectations. The magnitude of the mass discrepancy is reduced by known theoretical corrections to the measured large frequency separation scaling relationship. Using alternative methods for measuring asteroseismic parameters induces systematic shifts at the 0.04 M {sub ☉} level. We also compare published asteroseismic analyses with scaling relationship masses to examine the impact of using the frequency of maximum power as a constraint. Upcoming APOKASC observations will provide a larger sample of ∼100 metal-poor stars, important for detailed asteroseismic characterization of Galactic stellar populations.
4. KINEMATICS OF EXTREMELY METAL-POOR GALAXIES: EVIDENCE FOR STELLAR FEEDBACK
Energy Technology Data Exchange (ETDEWEB)
Olmo-García, A.; Sánchez Almeida, J.; Muñoz-Tuñón, C.; Filho, M. E. [Instituto Astrofísica de Canarias, E-38200 La Laguna, Tenerife (Spain); Elmegreen, B. G. [IBM Research Division, T. J. Watson Research Center, Yorktown Heights, NY 10598 (United States); Elmegreen, D. M. [Department of Physics and Astronomy, Vassar College, Poughkeepsie, NY 12604 (United States); Pérez-Montero, E. [Instituto de Astrofísica de Andalucía, CSIC, Granada (Spain); Méndez-Abreu, J., E-mail: [email protected] [School of Physics and Astronomy, University of St Andrews, St Andrews (United Kingdom)
2017-01-10
The extremely metal-poor (XMP) galaxies analyzed in a previous paper have large star-forming regions with a metallicity lower than the rest of the galaxy. Such a chemical inhomogeneity reveals the external origin of the metal-poor gas fueling star formation, possibly indicating accretion from the cosmic web. This paper studies the kinematic properties of the ionized gas in these galaxies. Most XMPs have a rotation velocity around a few tens of km s{sup −1}. The star-forming regions appear to move coherently. The velocity is constant within each region, and the velocity dispersion sometimes increases within the star-forming clump toward the galaxy midpoint, suggesting inspiral motion toward the galaxy center. Other regions present a local maximum in velocity dispersion at their center, suggesting a moderate global expansion. The H α line wings show a number of faint emission features with amplitudes around a few per cent of the main H α component, and wavelength shifts between 100 and 400 km s{sup −1}. The components are often paired, so that red and blue emission features with similar amplitudes and shifts appear simultaneously. Assuming the faint emission to be produced by expanding shell-like structures, the inferred mass loading factor (mass loss rate divided by star formation rate) exceeds 10. Since the expansion velocity far exceeds the rotational and turbulent velocities, the gas may eventually escape from the galaxy disk. The observed motions involve energies consistent with the kinetic energy released by individual core-collapse supernovae. Alternative explanations for the faint emission have been considered and discarded.
5. Impact of Lyman alpha pressure on metal-poor dwarf galaxies
Science.gov (United States)
Kimm, Taysun; Haehnelt, Martin; Blaizot, Jérémy; Katz, Harley; Michel-Dansac, Léo; Garel, Thibault; Rosdahl, Joakim; Teyssier, Romain
2018-04-01
Understanding the origin of strong galactic outflows and the suppression of star formation in dwarf galaxies is a key problem in galaxy formation. Using a set of radiation-hydrodynamic simulations of an isolated dwarf galaxy embedded in a 1010 M⊙ halo, we show that the momentum transferred from resonantly scattered Lyman-α (Lyα) photons is an important source of stellar feedback which can shape the evolution of galaxies. We find that Lyα feedback suppresses star formation by a factor of two in metal-poor galaxies by regulating the dynamics of star-forming clouds before the onset of supernova explosions (SNe). This is possible because each Lyα photon resonantly scatters and imparts ˜10-300 times greater momentum than in the single scattering limit. Consequently, the number of star clusters predicted in the simulations is reduced by a factor of ˜5, compared to the model without the early feedback. More importantly, we find that galactic outflows become weaker in the presence of strong Lyα radiation feedback, as star formation and associated SNe become less bursty. We also examine a model in which radiation field is arbitrarily enhanced by a factor of up to 10, and reach the same conclusion. The typical mass-loading factors in our metal-poor dwarf system are estimated to be ˜5-10 near the mid-plane, while it is reduced to ˜1 at larger radii. Finally, we find that the escape of ionizing radiation and hence the reionization history of the Universe is unlikely to be strongly affected by Lyα feedback.
6. GRANULATION SIGNATURES IN THE SPECTRUM OF THE VERY METAL-POOR RED GIANT HD 122563
International Nuclear Information System (INIS)
RamIrez, I.; Collet, R.; Asplund, M.; Lambert, D. L.; Allende Prieto, C.
2010-01-01
A very high resolution (R = λ/Δλ = 200, 000), high signal-to-noise ratio (S/N ≅ 340) blue-green spectrum of the very metal-poor ([Fe/H] ≅ -2.6) red giant star HD 122563 has been obtained by us at McDonald Observatory. We measure the asymmetries and core wavelengths of a set of unblended Fe I lines covering a wide range of line strength. Line bisectors exhibit the characteristic C-shape signature of surface convection (granulation) and they span from about 100 m s -1 in the strongest Fe I features to 800 m s -1 in the weakest ones. Core wavelength shifts range from about -100 to -900 m s -1 , depending on line strength. In general, larger blueshifts are observed in weaker lines, but there is increasing scatter with increasing residual flux. Assuming local thermodynamic equilibrium (LTE), we synthesize the same set of spectral lines using a state-of-the-art three-dimensional (3D) hydrodynamic simulation for a stellar atmosphere of fundamental parameters similar to those of HD 122563. We find good agreement between model predictions and observations. This allows us to infer an absolute zero point for the line shifts and radial velocity. Moreover, it indicates that the structure and dynamics of the simulation are realistic, thus providing support to previous claims of large 3D-LTE corrections to elemental abundances and fundamental parameters of very metal-poor red giant stars obtained with standard 1D-LTE spectroscopic analyses, as suggested by the hydrodynamic model used here.
7. An observational study of disk-population globular clusters
International Nuclear Information System (INIS)
Armandroff, T.E.
1988-01-01
Integrated-light spectroscopy was obtained for twenty-seven globular clusters at the Ca II infrared triplet. Line strengths and radial velocities were measured from the spectra. For the well-studied clusters in the sample, the strength of the CA II lines is very well correlated with previous metallicity estimates obtained using a variety of techniques. The greatly reduced effect of interstellar extinction at these wavelengths compared to the blue region of the spectrum has permitted observations of some of the most heavily reddened clusters in the Galaxy. For several such clusters, the Ca II triplet metallicities are in poor agreement with metallicity estimates from infrared photometry by Malkan. Color-magnitude diagrams were constructed for six previously unstudied metal-rich globular clusters and for the well-studied cluster 47 Tuc. The V magnitudes of the horizontal branch stars in the six clusters are in poor agreement with previous estimates based on secondary methods. The horizontal branch morphologies and reddenings of the program clusters were also determined. Using the improved set of metallicities, radial velocities, and distance moduli, the spatial distribution, kinematics, and metallicity distribution of the Galactic globulars were analyzed. The revised data supports Zinn's conclusion that the metal-rich clusters form a highly flattened, rapidly rotating disk system, while the metal-poor clusters make up the familiar, spherically distributed, slowly rotating halo population. The scale height, metallicity distribution, and kinematics of the metal-rich globulars are in good agreement with those of the stellar thick disk. Luminosity functions were constructed, and no significant difference is found between disk and halo samples. Metallicity gradients seem to be present in the disk cluster system. The implications of these results for the formation and evol
8. Regional intercostal bulging of the parietal pleura
International Nuclear Information System (INIS)
Jantsch, H.; Greene, R.; Lechner, G.; Mavritz, W.; Pichler, W.; Winkler, M.; Zadrobilek, E.
1989-01-01
This paper describes bedside radiographs with localized intercostal bulging as the sole indication of tension pneumothorax in six patients with acute deterioration in gas exchange. Relief of the pneumothorax was followed by a rush of gas from the tension space and a prompt improvement in gas exchange. The authors concluded the regional intercostal bulging of the parietal pleura may be the sole indicator of life-threatening tension pneumothorax in patients on mechanical ventilation
9. Constraining cosmic scatter in the Galactic halo through a differential analysis of metal-poor stars
Science.gov (United States)
Reggiani, Henrique; Meléndez, Jorge; Kobayashi, Chiaki; Karakas, Amanda; Placco, Vinicius
2017-12-01
Context. The chemical abundances of metal-poor halo stars are important to understanding key aspects of Galactic formation and evolution. Aims: We aim to constrain Galactic chemical evolution with precise chemical abundances of metal-poor stars (-2.8 ≤ [Fe/H] ≤ -1.5). Methods: Using high resolution and high S/N UVES spectra of 23 stars and employing the differential analysis technique we estimated stellar parameters and obtained precise LTE chemical abundances. Results: We present the abundances of Li, Na, Mg, Al, Si, Ca, Sc, Ti, V, Cr, Mn, Co, Ni, Zn, Sr, Y, Zr, and Ba. The differential technique allowed us to obtain an unprecedented low level of scatter in our analysis, with standard deviations as low as 0.05 dex, and mean errors as low as 0.05 dex for [X/Fe]. Conclusions: By expanding our metallicity range with precise abundances from other works, we were able to precisely constrain Galactic chemical evolution models in a wide metallicity range (-3.6 ≤ [Fe/H] ≤ -0.4). The agreements and discrepancies found are key for further improvement of both models and observations. We also show that the LTE analysis of Cr II is a much more reliable source of abundance for chromium, as Cr I has important NLTE effects. These effects can be clearly seen when we compare the observed abundances of Cr I and Cr II with GCE models. While Cr I has a clear disagreement between model and observations, Cr II is very well modeled. We confirm tight increasing trends of Co and Zn toward lower metallicities, and a tight flat evolution of Ni relative to Fe. Our results strongly suggest inhomogeneous enrichment from hypernovae. Our precise stellar parameters results in a low star-to-star scatter (0.04 dex) in the Li abundances of our sample, with a mean value about 0.4 dex lower than the prediction from standard Big Bang nucleosynthesis; we also study the relation between lithium depletion and stellar mass, but it is difficult to assess a correlation due to the limited mass range. We
10. Fast Winds and Mass Loss from Metal-Poor Field Giants
Science.gov (United States)
Dupree, A. K.; Smith, Graeme H.; Strader, Jay
2009-11-01
Echelle spectra of the infrared He I λ10830 line were obtained with NIRSPEC on the Keck 2 telescope for 41 metal-deficient field giant stars including those on the red giant branch (RGB), asymptotic giant branch (AGB), and red horizontal branch (RHB). The presence of this He I line is ubiquitous in stars with T effgsim 4500 K and MV fainter than -1.5, and reveals the dynamics of the atmosphere. The line strength increases with effective temperature for T effgsim 5300 K in RHB stars. In AGB and RGB stars, the line strength increases with luminosity. Fast outflows (gsim 60 km s-1) are detected from the majority of the stars and about 40% of the outflows have sufficient speed as to allow escape of material from the star as well as from a globular cluster. Outflow speeds and line strengths do not depend on metallicity for our sample ([Fe/H]= -0.7 to -3.0), suggesting the driving mechanism for these winds derives from magnetic and/or hydrodynamic processes. Gas outflows are present in every luminous giant, but are not detected in all stars of lower luminosity indicating possible variability. Mass loss rates ranging from ~3 × 10-10 to ~6 × 10-8 M sun yr-1 estimated from the Sobolev approximation for line formation represent values with evolutionary significance for red giants and RHB stars. We estimate that 0.2 M sun will be lost on the RGB, and the torque of this wind can account for observations of slowly rotating RHB stars in the field. About 0.1-0.2 M sun will be lost on the RHB itself. This first empirical determination of mass loss on the RHB may contribute to the appearance of extended horizontal branches in globular clusters. The spectra appear to resolve the problem of missing intracluster material in globular clusters. Opportunities exist for "wind smothering" of dwarf stars by winds from the evolved population, possibly leading to surface pollution in regions of high stellar density. Data presented herein were obtained at the W. M. Keck Observatory, which
11. Light, Alpha, and Fe-peak Element Abundances in the Galactic Bulge
Science.gov (United States)
Johnson, Christian I.; Rich, R. Michael; Kobayashi, Chiaki; Kunder, Andrea; Koch, Andreas
2014-10-01
We present radial velocities and chemical abundances of O, Na, Mg, Al, Si, Ca, Cr, Fe, Co, Ni, and Cu for a sample of 156 red giant branch stars in two Galactic bulge fields centered near (l, b) = (+5.25,-3.02) and (0,-12). The (+5.25,-3.02) field also includes observations of the bulge globular cluster NGC 6553. The results are based on high-resolution (R ~ 20,000), high signal-to-noise ration (S/N >~ 70) FLAMES-GIRAFFE spectra obtained through the European Southern Observatory archive. However, we only selected a subset of the original observations that included spectra with both high S/N and that did not show strong TiO absorption bands. This work extends previous analyses of this data set beyond Fe and the α-elements Mg, Si, Ca, and Ti. While we find reasonable agreement with past work, the data presented here indicate that the bulge may exhibit a different chemical composition than the local thick disk, especially at [Fe/H] >~ -0.5. In particular, the bulge [α/Fe] ratios may remain enhanced to a slightly higher [Fe/H] than the thick disk, and the Fe-peak elements Co, Ni, and Cu appear enhanced compared to the disk. There is also some evidence that the [Na/Fe] (but not [Al/Fe]) trends between the bulge and local disk may be different at low and high metallicity. We also find that the velocity dispersion decreases as a function of increasing [Fe/H] for both fields, and do not detect any significant cold, high-velocity populations. A comparison with chemical enrichment models indicates that a significant fraction of hypernovae may be required to explain the bulge abundance trends, and that initial mass functions that are steep, top-heavy (and do not include strong outflow), or truncated to avoid including contributions from stars >40 M ⊙ are ruled out, in particular because of disagreement with the Fe-peak abundance data. For most elements, the NGC 6553 stars exhibit abundance trends nearly identical to comparable metallicity bulge field stars. However, the
12. Light, alpha, and Fe-peak element abundances in the galactic bulge
International Nuclear Information System (INIS)
Johnson, Christian I.; Rich, R. Michael; Kobayashi, Chiaki; Kunder, Andrea; Koch, Andreas
2014-01-01
We present radial velocities and chemical abundances of O, Na, Mg, Al, Si, Ca, Cr, Fe, Co, Ni, and Cu for a sample of 156 red giant branch stars in two Galactic bulge fields centered near (l, b) = (+5.25,–3.02) and (0,–12). The (+5.25,–3.02) field also includes observations of the bulge globular cluster NGC 6553. The results are based on high-resolution (R ∼ 20,000), high signal-to-noise ration (S/N ≳ 70) FLAMES-GIRAFFE spectra obtained through the European Southern Observatory archive. However, we only selected a subset of the original observations that included spectra with both high S/N and that did not show strong TiO absorption bands. This work extends previous analyses of this data set beyond Fe and the α-elements Mg, Si, Ca, and Ti. While we find reasonable agreement with past work, the data presented here indicate that the bulge may exhibit a different chemical composition than the local thick disk, especially at [Fe/H] ≳ –0.5. In particular, the bulge [α/Fe] ratios may remain enhanced to a slightly higher [Fe/H] than the thick disk, and the Fe-peak elements Co, Ni, and Cu appear enhanced compared to the disk. There is also some evidence that the [Na/Fe] (but not [Al/Fe]) trends between the bulge and local disk may be different at low and high metallicity. We also find that the velocity dispersion decreases as a function of increasing [Fe/H] for both fields, and do not detect any significant cold, high-velocity populations. A comparison with chemical enrichment models indicates that a significant fraction of hypernovae may be required to explain the bulge abundance trends, and that initial mass functions that are steep, top-heavy (and do not include strong outflow), or truncated to avoid including contributions from stars >40 M ☉ are ruled out, in particular because of disagreement with the Fe-peak abundance data. For most elements, the NGC 6553 stars exhibit abundance trends nearly identical to comparable metallicity bulge field stars
13. Light, alpha, and Fe-peak element abundances in the galactic bulge
Energy Technology Data Exchange (ETDEWEB)
Johnson, Christian I. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, MS-15, Cambridge, MA 02138 (United States); Rich, R. Michael [Department of Physics and Astronomy, UCLA, 430 Portola Plaza, Box 951547, Los Angeles, CA 90095-1547 (United States); Kobayashi, Chiaki [Centre for Astrophysics Research, University of Hertfordshire, Hatfield AL10 9AB (United Kingdom); Kunder, Andrea [Leibniz-Institute für Astrophysik Potsdam (AIP), Ander Sternwarte 16, D-14482, Potsdam (Germany); Koch, Andreas, E-mail: [email protected], E-mail: [email protected], E-mail: [email protected], E-mail: [email protected], E-mail: [email protected] [Zentrum für Astronomie der Universität Heidelberg, Landessternwarte, Königstuhl 12, Heidelberg (Germany)
2014-10-01
We present radial velocities and chemical abundances of O, Na, Mg, Al, Si, Ca, Cr, Fe, Co, Ni, and Cu for a sample of 156 red giant branch stars in two Galactic bulge fields centered near (l, b) = (+5.25,–3.02) and (0,–12). The (+5.25,–3.02) field also includes observations of the bulge globular cluster NGC 6553. The results are based on high-resolution (R ∼ 20,000), high signal-to-noise ration (S/N ≳ 70) FLAMES-GIRAFFE spectra obtained through the European Southern Observatory archive. However, we only selected a subset of the original observations that included spectra with both high S/N and that did not show strong TiO absorption bands. This work extends previous analyses of this data set beyond Fe and the α-elements Mg, Si, Ca, and Ti. While we find reasonable agreement with past work, the data presented here indicate that the bulge may exhibit a different chemical composition than the local thick disk, especially at [Fe/H] ≳ –0.5. In particular, the bulge [α/Fe] ratios may remain enhanced to a slightly higher [Fe/H] than the thick disk, and the Fe-peak elements Co, Ni, and Cu appear enhanced compared to the disk. There is also some evidence that the [Na/Fe] (but not [Al/Fe]) trends between the bulge and local disk may be different at low and high metallicity. We also find that the velocity dispersion decreases as a function of increasing [Fe/H] for both fields, and do not detect any significant cold, high-velocity populations. A comparison with chemical enrichment models indicates that a significant fraction of hypernovae may be required to explain the bulge abundance trends, and that initial mass functions that are steep, top-heavy (and do not include strong outflow), or truncated to avoid including contributions from stars >40 M {sub ☉} are ruled out, in particular because of disagreement with the Fe-peak abundance data. For most elements, the NGC 6553 stars exhibit abundance trends nearly identical to comparable metallicity bulge field
14. Deep JHK Photometry and the Infrared Luminosity Function of the Galactic Bulge
Science.gov (United States)
Tiede, Glenn P.; Frogel, Jay A.; Terndrup, D. M.
1995-03-01
We derive the deepest, most complete near-IR luminosity function for Galactic bulge stars yet obtained based on new JHK photometry for stars in two fields of Baade's Window. When combined with previously published data, we are able to construct a luminosity function over the range 5.5 Blanco, V.M., & Whitford, A.E. 1990, ApJ, 353, 494). Between b = -3 and -12 we find a gradient in [Fe/H] of -0.06 +/- 0.03 dex/degree, consistent with other, independent derivations. We derive a helium abundance for Baade's Window with the R and R(') methods and find that Y = 0.27 +/- 0.03. Finally, we find that the bolometric corrections for bulge K giants (V - K >= 2) are in excellent agreement with empirical derivations based on observations of globular cluster and local field stars. However, for the redder M giants we find, as did Frogel and Whitford 1987, that the bolometric corrections differ by several tenths of a magnitude from those derived for field giants and adopted in the Revised Yale Isochrones. This difference most likely arises from the excess molecular blanketing in the V and I bands of the bulge giants relative to that seen in field stars.
15. J0811+4730: the most metal-poor star-forming dwarf galaxy known
Science.gov (United States)
Izotov, Y. I.; Thuan, T. X.; Guseva, N. G.; Liss, S. E.
2018-01-01
We report the discovery of the most metal-poor dwarf star-forming galaxy (SFG) known to date, J0811+4730. This galaxy, at a redshift z = 0.04444, has a Sloan Digital Sky Survey (SDSS) g-band absolute magnitude Mg = -15.41 mag. It was selected by inspecting the spectroscopic data base in the Data Release 13 (DR13) of the SDSS. Large Binocular Telescope/Multi-Object Double spectrograph (LBT/MODS) spectroscopic observations reveal its oxygen abundance to be 12 + log O/H = 6.98 ± 0.02, the lowest ever observed for an SFG. J0811+4730 strongly deviates from the main sequence defined by SFGs in the emission line diagnostic diagrams and the metallicity-luminosity diagram. These differences are caused mainly by the extremely low oxygen abundance in J0811+4730, which is ∼10 times lower than that in main-sequence SFGs with similar luminosities. By fitting the spectral energy distributions of the SDSS and LBT spectra, we derive a stellar mass of M⋆ = 106.24-106.29 M⊙, and we find that a considerable fraction of the galaxy stellar mass was formed during the most recent burst of star formation.
16. Linking dwarf galaxies to halo building blocks with the most metal-poor star in Sculptor.
Science.gov (United States)
Frebel, Anna; Kirby, Evan N; Simon, Joshua D
2010-03-04
Current cosmological models indicate that the Milky Way's stellar halo was assembled from many smaller systems. On the basis of the apparent absence of the most metal-poor stars in present-day dwarf galaxies, recent studies claimed that the true Galactic building blocks must have been vastly different from the surviving dwarfs. The discovery of an extremely iron-poor star (S1020549) in the Sculptor dwarf galaxy based on a medium-resolution spectrum cast some doubt on this conclusion. Verification of the iron-deficiency, however, and measurements of additional elements, such as the alpha-element Mg, are necessary to demonstrate that the same type of stars produced the metals found in dwarf galaxies and the Galactic halo. Only then can dwarf galaxy stars be conclusively linked to early stellar halo assembly. Here we report high-resolution spectroscopic abundances for 11 elements in S1020549, confirming its iron abundance of less than 1/4,000th that of the Sun, and showing that the overall abundance pattern follows that seen in low-metallicity halo stars, including the alpha-elements. Such chemical similarity indicates that the systems destroyed to form the halo billions of years ago were not fundamentally different from the progenitors of present-day dwarfs, and suggests that the early chemical enrichment of all galaxies may be nearly identical.
17. On the Dearth of Ultra-faint Extremely Metal-poor Galaxies
Energy Technology Data Exchange (ETDEWEB)
Sánchez Almeida, J.; Filho, M. E.; Vecchia, C. Dalla [Instituto Astrofísica de Canarias, E-38200 La Laguna, Tenerife (Spain); Skillman, E. D., E-mail: [email protected] [Minnesota Institute for Astrophysics, School of Physics and Astronomy, University of Minnesota, Minneapolis, MN (United States)
2017-02-01
Local extremely metal-poor galaxies (XMPs) are of particular astrophysical interest since they allow us to look into physical processes characteristic of the early universe, from the assembly of galaxy disks to the formation of stars in conditions of low metallicity. Given the luminosity–metallicity relationship, all galaxies fainter than M{sub r} ≃ −13 are expected to be XMPs. Therefore, XMPs should be common in galaxy surveys. However, they are not common, because several observational biases hamper their detection. This work compares the number of faint XMPs in the SDSS-DR7 spectroscopic survey with the expected number, given the known biases and the observed galaxy luminosity function (LF). The faint end of the LF is poorly constrained observationally, but it determines the expected number of XMPs. Surprisingly, the number of observed faint XMPs (∼10) is overpredicted by our calculation, unless the upturn in the faint end of the LF is not present in the model. The lack of an upturn can be naturally understood if most XMPs are central galaxies in their low-mass dark matter halos, which are highly depleted in baryons due to interaction with the cosmic ultraviolet background and to other physical processes. Our result also suggests that the upturn toward low luminosity of the observed galaxy LF is due to satellite galaxies.
18. ALFALFA DISCOVERY OF THE MOST METAL-POOR GAS-RICH GALAXY KNOWN: AGC 198691
Energy Technology Data Exchange (ETDEWEB)
Hirschauer, Alec S.; Salzer, John J.; Rhode, Katherine L., E-mail: [email protected], E-mail: [email protected], E-mail: [email protected] [Department of Astronomy, Indiana University, 727 East Third Street, Bloomington, IN 47405 (United States); and others
2016-05-10
We present spectroscopic observations of the nearby dwarf galaxy AGC 198691. This object is part of the Survey of H i in Extremely Low-Mass Dwarfs project, which is a multi-wavelength study of galaxies with H i masses in the range of 10{sup 6}–10{sup 7.2} M {sub ⊙}, discovered by the Arecibo Legacy Fast ALFA (ALFALFA) survey. We have obtained spectra of the lone H ii region in AGC 198691 with the new high-throughput KPNO Ohio State Multi-Object Spectrograph on the Mayall 4 m, as well as with the Blue Channel spectrograph on the MMT 6.5 m telescope. These observations enable the measurement of the temperature-sensitive [O iii] λ 4363 line and hence the determination of a “direct” oxygen abundance for AGC 198691. We find this system to be an extremely metal-deficient (XMD) system with an oxygen abundance of 12+log(O/H) = 7.02 ± 0.03, making AGC 198691 the lowest-abundance star-forming galaxy known in the local universe. Two of the five lowest-abundance galaxies known have been discovered by the ALFALFA blind H i survey; this high yield of XMD galaxies represents a paradigm shift in the search for extremely metal-poor galaxies.
19. Metal-poor star formation triggered by the feedback effects from Pop III stars
Science.gov (United States)
Chiaki, Gen; Susa, Hajime; Hirano, Shingo
2018-04-01
Metal enrichment by first-generation (Pop III) stars is the very first step of the matter cycle in structure formation and it is followed by the formation of extremely metal-poor (EMP) stars. To investigate the enrichment process by Pop III stars, we carry out a series of numerical simulations including the feedback effects of photoionization and supernovae (SNe) of Pop III stars with a range of masses of minihaloes (MHs), Mhalo, and Pop III stars, MPopIII. We find that the metal-rich ejecta reach neighbouring haloes and external enrichment (EE) occurs when the H II region expands before the SN explosion. The neighbouring haloes are only superficially enriched, and the metallicity of the clouds is [Fe/H] < -5. Otherwise, the SN ejecta fall back and recollapse to form an enriched cloud, i.e. an internal-enrichment (IE) process takes place. In the case where a Pop III star explodes as a core-collapse SN (CCSN), the MH undergoes IE, and the metallicity in the recollapsing region is -5 ≲ [Fe/H] ≲ -3 in most cases. We conclude that IE from a single CCSN can explain the formation of EMP stars. For pair-instability SNe (PISNe), EE takes place for all relevant mass ranges of MHs, consistent with the lack of observational signs of PISNe among EMP stars.
20. KINEMATIC SIGNATURES OF BULGES CORRELATE WITH BULGE MORPHOLOGIES AND SÉRSIC INDEX
International Nuclear Information System (INIS)
Fabricius, Maximilian H.; Saglia, Roberto P.; Bender, Ralf; Hopp, Ulrich; Fisher, David B.; Drory, Niv
2012-01-01
We use the Marcario Low Resolution Spectrograph at the Hobby-Eberly Telescope to study the kinematics of pseudobulges and classical bulges in the nearby universe. We present major axis rotational velocities, velocity dispersions, and h 3 and h 4 moments derived from high-resolution (σ inst ≈ 39 km s –1 ) spectra for 45 S0 to Sc galaxies; for 27 of the galaxies we also present minor axis data. We combine our kinematics with bulge-to-disk decompositions. We demonstrate for the first time that purely kinematic diagnostics of the bulge dichotomy agree systematically with those based on Sérsic index. Low Sérsic index bulges have both increased rotational support (higher v/σ values) and on average lower central velocity dispersions. Furthermore, we confirm that the same correlation also holds when visual morphologies are used to diagnose bulge type. The previously noted trend of photometrically flattened bulges to have shallower velocity dispersion profiles turns out to be significant and systematic if the Sérsic index is used to distinguish between pseudobulges and classical bulges. The anti-correlation between h 3 and v/σ observed in elliptical galaxies is also observed in intermediate-type galaxies, irrespective of bulge type. Finally, we present evidence for formerly undetected counter-rotation in the two systems NGC 3945 and NGC 4736.
1. Optics of globular photonic crystals
International Nuclear Information System (INIS)
Gorelik, V S
2007-01-01
The results of experimental and theoretical studies of the optical properties of globular photonic crystals - new physical objects having a crystal structure with the lattice period exceeding considerably the atomic size, are presented. As globular photonic crystals, artificial opal matrices consisting of close-packed silica globules of diameter ∼200 nm were used. The reflection spectra of these objects characterising the parameters of photonic bands existing in these crystals in the visible spectral region are presented. The idealised models of the energy band structure of photonic crystals investigated in the review give analytic dispersion dependences for the group velocity and the effective photon mass in a globular photonic crystal. The characteristics of secondary emission excited in globular photonic crystals by monochromatic and broadband radiation are presented. The results of investigations of single-photon-excited delayed scattering of light observed in globular photonic crystals exposed to cw UV radiation and radiation from a repetitively pulsed copper vapour laser are presented. The possibilities of using globular photonic crystals as active media for lasing in different spectral regions are considered. It is proposed to use globular photonic crystals as sensitive sensors in optoelectronic devices for molecular analysis of organic and inorganic materials by the modern methods of laser spectroscopy. The results of experimental studies of spontaneous and stimulated globular scattering of light are discussed. The conditions for observing resonance and two-photon-excited delayed scattering of light are found. The possibility of accumulation and localisation of the laser radiation energy inside a globular photonic crystal is reported. (review)
2. CHEMICAL ANALYSIS OF A CARBON-ENHANCED VERY METAL-POOR STAR: CD-27 14351
Energy Technology Data Exchange (ETDEWEB)
Karinkuzhi, Drisya; Goswami, Aruna [Indian Institute of Astrophysics, Koramangala, Bangalore 560034 (India); Masseron, Thomas [Institute of Astronomy, Madingley Road, Cambridge CB3 0HA (United Kingdom)
2017-01-01
We present, for the first time, an abundance analysis of a very metal-poor carbon-enhanced star CD-27 14351 based on a high-resolution ( R ∼ 48,000) FEROS spectrum. Our abundance analysis performed using local thermodynamic equilibrium model atmospheres shows that the object is a cool star with stellar atmospheric parameters, effective temperature T {sub eff} = 4335 K, surface gravity log g = 0.5, microturbulence ξ = 2.42 km s{sup −1}, and metallicity [Fe/H] = −2.6. The star exhibits high carbon and nitrogen abundances with [C/Fe] = 2.89 and [N/Fe] = 1.89. Overabundances of neutron-capture elements are evident in Ba, La, Ce, and Nd, with estimated [X/Fe] > 1, the largest enhancement being seen in Ce with [Ce/Fe] = 2.63. While the first peak s -process elements Sr and Y are found to be enhanced with respect to Fe, ([Sr/Fe] = 1.73 and [Y/Fe] = 1.91), the third peak s -process element Pb could not be detected in our spectrum at the given resolution. Europium, primarily an r -process element also shows an enhancement with [Eu/Fe] = 1.65. With [Ba/Eu] = 0.12, the object CD-27 14351 satisfies the classification criterion for a CEMP-r/s star. The elemental abundance distributions observed in this star are discussed in light of the chemical abundances observed in other CEMP stars in the literature.
3. Quantitative spectroscopy of blue supergiants in metal-poor dwarf galaxy NGC 3109
International Nuclear Information System (INIS)
Hosek, Matthew W. Jr.; Kudritzki, Rolf-Peter; Bresolin, Fabio; Urbaneja, Miguel A.; Przybilla, Norbert; Evans, Christopher J.; Pietrzyński, Grzegorz; Gieren, Wolfgang; Carraro, Giovanni
2014-01-01
We present a quantitative analysis of the low-resolution (∼4.5 Å) spectra of 12 late-B and early-A blue supergiants (BSGs) in the metal-poor dwarf galaxy NGC 3109. A modified method of analysis is presented which does not require use of the Balmer jump as an independent T eff indicator, as used in previous studies. We determine stellar effective temperatures, gravities, metallicities, reddening, and luminosities, and combine our sample with the early-B-type BSGs analyzed by Evans et al. to derive the distance to NGC 3109 using the flux-weighted gravity-luminosity relation (FGLR). Using primarily Fe-group elements, we find an average metallicity of [ Z-bar ] = –0.67 ± 0.13, and no evidence of a metallicity gradient in the galaxy. Our metallicities are higher than those found by Evans et al. based on the oxygen abundances of early-B supergiants ([ Z-bar ] = –0.93 ± 0.07), suggesting a low α/Fe ratio for the galaxy. We adjust the position of NGC 3109 on the BSG-determined galaxy mass-metallicity relation accordingly and compare it to metallicity studies of H II regions in star-forming galaxies. We derive an FGLR distance modulus of 25.55 ± 0.09 (1.27 Mpc) that compares well with Cepheid and tip of the red giant branch distances. The FGLR itself is consistent with those found in other galaxies, demonstrating the reliability of this method as a measure of extragalactic distances.
4. Possible evidence for metal accretion onto the surfaces of metal-poor main-sequence stars
Energy Technology Data Exchange (ETDEWEB)
Hattori, Kohei; Yoshii, Yuzuru [Institute of Astronomy, School of Science, University of Tokyo, 2-21-1 Osawa, Mitaka, Tokyo 181-0015 (Japan); Beers, Timothy C. [National Optical Astronomy Observatories, Tucson, AZ 85719 (United States); Carollo, Daniela [Department of Physics and Astronomy, Macquarie University, Sydney, 2109 NSW (Australia); Lee, Young Sun, E-mail: [email protected] [Department of Astronomy, New Mexico State University, Las Cruces, NM 88003 (United States)
2014-04-01
The entire evolution of the Milky Way, including its mass-assembly and star-formation history, is imprinted onto the chemo-dynamical distribution function of its member stars, f(x, v, [X/H]), in the multi-dimensional phase space spanned by position, velocity, and elemental abundance ratios. In particular, the chemo-dynamical distribution functions for low-mass stars (e.g., G- or K-type dwarfs) are precious tracers of the earliest stages of the Milky Way's formation, since their main-sequence lifetimes approach or exceed the age of the universe. A basic tenet of essentially all previous analyses is that the stellar metallicity, usually parameterized as [Fe/H], is conserved over time for main-sequence stars (at least those that have not been polluted due to mass transfer from binary companions). If this holds true, any correlations between metallicity and kinematics for long-lived main-sequence stars of different masses, effective temperatures, or spectral types must strictly be the same, since they reflect the same mass-assembly and star-formation histories. By analyzing a sample of nearby metal-poor halo and thick-disk stars on the main sequence, taken from Data Release 8 of the Sloan Digital Sky Survey, we find that the median metallicity of G-type dwarfs is systematically higher (by about 0.2 dex) than that of K-type dwarfs having the same median rotational velocity about the Galactic center. If it can be confirmed, this finding may invalidate the long-accepted assumption that the atmospheric metallicities of long-lived stars are conserved over time.
5. DISCOVERY OF A GAS-RICH COMPANION TO THE EXTREMELY METAL-POOR GALAXY DDO 68
Energy Technology Data Exchange (ETDEWEB)
Cannon, John M.; Alfvin, Erik D. [Department of Physics and Astronomy, Macalester College, 1600 Grand Avenue, Saint Paul, MN 55105 (United States); Johnson, Megan; Koribalski, Baerbel [Australia Telescope National Facility, CSIRO Astronomy and Space Science, P.O. Box 76, NSW 1710, Epping (Australia); McQuinn, Kristen B. W.; Skillman, Evan D. [Minnesota Institute for Astrophysics, University of Minnesota, Minneapolis, MN 55455 (United States); Bailin, Jeremy [Department of Physics and Astronomy, University of Alabama, P.O. Box 870324, Tuscaloosa, AL 35487-0324 (United States); Ford, H. Alyson [National Radio Astronomy Observatory, P.O. Box 2, Green Bank, WV 24944 (United States); Girardi, Léo [Osservatorio Astronomico di Padova—INAF, Vicolo dell' Osservatorio 5, I-35122 Padova (Italy); Hirschauer, Alec S.; Janowiecki, Steven; Salzer, John J.; Van Sistine, Angela [Department of Astronomy, Indiana University, 727 East Third Street, Bloomington, IN 47405 (United States); Dolphin, Andrew [Raytheon Company, 1151 E. Hermans Road, Tucson, AZ 85756 (United States); Elson, E. C. [Astrophysics, Cosmology and Gravity Centre (ACGC), Department of Astronomy, University of Cape Town, Private Bag X3, Rondebosch 7701 (South Africa); Marigo, Paola; Rosenfield, Philip [Dipartimento di Fisica e Astronomia Galileo Galilei, Universitá degli Studi di Padova, Vicolo dell' Osservatorio 3, I-35122 Padova (Italy); Rosenberg, Jessica L. [School of Physics, Astronomy, and Computational Science, George Mason University, Fairfax, VA 22030 (United States); Venkatesan, Aparna [Department of Physics and Astronomy, University of San Francisco, 2130 Fulton Street, San Francisco, CA 94117 (United States); Warren, Steven R., E-mail: [email protected] [Department of Astronomy, University of Maryland, CSS Bldg., Rm. 1024, Stadium Drive, College Park, MD 20742-2421 (United States)
2014-05-20
We present H I spectral-line imaging of the extremely metal-poor galaxy DDO 68. This system has a nebular oxygen abundance of only ∼3% Z {sub ☉}, making it one of the most metal-deficient galaxies known in the local volume. Surprisingly, DDO 68 is a relatively massive and luminous galaxy for its metal content, making it a significant outlier in the mass-metallicity and luminosity-metallicity relationships. The origin of such a low oxygen abundance in DDO 68 presents a challenge for models of the chemical evolution of galaxies. One possible solution to this problem is the infall of pristine neutral gas, potentially initiated during a gravitational interaction. Using archival H I spectral-line imaging obtained with the Karl G. Jansky Very Large Array, we have discovered a previously unknown companion of DDO 68. This low-mass (M{sub H} {sub I} = 2.8 × 10{sup 7} M {sub ☉}), recently star-forming (SFR{sub FUV} = 1.4 × 10{sup –3} M {sub ☉} yr{sup –1}, SFR{sub Hα} < 7 × 10{sup –5} M {sub ☉} yr{sup –1}) companion has the same systemic velocity as DDO 68 (V {sub sys} = 506 km s{sup –1}; D = 12.74 ± 0.27 Mpc) and is located at a projected distance of ∼42 kpc. New H I maps obtained with the 100 m Robert C. Byrd Green Bank Telescope provide evidence that DDO 68 and this companion are gravitationally interacting at the present time. Low surface brightness H I gas forms a bridge between these objects.
6. Quantitative spectroscopy of blue supergiants in metal-poor dwarf galaxy NGC 3109
Energy Technology Data Exchange (ETDEWEB)
Hosek, Matthew W. Jr.; Kudritzki, Rolf-Peter; Bresolin, Fabio [Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu, HI 96822 (United States); Urbaneja, Miguel A.; Przybilla, Norbert [Institute for Astro and Particle Physics, A-6020 Innsbruck University (Austria); Evans, Christopher J. [UK Astronomy Technology Centre, Royal Observatory, Blackford Hill, Edinburgh (United Kingdom); Pietrzyński, Grzegorz; Gieren, Wolfgang [Departamento de Astronomía, Universidad de Concepción, Casilla 160-C, Concepción (Chile); Carraro, Giovanni, E-mail: [email protected], E-mail: [email protected], E-mail: [email protected], E-mail: [email protected], E-mail: [email protected], E-mail: [email protected], E-mail: [email protected], E-mail: [email protected], E-mail: [email protected] [European Southern Observatory, La Silla Paranal Observatory (Chile)
2014-04-20
We present a quantitative analysis of the low-resolution (∼4.5 Å) spectra of 12 late-B and early-A blue supergiants (BSGs) in the metal-poor dwarf galaxy NGC 3109. A modified method of analysis is presented which does not require use of the Balmer jump as an independent T {sub eff} indicator, as used in previous studies. We determine stellar effective temperatures, gravities, metallicities, reddening, and luminosities, and combine our sample with the early-B-type BSGs analyzed by Evans et al. to derive the distance to NGC 3109 using the flux-weighted gravity-luminosity relation (FGLR). Using primarily Fe-group elements, we find an average metallicity of [ Z-bar ] = –0.67 ± 0.13, and no evidence of a metallicity gradient in the galaxy. Our metallicities are higher than those found by Evans et al. based on the oxygen abundances of early-B supergiants ([ Z-bar ] = –0.93 ± 0.07), suggesting a low α/Fe ratio for the galaxy. We adjust the position of NGC 3109 on the BSG-determined galaxy mass-metallicity relation accordingly and compare it to metallicity studies of H II regions in star-forming galaxies. We derive an FGLR distance modulus of 25.55 ± 0.09 (1.27 Mpc) that compares well with Cepheid and tip of the red giant branch distances. The FGLR itself is consistent with those found in other galaxies, demonstrating the reliability of this method as a measure of extragalactic distances.
7. SYSTEMATIC SEARCH FOR EXTREMELY METAL-POOR GALAXIES IN THE SLOAN DIGITAL SKY SURVEY
Energy Technology Data Exchange (ETDEWEB)
Morales-Luis, A. B.; Sanchez Almeida, J.; Aguerri, J. A. L.; Munoz-Tunon, C., E-mail: [email protected], E-mail: [email protected], E-mail: [email protected], E-mail: [email protected] [Instituto de Astrofisica de Canarias, E-38205 La Laguna, Tenerife (Spain)
2011-12-10
We carry out a systematic search for extremely metal-poor (XMP) galaxies in the spectroscopic sample of Sloan Digital Sky Survey (SDSS) data release 7 (DR7). The XMP candidates are found by classifying all the galaxies according to the form of their spectra in a region 80 A wide around H{alpha}. Due to the data size, the method requires an automatic classification algorithm. We use k-means. Our systematic search renders 32 galaxies having negligible [N II] lines, as expected in XMP galaxy spectra. Twenty-one of them have been previously identified as XMP galaxies in the literature-the remaining 11 are new. This was established after a thorough bibliographic search that yielded only some 130 galaxies known to have an oxygen metallicity 10 times smaller than the Sun (explicitly, with 12 + log (O/H) {<=} 7.65). XMP galaxies are rare; they represent 0.01% of the galaxies with emission lines in SDSS/DR7. Although the final metallicity estimate of all candidates remains pending, strong-line empirical calibrations indicate a metallicity about one-tenth solar, with the oxygen metallicity of the 21 known targets being 12 + log (O/H) {approx_equal} 7.61 {+-} 0.19. Since the SDSS catalog is limited in apparent magnitude, we have been able to estimate the volume number density of XMP galaxies in the local universe, which turns out to be (1.32 {+-} 0.23) Multiplication-Sign 10{sup -4} Mpc{sup -3}. The XMP galaxies constitute 0.1% of the galaxies in the local volume, or {approx}0.2% considering only emission-line galaxies. All but four of our candidates are blue compact dwarf galaxies, and 24 of them have either cometary shape or are formed by chained knots.
8. THE INTERMEDIATE NEUTRON-CAPTURE PROCESS AND CARBON-ENHANCED METAL-POOR STARS
Energy Technology Data Exchange (ETDEWEB)
Hampel, Melanie [Zentrum für Astronomie der Universität Heidelberg, Landessternwarte, Königstuhl 12, D-69117 Heidelberg (Germany); Stancliffe, Richard J. [Argelander-Institut für Astronomie, University of Bonn, Auf dem Hügel 71, D-53121 Bonn (Germany); Lugaro, Maria [Konkoly Observatory, Research Centre for Astronomy and Earth Sciences, Hungarian Academy of Sciences, H-1121 Budapest (Hungary); Meyer, Bradley S., E-mail: [email protected] [Department of Physics and Astronomy, Clemson University, Clemson, SC 29634-0978 (United States)
2016-11-10
Carbon-enhanced metal-poor (CEMP) stars in the Galactic Halo display enrichments in heavy elements associated with either the s (slow) or the r (rapid) neutron-capture process (e.g., barium and europium, respectively), and in some cases they display evidence of both. The abundance patterns of these CEMP- s / r stars, which show both Ba and Eu enrichment, are particularly puzzling, since the s and the r processes require neutron densities that are more than ten orders of magnitude apart and, hence, are thought to occur in very different stellar sites with very different physical conditions. We investigate whether the abundance patterns of CEMP- s / r stars can arise from the nucleosynthesis of the intermediate neutron-capture process (the i process), which is characterized by neutron densities between those of the s and the r processes. Using nuclear network calculations, we study neutron capture nucleosynthesis at different constant neutron densities n ranging from 10{sup 7}–10{sup 15} cm{sup -3}. With respect to the classical s process resulting from neutron densities on the lowest side of this range, neutron densities on the highest side result in abundance patterns, which show an increased production of heavy s -process and r -process elements, but similar abundances of the light s -process elements. Such high values of n may occur in the thermal pulses of asymptotic giant branch stars due to proton ingestion episodes. Comparison to the surface abundances of 20 CEMP- s / r stars shows that our modeled i -process abundances successfully reproduce observed abundance patterns, which could not be previously explained by s -process nucleosynthesis. Because the i -process models fit the abundances of CEMP- s / r stars so well, we propose that this class should be renamed as CEMP- i .
9. Does the Galactic Bulge Have Fewer Planets?
Science.gov (United States)
Kohler, Susanna
2016-12-01
The Milky Ways dense central bulge is a very different environment than the surrounding galactic disk in which we live. Do the differences affect the ability of planets to form in the bulge?Exploring Galactic PlanetsSchematic illustrating how gravitational microlensing by an extrasolar planet works. [NASA]Planet formation is a complex process with many aspects that we dont yet understand. Do environmental properties like host star metallicity, the density of nearby stars, or the intensity of the ambient radiation field affect the ability of planets to form? To answer these questions, we will ultimately need to search for planets around stars in a large variety of different environments in our galaxy.One way to detect recently formed, distant planets is by gravitational microlensing. In this process, light from a distant source star is bent by a lens star that is briefly located between us and the source. As the Earth moves, this momentary alignment causes a blip in the sources light curve that we can detect and planets hosted by the lens star can cause an additional observable bump.Artists impression of the Milky Way galaxy. The central bulge is much denserthan the surroundingdisk. [ESO/NASA/JPL-Caltech/M. Kornmesser/R. Hurt]Relative AbundancesMost source stars reside in the galactic bulge, so microlensing events can probe planetary systems at any distance between the Earth and the galactic bulge. This means that planet detections from microlensing could potentially be used to measure the relative abundances of exoplanets in different parts of our galaxy.A team of scientists led by Matthew Penny, a Sagan postdoctoral fellow at Ohio State University, set out to do just that. The group considered a sample of 31 exoplanetary systems detected by microlensing and asked the following question: are the planet abundances in the galactic bulge and the galactic disk the same?A Paucity of PlanetsTo answer this question, Penny and collaborators derived the expected
10. Metal-rich, Metal-poor: Updated Stellar Population Models for Old Stellar Systems
Science.gov (United States)
Conroy, Charlie; Villaume, Alexa; van Dokkum, Pieter G.; Lind, Karin
2018-02-01
We present updated stellar population models appropriate for old ages (>1 Gyr) and covering a wide range in metallicities (‑1.5 ≲ [Fe/H] ≲ 0.3). These models predict the full spectral variation associated with individual element abundance variation as a function of metallicity and age. The models span the optical–NIR wavelength range (0.37–2.4 μm), include a range of initial mass functions, and contain the flexibility to vary 18 individual elements including C, N, O, Mg, Si, Ca, Ti, and Fe. To test the fidelity of the models, we fit them to integrated light optical spectra of 41 Galactic globular clusters (GCs). The value of testing models against GCs is that their ages, metallicities, and detailed abundance patterns have been derived from the Hertzsprung–Russell diagram in combination with high-resolution spectroscopy of individual stars. We determine stellar population parameters from fits to all wavelengths simultaneously (“full spectrum fitting”), and demonstrate explicitly with mock tests that this approach produces smaller uncertainties at fixed signal-to-noise ratio than fitting a standard set of 14 line indices. Comparison of our integrated-light results to literature values reveals good agreement in metallicity, [Fe/H]. When restricting to GCs without prominent blue horizontal branch populations, we also find good agreement with literature values for ages, [Mg/Fe], [Si/Fe], and [Ti/Fe].
11. THE SIZE DIFFERENCE BETWEEN RED AND BLUE GLOBULAR CLUSTERS IS NOT DUE TO PROJECTION EFFECTS
International Nuclear Information System (INIS)
Webb, Jeremy J.; Harris, William E.; Sills, Alison
2012-01-01
Metal-rich (red) globular clusters in massive galaxies are, on average, smaller than metal-poor (blue) globular clusters. One of the possible explanations for this phenomenon is that the two populations of clusters have different spatial distributions. We test this idea by comparing clusters observed in unusually deep, high signal-to-noise images of M87 with a simulated globular cluster population in which the red and blue clusters have different spatial distributions, matching the observations. We compare the overall distribution of cluster effective radii as well as the relationship between effective radius and galactocentric distance for both the observed and simulated red and blue sub-populations. We find that the different spatial distributions does not produce a significant size difference between the red and blue sub-populations as a whole or at a given galactocentric distance. These results suggest that the size difference between red and blue globular clusters is likely due to differences during formation or later evolution.
12. The Size Difference between Red and Blue Globular Clusters is not due to Projection Effects
Science.gov (United States)
Webb, Jeremy J.; Harris, William E.; Sills, Alison
2012-11-01
Metal-rich (red) globular clusters in massive galaxies are, on average, smaller than metal-poor (blue) globular clusters. One of the possible explanations for this phenomenon is that the two populations of clusters have different spatial distributions. We test this idea by comparing clusters observed in unusually deep, high signal-to-noise images of M87 with a simulated globular cluster population in which the red and blue clusters have different spatial distributions, matching the observations. We compare the overall distribution of cluster effective radii as well as the relationship between effective radius and galactocentric distance for both the observed and simulated red and blue sub-populations. We find that the different spatial distributions does not produce a significant size difference between the red and blue sub-populations as a whole or at a given galactocentric distance. These results suggest that the size difference between red and blue globular clusters is likely due to differences during formation or later evolution.
13. Chemical Abundances of Red Giant Branch Stars in the Globular Clusters NGC 6333 and NGC 6366
Science.gov (United States)
Johnson, Christian I.; Rich, R. M.; Pilachowski, C. A.; Kunder, A. M.
2013-01-01
We present chemical abundances and radial velocities for >20 red giant branch (RGB) stars in the Galactic globular clusters NGC 6333 ([Fe/H]≈-1.8) and NGC 6366 ([Fe/H]≈-0.6). The results are based on moderate resolution (R=18,000), high signal-to-noise ratio (>100) spectra obtained with the Hydra multifiber positioner and bench spectrograph on the WIYN 3.5m telescope at Kitt Peak National Observatory. Both objects are likely associated with the Galactic bulge globular cluster system, and we therefore compare the cluster abundance patterns with those of nearby bulge field stars. Additionally, we investigate differences in the O-Na anticorrelation and neutron-capture element dispersion between the two clusters, and compare their abundance patterns with those of similar metallicity halo globular clusters. This material is based upon work supported by the National Science Foundation under award No. AST-1003201 to C.I.J. C.A.P. gratefully acknowledges support from the Daniel Kirkwood Research Fund at Indiana University. R.M.R. acknowledges support from NSF grant AST-0709479 and AST-121120995.
14. Relativistic Binaries in Globular Clusters
Directory of Open Access Journals (Sweden)
Matthew J. Benacquista
2013-03-01
Full Text Available Galactic globular clusters are old, dense star systems typically containing 10^4 – 10^6 stars. As an old population of stars, globular clusters contain many collapsed and degenerate objects. As a dense population of stars, globular clusters are the scene of many interesting close dynamical interactions between stars. These dynamical interactions can alter the evolution of individual stars and can produce tight binary systems containing one or two compact objects. In this review, we discuss theoretical models of globular cluster evolution and binary evolution, techniques for simulating this evolution that leads to relativistic binaries, and current and possible future observational evidence for this population. Our discussion of globular cluster evolution will focus on the processes that boost the production of tight binary systems and the subsequent interaction of these binaries that can alter the properties of both bodies and can lead to exotic objects. Direct N-body integrations and Fokker–Planck simulations of the evolution of globular clusters that incorporate tidal interactions and lead to predictions of relativistic binary populations are also discussed. We discuss the current observational evidence for cataclysmic variables, millisecond pulsars, and low-mass X-ray binaries as well as possible future detection of relativistic binaries with gravitational radiation.
15. The metal-poor knee in the Fornax dwarf spheroidal galaxy
Energy Technology Data Exchange (ETDEWEB)
Hendricks, Benjamin; Koch, Andreas [Zentrum für Astronomie der Universität Heidelberg, Landessternwarte, Königstuhl 12, D-69117, Heidelberg (Germany); Lanfranchi, Gustavo A. [Núcleo de Astrofísica Teórica, Universidade Cruzeiro do Sul, R. Galvão Bueno 868, Liberdade, 01506-000, São Paulo, SP (Brazil); Boeche, Corrado [Zentrum für Astronomie der Universität Heidelberg, Astronomisches Rechen-Institut, Mönchhofstr. 12-14, D-69120, Heidelberg (Germany); Walker, Matthew [McWilliams Center for Cosmology, Carnegie Mellon University, 5000 Forbes Ave., Pittsburgh, PA 15213 (United States); Johnson, Christian I. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, MS-15, Cambridge, MA 02138 (United States); Peñarrubia, Jorge [Institute for Astronomy, University of Edinburgh, Royal Observatory, Blackford Hill, Edinburgh EH9 3HJ (United Kingdom); Gilmore, Gerard, E-mail: [email protected] [Institute of Astronomy, Cambridge University, Madingley Rd, Cambridge CB3 OHA (United Kingdom)
2014-04-20
We present α-element abundances of Mg, Si, and Ti for a large sample of field stars in two outer fields of the Fornax dwarf spheroidal (dSph) galaxy, obtained with Very Large Telescope/GIRAFFE (R ∼ 16, 000). Due to the large fraction of metal-poor (MP) stars in our sample, we are able to follow the α-element evolution from [Fe/H] ≈ –2.5 continuously to [Fe/H] ≈ –0.7. For the first time we are able to resolve the turnover from the Type II supernovae (SNe) dominated, α-enhanced plateau down to subsolar [α/Fe] values, due to the onset of SNe Ia, and thus to trace the chemical enrichment efficiency of the galaxy. Our data support the general concept of an α-enhanced plateau at early epochs, followed by a well-defined 'knee' caused by the onset of SNe Ia, and finally a second plateau with sub-solar [α/Fe] values. We find the position of this knee to be at [Fe/H] ≈ –1.9 and therefore significantly more MP than expected from comparison with other dSphs and standard evolutionary models. Surprisingly, this value is rather comparable to the knee in Sculptor, a dSph ∼10 times less luminous than Fornax. Using chemical evolution models, we find that the position of the knee and the subsequent plateau at the sub-solar level can hardly be explained unless the galaxy experienced several discrete star formation (SF) events with a drastic variation in SF efficiency, while a uniform SF can be ruled out. One possible evolutionary scenario is that Fornax experienced one or several major accretion events from gas-rich systems in the past, so that its current stellar mass is not indicative of the chemical evolution environment at ancient times. If Fornax is the product of several smaller buildings blocks, this may also have implications for the understanding of the formation process of dSphs in general.
16. Rotational mixing in carbon-enhanced metal-poor stars with s-process enrichment
Science.gov (United States)
Matrozis, E.; Stancliffe, R. J.
2017-10-01
Carbon-enhanced metal-poor (CEMP) stars with s-process enrichment (CEMP-s) are believed to be the products of mass transfer from an asymptotic giant branch (AGB) companion, which has long since become a white dwarf. The surface abundances of CEMP-s stars are thus commonly assumed to reflect the nucleosynthesis output of the first AGB stars. We have previously shown that, for this to be the case, some physical mechanism must counter atomic diffusion (gravitational settling and radiative levitation) in these nearly fully radiative stars, which otherwise leads to surface abundance anomalies clearly inconsistent with observations. Here we take into account angular momentum accretion by these stars. We compute in detail the evolution of typical CEMP-s stars from the zero-age main sequence, through the mass accretion, and up the red giant branch for a wide range of specific angular momentum ja of the accreted material, corresponding to surface rotation velocities, vrot, between about 0.3 and 300 kms-1. We find that only for ja ≳ 1017 cm2s-1 (vrot > 20 kms-1, depending on mass accreted) angular momentum accretion directly causes chemical dilution of the accreted material. This could nevertheless be relevant to CEMP-s stars, which are observed to rotate more slowly, if they undergo continuous angular momentum loss akin to solar-like stars. In models with rotation velocities characteristic of CEMP-s stars, rotational mixing primarily serves to inhibit atomic diffusion, such that the maximal surface abundance variations (with respect to the composition of the accreted material) prior to first dredge-up remain within about 0.4 dex without thermohaline mixing or about 0.5-1.5 dex with thermohaline mixing. Even in models with the lowest rotation velocities (vrot ≲ 1 kms-1), rotational mixing is able to severely inhibit atomic diffusion, compared to non-rotating models. We thus conclude that it offers a natural solution to the problem posed by atomic diffusion and cannot be
17. Physical conditions of the molecular gas in metal-poor galaxies
Science.gov (United States)
Hunt, L. K.; Weiß, A.; Henkel, C.; Combes, F.; García-Burillo, S.; Casasola, V.; Caselli, P.; Lundgren, A.; Maiolino, R.; Menten, K. M.; Testi, L.
2017-10-01
Studying the molecular component of the interstellar medium (ISM) in metal-poor galaxies has been challenging because of the faintness of carbon monoxide emission, the most common proxy of H2. Here we present new detections of molecular gas at low metallicities, and assess the physical conditions in the gas through various CO transitions for 8 galaxies. For one, NGC 1140 (Z/Z⊙ 0.3), two detections of 13CO isotopologues and atomic carbon, [Ci](1-0) and an upper limit for HCN(1-0) are also reported. After correcting to a common beam size, we compared 12CO(2-1)/12CO(1-0) (R21) and 12CO(3-2)/12CO(1-0) (R31) line ratios of our sample with galaxies from the literature and find that only NGC 1140 shows extreme values (R21 R31 2). Fitting physical models to the 12CO and 13CO emission in NGC 1140 suggests that the molecular gas is cool (kinetic temperature Tkin ≲ 20 K), dense (H2 volume density nH2 ≳ 106 cm-3), with moderate CO column density (NCO 1016 cm-2) and low filling factor. Surprisingly, the [12CO]/[13CO] abundance ratio in NGC 1140 is very low ( 8-20), lower even than the value of 24 found in the Galactic Center. The young age of the starburst in NGC 1140 precludes 13CO enrichment from evolved intermediate-mass stars; instead we attribute the low ratio to charge-exchange reactions and fractionation, because of the enhanced efficiency of these processes in cool gas at moderate column densities. Fitting physical models to 12CO and [Ci](1-0) emission in NGC 1140 gives an unusually low [12CO]/[12C] abundance ratio, suggesting that in this galaxy atomic carbon is at least 10 times more abundant than 12CO. Based on observations carried out with the IRAM 30 m and the Atacama Pathfinder Experiment (APEX). IRAM is supported by the INSU/CNRS (France), MPG (Germany), and IGN (Spain), and APEX is a collaboration between the Max-Planck-Institut fur Radioastronomie, the European Southern Observatory, and the Onsala Space Observatory.
18. Functional myelographic differentiation of lumbar bulging annulus
Energy Technology Data Exchange (ETDEWEB)
Park, Choong Ki; Kim, Hong Kil; Park, Sang Gyu; Lee, Young Jung; Yoon, Jong Sup [Hallym University College of Medicine, Seoul (Korea, Republic of)
1988-08-15
Herniated disk and bulging annulus are the major causes of lower back pain. It is necessary to differentiate bulging annulus from herniated disk because of their different methods of treatment. Myelography is one of the useful diagnostic methods for disk diseases even though advanced diagnostic modalities such as CT and MRI are more accurate. Functional myelography is not a new technology expect for two additional views, flexion and extension, are obtained with conventional myelography. Differentiation between bulging annulus and herniated disk by conventional myelography is based on the extent and multiplicity of extradural deformity of the contrast filled dural sac and neural sleeve as well as the changes of nerve root. There is no previous report about differential points between bulging annulus and herniated disk according to functional myelography. It is the purpose of this study to find any additional differential points on functional myelography between bulging annulus and herniated disk over convectional myelography. Authors analysed functional myelographic findings of 152 cases from July 1986 to July 1987. Among them, 22 cases who had been suffered from cervical abnormality or vague lower back pain were diagnosed as normal by myelography, and 30 cases of L4-5 herniated disk and 21 cases of L4-5 bulging annulus which had been finally diagnosed by operation were studied. The results were as follows. 1. In normal group, anterior epidural space was gradually widened from the upper lumbar vertebra downward. And anterior epidural space was more sidened at the disk level in extension view than in flexion except for L5-S1 lever. 2. In bulging annulus group, the shape of anterior epidural space in flexion state was as similar as normal. Anoterior epidural space in extension state was more sidened at the buldging annulus than normal, but lesser than herniated disk. 3. In herniated disk group, widening of anterior epidural space at the herniated disk level was
19. Detection of a Population of Carbon-enhanced Metal-poor Stars in the Sculptor Dwarf Spheroidal Galaxy
Science.gov (United States)
Chiti, Anirudh; Simon, Joshua D.; Frebel, Anna; Thompson, Ian B.; Shectman, Stephen A.; Mateo, Mario; Bailey, John I., III; Crane, Jeffrey D.; Walker, Matthew
2018-04-01
The study of the chemical abundances of metal-poor stars in dwarf galaxies provides a venue to constrain paradigms of chemical enrichment and galaxy formation. Here we present metallicity and carbon abundance measurements of 100 stars in Sculptor from medium-resolution (R ∼ 2000) spectra taken with the Magellan/Michigan Fiber System mounted on the Magellan-Clay 6.5 m telescope at Las Campanas Observatory. We identify 24 extremely metal-poor star candidates ([Fe/H] 1.0). The existence of a large number of CEMP stars both in the halo and in Sculptor suggests that some halo CEMP stars may have originated from accreted early analogs of dwarf galaxies. This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile.
20. HORIZONTAL BRANCH MORPHOLOGY OF GLOBULAR CLUSTERS: A MULTIVARIATE STATISTICAL ANALYSIS
International Nuclear Information System (INIS)
2009-01-01
The proper interpretation of horizontal branch (HB) morphology is crucial to the understanding of the formation history of stellar populations. In the present study a multivariate analysis is used (principal component analysis) for the selection of appropriate HB morphology parameter, which, in our case, is the logarithm of effective temperature extent of the HB (log T effHB ). Then this parameter is expressed in terms of the most significant observed independent parameters of Galactic globular clusters (GGCs) separately for coherent groups, obtained in a previous work, through a stepwise multiple regression technique. It is found that, metallicity ([Fe/H]), central surface brightness (μ v ), and core radius (r c ) are the significant parameters to explain most of the variations in HB morphology (multiple R 2 ∼ 0.86) for GGC elonging to the bulge/disk while metallicity ([Fe/H]) and absolute magnitude (M v ) are responsible for GGC belonging to the inner halo (multiple R 2 ∼ 0.52). The robustness is tested by taking 1000 bootstrap samples. A cluster analysis is performed for the red giant branch (RGB) stars of the GGC belonging to Galactic inner halo (Cluster 2). A multi-episodic star formation is preferred for RGB stars of GGC belonging to this group. It supports the asymptotic giant branch (AGB) model in three episodes instead of two as suggested by Carretta et al. for halo GGC while AGB model is suggested to be revisited for bulge/disk GGC.
1. THE MOST METAL-POOR DAMPED Lyα SYSTEMS: AN INSIGHT INTO DWARF GALAXIES AT HIGH-REDSHIFT
International Nuclear Information System (INIS)
Cooke, Ryan J.; Pettini, Max; Jorgenson, Regina A.
2015-01-01
In this paper we analyze the kinematics, chemistry, and physical properties of a sample of the most metal-poor damped Lyα systems (DLAs), to uncover their links to modern-day galaxies. We present evidence that the DLA population as a whole exhibits a ''knee'' in the relative abundances of the α-capture and Fe-peak elements when the metallicity is [Fe/H] ≅ –2.0, assuming that Zn traces the buildup of Fe-peak elements. In this respect, the chemical evolution of DLAs is clearly different from that experienced by Milky Way halo stars, but resembles that of dwarf spheroidal galaxies in the Local Group. We also find a close correspondence between the kinematics of Local Group dwarf galaxies and of high-redshift metal-poor DLAs, which further strengthens this connection. On the basis of such similarities, we propose that the most metal-poor DLAs provide us with a unique opportunity to directly study the dwarf galaxy population more than ten billion years in the past, at a time when many dwarf galaxies were forming the bulk of their stars. To this end, we have measured some of the key physical properties of the DLA gas, including their neutral gas mass, size, kinetic temperature, density, and turbulence. We find that metal-poor DLAs contain a warm neutral medium with T gas ≅ 9600 K predominantly held up by thermal pressure. Furthermore, all of the DLAs in our sample exhibit a subsonic turbulent Mach number, implying that the gas distribution is largely smooth. These results are among the first empirical descriptions of the environments where the first few generations of stars may have formed in the universe
2. The Little Cub: Discovery of an Extremely Metal-poor Star-forming Galaxy in the Local Universe
Energy Technology Data Exchange (ETDEWEB)
Hsyu, Tiffany; Prochaska, J. Xavier; Bolte, Michael [Department of Astronomy and Astrophysics, University of California Santa Cruz, 1156 High Street, Santa Cruz, CA 95060 (United States); Cooke, Ryan J. [Centre for Extragalactic Astronomy, Department of Physics, Durham University, South Road, Durham DH1 3LE (United Kingdom)
2017-08-20
We report the discovery of the Little Cub, an extremely metal-poor star-forming galaxy in the local universe, found in the constellation Ursa Major (a.k.a. the Great Bear). We first identified the Little Cub as a candidate metal-poor galaxy based on its Sloan Digital Sky Survey photometric colors, combined with spectroscopy using the Kast spectrograph on the Shane 3 m telescope at Lick Observatory. In this Letter, we present high-quality spectroscopic data taken with the Low Resolution Imaging Spectrometer at Keck Observatory, which confirm the extremely metal-poor nature of this galaxy. Based on the weak [O iii] λ 4363 Å emission line, we estimate a direct oxygen abundance of 12 + log(O/H) = 7.13 ± 0.08, making the Little Cub one of the lowest-metallicity star-forming galaxies currently known in the local universe. The Little Cub appears to be a companion of the spiral galaxy NGC 3359 and shows evidence of gas stripping. We may therefore be witnessing the quenching of a near-pristine galaxy as it makes its first passage about a Milky Way–like galaxy.
3. The Little Cub: Discovery of an Extremely Metal-poor Star-forming Galaxy in the Local Universe
Science.gov (United States)
Hsyu, Tiffany; Cooke, Ryan J.; Prochaska, J. Xavier; Bolte, Michael
2017-08-01
We report the discovery of the Little Cub, an extremely metal-poor star-forming galaxy in the local universe, found in the constellation Ursa Major (a.k.a. the Great Bear). We first identified the Little Cub as a candidate metal-poor galaxy based on its Sloan Digital Sky Survey photometric colors, combined with spectroscopy using the Kast spectrograph on the Shane 3 m telescope at Lick Observatory. In this Letter, we present high-quality spectroscopic data taken with the Low Resolution Imaging Spectrometer at Keck Observatory, which confirm the extremely metal-poor nature of this galaxy. Based on the weak [O III] λ4363 Å emission line, we estimate a direct oxygen abundance of 12 + log(O/H) = 7.13 ± 0.08, making the Little Cub one of the lowest-metallicity star-forming galaxies currently known in the local universe. The Little Cub appears to be a companion of the spiral galaxy NGC 3359 and shows evidence of gas stripping. We may therefore be witnessing the quenching of a near-pristine galaxy as it makes its first passage about a Milky Way-like galaxy.
4. The Little Cub: Discovery of an Extremely Metal-poor Star-forming Galaxy in the Local Universe
International Nuclear Information System (INIS)
Hsyu, Tiffany; Prochaska, J. Xavier; Bolte, Michael; Cooke, Ryan J.
2017-01-01
We report the discovery of the Little Cub, an extremely metal-poor star-forming galaxy in the local universe, found in the constellation Ursa Major (a.k.a. the Great Bear). We first identified the Little Cub as a candidate metal-poor galaxy based on its Sloan Digital Sky Survey photometric colors, combined with spectroscopy using the Kast spectrograph on the Shane 3 m telescope at Lick Observatory. In this Letter, we present high-quality spectroscopic data taken with the Low Resolution Imaging Spectrometer at Keck Observatory, which confirm the extremely metal-poor nature of this galaxy. Based on the weak [O iii] λ 4363 Å emission line, we estimate a direct oxygen abundance of 12 + log(O/H) = 7.13 ± 0.08, making the Little Cub one of the lowest-metallicity star-forming galaxies currently known in the local universe. The Little Cub appears to be a companion of the spiral galaxy NGC 3359 and shows evidence of gas stripping. We may therefore be witnessing the quenching of a near-pristine galaxy as it makes its first passage about a Milky Way–like galaxy.
5. The intrinsic shape of bulges in the CALIFA survey
Science.gov (United States)
Costantin, L.; Méndez-Abreu, J.; Corsini, E. M.; Eliche-Moral, M. C.; Tapia, T.; Morelli, L.; Dalla Bontà, E.; Pizzella, A.
2018-02-01
Context. The intrinsic shape of galactic bulges in nearby galaxies provides crucial information to separate bulge types. Aims: We aim to derive accurate constraints to the intrinsic shape of bulges to provide new clues on their formation mechanisms and set new limitations for future simulations. Methods: We retrieved the intrinsic shape of a sample of CALIFA bulges using a statistical approach. Taking advantage of GalMer numerical simulations of binary mergers we estimated the reliability of the procedure. Analyzing the i-band mock images of resulting lenticular remnants, we studied the intrinsic shape of their bulges at different galaxy inclinations. Finally, we introduced a new (B/A, C/A) diagram to analyze possible correlations between the intrinsic shape and the properties of bulges. Results: We tested the method on simulated lenticular remnants, finding that for galaxies with inclinations of 25° ≤ θ ≤ 65° we can safely derive the intrinsic shape of their bulges. We found that our CALIFA bulges tend to be nearly oblate systems (66%), with a smaller fraction of prolate spheroids (19%), and triaxial ellipsoids (15%). The majority of triaxial bulges are in barred galaxies (75%). Moreover, we found that bulges with low Sérsic indices or in galaxies with low bulge-to-total luminosity ratios form a heterogeneous class of objects; additionally, bulges in late-type galaxies or in less massive galaxies have no preference for being oblate, prolate, or triaxial. On the contrary, bulges with high Sérsic index, in early-type galaxies, or in more massive galaxies are mostly oblate systems. Conclusions: We concluded that various evolutionary pathways may coexist in galaxies, with merging events and dissipative collapse being the main mechanisms driving the formation of the most massive oblate bulges and bar evolution reshaping the less massive triaxial bulges.
6. Globular clusters and galaxy halos
International Nuclear Information System (INIS)
Van Den Bergh, S.
1984-01-01
Using semipartial correlation coefficients and bootstrap techniques, a study is made of the important features of globular clusters with respect to the total number of galaxy clusters and dependence of specific galaxy cluster on parent galaxy type, cluster radii, luminosity functions and cluster ellipticity. It is shown that the ellipticity of LMC clusters correlates significantly with cluster luminosity functions, but not with cluster age. The cluter luminosity value above which globulars are noticeably flattened may differ by a factor of about 100 from galaxy to galaxy. Both in the Galaxy and in M31 globulars with small core radii have a Gaussian distribution over luminosity, whereas clusters with large core radii do not. In the cluster systems surrounding the Galaxy, M31 and NGC 5128 the mean radii of globular clusters was found to increase with the distance from the nucleus. Central galaxies in rich clusters have much higher values for specific globular cluster frequency than do other cluster ellipticals, suggesting that such central galaxies must already have been different from normal ellipticals at the time they were formed
7. SEARCHES FOR METAL-POOR STARS FROM THE HAMBURG/ESO SURVEY USING THE CH G BAND
Energy Technology Data Exchange (ETDEWEB)
Placco, Vinicius M.; Rossi, Silvia [Departamento de Astronomia-Instituto de Astronomia, Geofisica e Ciencias Atmosfericas, Universidade de Sao Paulo, Sao Paulo, SP 05508-090 (Brazil); Kennedy, Catherine R.; Beers, Timothy C.; Lee, Young Sun [Department of Physics and Astronomy and JINA (Joint Institute for Nuclear Astrophysics), Michigan State University, East Lansing, MI 48824 (United States); Christlieb, Norbert [Zentrum fuer Astronomie der Universitaet Heidelberg, Landessternwarte, Koenigstuhl 12, 69117 Heidelberg (Germany); Sivarani, Thirupathi [Indian Institute of Astrophysics, 2nd Block, Koramangala, Bangalore 560034 (India); Reimers, Dieter [Hamburger Sternwarte, Universitaet Hamburg, Gojenbergsweg 112, 21029 Hamburg (Germany); Wisotzki, Lutz, E-mail: [email protected] [Astrophysical Institute Potsdam, An der Sternwarte 16, 14482 Potsdam (Germany)
2011-12-15
We describe a new method to search for metal-poor candidates from the Hamburg/ESO objective-prism survey (HES) based on identifying stars with apparently strong CH G-band strengths for their colors. The hypothesis we exploit is that large overabundances of carbon are common among metal-poor stars, as has been found by numerous studies over the past two decades. The selection was made by considering two line indices in the 4300 A region, applied directly to the low-resolution prism spectra. This work also extends a previously published method by adding bright sources to the sample. The spectra of these stars suffer from saturation effects, compromising the index calculations and leading to an undersampling of the brighter candidates. A simple numerical procedure, based on available photometry, was developed to correct the line indices and overcome this limitation. Visual inspection and classification of the spectra from the HES plates yielded a list of 5288 new metal-poor (and by selection, carbon-rich) candidates, which are presently being used as targets for medium-resolution spectroscopic follow-up. Estimates of the stellar atmospheric parameters, as well as carbon abundances, are now available for 117 of the first candidates, based on follow-up medium-resolution spectra obtained with the SOAR 4.1 m and Gemini 8 m telescopes. We demonstrate that our new method improves the metal-poor star fractions found by our pilot study by up to a factor of three in the same magnitude range, as compared with our pilot study based on only one CH G-band index. Our selection scheme obtained roughly a 40% success rate for identification of stars with [Fe/H] <-1.0; the primary contaminant is late-type stars with near-solar abundances and, often, emission line cores that filled in the Ca II K line on the prism spectrum. Because the selection is based on carbon, we greatly increase the numbers of known carbon-enhanced metal-poor stars from the HES with intermediate metallicities -2
8. Effect of massive disks on bulge isophotes
International Nuclear Information System (INIS)
Monet, D.G.; Richstone, D.O.; Schechter, P.L.
1981-01-01
Massive disks produce flattened equipotentials. Unless the stars in a galaxy bulge are preferentially hotter in the z direction than in the plane, the isophotes will be at least as flat as the equipotentials. The comparison of two galaxy models having flat rotation curves with the available surface photometry for five external galaxies does not restrict the mass fraction which might reside in the disk. However, star counts in our own Galaxy indicate that unless the disk terminates close to the solar circle, no more than half the mass within that circle lies in the disk. The remaining half must lie either in the bulge or, more probably, in a third dark, round, dynamically distinct component
9. Rotation of the bulge components of barred galaxies
International Nuclear Information System (INIS)
Kormendy, J.
1982-01-01
Stellar rotation and velocity-dispersion measurements are presented for the bulge components of the SBO galaxies NGC 1023, 2859, 2950, 4340, 4371, and 7743. The kinematics of nine SB bulges with data available are compared with bulges of unbarred galaxies studied by Kormendy and Illingworth. All of the SB bulges are found to rotate at least as rapidly as oblate-spheroid dynamical models which are flattened by rotation. This result confirms the conclusion of Kormendy and Illingworth that bulges rotate very rapidly. Six SB bulges found by Kormendy and Koo to be triaxial rotate even more rapidly than the oblate models. In this respect, they resemble published n-body models of bars. That is, triaxial bulges are dynamically like bars and unlike elliptical galaxies, which are also believed to be triaxial, but which rotate slowly. Measured velocity anisotropies are found to be consistent with these conclusions. Two ordinary bulges whose rotation is well described by isotropic modes have a ratio of radial to azimuthal velocity dispersion of sigma/sub r//sigma/sub theta/ = 0.96 +- 0.03. In contrast, the triaxial bulge of NGC 3945, which rotates much faster than the isotropic models, has sigma/sub r//sigma/sub theta/ approx.1.31 +- 0.06. This is similar to the degree of anisotropy, sigma/sub r//sigma/sub theta/approx.1.21 +- 0.03, found in a recent n-body bar model by Hohl and Zang. Altogether the kinematic observations imply the triaxial bulges are more disklike than SA bulges. They appear to have been formed with more dissipation than ordinary bulges. These results are consistent with the hypothesis that part of the bulge in many SB galaxies consists of disk material (i.e., gas) which has been transported to the center by the bar. The resulting star formation may produce a very centrally concentrated light distribution which resembles a bulge but which has dislike dynamics
10. The Lithium-, r- and s-Enhanced Metal-Poor Giant HK-II 17435-00532
International Nuclear Information System (INIS)
Roederer, Ian U.; Prieto, Carlos Allende; Sneden, Christopher; Frebel, Anna; Shetrone, Matthew; Rhee, Jaehyon; Gallino, Roberto; Bisterzo, Sara; Beers, Timothy C.; Cowan, John J.
2008-01-01
We present the first detailed abundance analysis of the metal-poor giant HK-II 17435-00532. This star was observed as part of the University of Texas Long-Term Chemical Abundances of Stars in the Halo (CASH) Project. A spectrum was obtained with the High Resolution Spectrograph (HRS) on the Hobby-Eberly Telescope with a resolving power of R∼15000. Our analysis reveals that this star may be located on the red giant branch, red horizontal branch, or early asymptotic giant branch. We find that this metal-poor ([Fe/H] = -2.2) star has an unusually high lithium abundance (logε(Li) = +2.1), mild carbon ([C/Fe] = +0.7) and sodium ([Na/Fe] = +0.6) enhancement, as well as enhancement of both s-process ([Ba/Fe] = +0.8) and r-process ([Eu/Fe] = +0.5) material. The high Li abundance can be explained by self-enrichment through extra mixing mechanisms that connect the convective envelope with the outer regions of the H-burning shell. If so, HK-II 17435-00532 is the most metal-poor starin which this short-lived phase of Li enrichment has been observed. The r- and s-process material was not produced in this star but was either present in the gas from which HK-II 17435-00532 formed or was transferred to it from a more massive binary companion. Despite the current non-detection of radial velocity variations (over a time span of ∼180 days), it is possible that HK-II 17435-00532 is in a long-period binary system, similar to other stars with both r and s enrichment
11. Observational Constraints on First-Star Nucleosynthesis. II. Spectroscopy of an Ultra metal-poor CEMP-no Star
Science.gov (United States)
Placco, Vinicius M.; Frebel, Anna; Beers, Timothy C.; Yoon, Jinmi; Chiti, Anirudh; Heger, Alexander; Chan, Conrad; Casey, Andrew R.; Christlieb, Norbert
2016-12-01
We report on the first high-resolution spectroscopic analysis of HE 0020-1741, a bright (V = 12.9), ultra metal-poor ([{Fe}/{{H}}] = -4.1), carbon-enhanced ([{{C}}/{Fe}] = +1.7) star selected from the Hamburg/ESO Survey. This star exhibits low abundances of neutron-capture elements ([{Ba}/{Fe}] = -1.1) and an absolute carbon abundance A(C) = 6.1 based on either criterion, HE 0020-1741 is subclassified as a carbon-enhanced metal-poor star without enhancements in neutron-capture elements (CEMP-no). We show that the light-element abundance pattern of HE 0020-1741 is consistent with predicted yields from a massive (M = 21.5 {M}⊙ ), primordial-composition, supernova (SN) progenitor. We also compare the abundance patterns of other ultra metal-poor stars from the literature with available measures of C, N, Na, Mg, and Fe abundances with an extensive grid of SN models (covering the mass range 10{--}100 {M}⊙ ), in order to probe the nature of their likely stellar progenitors. Our results suggest that at least two classes of progenitors are required at [{Fe}/{{H}}] \\lt -4.0, as the abundance patterns for more than half of the sample studied in this work (7 out of 12 stars) cannot be easily reproduced by the predicted yields. Based on observations gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile, and the New Technology Telescope (NTT) of the European Southern Observatory (088.D-0344A), La Silla, Chile.
12. NEW RARE EARTH ELEMENT ABUNDANCE DISTRIBUTIONS FOR THE SUN AND FIVE r-PROCESS-RICH VERY METAL-POOR STARS
International Nuclear Information System (INIS)
Sneden, Christopher; Lawler, James E.; Den Hartog, Elizabeth A.; Cowan, John J.; Ivans, Inese I.
2009-01-01
We have derived new abundances of the rare earth elements Pr, Dy, Tm, Yb, and Lu for the solar photosphere and for five very metal-poor, neutron-capture r-process-rich giant stars. The photospheric values for all five elements are in good agreement with meteoritic abundances. For the low-metallicity sample, these abundances have been combined with new Ce abundances from a companion paper, and reconsideration of a few other elements in individual stars, to produce internally consistent Ba, rare earth, and Hf (56 ≤ Z ≤ 72) element distributions. These have been used in a critical comparison between stellar and solar r-process abundance mixes.
13. Abundance patterns of the light neutron-capture elements in very and extremely metal-poor stars
Science.gov (United States)
Spite, F.; Spite, M.; Barbuy, B.; Bonifacio, P.; Caffau, E.; François, P.
2018-03-01
Aims: The abundance patterns of the neutron-capture elements in metal-poor stars provide a unique record of the nucleosynthesis products of the earlier massive primitive objects. Methods: We measured new abundances of so-called light neutron-capture of first peak elements using local thermodynamic equilibrium (LTE) 1D analysis; this analysis resulted in a sample of 11 very metal-poor stars, from [Fe/H] = -2.5 to [Fe/H] = -3.4, and one carbon-rich star, CS 22949-037 with [Fe/H] = -4.0. The abundances were compared to those observed in two classical metal-poor stars: the typical r-rich star CS 31082-001 ([Eu/Fe] > +1.0) and the r-poor star HD 122563 ([Eu/Fe] < 0.0), which are known to present a strong enrichment of the first peak neutron-capture elements relative to the second peak. Results: Within the first peak, the abundances are well correlated in analogy to the well-known correlation inside the abundances of the second-peak elements. In contrast, there is no correlation between any first peak element with any second peak element. We show that the scatter of the ratio of the first peak abundance over second peak abundance increases when the mean abundance of the second peak elements decreases from r-rich to r-poor stars. We found two new r-poor stars that are very similar to HD 122563. A third r-poor star, CS 22897-008, is even more extreme; this star shows the most extreme example of first peak elements enrichment to date. On the contrary, another r-poor star (BD-18 5550) has a pattern of first peak elements that is similar to the typical r-rich stars CS 31082-001, however this star has some Mo enrichment. Conclusions: The distribution of the neutron-capture elements in our very metal-poor stars can be understood as the combination of at least two mechanisms: one that enriches the forming stars cloud homogeneously through the main r-process and leads to an element pattern similar to the r-rich stars, such as CS 31082-001; and another that forms mainly lighter
14. Cold gelation of globular proteins
NARCIS (Netherlands)
Alting, A.C.
2003-01-01
Keywords : globular proteins, whey protein, ovalbumin, cold gelation, disulfide bonds, texture, gel hardnessProtein gelation in food products is important to obtain desirable sensory and textural properties. Cold gelation is a novel method to produce protein-based gels. It is a two step process in
15. Nuclear starburst activity induced by elongated bulges in spiral galaxies
Science.gov (United States)
Kim, Eunbin; Kim, Sungsoo S.; Choi, Yun-Young; Lee, Gwang-Ho; de Grijs, Richard; Lee, Myung Gyoon; Hwang, Ho Seong
2018-06-01
We study the effects of bulge elongation on the star formation activity in the centres of spiral galaxies using the data from the Sloan Digital Sky Survey Data Release 7. We construct a volume-limited sample of face-on spiral galaxies with Mr nuclear starbursts using the fibre specific star formation rates derived from the SDSS spectra. We find a statistically significant correlation between bulge elongation and nuclear starbursts in the sense that the fraction of nuclear starbursts increases with bulge elongation. This correlation is more prominent for fainter and redder galaxies, which exhibit higher ratios of elongated bulges. We find no significant environmental dependence of the correlation between bulge elongation and nuclear starbursts. These results suggest that non-axisymmetric bulges can efficiently feed the gas into the centre of galaxies to trigger nuclear starburst activity.
16. FORMATION OF CARBON-ENHANCED METAL-POOR STARS IN THE PRESENCE OF FAR-ULTRAVIOLET RADIATION
Energy Technology Data Exchange (ETDEWEB)
Bovino, S.; Schleicher, D. R. G.; Latif, M. A. [Institut für Astrophysik Georg-August-Universität, Friedrich-Hund Platz 1, 37077 Göttingen (Germany); Grassi, T., E-mail: [email protected] [Centre for Star and Planet Formation, Natural History Museum of Denmark, Øster Voldgade 5-7, 1350 Copenhagen (Denmark)
2014-08-01
Recent discoveries of carbon-enhanced metal-poor stars like SMSS J031300.36–670839.3 provide increasing observational insights into the formation conditions of the first second-generation stars in the universe, reflecting the chemical conditions after the first supernova explosion. Here, we present the first cosmological simulations with a detailed chemical network including primordial species as well as C, C{sup +}, O, O{sup +}, Si, Si{sup +}, and Si{sup 2+} following the formation of carbon-enhanced metal-poor stars. The presence of background UV flux delays the collapse from z = 21 to z = 15 and cool the gas down to the cosmic microwave background temperature for a metallicity of Z/Z {sub ☉} = 10{sup –3}. This can potentially lead to the formation of lower-mass stars. Overall, we find that the metals have a stronger effect on the collapse than the radiation, yielding a comparable thermal structure for large variations in the radiative background. We further find that radiative backgrounds are not able to delay the collapse for Z/Z {sub ☉} = 10{sup –2} or a carbon abundance as in SMSS J031300.36–670839.3.
17. Accurate effective temperatures of the metal-poor benchmark stars HD 140283, HD 122563, and HD 103095 from CHARA interferometry
Science.gov (United States)
Karovicova, I.; White, T. R.; Nordlander, T.; Lind, K.; Casagrande, L.; Ireland, M. J.; Huber, D.; Creevey, O.; Mourard, D.; Schaefer, G. H.; Gilmore, G.; Chiavassa, A.; Wittkowski, M.; Jofré, P.; Heiter, U.; Thévenin, F.; Asplund, M.
2018-03-01
Large stellar surveys of the Milky Way require validation with reference to a set of benchmark' stars whose fundamental properties are well determined. For metal-poor benchmark stars, disagreement between spectroscopic and interferometric effective temperatures has called the reliability of the temperature scale into question. We present new interferometric measurements of three metal-poor benchmark stars, HD 140283, HD 122563, and HD 103095, from which we determine their effective temperatures. The angular sizes of all the stars were determined from observations with the PAVO beam combiner at visible wavelengths at the CHARA array, with additional observations of HD 103095 made with the VEGA instrument, also at the CHARA array. Together with photometrically derived bolometric fluxes, the angular diameters give a direct measurement of the effective temperature. For HD 140283, we find θLD = 0.324 ± 0.005 mas, Teff = 5787 ± 48 K; for HD 122563, θLD = 0.926 ± 0.011 mas, Teff = 4636 ± 37 K; and for HD 103095, θLD = 0.595 ± 0.007 mas, Teff = 5140 ± 49 K. Our temperatures for HD 140283 and HD 103095 are hotter than the previous interferometric measurements by 253 and 322 K, respectively. We find good agreement between our temperatures and recent spectroscopic and photometric estimates. We conclude some previous interferometric measurements have been affected by systematic uncertainties larger than their quoted errors.
18. THE CHEMICAL ABUNDANCES OF STARS IN THE HALO (CASH) PROJECT. II. A SAMPLE OF 14 EXTREMELY METAL-POOR STARS ,
International Nuclear Information System (INIS)
Hollek, Julie K.; Sneden, Christopher; Shetrone, Matthew; Frebel, Anna; Roederer, Ian U.; Beers, Timothy C.; Kang, Sung-ju; Thom, Christopher
2011-01-01
We present a comprehensive abundance analysis of 20 elements for 16 new low-metallicity stars from the Chemical Abundances of Stars in the Halo (CASH) project. The abundances have been derived from both Hobby-Eberly Telescope High Resolution Spectrograph snapshot spectra (R ∼15, 000) and corresponding high-resolution (R ∼35, 000) Magellan Inamori Kyocera Echelle spectra. The stars span a metallicity range from [Fe/H] from –2.9 to –3.9, including four new stars with [Fe/H] < –3.7. We find four stars to be carbon-enhanced metal-poor (CEMP) stars, confirming the trend of increasing [C/Fe] abundance ratios with decreasing metallicity. Two of these objects can be classified as CEMP-no stars, adding to the growing number of these objects at [Fe/H]< – 3. We also find four neutron-capture-enhanced stars in the sample, one of which has [Eu/Fe] of 0.8 with clear r-process signatures. These pilot sample stars are the most metal-poor ([Fe/H] ∼< –3.0) of the brightest stars included in CASH and are used to calibrate a newly developed, automated stellar parameter and abundance determination pipeline. This code will be used for the entire ∼500 star CASH snapshot sample. We find that the pipeline results are statistically identical for snapshot spectra when compared to a traditional, manual analysis from a high-resolution spectrum.
19. Chemical Compositions of Stars in the Globular Cluster NGC 3201: Tracers of Multi-Epoch Star Formation
Science.gov (United States)
Simmerer, Jennifer A.; Ivans, I. I.; Filler, D.
2012-01-01
The retrograde halo globular cluster NGC 3201 contains stars of substantially different iron abundance ([Fe/H]), a property that puts it at odds with the vast majority of the Galactic cluster system. Though its unusual orbit prompted speculation that NGC 3201 was the remnant of a captured object, much like the multi-metallicity globular cluster Omega Centauri, NGC 3201 is much less massive than Omega Centauri and all of the other halo globular clusters that have internal metallicity variations. We present the abundances of 21 elements in 24 red giant branch stars in NGC 3201 based on high-resolution (R 40,000), high signal-to-noise (S/N 70) spectra. We find that the detailed abundance pattern of NGC 3201 is unique amongst multi-metallicity halo clusters. Unlike M22, Omega Centauri, and NGC 1851, neither metal-poor nor metal-rich stars show any evidence of s-process enrichment (a product of the advanced evolution of low- and intermediate-mass stars). We find that while Na, O, and Al vary from star to star as is typical in globular clusters, there is no systematic difference between the abundance pattern in the metal-poor cluster stars and that of the metal-rich cluster stars. Furthermore, we find that the metallicity variations in NGC 3201 are independent of the well-known Na-O anticorrelation, which separates it from every other multi-metallicity cluster. In the context of a multi-episode star formation model, this implies that NGC 3201 began life with the [Fe/H] variations we measure now.
20. Deep CCD photometry in globular clusters. VII. M30
International Nuclear Information System (INIS)
Richer, H.B.; Fahlman, G.G.; Vandenberg, D.A.
1988-01-01
New UBV CCD photometry in a single field of the globular cluster M30 was obtained, and the data were used to obtain the color magnitude diagram (CMD) of the cluster, its luminosity function, and to derive fundamental cluster parameters. No blue stragglers were found, nor any evidence of a binary sequence in the data even though the field under study is only 21 core radii from the cluster center. The cluster reddening is observed to be 0.068 + or - 0.035, significantly higher than that adopted in most current papers on M30. An intercomparison of the CMDs of three very metal-poor clusters clearly shows that there is no evidence for any age difference between them. The age of M30 itself is found to be about 14 Gyr. The luminosity function of M30 is determined to be M(V) = 8. Comparison of this function with one found by Bolte (1987) at 65 core radii shows clear evidence of mass segregation in the low-mass stars. 44 references
1. HUBBLE SPACE TELESCOPE PHOTOMETRY OF GLOBULAR CLUSTERS IN M81
International Nuclear Information System (INIS)
Nantais, Julie B.; Huchra, John P.; Zezas, Andreas; Gazeas, Kosmas; Strader, Jay
2011-01-01
We perform aperture photometry and profile fitting on 419 globular cluster (GC) candidates with m V ≤ 23 mag identified in Hubble Space Telescope/Advanced Camera for Surveys BVI imaging, and estimate the effective radii of the clusters. We identify 85 previously known spectroscopically confirmed clusters, and newly identify 136 objects as good cluster candidates within the 3σ color and size ranges defined by the spectroscopically confirmed clusters, yielding a total of 221 probable GCs. The luminosity function peak for the 221 probable GCs with estimated total dereddening applied is V ∼ (20.26 ± 0.13) mag, corresponding to a distance of ∼3.7 ± 0.3 Mpc. The blue and red GC candidates, and the metal-rich and metal-poor spectroscopically confirmed clusters, respectively, are similar in half-light radius. Red confirmed clusters are about 6% larger in median half-light radius than blue confirmed clusters, and red and blue good GC candidates are nearly identical in half-light radius. The total population of confirmed and 'good' candidates shows an increase in half-light radius as a function of galactocentric distance.
2. Nanofibers made of globular proteins.
Science.gov (United States)
Dror, Yael; Ziv, Tamar; Makarov, Vadim; Wolf, Hila; Admon, Arie; Zussman, Eyal
2008-10-01
Strong nanofibers composed entirely of a model globular protein, namely, bovine serum albumin (BSA), were produced by electrospinning directly from a BSA solution without the use of chemical cross-linkers. Control of the spinnability and the mechanical properties of the produced nanofibers was achieved by manipulating the protein conformation, protein aggregation, and intra/intermolecular disulfide bonds exchange. In this manner, a low-viscosity globular protein solution could be modified into a polymer-like spinnable solution and easily spun into fibers whose mechanical properties were as good as those of natural fibers made of fibrous protein. We demonstrate here that newly formed disulfide bonds (intra/intermolecular) have a dominant role in both the formation of the nanofibers and in providing them with superior mechanical properties. Our approach to engineer proteins into biocompatible fibrous structures may be used in a wide range of biomedical applications such as suturing, wound dressing, and wound closure.
3. BOO-1137-AN EXTREMELY METAL-POOR STAR IN THE ULTRA-FAINT DWARF SPHEROIDAL GALAXY BOOeTES I
International Nuclear Information System (INIS)
Norris, John E.; Yong, David; Gilmore, Gerard; Wyse, Rosemary F. G.
2010-01-01
We present high-resolution (R ∼ 40,000), high-signal-to-noise ratio (20-90) spectra of an extremely metal-poor giant star Boo-1137 in the 'ultra-faint' dwarf spheroidal galaxy (dSph) Booetes I, absolute magnitude M V ∼ -6.3. We derive an iron abundance of [Fe/H] = -3.7, making this the most metal-poor star as yet identified in an ultra-faint dSph. Our derived effective temperature and gravity are consistent with its identification as a red giant in Booetes I. Abundances for a further 15 elements have also been determined. Comparison of the relative abundances, [X/Fe], with those of the extremely metal-poor red giants of the Galactic halo shows that Boo-1137 is 'normal' with respect to C and N, the odd-Z elements Na and Al, the iron-peak elements, and the neutron-capture elements Sr and Ba, in comparison with the bulk of the Milky Way halo population having [Fe/H] ∼<-3.0. The α-elements Mg, Si, Ca, and Ti are all higher by Δ[X/Fe] ∼ 0.2 than the average halo values. Monte Carlo analysis indicates that Δ[α/Fe] values this large are expected with a probability ∼0.02. The elemental abundance pattern in Boo-1137 suggests inhomogeneous chemical evolution, consistent with the wide internal spread in iron abundances we previously reported. The similarity of most of the Boo-1137 relative abundances with respect to halo values, and the fact that the α-elements are all offset by a similar small amount from the halo averages, points to the same underlying galaxy-scale stellar initial mass function, but that Boo-1137 likely originated in a star-forming region where the abundances reflect either poor mixing of supernova (SN) ejecta, or poor sampling of the SN progenitor mass range, or both.
4. Atmospheric parameters and magnesium and calcium NLTE abundances for a sample of 16 ultra metal-poor stars
Science.gov (United States)
Sitnova, Tatyana; Mashonkina, Lyudmila; Ezzeddine, Rana; Frebel, Anna
2018-06-01
The most metal-poor stars provide important observational clues to the astrophysical objects that enriched the primordial gas with heavy elements. Accurate atmospheric parameters is a prerequisite of determination of accurate abundances. We present atmospheric parameters and abundances of calcium and magnesium for a sample of 16 ultra-metal poor (UMP) stars. In spectra of UMP stars, iron is represented only by lines of Fe I, while calcium is represented with lines of Ca I and Ca II, which can be used for determination/checking of effective temperature and surface gravity. Accurate calculations of synthetic spectra of UMP stars require non-local thermodynamic equilibrium (NLTE) treatment of line formation, since deviations from LTE grow with metallicity decreasing. The method of atmospheric parameter determination is based on NLTE analysis of lines of Ca I and Ca II, multi-band photometry, and isochrones. The method was tested in advance with the ultra metal-poor giant CD-38 245, where, in addition, trigonometric parallax measurements from Gaia DR1 and lines of Fe I and Fe II are available. Using photometric Teff = 4900 K and distance based log g = 2.0 for CD-38 245, we derived consistent within error bars NLTE abundances from Fe I and Fe II and Ca I and Ca II, while LTE leads to a discrepancy of 0.6 dex between Ca I and Ca II. We determined NLTE and LTE abundances of magnesium and calcium in 16 stars of the sample. For the majority of stars, as expected, [Ca/Mg] NLTE abundance ratios are close to 0, while LTE leads to systematically higher [Ca/Mg], by up to 0.3 dex, and larger spread of [Ca/Mg] for different stars. Three stars of our sample are strongly enhanced in magnesium, with [Mg/Ca] of 1.3 dex. It is worth noting that, for these three stars, we got very similar [Mg/Ca] of 1.30, 1.45, and 1.29, in contrast to the data from the literature, where, for the same stars, [Mg/Ca] vary from 0.7 to 1.4. Very similar [Mg/Ca] abundance ratios of these stars argue that
5. Speckle imaging of globular clusters
International Nuclear Information System (INIS)
Sams, B.J. III
1990-01-01
Speckle imaging is a powerful tool for high resolution astronomy. Its application to the core regions of globular clusters produces high resolution stellar maps of the bright stars, but is unable to image the faint stars which are most reliable dynamical indicators. The limits on resolving these faint, extended objects are physical, not algorithmic, and cannot be overcome using speckle. High resolution maps may be useful for resolving multicomponent stellar systems in the cluster centers. 30 refs
6. Investigation of a sample of carbon-enhanced metal-poor stars observed with FORS and GMOS
Science.gov (United States)
Caffau, E.; Gallagher, A. J.; Bonifacio, P.; Spite, M.; Duffau, S.; Spite, F.; Monaco, L.; Sbordone, L.
2018-06-01
Aims: Carbon-enhanced metal-poor (CEMP) stars represent a sizeable fraction of all known metal-poor stars in the Galaxy. Their formation and composition remains a significant topic of investigation within the stellar astrophysics community. Methods: We analysed a sample of low-resolution spectra of 30 dwarf stars, obtained using the visual and near UV FOcal Reducer and low dispersion Spectrograph for the Very Large Telescope (FORS/VLT) of the European Southern Observatory (ESO) and the Gemini Multi-Object Spectrographs (GMOS) at the GEMINI telescope, to derive their metallicity and carbon abundance. Results: We derived C and Ca from all spectra, and Fe and Ba from the majority of the stars. Conclusions: We have extended the population statistics of CEMP stars and have confirmed that in general, stars with a high C abundance belonging to the high C band show a high Ba-content (CEMP-s or -r/s), while stars with a normal C abundance or that are C-rich, but belong to the low C band, are normal in Ba (CEMP-no). Based on observations made with ESO Telescopes at the La Silla Paranal Observatory under programme ID 099.D-0791.Based on observations obtained at the Gemini Observatory (processed using the Gemini IRAF package), which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), the National Research Council (Canada), CONICYT (Chile), Ministerio de Ciencia, Tecnología e Innovación Productiva (Argentina), and Ministério da Ciência, Tecnologia e Inovação (Brazil).Tables 1 and 2 are also available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/614/A68
7. Spectroscopic Comparison of Metal-rich RRab Stars of the Galactic Field with their Metal-poor Counterparts
Energy Technology Data Exchange (ETDEWEB)
Chadid, Merieme [Université Nice Sophia–Antipolis, Observatoire de la Côte dAzur, UMR 7293, Parc Valrose, F-06108, Nice Cedex 02 (France); Sneden, Christopher [Department of Astronomy and McDonald Observatory, The University of Texas, Austin, TX 78712 (United States); Preston, George W., E-mail: [email protected], E-mail: [email protected], E-mail: [email protected] [Carnegie Observatories, 813 Santa Barbara Street, Pasadena, CA 91101 (United States)
2017-02-01
We investigate atmospheric properties of 35 stable RRab stars that possess the full ranges of period, light amplitude, and metal abundance found in Galactic RR Lyrae stars. Our results are derived from several thousand echelle spectra obtained over several years with the du Pont telescope of Las Campanas Observatory. Radial velocities of metal lines and the H α line were used to construct curves of radial velocity versus pulsation phase. From these we estimated radial velocity amplitudes for metal lines (formed near the photosphere) and H α Doppler cores (formed at small optical depths). We also measured H α emission fluxes when they appear during primary light rises. Spectra shifted to rest wavelengths, binned into small phase intervals, and co-added were used to perform model atmospheric and abundance analyses. The derived metallicities and those of some previous spectroscopic surveys were combined to produce a new calibration of the Layden abundance scale. We then divided our RRab sample into metal-rich (disk) and metal-poor (halo) groups at [Fe/H] = −1.0; the atmospheres of RRab families, so defined, differ with respect to (a) peak strength of H α emission flux, (b) H α radial velocity amplitude, (c) dynamical gravity, (d) stellar radius variation, (e) secondary acceleration during the photometric bump that precedes minimum light, and (f) duration of H α line-doubling. We also detected H α line-doubling during the “bump” in the metal-poor family, but not in the metal-rich one. Although all RRab probably are core helium-burning horizontal branch stars, the metal-rich group appears to be a species sui generis.
8. A search for stars of very low metal abundance. VI. Detailed abundances of 313 metal-poor stars
International Nuclear Information System (INIS)
Roederer, Ian U.; Preston, George W.; Thompson, Ian B.; Shectman, Stephen A.; Burley, Gregory S.; Kelson, Daniel D.; Sneden, Christopher
2014-01-01
We present radial velocities, equivalent widths, model atmosphere parameters, and abundances or upper limits for 53 species of 48 elements derived from high resolution optical spectroscopy of 313 metal-poor stars. A majority of these stars were selected from the metal-poor candidates of the HK Survey of Beers, Preston, and Shectman. We derive detailed abundances for 61% of these stars for the first time. Spectra were obtained during a 10 yr observing campaign using the Magellan Inamori Kyocera Echelle spectrograph on the Magellan Telescopes at Las Campanas Observatory, the Robert G. Tull Coudé Spectrograph on the Harlan J. Smith Telescope at McDonald Observatory, and the High Resolution Spectrograph on the Hobby-Eberly Telescope at McDonald Observatory. We perform a standard LTE abundance analysis using MARCS model atmospheres, and we apply line-by-line statistical corrections to minimize systematic abundance differences arising when different sets of lines are available for analysis. We identify several abundance correlations with effective temperature. A comparison with previous abundance analyses reveals significant differences in stellar parameters, which we investigate in detail. Our metallicities are, on average, lower by ≈0.25 dex for red giants and ≈0.04 dex for subgiants. Our sample contains 19 stars with [Fe/H] ≤–3.5, 84 stars with [Fe/H] ≤–3.0, and 210 stars with [Fe/H] ≤–2.5. Detailed abundances are presented here or elsewhere for 91% of the 209 stars with [Fe/H] ≤–2.5 as estimated from medium resolution spectroscopy by Beers, Preston, and Shectman. We will discuss the interpretation of these abundances in subsequent papers.
9. REVIEW: Optics of globular photonic crystals
Science.gov (United States)
Gorelik, V. S.
2007-05-01
The results of experimental and theoretical studies of the optical properties of globular photonic crystals - new physical objects having a crystal structure with the lattice period exceeding considerably the atomic size, are presented. As globular photonic crystals, artificial opal matrices consisting of close-packed silica globules of diameter ~200 nm were used. The reflection spectra of these objects characterising the parameters of photonic bands existing in these crystals in the visible spectral region are presented. The idealised models of the energy band structure of photonic crystals investigated in the review give analytic dispersion dependences for the group velocity and the effective photon mass in a globular photonic crystal. The characteristics of secondary emission excited in globular photonic crystals by monochromatic and broadband radiation are presented. The results of investigations of single-photon-excited delayed scattering of light observed in globular photonic crystals exposed to cw UV radiation and radiation from a repetitively pulsed copper vapour laser are presented. The possibilities of using globular photonic crystals as active media for lasing in different spectral regions are considered. It is proposed to use globular photonic crystals as sensitive sensors in optoelectronic devices for molecular analysis of organic and inorganic materials by the modern methods of laser spectroscopy. The results of experimental studies of spontaneous and stimulated globular scattering of light are discussed. The conditions for observing resonance and two-photon-excited delayed scattering of light are found. The possibility of accumulation and localisation of the laser radiation energy inside a globular photonic crystal is reported.
10. Monitoring and Mapping the Galactic Bulge
Science.gov (United States)
Markwardt, Craig
Both neutron star and black hole binary transients are providing some of the most exciting RXTE science, and fortunately many are concentrated in the galactic bulge region. We propose to continue our twice weekly PCA scans of the region, which cover about 500 sq deg. The observations will be sensitive to new sources at the ~1 mCrab level (a factor of 10-60 more sensitive than the ASM in the region). We have had success finding new sources and new types of variability, including three millisecond pulsars, and new increased solid angle will improve the chances of finding more in the final RXTE years. We will continue efforts to search for variability in new and known sources. Companion follow-up proposals would be triggered by the results.
11. WHERE ARE MOST OF THE GLOBULAR CLUSTERS IN TODAY’S UNIVERSE?
Energy Technology Data Exchange (ETDEWEB)
Harris, William E., E-mail: [email protected] [Department of Physics and Astronomy, McMaster University, Hamilton, ON (Canada)
2016-04-15
The total number of globular clusters (GCs) in a galaxy rises continuously with the galaxy luminosity L, while the relative number of galaxies decreases with L following the Schechter function. The product of these two very nonlinear functions gives the relative number of GCs contained by all galaxies at a given L. It is shown that GCs, in this universal sense, are most commonly found in galaxies within a narrow range around L{sub ⋆}. In addition, blue (metal-poor) GCs outnumber the red (metal-richer) ones globally by 4 to 1 when all galaxies are added, pointing to the conclusion that the earliest stages of galaxy formation were especially favorable to forming massive, dense star clusters.
12. BULGE n AND B/T IN HIGH-MASS GALAXIES: CONSTRAINTS ON THE ORIGIN OF BULGES IN HIERARCHICAL MODELS
International Nuclear Information System (INIS)
Weinzirl, Tim; Jogee, Shardha; Kormendy, John; Khochfar, Sadegh; Burkert, Andreas
2009-01-01
We use the bulge Sersic index n and bulge-to-total mass ratio (B/T) to explore the fundamental question of how bulges form. We perform two-dimensional bulge-disk-bar decomposition on H-band images of 143 bright, high-mass (M * ≥ 1.0 x 10 10 M sun ) low-to-moderately inclined (i 0 ) spirals. Our results are as follows. (1) Our H-band bar fraction (∼58%) is consistent with that from ellipse fits. (2) 70% of the stellar mass is in disks, 10% in bars, and 20% in bulges. (3) A large fraction (∼69%) of bright spirals have B/T≤ 0.2, and ∼76% have low n ≤ 2 bulges. These bulges exist in barred and unbarred galaxies across a wide range of Hubble types. (4) About 65% (68%) of bright spirals with n ≤ 2 (B/T ≤ 0.2) bulges host bars, suggesting a possible link between bars and bulges. (5) We compare the results with predictions from a set of ΛCDM models. In the models, a high-mass spiral can have a bulge with a present-day low B/T≤ 0.2 only if it did not undergo a major merger since z ≤ 2. The predicted fraction (∼ 1.6%) of high-mass spirals, which have undergone a major merger since z ≤ 4 and host a bulge with a present-day low B/T ≤ 0.2, is a factor of over 30 smaller than the observed fraction (∼66%) of high-mass spirals with B/T ≤ 0.2. Thus, contrary to common perception, bulges built via major mergers since z ≤ 4 seriously fail to account for the bulges present in ∼66% of high mass spirals. Most of these present-day low B/T ≤ 0.2 bulges are likely to have been built by a combination of minor mergers and/or secular processes since z ≤ 4.
13. Gravitational microlensing by low-mass objects in the globular cluster M22.
Science.gov (United States)
Sahu, K C; Casertano, S; Livio, M; Gilliland, R L; Panagia, N; Albrow, M D; Potter, M
2001-06-28
Gravitational microlensing offers a means of determining directly the masses of objects ranging from planets to stars, provided that the distances and motions of the lenses and sources can be determined. A globular cluster observed against the dense stellar field of the Galactic bulge presents ideal conditions for such observations because the probability of lensing is high and the distances and kinematics of the lenses and sources are well constrained. The abundance of low-mass objects in a globular cluster is of particular interest, because it may be representative of the very early stages of star formation in the Universe, and therefore indicative of the amount of dark baryonic matter in such clusters. Here we report a microlensing event associated with the globular cluster M22. We determine the mass of the lens to be 0.13(+0.03)(-0.02) solar masses. We have also detected six events that are unresolved in time. If these are also microlensing events, they imply that a non-negligible fraction of the cluster mass resides in the form of free-floating planetary-mass objects.
14. Chemical Abundances of Red Giant Stars in the Globular Cluster M107 (NGC 6171)
Science.gov (United States)
O'Connell, Julia E.; Johnson, Christian I.; Pilachowski, Catherine A.; Burks, Geoffrey
2011-10-01
We present chemical abundances of Al and several Fe-Peak and neutron-capture elements for 13 red giant branch stars in the Galactic globular cluster NGC 6171 (M107). The abundances were determined using equivalent width and spectrum synthesis analyses of moderate-resolution ( R ˜ 15,000), moderate signal-to-noise ratio ( ˜ 80) spectra obtained with the WIYN telescope and Hydra multifiber spectrograph. A comparison between photometric and spectroscopic effective temperature estimates seems to indicate that a reddening value of E(B - V) = 0.46 may be more appropriate for this cluster than the more commonly used value of E(B - V) = 0.33. Similarly, we found that a distance modulus of (m - M)V ≈ 13.7 provided reasonable surface gravity estimates for the stars in our sample. Our spectroscopic analysis finds M107 to be moderately metal-poor with = -0.93 and also exhibits a small star-to-star metallicity dispersion (σ = 0.04). These results are consistent with previous photometric and spectroscopic studies. Aluminum appears to be moderately enhanced in all program stars ( = +0.39, σ = 0.11). The relatively small star-to-star scatter in [Al/Fe] differs from the trend found in more metal-poor globular clusters, and is more similar to what is found in clusters with [Fe/H] ≳ -1. The cluster also appears to be moderately r-process-enriched with = +0.32 (σ = 0.17).
15. Tube Bulge Process : Theoretical Analysis and Finite Element Simulations
International Nuclear Information System (INIS)
Velasco, Raphael; Boudeau, Nathalie
2007-01-01
This paper is focused on the determination of mechanics characteristics for tubular materials, using tube bulge process. A comparative study is made between two different models: theoretical model and finite element analysis. The theoretical model is completely developed, based first on a geometrical analysis of the tube profile during bulging, which is assumed to strain in arc of circles. Strain and stress analysis complete the theoretical model, which allows to evaluate tube thickness and state of stress, at any point of the free bulge region. Free bulging of a 304L stainless steel is simulated using Ls-Dyna 970. To validate FE simulations approach, a comparison between theoretical and finite elements models is led on several parameters such as: thickness variation at the free bulge region pole with bulge height, tube thickness variation with z axial coordinate, and von Mises stress variation with plastic strain. Finally, the influence of geometrical parameters deviations on flow stress curve is observed using analytical model: deviations of the tube outer diameter, its initial thickness and the bulge height measurement are taken into account to obtain a resulting error on plastic strain and von Mises stress
16. The Chemical Abundances of Stars in the Halo (CASH) Project. II. New Extremely Metal-poor Stars
Science.gov (United States)
Krugler, Julie A.; Frebel, A.; Roederer, I. U.; Sneden, C.; Shetrone, M.; Beers, T.; Christlieb, N.
2011-01-01
We present new abundance results from the Chemical Abundances of Stars in the Halo (CASH) project. The 500 CASH spectra were observed using the Hobby-Eberly Telescope in "snapshot" mode and are analyzed using an automated stellar parameter and abundance pipeline called CASHCODE. For the 20 most metal-poor stars of the CASH sample we have obtained high resolution spectra using the Magellan Telescope in order to test the uncertainties and systematic errors associated with the snapshot quality (i.e., R 15,000 and S/N 65) HET spectra and to calibrate the newly developed CASHCODE by making a detailed comparison between the stellar parameters and abundances determined from the high resolution and snapshot spectra. We find that the CASHCODE stellar parameters (effective temperature, surface gravity, metallicity, and microturbulence) agree well with the results of the manual analysis of the high resolution spectra. We present the abundances of three newly discovered stars with [Fe/H] ratios with alpha-enhancement and Fe-peak depletion and a range of n-capture elements. The full CASH sample will be used to derive statistically robust abundance trends and frequencies (e.g. carbon and n-capture), as well as placing constraints on nucleosynthetic processes that occurred in the early universe.
17. HIGH-RESOLUTION SPECTROSCOPY OF EXTREMELY METAL-POOR STARS IN THE LEAST EVOLVED GALAXIES: BOÖTES II
International Nuclear Information System (INIS)
Ji, Alexander P.; Frebel, Anna; Simon, Joshua D.; Geha, Marla
2016-01-01
We present high-resolution Magellan/MIKE spectra of the four brightest confirmed red giant stars in the ultra-faint dwarf galaxy Boötes II (Boo II). These stars all inhabit the metal-poor tail of the Boo II metallicity distribution function. The chemical abundance pattern of all detectable elements in these stars is consistent with that of the Galactic halo. However, all four stars have undetectable amounts of neutron-capture elements Sr and Ba, with upper limits comparable to the lowest ever detected in the halo or in other dwarf galaxies. One star exhibits significant radial velocity variations over time, suggesting it to be in a binary system. Its variable velocity has likely increased past determinations of the Boo II velocity dispersion. Our four stars span a limited metallicity range, but their enhanced α-abundances and low neutron-capture abundances are consistent with the interpretation that Boo II has been enriched by very few generations of stars. The chemical abundance pattern in Boo II confirms the emerging trend that the faintest dwarf galaxies have neutron-capture abundances distinct from the halo, suggesting the dominant source of neutron-capture elements in halo stars may be different than in ultra-faint dwarfs
18. Evidences of extragalactic origin and planet engulfment in the metal-poor twin pair HD 134439/HD 134440
Science.gov (United States)
Reggiani, Henrique; Meléndez, Jorge
2018-04-01
Recent studies of chemical abundances in metal-poor halo stars show the existence of different populations, which is important for studies of Galaxy formation and evolution. Here, we revisit the twin pair of chemically anomalous stars HD 134439 and HD 134440, using high resolution (R ˜ 72 000) and high S/N ratio (S/N ˜ 250) HDS/Subaru spectra. We compare them to the well-studied halo star HD 103095, using the line-by-line differential technique to estimate precise stellar parameters and LTE chemical abundances. We present the abundances of C, O, Na, Mg, Si, Ca, Sc, Ti, V, Cr, Mn, Co, Ni, Cu, Zn, Sr, Y, Ba, La, Ce, Nd, and Sm. We compare our results to the precise abundance patterns of Nissen & Schuster (2010) and data from dwarf Spheroidal galaxies (dSphs). We show that the abundance pattern of these stars appears to be closely linked to that of dSphs with [α/Fe] knee below [Fe/H] < -1.5. We also find a systematic difference of 0.06 ± 0.01 dex between the abundances of these twin binary stars, which could be explained by the engulfment of a planet, thus suggesting that planet formation is possible at low metallicities ([Fe/H] = -1.4).
19. The gravitational self-interaction of the Earth's tidal bulge
Science.gov (United States)
Norsen, Travis; Dreese, Mackenzie; West, Christopher
2017-09-01
According to a standard, idealized analysis, the Moon would produce a 54 cm equilibrium tidal bulge in the Earth's oceans. This analysis omits many factors (beyond the scope of the simple idealized model) that dramatically influence the actual height and timing of the tides at different locations, but it is nevertheless an important foundation for more detailed studies. Here, we show that the standard analysis also omits another factor—the gravitational interaction of the tidal bulge with itself—which is entirely compatible with the simple, idealized equilibrium model and which produces a surprisingly non-trivial correction to the predicted size of the tidal bulge. Our analysis uses ideas and techniques that are familiar from electrostatics, and should thus be of interest to teachers and students of undergraduate E&M, Classical Mechanics (and/or other courses that cover the tides), and geophysics courses that cover the closely related topic of Earth's equatorial bulge.
20. COBE diffuse infrared background experiment observations of the galactic bulge
Science.gov (United States)
Weiland, J. L.; Arendt, R. G.; Berriman, G. B.; Dwek, E.; Freudenreich, H. T.; Hauser, M. G.; Kelsall, T.; Lisse, C. M.; Mitra, M.; Moseley, S. H.
1994-01-01
Low angular resolution maps of the Galactic bulge at 1.25, 2.2, 3.5, and 4.9 micrometers obtained by the Diffuse Infrared Background Experiment (DIRBE) onboard NASA's Cosmic Background Explorer (COBE) are presented. After correction for extinction and subtraction of an empirical model for the Galactic disk, the surface brightness distribution of the bulge resembles a flattened ellipse with a minor-to-major axis ratio of approximately 0.6. The bulge minor axis scale height is found to be 2.1 deg +/- 0.2 deg for all four near-infrared wavelengths. Asymmetries in the longitudinal distribution of bulge brightness contours are qualitatively consistent with those expected for a triaxial bar with its near end in the first Galactic quadrant (0 deg less than l less than 90 deg). There is no evidence for an out-of-plane tilt of such a bar.
1. Analysis of Terminal Metallic Armor Plate Free-Surface Bulging
National Research Council Canada - National Science Library
Rapacki, Jr, E. J
2008-01-01
An analysis of the bulge formed on the free-surface of the terminal metallic plate of an armor array is shown to lead to reasonable estimates of the armor array's remaining penetration/perforation resistance...
2. Galaxies Grow Their Bulges and Black Holes in Diverse Ways
Energy Technology Data Exchange (ETDEWEB)
Bell, Eric F.; Harmsen, Benjamin; D’Souza, Richard [Department of Astronomy, University of Michigan, 1085 South University Avenue, Ann Arbor, MI 48109-1107 (United States); Monachesi, Antonela [Max Planck Institut für Astrophysik, Karl-Schwarzschild-Str. 1, Postfach 1317, D-85741 Garching (Germany); Jong, Roelof S. de [Leibniz-Institut für Astrophysik Potsdam (AIP), An der Sternwarte 16, D-14482 Potsdam (Germany); Bailin, Jeremy [Department of Physics and Astronomy, University of Alabama, Box 870324, Tuscaloosa, AL 35487-0324 (United States); Radburn-Smith, David J. [Department of Astronomy, University of Washington, 3910 15th Avenue NE, Seattle, WA 98195 (United States); Holwerda, Benne W., E-mail: [email protected] [Department of Physics and Astronomy, University of Louisville, 102 Natural Science Building, Louisville, KY 40292 (United States)
2017-03-01
Galaxies with Milky Way–like stellar masses have a wide range of bulge and black hole masses; in turn, these correlate with other properties such as star formation history. While many processes may drive bulge formation, major and minor mergers are expected to play a crucial role. Stellar halos offer a novel and robust measurement of galactic merger history; cosmologically motivated models predict that mergers with larger satellites produce more massive, higher-metallicity stellar halos, reproducing the recently observed stellar halo metallicity–mass relation. We quantify the relationship between stellar halo mass and bulge or black hole prominence using a sample of 18 Milky Way-mass galaxies with newly available measurements of (or limits on) stellar halo properties. There is an order of magnitude range in bulge mass, and two orders of magnitude in black hole mass, at a given stellar halo mass (or, equivalently, merger history). Galaxies with low-mass bulges show a wide range of quiet merger histories, implying formation mechanisms that do not require intense merging activity. Galaxies with massive “classical” bulges and central black holes also show a wide range of merger histories. While three of these galaxies have massive stellar halos consistent with a merger origin, two do not—merging appears to have had little impact on making these two massive “classical” bulges. Such galaxies may be ideal laboratories to study massive bulge formation through pathways such as early gas-rich accretion, violent disk instabilities, or misaligned infall of gas throughout cosmic time.
3. Non-LTE line formation of Fe in late-type stars - III. 3D non-LTE analysis of metal-poor stars
DEFF Research Database (Denmark)
Amarsi, A. M.; Lind, K.; Asplund, M.
2016-01-01
As one of the most important elements in astronomy, iron abundance determinations need to be as accurate as possible. We investigate the accuracy of spectroscopic iron abundance analyses using archetypal metal-poor stars. We perform detailed 3D non-LTE radiative transfer calculations based on 3D...
4. Evidence for a vanishing 6Li/7Li isotopic signature in the metal-poor halo star HD84937
DEFF Research Database (Denmark)
Lind, K.; Asplund, M.; Collet, Remo
2012-01-01
The claimed detections of 6Li in the atmospheres of some metal-poor halo stars have lead to speculative additions to the standard model of Big Bang nucleosynthesis and the early Universe, as the inferred abundances cannot be explained by Galactic cosmic ray production. A prominent example of a so...
5. The continuous rise of bulges out of galactic disks
Science.gov (United States)
2018-06-01
Context. A key subject in extragalactic astronomy concerns the chronology and driving mechanisms of bulge formation in late-type galaxies (LTGs). The standard scenario distinguishes between classical bulges and pseudo-bulges (CBs and PBs, respectively), the first thought to form monolithically prior to disks and the second gradually out of disks. These two bulge formation routes obviously yield antipodal predictions on the bulge age and bulge-to-disk age contrast, both expected to be high (low) in CBs (PBs). Aims: Our main goal is to explore whether bulges in present-day LTGs segregate into two evolutionary distinct classes, as expected from the standard scenario. Other questions motivating this study center on evolutionary relations between LTG bulges and their hosting disks, and the occurrence of accretion-powered nuclear activity as a function of bulge stellar mass ℳ⋆ and stellar surface density Σ⋆. Methods: In this study, we have combined three techniques - surface photometry, spectral modeling of integral field spectroscopy data and suppression of stellar populations younger than an adjustable age cutoff with the code REMOVEYOUNG (ℛ𝒴) - toward a systematic analysis of the physical and evolutionary properties (e.g., ℳ⋆, Σ⋆ and mass-weighted stellar age ℳ and metallicity ℳ, respectively) of a representative sample of 135 nearby (≤ 130 Mpc) LTGs from the CALIFA survey that cover a range between 108.9 M⊙ and 1011.5 M⊙ in total stellar mass ℳ⋆,T. In particular, the analysis here revolves around ⟨δμ9G⟩, a new distance- and formally extinction-independent measure of the contribution by stellar populations of age ≥ 9 Gyr to the mean r-band surface brightness of the bulge. We argue that ⟨δμ9G⟩ offers a handy semi-empirical tracer of the physical and evolutionary properties of LTG bulges and a promising means for their characterization. Results: The essential insight from this study is that LTG bulges form over 3 dex
6. New Target for an Old Method: Hubble Measures Globular Cluster Parallax
Science.gov (United States)
Hensley, Kerry
2018-05-01
Measuring precise distances to faraway objects has long been a challenge in astrophysics. Now, one of the earliest techniques used to measure the distance to astrophysical objects has been applied to a metal-poor globular cluster for the first time.A Classic TechniqueAn artists impression of the European Space Agencys Gaia spacecraft. Gaia is on track to map the positions and motions of a billion stars. [ESA]Distances to nearby stars are often measured using the parallax technique tracing the tiny apparent motion of a target star against the background of more distant stars as Earth orbits the Sun. This technique has come a long way since it was first used in the 1800s to measure the distance to stars a few tens of light-years away; with the advent of space observatories like Hipparcos and Gaia, parallax can now be used to map the positions of stars out to thousands of light-years.Precise distance measurements arent only important for setting the scale of the universe, however; they can also help us better understand stellar evolution over the course of cosmic history. Stellar evolution models are often anchored to a reference star cluster, the properties of which must be known precisely. These precise properties can be readily determined for young, nearby open clusters using parallax measurements. But stellar evolution models that anchor on themore-distant, ancient, metal-poor globular clusters have been hampered by theless-precise indirect methods used tomeasure distance to these faraway clusters until now.Top: An image of NGC 6397 overlaid with the area scanned by Hubble (dashed green) and the footprint of the camera (solid green). The blue ellipse represents the parallax motion of a star in the cluster, exaggerated by a factor of ten thousand. Bottom: An example scan from this field. [Adapted from Brown et al. 2018]New Measurement to an Old ClusterThomas Brown (Space Telescope Science Institute) and collaborators used the Hubble Space Telescope todetermine the
7. Bulge Growth Through Disc Instabilities in High-Redshift Galaxies
Science.gov (United States)
Bournaud, Frédéric
The role of disc instabilities, such as bars and spiral arms, and the associated resonances, in growing bulges in the inner regions of disc galaxies have long been studied in the low-redshift nearby Universe. There it has long been probed observationally, in particular through peanut-shaped bulges (Chap. 14 10.1007/978-3-319-19378-6_14"). This secular growth of bulges in modern disc galaxies is driven by weak, non-axisymmetric instabilities: it mostly produces pseudobulges at slow rates and with long star-formation timescales. Disc instabilities at high redshift (z > 1) in moderate-mass to massive galaxies (1010 to a few 1011 M⊙ of stars) are very different from those found in modern spiral galaxies. High-redshift discs are globally unstable and fragment into giant clumps containing 108-9 M⊙ of gas and stars each, which results in highly irregular galaxy morphologies. The clumps and other features associated to the violent instability drive disc evolution and bulge growth through various mechanisms on short timescales. The giant clumps can migrate inward and coalesce into the bulge in a few 108 years. The instability in the very turbulent media drives intense gas inflows toward the bulge and nuclear region. Thick discs and supermassive black holes can grow concurrently as a result of the violent instability. This chapter reviews the properties of high-redshift disc instabilities, the evolution of giant clumps and other features associated to the instability, and the resulting growth of bulges and associated sub-galactic components.
8. TOWARDS THE CONCEPT OF GLOBULAR PEARLITE
OpenAIRE
A. G. Anisovich; M. K. Stepankova; A. A. Andrushevich
2016-01-01
Explanatory imprecisions of concept of globular pearlite and ferrite-carbide-mixture are considered. The need of concept binding of globular pearlite to specific grain with 0.8% carbon content is explained with the assistance of exemplary data obtained at the present metallographic equipment. The question of educational material presentation concerning the process of teaching of discipline «Materials and construction materials technology» is discussed in relation to the educational process of...
9. Gamma-ray Emission from Globular Clusters
Directory of Open Access Journals (Sweden)
Pak-Hin T. Tam
2016-03-01
Full Text Available Over the last few years, the data obtained using the Large Area Telescope (LAT aboard the Fermi Gamma-ray Space Telescope has provided new insights on high-energy processes in globular clusters, particularly those involving compact objects such as MilliSecond Pulsars (MSPs. Gamma-ray emission in the 100 MeV to 10 GeV range has been detected from more than a dozen globular clusters in our galaxy, including 47 Tucanae and Terzan 5. Based on a sample of known gammaray globular clusters, the empirical relations between gamma-ray luminosity and properties of globular clusters such as their stellar encounter rate, metallicity, and possible optical and infrared photon energy densities, have been derived. The measured gamma-ray spectra are generally described by a power law with a cut-off at a few gigaelectronvolts. Together with the detection of pulsed γ-rays from two MSPs in two different globular clusters, such spectral signature lends support to the hypothesis that γ-rays from globular clusters represent collective curvature emission from magnetospheres of MSPs in the clusters. Alternative models, involving Inverse-Compton (IC emission of relativistic electrons that are accelerated close to MSPs or pulsar wind nebula shocks, have also been suggested. Observations at >100 GeV by using Fermi/LAT and atmospheric Cherenkov telescopes such as H.E.S.S.-II, MAGIC-II, VERITAS, and CTA will help to settle some questions unanswered by current data.
10. Discovery of a Metal-poor, Luminous Post-AGB Star that Failed the Third Dredge-up
Energy Technology Data Exchange (ETDEWEB)
Kamath, D.; Winckel, H. Van [Instituut voor Sterrenkunde, K.U.Leuven, Celestijnenlaan 200D bus 2401, B-3001 Leuven (Belgium); Wood, P. R.; Asplund, M.; Karakas, A. I. [Research School of Astronomy and Astrophysics, Australian National University, Canberra ACT 2611 (Australia); Lattanzio, J. C. [Monash Centre for Astrophysics, School of Physics and Astronomy, Monash University, VIC 3800 (Australia)
2017-02-10
Post-asymptotic giant branch (post-AGB) stars are known to be chemically diverse. In this paper we present the first observational evidence of a star that has failed the third dredge-up (TDU). J005252.87-722842.9 is an A-type ( T {sub eff} = 8250 ± 250 K) luminous (8200 ± 700 L {sub ⊙}) metal-poor ([Fe/H] = −1.18 ± 0.10) low-mass ( M {sub initial} ≈ 1.5–2.0 M {sub ⊙}) post-AGB star in the Small Magellanic Cloud. Through a systematic abundance study, using high-resolution optical spectra from UVES, we found that this likely post-AGB object shows an intriguing photospheric composition with no confirmed carbon-enhancement (upper limit of [C/Fe] < 0.50) nor enrichment of s -process elements. We derived an oxygen abundance of [O/Fe] = 0.29 ± 0.1. For Fe and O, we took the effects of nonlocal thermodynamic equilibrium into account. We could not derive an upper limit for the nitrogen abundance as there are no useful nitrogen lines within our spectral coverage. The chemical pattern displayed by this object has not been observed in single or binary post-AGBs. Based on its derived stellar parameters and inferred evolutionary state, single-star nucleosynthesis models predict that this star should have undergone TDU episodes while on the AGB, and it should be carbon enriched. However, our observations are in contrast with these predictions. We identify two possible Galactic analogs that are likely to be post-AGB stars, but the lack of accurate distances (hence luminosities) to these objects does not allow us to confirm their post-AGB status. If they have low luminosities, then they are likely to be dusty post-RGB stars. The discovery of J005252.87-722842.9 reveals a new stellar evolutionary channel whereby a star evolves without any TDU episodes.
11. K2-111 b - a short period super-Earth transiting a metal poor, evolved old star
Science.gov (United States)
Fridlund, Malcolm; Gaidos, Eric; Barragán, Oscar; Persson, Carina M.; Gandolfi, Davide; Cabrera, Juan; Hirano, Teruyuki; Kuzuhara, Masayuki; Csizmadia, Sz.; Nowak, Grzegorz; Endl, Michael; Grziwa, Sascha; Korth, Judith; Pfaff, Jeremias; Bitsch, Bertram; Johansen, Anders; Mustill, Alexander J.; Davies, Melvyn B.; Deeg, Hans J.; Palle, Enric; Cochran, William D.; Eigmüller, Philipp; Erikson, Anders; Guenther, Eike; Hatzes, Artie P.; Kiilerich, Amanda; Kudo, Tomoyuki; MacQueen, Phillip; Narita, Norio; Nespral, David; Pätzold, Martin; Prieto-Arranz, Jorge; Rauer, Heike; Van Eylen, Vincent
2017-07-01
12. CARBON-ENHANCED METAL-POOR STARS IN THE INNER AND OUTER HALO COMPONENTS OF THE MILKY WAY
International Nuclear Information System (INIS)
Carollo, Daniela; Norris, John E.; Freeman, Ken C.; Beers, Timothy C.; Lee, Young Sun; Kennedy, Catherine R.; Bovy, Jo; Sivarani, Thirupathi; Aoki, Wako
2012-01-01
Carbon-enhanced metal-poor (CEMP) stars in the halo components of the Milky Way are explored, based on accurate determinations of the carbon-to-iron ([C/Fe]) abundance ratios and kinematic quantities for over 30,000 calibration stars from the Sloan Digital Sky Survey. Using our present criterion that low-metallicity stars exhibiting [C/Fe] ratios ( c arbonicity ) in excess of [C/Fe] =+0.7 are considered CEMP stars, the global frequency of CEMP stars in the halo system for [Fe/H] 5 kpc, the CarDF exhibits a strong tail toward high values, up to [C/Fe] > +3.0. We also find a clear increase in the CEMP frequency with |Z|. For stars with –2.0 < [Fe/H] <–1.5, the frequency grows from 5% at |Z| ∼2 kpc to 10% at |Z| ∼10 kpc. For stars with [Fe/H] <–2.0, the frequency grows from 8% at |Z| ∼2 kpc to 25% at |Z| ∼10 kpc. For stars with –2.0 < [Fe/H] <–1.5, the mean carbonicity is ([C/Fe]) ∼+1.0 for 0 kpc < |Z| < 10 kpc, with little dependence on |Z|; for [Fe/H] <–2.0, ([C/Fe]) ∼+1.5, again roughly independent of |Z|. Based on a statistical separation of the halo components in velocity space, we find evidence for a significant contrast in the frequency of CEMP stars between the inner- and outer-halo components—the outer halo possesses roughly twice the fraction of CEMP stars as the inner halo. The carbonicity distribution also differs between the inner-halo and outer-halo components—the inner halo has a greater portion of stars with modest carbon enhancement ([C/Fe] ∼+0.5]); the outer halo has a greater portion of stars with large enhancements ([C/Fe] ∼+2.0), although considerable overlap still exists. We interpret these results as due to the possible presence of additional astrophysical sources of carbon production associated with outer-halo stars, beyond the asymptotic giant-branch source that may dominate for inner-halo stars, with implications for the progenitors of these populations.
13. TOPoS. IV. Chemical abundances from high-resolution observations of seven extremely metal-poor stars
Science.gov (United States)
Bonifacio, P.; Caffau, E.; Spite, M.; Spite, F.; Sbordone, L.; Monaco, L.; François, P.; Plez, B.; Molaro, P.; Gallagher, A. J.; Cayrel, R.; Christlieb, N.; Klessen, R. S.; Koch, A.; Ludwig, H.-G.; Steffen, M.; Zaggia, S.; Abate, C.
2018-04-01
Context. Extremely metal-poor (EMP) stars provide us with indirect information on the first generations of massive stars. The TOPoS survey has been designed to increase the census of these stars and to provide a chemical inventory that is as detailed as possible. Aims: Seven of the most iron-poor stars have been observed with the UVES spectrograph at the ESO VLT Kueyen 8.2 m telescope to refine their chemical composition. Methods: We analysed the spectra based on 1D LTE model atmospheres, but also used 3D hydrodynamical simulations of stellar atmospheres. Results: We measured carbon in six of the seven stars: all are carbon-enhanced and belong to the low-carbon band, defined in the TOPoS II paper. We measured lithium (A(Li) = 1.9) in the most iron-poor star (SDSS J1035+0641, [Fe/H] measure Li in three stars at [Fe/H] -4.0, two of which lie on the Spite plateau. We confirm that SDSS J1349+1407 is extremely rich in Mg, but not in Ca. It is also very rich in Na. Several of our stars are characterised by low α-to-iron ratios. Conclusions: The lack of high-carbon band stars at low metallicity can be understood in terms of evolutionary timescales of binary systems. The detection of Li in SDSS J1035+0641 places a strong constraint on theories that aim at solving the cosmological lithium problem. The Li abundance of the two warmer stars at [Fe/H] -4.0 places them on the Spite plateau, while the third, cooler star, lies below. We argue that this suggests that the temperature at which Li depletion begins increases with decreasing [Fe/H]. SDSS J1349+1407 may belong to a class of Mg-rich EMP stars. We cannot assess if there is a scatter in α-to-iron ratios among the EMP stars or if there are several discrete populations. However, the existence of stars with low α-to-iron ratios is supported by our observations. Based on observations obtained at ESO Paranal Observatory, Programmes 189.D-0165,090.D-0306, 093.D-0136, and 096.D-0468.
14. Investigation into the factors that influence inverse bulging effect during sheet hydro-mechanical deep drawing
Directory of Open Access Journals (Sweden)
Lang Lihui
2015-01-01
Full Text Available The factors that influence inverse bulging effect during sheet hydro-mechanical deep drawing are especially researched in this paper. According to the different inverse bulging process, two modes can be singled: the initial inverse bulging (IIB and the local inverse bulging (LIB. IIB includes two parameters: inverse bulging height ratio (HIb/t and inverse bulging pressure ratio (PIb/t. LIB is influenced by IIB and has a direct relationship with liquid chamber pressure in the forming process. The optimal inverse bulging parameters of hemispherical bottom cylindrical part and flat bottom cylindrical part are obtained by numerical simulation. Process parameters including the clearance between the punch and the blank holder and the blank holder entrance radius that have a large influence on inverse bulging effect are optimized, so as to make inverse bulging effect behave better in hydroforming process. Finally, the accuracy of the numerical simulation results was verified by experiments.
15. THE INNER GALACTIC BULGE: EVIDENCE FOR A NUCLEAR BAR?
International Nuclear Information System (INIS)
Gerhard, Ortwin; Martinez-Valpuesta, Inma
2012-01-01
Recent data from the VVV survey have strengthened evidence for a structural change in the Galactic bulge inward of |l| ≤ 4°. Here we show with an N-body barred galaxy simulation that a boxy bulge formed through the bar and buckling instabilities effortlessly matches measured bulge longitude profiles for red clump stars. The same simulation snapshot was earlier used to clarify the apparent boxy bulge—long bar dichotomy, for the same orientation and scaling. The change in the slope of the model longitude profiles in the inner few degrees is caused by a transition from highly elongated to more nearly axisymmetric isodensity contours in the inner boxy bulge. This transition is confined to a few degrees from the Galactic plane; thus the change of slope is predicted to disappear at higher Galactic latitudes. We also show that the nuclear star count map derived from this simulation snapshot displays a longitudinal asymmetry similar to that observed in the Two Micron All Sky Survey (2MASS) data, but is less flattened to the Galactic plane than the 2MASS map. These results support the interpretation that the Galactic bulge originated from disk evolution and question the evidence advanced from star count data for the existence of a secondary nuclear bar in the Milky Way.
16. The Star Formation History in the M31 Bulge
Science.gov (United States)
Dong, Hui; Olsen, Knut; Lauer, Tod; Saha, Abhijit; Li, Zhiyuan; García-Benito, Ruben; Schödel, Rainer
2018-05-01
We present the study of stellar populations in the central 5.5' (˜1.2 kpc) of the M31 bulge by using the optical color magnitude diagram derived from HST ACS WFC/HRC observations. In order to enhance image quality and then obtain deeper photometry, we construct Nyquist-sampled images and use a deconvolution method to detect sources and measure their photometry. We demonstrate that our method performs better than DOLPHOT in the extremely crowded region. The resolved stars in the M31 bulge have been divided into nine annuli and the color magnitude diagram fitting is performed for each of them. We confirm that the majority of stars (>70%) in the M31 bulge are indeed very old (> 5 Gyr) and metal-rich ([Fe/H]˜0.3). At later times, the star formation rate decreased and then experienced a significant rise around 1 Gyr ago, which pervaded the entire M31 bulge. After that, stars formed at less than 500 Myr ago in the central 130" . Through simulation, we find that these intermediate-age stars cannot be the artifacts introduced by the blending effect. Our results suggest that although the majority of the M31 bulge are very old, the secular evolutionary process still continuously builds up the M31 bulge slowly. We compare our star formation history with an older analysis derived from the spectral energy distribution fitting, which suggests that the latter one is still a reasonable tool for the study of stellar populations in remote galaxies.
17. Model for common growth of supermassive black holes, bulges and globular star clusters: Ripping off Jeans clusters
NARCIS (Netherlands)
Nieuwenhuizen, T.M.
2012-01-01
It is assumed that a galaxy starts as a dark halo of a few million Jeans clusters (JCs), each of which consists of nearly a trillion micro brown dwarfs, MACHOs of Earth mass. JCs in the galaxy center heat up their MACHOs by tidal forces, which makes them expand, so that coagulation and star
18. New Fe i Level Energies and Line Identifications from Stellar Spectra. II. Initial Results from New Ultraviolet Spectra of Metal-poor Stars
Energy Technology Data Exchange (ETDEWEB)
Peterson, Ruth C. [SETI Institute and Astrophysical Advances, 607 Marion Place, Palo Alto, CA 94301 (United States); Kurucz, Robert L. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Ayres, Thomas R., E-mail: [email protected] [Center for Astrophysics and Space Astronomy, University of Colorado, 389 UCB, Boulder, CO 80309-0389 (United States)
2017-04-01
The Fe i spectrum is critical to many areas of astrophysics, yet many of the high-lying levels remain uncharacterized. To remedy this deficiency, Peterson and Kurucz identified Fe i lines in archival ultraviolet and optical spectra of metal-poor stars, whose warm temperatures favor moderate Fe i excitation. Sixty-five new levels were recovered, with 1500 detectable lines, including several bound levels in the ionization continuum of Fe i. Here, we extend the previous work by identifying 59 additional levels, with 1400 detectable lines, by incorporating new high-resolution UV spectra of warm metal-poor stars recently obtained by the Hubble Space Telescope Imaging Spectrograph. We provide gf values for these transitions, both computed as well as adjusted to fit the stellar spectra. We also expand our spectral calculations to the infrared, confirming three levels by matching high-quality spectra of the Sun and two cool stars in the H -band. The predicted gf values suggest that an additional 3700 Fe i lines should be detectable in existing solar infrared spectra. Extending the empirical line identification work to the infrared would help confirm additional Fe i levels, as would new high-resolution UV spectra of metal-poor turnoff stars below 1900 Å.
19. FUNDAMENTAL PARAMETERS, INTEGRATED RED GIANT BRANCH MASS LOSS, AND DUST PRODUCTION IN THE GALACTIC GLOBULAR CLUSTER 47 TUCANAE
International Nuclear Information System (INIS)
McDonald, I.; Zijlstra, A. A.; Boyer, M. L.; Gordon, K.; Meixner, M.; Sewilo, M.; Shiao, B.; Whitney, B.; Van Loon, J. Th.; Hora, J. L.; Robitaille, T.; Babler, B.; Meade, M.; Block, M.; Misselt, K.
2011-01-01
Fundamental parameters and time evolution of mass loss are investigated for post-main-sequence stars in the Galactic globular cluster 47 Tucanae (NGC 104). This is accomplished by fitting spectral energy distributions (SEDs) to existing optical and infrared photometry and spectroscopy, to produce a true Hertzsprung-Russell diagram. We confirm the cluster's distance as d = 4611 +213 -200 pc and age as 12 ± 1 Gyr. Horizontal branch models appear to confirm that no more red giant branch mass loss occurs in 47 Tuc than in the more metal-poor ω Centauri, though difficulties arise due to inconsistencies between the models. Using our SEDs, we identify those stars that exhibit infrared excess, finding excess only among the brightest giants: dusty mass loss begins at a luminosity of ∼1000 L sun , becoming ubiquitous above L = 2000 L sun . Recent claims of dust production around lower-luminosity giants cannot be reproduced, despite using the same archival Spitzer imagery.
20. THE X-SHAPED BULGE OF THE MILKY WAY REVEALED BY WISE
International Nuclear Information System (INIS)
Ness, Melissa; Lang, Dustin
2016-01-01
The Milky Way bulge has a boxy/peanut morphology and an X-shaped structure. This X-shape has been revealed by the “split in the red clump” from star counts along the line of sight toward the bulge, measured from photometric surveys. This boxy, X-shaped bulge morphology is not unique to the Milky Way and such bulges are observed in other barred spiral galaxies. N -body simulations show that boxy and X-shaped bulges are formed from the disk via dynamical instabilities. It has also been proposed that the Milky Way bulge is not X-shaped, but rather, the apparent split in the red clump stars is a consequence of different stellar populations, in an old classical spheroidal bulge. We present a Wide-Field Infrared Survey Explorer ( WISE ) image of the Milky Way bulge, produced by downsampling the publicly available “unWISE” coadds. The WISE image of the Milky Way bulge shows that the X-shaped nature of the Milky Way bulge is self-evident and irrefutable. The X-shape morphology of the bulge in itself and the fraction of bulge stars that comprise orbits within this structure has important implications for the formation history of the Milky Way, and, given the ubiquity of boxy X-shaped bulges, spiral galaxies in general.
1. Dinamical properties of globular clusters: Primordial or evolutional?
Science.gov (United States)
Surdin, V. G.
1995-04-01
Some observable relations between globular cluster parameters appear as a result of dynamical evolution of the cluster system. These relations are inapplicable to the studies of the globular cluster origin
2. M31 GLOBULAR CLUSTER STRUCTURES AND THE PRESENCE OF X-RAY BINARIES
International Nuclear Information System (INIS)
Agar, J. R. R.; Barmby, P.
2013-01-01
likely contain more metal-poor clusters and make it possible to disentangle the two effects
3. Nova-driven winds in globular clusters
International Nuclear Information System (INIS)
Scott, E.H.; Durisen, R.H.
1978-01-01
Recent sensitive searches for Hα emission from ionized intracluster gas in globular clusters have set upper limits that conflict with theoretical predictions. We suggest that nova outbursts heat the gas, producing winds that resolve this discrepancy. The incidence of novae in globular clusters, the conversion of kinetic energy of the nova shell to thermal energy of the intracluster gas, and the characteristics of the resultant winds are discussed. Calculated emission from the nova-driven models does not conflict with any observations to date. Some suggestions are made concerning the most promising approaches for future detection of intracluster gas on the basis of these models. The possible relationship of nova-driven winds of globular cluster X-ray sources is also considered
4. Reconstructing galaxy histories from globular clusters.
Science.gov (United States)
West, Michael J; Côté, Patrick; Marzke, Ronald O; Jordán, Andrés
2004-01-01
Nearly a century after the true nature of galaxies as distant 'island universes' was established, their origin and evolution remain great unsolved problems of modern astrophysics. One of the most promising ways to investigate galaxy formation is to study the ubiquitous globular star clusters that surround most galaxies. Globular clusters are compact groups of up to a few million stars. They generally formed early in the history of the Universe, but have survived the interactions and mergers that alter substantially their parent galaxies. Recent advances in our understanding of the globular cluster systems of the Milky Way and other galaxies point to a complex picture of galaxy genesis driven by cannibalism, collisions, bursts of star formation and other tumultuous events.
5. Modeling the formation of globular cluster systems in the Virgo cluster
International Nuclear Information System (INIS)
Li, Hui; Gnedin, Oleg Y.
2014-01-01
The mass distribution and chemical composition of globular cluster (GC) systems preserve fossil record of the early stages of galaxy formation. The observed distribution of GC colors within massive early-type galaxies in the ACS Virgo Cluster Survey (ACSVCS) reveals a multi-modal shape, which likely corresponds to a multi-modal metallicity distribution. We present a simple model for the formation and disruption of GCs that aims to match the ACSVCS data. This model tests the hypothesis that GCs are formed during major mergers of gas-rich galaxies and inherit the metallicity of their hosts. To trace merger events, we use halo merger trees extracted from a large cosmological N-body simulation. We select 20 halos in the mass range of 2 × 10 12 to 7 × 10 13 M ☉ and match them to 19 Virgo galaxies with K-band luminosity between 3 × 10 10 and 3 × 10 11 L ☉ . To set the [Fe/H] abundances, we use an empirical galaxy mass-metallicity relation. We find that a minimal merger ratio of 1:3 best matches the observed cluster metallicity distribution. A characteristic bimodal shape appears because metal-rich GCs are produced by late mergers between massive halos, while metal-poor GCs are produced by collective merger activities of less massive hosts at early times. The model outcome is robust to alternative prescriptions for cluster formation rate throughout cosmic time, but a gradual evolution of the mass-metallicity relation with redshift appears to be necessary to match the observed cluster metallicities. We also affirm the age-metallicity relation, predicted by an earlier model, in which metal-rich clusters are systematically several billion younger than their metal-poor counterparts.
6. GLOBULAR CLUSTERS AND SPUR CLUSTERS IN NGC 4921, THE BRIGHTEST SPIRAL GALAXY IN THE COMA CLUSTER
International Nuclear Information System (INIS)
Lee, Myung Gyoon; Jang, In Sung
2016-01-01
We resolve a significant fraction of globular clusters (GCs) in NGC 4921, the brightest spiral galaxy in the Coma cluster. We also find a number of extended bright star clusters (star complexes) in the spur region of the arms. The latter are much brighter and bluer than those in the normal star-forming region, being as massive as 3 × 10 5 M ⊙ . The color distribution of the GCs in this galaxy is found to be bimodal. The turnover magnitudes of the luminosity functions of the blue (metal-poor) GCs (0.70 < (V − I) ≤ 1.05) in the halo are estimated V(max) = 27.11 ± 0.09 mag and I(max) = 26.21 ± 0.11 mag. We obtain similar values for NGC 4923, a companion S0 galaxy, and two Coma cD galaxies (NGC 4874 and NGC 4889). The mean value for the turnover magnitudes of these four galaxies is I(max) = 26.25 ± 0.03 mag. Adopting M I (max) = −8.56 ± 0.09 mag for the metal-poor GCs, we determine the mean distance to the four Coma galaxies to be 91 ± 4 Mpc. Combining this with the Coma radial velocity, we derive a value of the Hubble constant, H 0 = 77.9 ± 3.6 km s −1 Mpc −1 . We estimate the GC specific frequency of NGC 4921 to be S N = 1.29 ± 0.25, close to the values for early-type galaxies. This indicates that NGC 4921 is in the transition phase to S0s
7. Elemental abundances in the Galactic bulge from microlensed dwarf stars
NARCIS (Netherlands)
Bensby, T.; Feltzing, S.; Johnson, J.A.; Gould, A.; Sana, H.; Gal-Yam, A.; Asplund, M.; Lucatello, S.; Melendez, J.; Udalski, A.; Kubas, D.; James, G.; Adén, D.; Simmerer, J.
2010-01-01
We present elemental abundances of 13 microlensed dwarf and subgiant stars in the Galactic bulge, which constitute the largest sample to date. We show that these stars span the full range of metallicity from Fe/H= −0.8 to +0.4, and that they follow well-defined abundance trends, coincident with
8. Mapping the X-shaped Milky Way Bulge
Science.gov (United States)
Saito, R. K.; Zoccali, M.; McWilliam, A.; Minniti, D.; Gonzalez, O. A.; Hill, V.
2011-09-01
We analyzed the distribution of the red clump (RC) stars throughout the Galactic bulge using Two Micron All Sky Survey data. We mapped the position of the RC in 1 deg2 fields within the area |l| RC seen in the central area splits into two components at high Galactic longitudes in both hemispheres, produced by two structures at different distances along the same line of sight. The X-shape is clearly visible in the Z-X plane for longitudes close to the l = 0° axis. Crude measurements of the space densities of RC stars in the bright and faint RC populations are consistent with the adopted RC distances, providing further supporting evidence that the X-structure is real, and that there is approximate front-back symmetry in our bulge fields. We conclude that the Milky Way bulge has an X-shaped structure within |l| <~ 2°, seen almost edge-on with respect to the line of sight. Additional deep near-infrared photometry extending into the innermost bulge regions combined with spectroscopic data is needed in order to discriminate among the different possibilities that can cause the observed X-shaped structure.
9. Disk Model with Central Bulge for Galaxy M94
International Nuclear Information System (INIS)
Jalocha, J.; Bratek, L.; Kutschera, M.
2010-01-01
A global disk model for spiral galaxies is modified by adding a spherical component to the galactic center to account for the presence of a central spherical bulge. It is verified whether such modification could be substantial for predictions of total mass and of its distribution in spiral galaxy M94. (authors)
10. COLORS AND COLOR GRADIENTS IN BULGES OF GALAXIES
NARCIS (Netherlands)
BALCELLS, M; PELETIER, RF
We have obtained surface photometry in U, B, R, and I for a complete optically selected sample of 45 early-type spiral galaxies, to investigate the colors and color gradients of spiral bulges. Color profiles in U-R, B-R, U-B, and R-I have been determined in wedges opening on the semiminor axes.
11. THE BULGE OF M-104 - STELLAR CONTENT AND KINEMATICS
NARCIS (Netherlands)
HES, R; PELETIER, RF
Optical and near-infrared surface photometry of the bulge of M 104, the Sombrero Galaxy, is presented. From these data we have determined the radial variations of colours along the minor axis. We also present absorption line strength gradients of a number of metal lines and molecular bands. The
12. Recognition of thymine in DNA bulges by a Zn(II) macrocyclic complex.
Science.gov (United States)
del Mundo, Imee Marie A; Fountain, Matthew A; Morrow, Janet R
2011-08-14
A Zn(II) macrocyclic complex with appended quinoline is a bifunctional recognition agent that uses both the Zn(II) center and the pendent aromatic group to bind to thymine in bulges with good selectivity over DNA containing G, C or A bulges. Spectroscopic studies show that the stem containing the bulge stays largely intact in a DNA hairpin with the Zn(II) complex bound to the thymine bulge. This journal is © The Royal Society of Chemistry 2011
13. TOWARDS THE CONCEPT OF GLOBULAR PEARLITE
Directory of Open Access Journals (Sweden)
A. G. Anisovich
2016-01-01
Full Text Available Explanatory imprecisions of concept of globular pearlite and ferrite-carbide-mixture are considered. The need of concept binding of globular pearlite to specific grain with 0.8% carbon content is explained with the assistance of exemplary data obtained at the present metallographic equipment. The question of educational material presentation concerning the process of teaching of discipline «Materials and construction materials technology» is discussed in relation to the educational process of technical universities, in particular, the Belarusian State Agrarian Technical University.
14. Millisecond radio pulsars in globular clusters
Science.gov (United States)
Verbunt, Frank; Lewin, Walter H. G.; Van Paradijs, Jan
1989-01-01
It is shown that the number of millisecond radio pulsars, in globular clusters, should be larger than 100, applying the standard scenario that all the pulsars descend from low-mass X-ray binaries. Moreover, most of the pulsars are located in a small number of clusters. The prediction that Teran 5 and Liller 1 contain at least about a dozen millisecond radio pulsars each is made. The observations of millisecond radio pulsars in globular clusters to date, in particular the discovery of two millisecond radio pulsars in 47 Tuc, are in agreement with the standard scenario, in which the neutron star is spun up during the mass transfer phase.
15. Spectroscopic study of the elusive globular cluster ESO452-SC11 and its surroundings
Science.gov (United States)
Koch, Andreas; Hansen, Camilla Juul; Kunder, Andrea
2017-08-01
Globular clusters (GCs) have long been recognized as being amongst the oldest objects in the Galaxy. As such, they have the potential of playing a pivotal role in deciphering the Milky Way's early history. Here we present the first spectroscopic study of the low-mass system ESO452-SC11 using the AAOmega multifibre spectrograph at medium resolution. Given the stellar sparsity of this object and the high degree of foreground contamination due to its location toward the Galactic bulge, very few details are known for this cluster - there is no consensus, for instance, about its age, metallicity, or its association with the disk or bulge. We identify five member candidates based on common radial velocity, calcium-triplet metallicity, and position within the GC. Using spectral synthesis, the measurement of accurate Fe-abundances from Fe-lines, and abundances of several α-, Fe-peak, and neutron-capture elements (Si, Ca, Ti,Cr, Co, Ni, Sr, and Eu) is carried out, albeit with large uncertainties. We find that two of the five cluster candidates are likely non-members, as they have deviating iron abundances and [α/Fe] ratios. The cluster mean heliocentric velocity is 19 ± 2 km s-1 with a velocity dispersion of 2.8 ± 3.4 km s-1, a low value in line with its sparse nature and low mass. The mean Fe-abundance from spectral fitting is -0.88 ± 0.03 dex, where the spread is driven by observational errors. Furthermore, the α-elements of the GC candidates are marginally lower than expected for the bulge at similar metallicities. As spectra of hundreds of stars were collected in a 2-degree field centered on ESO452-SC11, a detailed abundance study of the surrounding field was also enabled. The majority of the non-members have slightly higher [α/Fe] ratios, in line with the typical nearby bulge population. A subset of the spectra with measured Fe-peak abundance ratios shows a large scatter around solar values, albeit with large uncertainties. Furthermore, our study provides the
16. Age bimodality in the central region of pseudo-bulges in S0 galaxies
Science.gov (United States)
2017-11-01
We present evidence for bimodal stellar age distribution of pseudo-bulges of S0 galaxies as probed by the Dn(4000) index. We do not observe any bimodality in age distribution for pseudo-bulges in spiral galaxies. Our sample is flux limited and contains 2067 S0 and 2630 spiral galaxies drawn from the Sloan Digital Sky Survey. We identify pseudo-bulges in S0 and spiral galaxies, based on the position of the bulge on the Kormendy diagram and their central velocity dispersion. Dividing the pseudo-bulges of S0 galaxies into those containing old and young stellar populations, we study the connection between global star formation and pseudo-bulge age on the u - r colour-mass diagram. We find that most old pseudo-bulges are hosted by passive galaxies while majority of young bulges are hosted by galaxies that are star forming. Dividing our sample of S0 galaxies into early-type S0s and S0/a galaxies, we find that old pseudo-bulges are mainly hosted by early-type S0 galaxies while most of the pseudo-bulges in S0/a galaxies are young. We speculate that morphology plays a strong role in quenching of star formation in the disc of these S0 galaxies, which stops the growth of pseudo-bulges, giving rise to old pseudo-bulges and the observed age bimodality.
17. Shaping Globular Clusters with Black Holes
Science.gov (United States)
Kohler, Susanna
2018-03-01
How many black holes lurk within the dense environments of globular clusters, and how do these powerful objects shape the properties of the cluster around them? One such cluster, NGC 3201, is now helping us to answer these questions.Hunting Stellar-Mass Black HolesSince the detection of merging black-hole binaries by the Laser Interferometer Gravitational-Wave Observatory (LIGO), the dense environments of globular clusters have received increasing attention as potential birthplaces of these compact binary systems.The central region of the globular star cluster NGC 3201, as viewed by Hubble. The black hole is in orbit with the star marked by the blue circle. [NASA/ESA]In addition, more and more stellar-mass black-hole candidates have been observed within globular clusters, lurking in binary pairs with luminous, non-compact companions. The most recent of these detections, found in the globular cluster NGC 3201, stands alone as the first stellar-mass black hole candidate discovered via radial velocity observations: the black holes main-sequence companion gave away its presence via a telltale wobble.Now a team of scientists led by Kyle Kremer (CIERA and Northwestern University) is using models of this system to better understand the impact that black holes might have on their host clusters.A Model ClusterThe relationship between black holes and their host clusters is complicated. Though the cluster environment can determine the dynamical evolution of the black holes, the retention rate of black holes in a globular cluster (i.e., how many remain in the cluster when they are born as supernovae, rather than being kicked out during the explosion) influences how the host cluster evolves.Kremer and collaborators track this complex relationship by modeling the evolution of a cluster similar to NGC 3201 with a Monte Carlo code. The code incorporates physics relevant to the evolution of black holes and black-hole binaries in globular clusters, such as two-body relaxation
18. The chemical composition of a regular halo globular cluster: NGC 5897
Science.gov (United States)
Koch, Andreas; McWilliam, Andrew
2014-05-01
We report for the first time on the chemical composition of the halo cluster NGC 5897 (R⊙ = 12.5 kpc), based on chemical abundance ratios for 27 α-, iron-peak, and neutron-capture elements in seven red giants. From our high-resolution, high signal-to-noise spectra obtained with the Magellan/MIKE spectrograph, we find a mean iron abundance from the neutral species of [Fe/H] = - 2.04 ± 0.01 (stat.) ± 0.15 (sys.), which is more metal-poor than implied by previous photometric and low-resolution spectroscopic studies. The cluster NGC 5897 is α-enhanced (to 0.34 ± 0.01 dex) and shows Fe-peak element ratios typical of other (metal-poor) halo globular clusters (GCs) with no overall, significant abundance spreads in iron or in any other heavy element. Like other GCs, NGC 5897 shows a clear Na-O anti-correlation, where we find a prominent primordial population of stars with enhanced O abundances and approximately solar Na/Fe ratios, while two stars are Na-rich, providing chemical proof of the presence of multiple populations in this cluster. Comparison of the heavy element abundances with the solar-scaled values and the metal-poor GC M15 from the literature confirms that NGC 5897 has experienced little contribution from s-process nucleosynthesis. One star of the first generation stands out in that it shows very low La and Eu abundances. Overall, NGC 5897 is a well behaved GC showing archetypical correlations and element-patterns, with little room for surprises in our data. We suggest that its lower metallicity could explain the unusually long periods of RR Lyr that were found in NGC 5897. This paper includes data gathered with the 6.5-m Magellan Telescopes located at Las Campanas Observatory, Chile.Table 5 is available in electronic form at http://www.aanda.orgFull Table 2 is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/565/A23
19. Ultracool Subdwarfs: Metal-poor Stars and Brown Dwarfs Extending into the Late-type M, L and T Dwarf Regimes
OpenAIRE
Burgasser, Adam J.; Kirkpatrick, J. Davy; Lepine, Sebastien
2004-01-01
Recent discoveries from red optical proper motion and wide-field near-infrared surveys have uncovered a new population of ultracool subdwarfs -- metal-poor stars and brown dwarfs extending into the late-type M, L and possibly T spectral classes. These objects are among the first low-mass stars and brown dwarfs formed in the Galaxy, and are valuable tracers of metallicity effects in low-temperature atmospheres. Here we review the spectral, photometric, and kinematic properties of recent discov...
20. The SAGES Legacy Unifying Globulars and Galaxies survey (SLUGGS): sample definition, methods, and initial results
Energy Technology Data Exchange (ETDEWEB)
Brodie, Jean P.; Romanowsky, Aaron J.; Jennings, Zachary G.; Pota, Vincenzo; Kader, Justin; Roediger, Joel C.; Villaume, Alexa; Arnold, Jacob A.; Woodley, Kristin A. [University of California Observatories, 1156 High Street, Santa Cruz, CA 95064 (United States); Strader, Jay [Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824 (United States); Forbes, Duncan A.; Pastorello, Nicola; Usher, Christopher; Blom, Christina; Kartha, Sreeja S. [Centre for Astrophysics and Supercomputing, Swinburne University, Hawthorn, VIC 3122 (Australia); Foster, Caroline; Spitler, Lee R., E-mail: [email protected] [Australian Astronomical Observatory, P.O. Box 915, North Ryde, NSW 1670 (Australia)
2014-11-20
We introduce and provide the scientific motivation for a wide-field photometric and spectroscopic chemodynamical survey of nearby early-type galaxies (ETGs) and their globular cluster (GC) systems. The SAGES Legacy Unifying Globulars and GalaxieS (SLUGGS) survey is being carried out primarily with Subaru/Suprime-Cam and Keck/DEIMOS. The former provides deep gri imaging over a 900 arcmin{sup 2} field-of-view to characterize GC and host galaxy colors and spatial distributions, and to identify spectroscopic targets. The NIR Ca II triplet provides GC line-of-sight velocities and metallicities out to typically ∼8 R {sub e}, and to ∼15 R {sub e} in some cases. New techniques to extract integrated stellar kinematics and metallicities to large radii (∼2-3 R {sub e}) are used in concert with GC data to create two-dimensional (2D) velocity and metallicity maps for comparison with simulations of galaxy formation. The advantages of SLUGGS compared with other, complementary, 2D-chemodynamical surveys are its superior velocity resolution, radial extent, and multiple halo tracers. We describe the sample of 25 nearby ETGs, the selection criteria for galaxies and GCs, the observing strategies, the data reduction techniques, and modeling methods. The survey observations are nearly complete and more than 30 papers have so far been published using SLUGGS data. Here we summarize some initial results, including signatures of two-phase galaxy assembly, evidence for GC metallicity bimodality, and a novel framework for the formation of extended star clusters and ultracompact dwarfs. An integrated overview of current chemodynamical constraints on GC systems points to separate, in situ formation modes at high redshifts for metal-poor and metal-rich GCs.
1. A single population of red globular clusters around the massive compact galaxy NGC 1277
Science.gov (United States)
Beasley, Michael A.; Trujillo, Ignacio; Leaman, Ryan; Montes, Mireia
2018-03-01
Massive galaxies are thought to form in two phases: an initial collapse of gas and giant burst of central star formation, followed by the later accretion of material that builds up their stellar and dark-matter haloes. The systems of globular clusters within such galaxies are believed to form in a similar manner. The initial central burst forms metal-rich (spectrally red) clusters, whereas more metal-poor (spectrally blue) clusters are brought in by the later accretion of less-massive satellites. This formation process is thought to result in the multimodal optical colour distributions that are seen in the globular cluster systems of massive galaxies. Here we report optical observations of the massive relic-galaxy candidate NGC 1277—a nearby, un-evolved example of a high-redshift ‘red nugget’ galaxy. We find that the optical colour distribution of the cluster system of NGC 1277 is unimodal and entirely red. This finding is in strong contrast to other galaxies of similar and larger stellar mass, the cluster systems of which always exhibit (and are generally dominated by) blue clusters. We argue that the colour distribution of the cluster system of NGC 1277 indicates that the galaxy has undergone little (if any) mass accretion after its initial collapse, and use simulations of possible merger histories to show that the stellar mass due to accretion is probably at most ten per cent of the total stellar mass of the galaxy. These results confirm that NGC 1277 is a genuine relic galaxy and demonstrate that blue clusters constitute an accreted population in present-day massive galaxies.
2. FURTHER DEFINITION OF THE MASS-METALLICITY RELATION IN GLOBULAR CLUSTER SYSTEMS AROUND BRIGHTEST CLUSTER GALAXIES
International Nuclear Information System (INIS)
Cockcroft, Robert; Harris, William E.; Wehner, Elizabeth M. H.; Whitmore, Bradley C.; Rothberg, Barry
2009-01-01
We combine the globular cluster (GC) data for 15 brightest cluster galaxies and use this material to trace the mass-metallicity relations (MMRs) in their globular cluster systems (GCSs). This work extends previous studies which correlate the properties of the MMR with those of the host galaxy. Our combined data sets show a mean trend for the metal-poor subpopulation that corresponds to a scaling of heavy-element abundance with cluster mass Z ∼ M 0.30±0.05 . No trend is seen for the metal-rich subpopulation which has a scaling relation that is consistent with zero. We also find that the scaling exponent is independent of the GCS specific frequency and host galaxy luminosity, except perhaps for dwarf galaxies. We present new photometry in (g',i') obtained with Gemini/GMOS for the GC populations around the southern giant ellipticals NGC 5193 and IC 4329. Both galaxies have rich cluster populations which show up as normal, bimodal sequences in the color-magnitude diagram. We test the observed MMRs and argue that they are statistically real, and not an artifact caused by the method we used. We also argue against asymmetric contamination causing the observed MMR as our mean results are no different from other contamination-free studies. Finally, we compare our method to the standard bimodal fitting method (KMM or RMIX) and find our results are consistent. Interpretation of these results is consistent with recent models for GC formation in which the MMR is determined by GC self-enrichment during their brief formation period.
3. New 2MASS near-infrared photometry for globular clusters in M31
Energy Technology Data Exchange (ETDEWEB)
Wang, Song; Ma, Jun; Wu, Zhenyu; Zhou, Xu, E-mail: [email protected] [Key Laboratory of Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100012 (China)
2014-07-01
We present Two Micron All Sky Survey JHK {sub s} photometry for 913 star clusters and candidates in the field of M31, which are selected from the latest Revised Bologna Catalog of M31 globular clusters (GCs) and candidates. The photometric measurements in this paper supplement this catalog, and provide the most comprehensive and homogeneous photometric catalog for M31 GCs in the JHK {sub s} bandpasses. In general, our photometry is consistent with previous measurements. The globular cluster luminosity function (GCLF) peaks for the confirmed GCs derived by fitting a t {sub 5} distribution using the maximum likelihood method are J{sub 0}=15.348{sub −0.208}{sup +0.206}, H{sub 0}=14.703{sub −0.180}{sup +0.176}, and K{sub s0}=14.534{sub −0.146}{sup +0.142}, all of which agree well with previous studies. The GCLFs are different between metal-rich (MR) and metal-poor (MP), and between inner and outer subpopulations, as MP clusters are fainter than their MR counterparts and the inner clusters are brighter than the outer ones, which confirm previous results. The NIR colors of the GC candidates are on average redder than those of the confirmed GCs, which leads to an obscure bimodal distribution of color indices. The relation of (V – K {sub s}){sub 0} and metallicity shows a notable departure from linearity, with a shallower slope toward the redder end. The color-magnitude diagram (CMD) and color-color diagram show that many GC candidates are located out of the evolutionary tracks, suggesting that some of them may be false M31 GC candidates. The CMD also shows that the initial mass function of M31 GCs covers a large range, and the majority of the clusters have initial masses between 10{sup 3} and 10{sup 6} M {sub ☉}.
4. Micro-Bulges Investigation on Laser Modified Tool Steel Surface
Directory of Open Access Journals (Sweden)
Fauzun Fazliana
2017-01-01
Full Text Available This paper presents micro-bulges investigation on laser modified tool steel. The aim of this study is to understand the effect of laser irradiance and interaction time on surface morphology configuration. An Nd:YAG laser system with TEM00 pulse processing mode was used to modify the samples. Metallographic study shows samples were analyzed for focal position effect on melted pool size, angle of peaks geometry and laser modified layer depth. Surface morphology were analyzed for surface roughness. Laser modified layer shows depth ranged between 42.22 and 420.12 μm. Angle of peak bulge was found to be increase with increasing peak power. The maximum roughness, Ra, achieved in modified H13 was 21.10 μm. These findings are significant to enhance surface properties of laser modified steel and cast iron for dies and high wear resistance applications.
5. The Gaia-ESO Survey: Separating disk chemical substructures with cluster models. Evidence of a separate evolution in the metal-poor thin disk
Science.gov (United States)
Rojas-Arriagada, A.; Recio-Blanco, A.; de Laverny, P.; Schultheis, M.; Guiglion, G.; Mikolaitis, Š.; Kordopatis, G.; Hill, V.; Gilmore, G.; Randich, S.; Alfaro, E. J.; Bensby, T.; Koposov, S. E.; Costado, M. T.; Franciosini, E.; Hourihane, A.; Jofré, P.; Lardo, C.; Lewis, J.; Lind, K.; Magrini, L.; Monaco, L.; Morbidelli, L.; Sacco, G. G.; Worley, C. C.; Zaggia, S.; Chiappini, C.
2016-02-01
Context. Recent spectroscopic surveys have begun to explore the Galactic disk system on the basis of large data samples, with spatial distributions sampling regions well outside the solar neighborhood. In this way, they provide valuable information for testing spatial and temporal variations of disk structure kinematics and chemical evolution. Aims: The main purposes of this study are to demonstrate the usefulness of a rigorous mathematical approach to separate substructures of a stellar sample in the abundance-metallicity plane, and provide new evidence with which to characterize the nature of the metal-poor end of the thin disk sequence. Methods: We used a Gaussian mixture model algorithm to separate in the [Mg/Fe] vs. [Fe/H] plane a clean disk star subsample (essentially at RGC -0.25 dex) highlight a change in the slope at solar metallicity. This holds true at different radial regions of the Milky Way. The distribution of Galactocentric radial distances of the metal-poor part of the thin disk ([Fe/H] Cambridge Astronomy Survey Unit (CASU) at the Institute of Astronomy, University of Cambridge, and by the FLAMES/UVES reduction team at INAF/Osservatorio Astrofisico di Arcetri. These data have been obtained from the Gaia-ESO Survey Data Archive, prepared and hosted by the Wide Field Astronomy Unit, Institute for Astronomy, University of Edinburgh, which is funded by the UK Science and Technology Facilities Council.
6. OBSERVATIONS OF BINARY STARS WITH THE DIFFERENTIAL SPECKLE SURVEY INSTRUMENT. V. TOWARD AN EMPIRICAL METAL-POOR MASS–LUMINOSITY RELATION
International Nuclear Information System (INIS)
Horch, Elliott P.; Van Altena, William F.; Demarque, Pierre; Howell, Steve B.; Everett, Mark E.; Ciardi, David R.; Teske, Johanna K.; Henry, Todd J.; Winters, Jennifer G.
2015-01-01
In an effort to better understand the details of the stellar structure and evolution of metal-poor stars, the Gemini North telescope was used on two occasions to take speckle imaging data of a sample of known spectroscopic binary stars and other nearby stars in order to search for and resolve close companions. The observations were obtained using the Differential Speckle Survey Instrument, which takes data in two filters simultaneously. The results presented here are of 90 observations of 23 systems in which one or more companions was detected, and six stars where no companion was detected to the limit of the camera capabilities at Gemini. In the case of the binary and multiple stars, these results are then further analyzed to make first orbit determinations in five cases, and orbit refinements in four other cases. The mass information is derived, and since the systems span a range in metallicity, a study is presented that compares our results with the expected trend in total mass as derived from the most recent Yale isochrones as a function of metal abundance. These data suggest that metal-poor main-sequence stars are less massive at a given color than their solar-metallicity analogues in a manner consistent with that predicted from the theory
7. EXAMINATION OF THE MASS-DEPENDENT Li DEPLETION HYPOTHESIS BY THE Li ABUNDANCES OF THE VERY METAL-POOR DOUBLE-LINED SPECTROSCOPIC BINARY G166-45
International Nuclear Information System (INIS)
Aoki, Wako; Ito, Hiroko; Tajitsu, Akito
2012-01-01
The Li abundances of the two components of the very metal-poor ([Fe/H] –2.5) double-lined spectroscopic binary G166-45 (BD+26°2606) are determined separately based on high-resolution spectra obtained with the Subaru Telescope High Dispersion Spectrograph and its image slicer. From the photometric colors and the mass ratio, the effective temperatures of the primary and secondary components are estimated to be 6350 ± 100 K and 5830 ± 170 K, respectively. The Li abundance of the primary (A(Li) = 2.23) agrees well with the Spite plateau value, while that of the secondary is slightly lower (A(Li) = 2.11). Such a discrepancy of the Li abundances between the two components is previously found in the extremely metal-poor, double-lined spectroscopic binary CS 22876-032; however, the discrepancy in G166-45 is much smaller. The results agree with the trends found for Li abundance as a function of effective temperature (and of stellar mass) of main-sequence stars with –3.0 eff ∼ 5800 K is not particularly large in this metallicity range. The significant Li depletion found in CS 22876-032B is a phenomenon only found in the lowest metallicity range ([Fe/H] < –3).
8. Bulging of cans containing plutonium residues. Summary report
International Nuclear Information System (INIS)
Van Konynenburg, R.A.; Wood, D.H.; Condit, R.H.; Shikany, S.D.
1996-03-01
In 1994, two cans in the Lawrence Livermore National Laboratory Plutonium Facility were found to be bulging as a result of the generation of gases form the plutonium ash residues contained in the cans. This report describes the chronology of this discovery, the response actions that revealed other pressurized cans, the analysis of the causes, the short-term remedial action, a followup inspection of the short-term storage packages, and a review of proposed long-term remedial options
9. Spiral Galaxy Central Bulge Tangential Speed of Revolution Curves
Science.gov (United States)
Taff, Laurence
2013-03-01
The objective was to, for the first time in a century, scientifically analyze the rotation curves'' (sic) of the central bulges of scores of spiral galaxies. I commenced with a methodological, rational, geometrical, arithmetic, and statistical examination--none of them carried through before--of the radial velocity data. The requirement for such a thorough treatment is the paucity of data typically available for the central bulge: fewer than 10 observations and frequently only five. The most must be made of these. A consequence of this logical handling is the discovery of a unique model for the central bulge volume mass density resting on the positive slope, linear, rise of its tangential speed of revolution curve and hence--for the first time--a reliable mass estimate. The deduction comes from a known physics-based, mathematically valid, derivation (not assertion). It rests on the full (not partial) equations of motion plus Poisson's equation. Following that is a prediction for the gravitational potential energy and thence the gravitational force. From this comes a forecast for the tangential speed of revolution curve. It was analyzed in a fashion identical to that of the data thereby closing the circle and demonstrating internal self-consistency. This is a hallmark of a scientific method-informed approach to an experimental problem. Multiple plots of the relevant quantities and measures of goodness of fit will be shown. Astronomy related
10. The Metallicity Gradient of the Old Galactic Bulge Population
Science.gov (United States)
Sans Fuentes, Sara Alejandra; De Ridder, Joris
Understanding the structure, formation and evolution of the Galactic Bulge requires the proper determination of spatial metallicity gradients in both the radial and vertical directions. RR Lyrae pulsators, known to be excellent distance indicators, may hold the key to determining these gradients. Jurcsik and Kovacs (A&A 312:111, 1996) has shown that RR Lyrae light curves and the phase difference of their Fourier decomposition, ϕ 31, can be used to estimate photometric metallicities. The existence of galactic bulge metallicity gradients is a currently debated topic that would help pinpoint the Galaxy's formation and evolution. A recent study of the OGLE-III Galactic Bulge RR Lyrae Population by Pietrukowicz et al. (ApJ 750:169, 2012) suggests that the spatial distribution is uniform. We investigate how small a gradient would be detectable within the current S/N levels of the present data set, given the random and systematic errors associated with the derivation of a photometric metallicity versus spatial position relationship.
11. a Study of the AGB in Local Group Bulge Populations
Science.gov (United States)
Rich, R.
1994-01-01
We propose to survey the bolometric luminosities, colors, and space distribution of the most luminous asymptotic giant branch (AGB) stars in the bulges of M31, M32, and M33. We seek to discover whether the bulges of these galaxies are relatively young, of order 10 Gyr rather than 15 Gyr. We will use WFPC2 and the R, I, and F1042M (1 micron) filters. Knowing that F1042M falls on the first continuum point of M giants, we have shown that we can use 1.04 micron fluxes to reliably calculate bolometric magnitudes for these very red stars. Color information from R and I will permit (1) comparison with Galactic bulge M giants, (2) an estimate of the spread of abundance and (3) increase the accuracy of the bolometric magnitudes. Frames with the damaged HST show signs of resolution to within 3" of the M31 nucleus; Red images with the aberrated HST show a red star cluster associated with the nucleus. Ground-based studies of M32 find an intermediate-age population from spectroscopy and infrared photometry. The repaired HST should resolve stars close to the nuclei of these galaxies. We will measure bolometric luminosity functions to determine if the populations are intermediate age, and attempt to measure the abundance range for stars near the nuclei of these galaxies. If metals have been lost due to winds, theory predicts that we should see a substantial spread of abundances even near the nucleus.
12. The Mystery of Globular Clusters: Uncovering the Complexities of Their Evolution
Science.gov (United States)
O'Malley, Erin Marie
In recent years, evidence has grown for the existence of multiple stellar populations in globular clusters (GCs). However, questions remain regarding the nature of these populations. Photometric observations clearly show discrete populations while spectroscopic observations seem to show a continuous spread. This dissertation provides steps to better understanding GCs and the complexities associated with their evolution. Calibration of stellar evolution models at low metallicity is necessary for comparison to GCs. Accurate abundances of metal-poor subdwarfs are determined and used in this calibration. A Monte Carlo analysis is then performed in order to determine accurate distances, absolute ages, and integrated orbital trajectories for 24 GCs. These results are of critical importance as they not only incorporate the observational uncertainty, but also the uncertainty incurred by the models themselves. Lastly, high resolution spectra of three GCs (NGC 6681, NGC 6584 and NGC 7099) are obtained for a detailed abundance analysis of red giant branch stars. The high resolution and signal-to-noise achieved in these observations allows for the discovery of a statistically significant Na-O anticorrelation in all three clusters, the populations of which agree with those from photometric observations. Although we cannot determine precisely the nature of the polluters that were the predecessors to the enhanced populations, we do know that both s-process and r-process mechanisms contributed to the evolution and these results can be used to help constrain future models of GC polluter candidates.
13. The Little Engines That Could? Globular Clusters Contribute Significantly to Reionization-era Star Formation
Science.gov (United States)
Boylan-Kolchin, Michael
2018-06-01
Metal-poor globular clusters (GCs) are both numerous and ancient, which indicates that they may be important contributors to ionizing radiation in the reionization era. Starting from the observed number density and stellar mass function of old GCs at z = 0, I compute the contribution of GCs to ultraviolet luminosity functions (UVLFs) in the high-redshift Universe (10 ≳ z ≳ 4). Even under absolutely minimal assumptions - no disruption of GCs and no reduction in GC stellar mass from early times to the present - GC star formation contributes non-negligibly to the UVLF at luminosities that are accessible to the Hubble Space Telescope (HST; M1500 ≈ -17). If the stellar masses of GCs were significantly higher in the past, as is predicted by most models explaining GC chemical anomalies, then GCs dominate the UV emission from many galaxies in existing deep-field observations. On the other hand, it is difficult to reconcile observed UVLFs with models requiring stellar masses at birth that exceed present-day stellar masses by more than a factor of 5. The James Webb Space Telescope (JWST) will be able to directly detect individual GCs at z ˜ 6 in essentially all bright galaxies, and many galaxies below the knee of the UVLF, for most of the scenarios considered here. The properties of a subset of high-redshift sources with -19 ≲ M_{1500} ≲ -14 in HST lensing fields indicate that they may actually be GCs in formation.
14. COMPARISON OF ALPHA-ELEMENT-ENHANCED SIMPLE STELLAR POPULATION MODELS WITH MILKY WAY GLOBULAR CLUSTERS
International Nuclear Information System (INIS)
Lee, Hyun-chul; Worthey, Guy; Dotter, Aaron
2009-01-01
We present simple stellar population (SSP) models with scaled-solar and α-element-enhanced abundances. The SSP models are based on the Dartmouth Stellar Evolution Database, our library of synthetic stellar spectra, and a detailed systematic variation of horizontal-branch (HB) morphology with age and metallicity. In order to test the relative importance of a variety of SSP model ingredients, we compare our SSP models with integrated spectra of 41 Milky Way globular clusters (MWGCs) from Schiavon et al. Using the Mg b and Ca4227 indices, we confirm that Mg and Ca are enhanced by about +0.4 and +0.2 dex, respectively, in agreement with results from high-resolution spectra of individual stars in MWGCs. Balmer lines, particularly Hγ and Hδ, of MWGCs are reproduced by our α-enhanced SSP models not only because of the combination of isochrone and spectral effects but also because of our reasonable HB treatment. Moreover, it is shown that the Mg abundance significantly influences Balmer and iron line indices. Finally, the investigation of power-law initial mass function (IMF) variations suggests that an IMF much shallower than Salpeter is unrealistic because the Balmer lines are too strong on the metal-poor side to be compatible with observations.
15. The Age of the Young Bulge-like Population in the Stellar System Terzan 5: Linking the Galactic Bulge to the High-z Universe
NARCIS (Netherlands)
Ferraro, F. R.; Massari, D.; Dalessandro, E.; Lanzoni, B.; Origlia, L.; Rich, R. M.; Mucciarelli, A.
2016-01-01
The Galactic bulge is dominated by an old, metal-rich stellar population. The possible presence and the amount of a young (a few gigayears old) minor component is one of the major issues debated in the literature. Recently, the bulge stellar system Terzan 5 was found to harbor three sub-populations
16. THE HST/ACS COMA CLUSTER SURVEY. IV. INTERGALACTIC GLOBULAR CLUSTERS AND THE MASSIVE GLOBULAR CLUSTER SYSTEM AT THE CORE OF THE COMA GALAXY CLUSTER
International Nuclear Information System (INIS)
Peng, Eric W.; Ferguson, Henry C.; Goudfrooij, Paul; Hammer, Derek; Lucey, John R.; Marzke, Ronald O.; Puzia, Thomas H.; Carter, David; Balcells, Marc; Bridges, Terry; Chiboucas, Kristin; Del Burgo, Carlos; Graham, Alister W.; Guzman, Rafael; Hudson, Michael J.; Matkovic, Ana
2011-01-01
Intracluster stellar populations are a natural result of tidal interactions in galaxy clusters. Measuring these populations is difficult, but important for understanding the assembly of the most massive galaxies. The Coma cluster of galaxies is one of the nearest truly massive galaxy clusters and is host to a correspondingly large system of globular clusters (GCs). We use imaging from the HST/ACS Coma Cluster Survey to present the first definitive detection of a large population of intracluster GCs (IGCs) that fills the Coma cluster core and is not associated with individual galaxies. The GC surface density profile around the central massive elliptical galaxy, NGC 4874, is dominated at large radii by a population of IGCs that extend to the limit of our data (R +4000 -5000 (systematic) IGCs out to this radius, and that they make up ∼70% of the central GC system, making this the largest GC system in the nearby universe. Even including the GC systems of other cluster galaxies, the IGCs still make up ∼30%-45% of the GCs in the cluster core. Observational limits from previous studies of the intracluster light (ICL) suggest that the IGC population has a high specific frequency. If the IGC population has a specific frequency similar to high-S N dwarf galaxies, then the ICL has a mean surface brightness of μ V ∼ 27 mag arcsec -2 and a total stellar mass of roughly 10 12 M sun within the cluster core. The ICL makes up approximately half of the stellar luminosity and one-third of the stellar mass of the central (NGC 4874+ICL) system. The color distribution of the IGC population is bimodal, with blue, metal-poor GCs outnumbering red, metal-rich GCs by a ratio of 4:1. The inner GCs associated with NGC 4874 also have a bimodal distribution in color, but with a redder metal-poor population. The fraction of red IGCs (20%), and the red color of those GCs, implies that IGCs can originate from the halos of relatively massive, L* galaxies, and not solely from the disruption of
17. NONLINEAR COLOR-METALLICITY RELATIONS OF GLOBULAR CLUSTERS. III. ON THE DISCREPANCY IN METALLICITY BETWEEN GLOBULAR CLUSTER SYSTEMS AND THEIR PARENT ELLIPTICAL GALAXIES
International Nuclear Information System (INIS)
Yoon, Suk-Jin; Lee, Sang-Yoon; Cho, Jaeil; Kim, Hak-Sub; Chung, Chul; Kim, Sooyoung; Lee, Young-Wook; Blakeslee, John P.; Peng, Eric W.; Sohn, Sangmo T.
2011-01-01
One of the conundrums in extragalactic astronomy is the discrepancy in observed metallicity distribution functions (MDFs) between the two prime stellar components of early-type galaxies—globular clusters (GCs) and halo field stars. This is generally taken as evidence of highly decoupled evolutionary histories between GC systems and their parent galaxies. Here we show, however, that new developments in linking the observed GC colors to their intrinsic metallicities suggest nonlinear color-to-metallicity conversions, which translate observed color distributions into strongly peaked, unimodal MDFs with broad metal-poor tails. Remarkably, the inferred GC MDFs are similar to the MDFs of resolved field stars in nearby elliptical galaxies and those produced by chemical evolution models of galaxies. The GC MDF shape, characterized by a sharp peak with a metal-poor tail, indicates a virtually continuous chemical enrichment with a relatively short timescale. The characteristic shape emerges across three orders of magnitude in the host galaxy mass, suggesting a universal process of chemical enrichment among various GC systems. Given that GCs are bluer than field stars within the same galaxy, it is plausible that the chemical enrichment processes of GCs ceased somewhat earlier than that of the field stellar population, and if so, GCs preferentially trace the major, vigorous mode of star formation events in galactic formation. We further suggest a possible systematic age difference among GC systems, in that the GC systems in more luminous galaxies are older. This is consistent with the downsizing paradigm whereby stars of brighter galaxies, on average, formed earlier than those of dimmer galaxies; this additionally supports the similar nature shared by GCs and field stars. Although the sample used in this study (the Hubble Space Telescope Advanced Camera for Surveys/Wide Field Channel, WFPC2, and WFC3 photometry for the GC systems in the Virgo galaxy cluster) confines our
18. Photometric studies of globular clusters in the Andromeda Nebula. Luminosity function for old globular clusters
International Nuclear Information System (INIS)
Sharov, A.S.; Lyutyj, V.M.
1989-01-01
The luminosity function for old globular clusters in M 31 is presented. The objects were selected according to their structural and photometric properties. At the usually accepted normal (Gaussian) distribution, the luminosity function is characterized by the following parameters: the mean magnitude, corrected for the extinction inside M 31, V-bar 0 =16 m ,38±0 m .08, and the absolute magnitude M-bar v =-8 m .29 assuming )m-M) v =23 m .67, standard deviation σ M v =1 m .16±0 m .08 and total object number N=300±17. Old globular clusters in M 31 are in the average about one magnitude more luminous then those in our Galaxy (M v ≅ -7 m .3). Intrinsic luminosity dispersions of globular clusters are nearly the same in both galaxies. Available data on globular clusters in the Local Group galaxies against the universality of globular luminosity function with identical parameters M v and σ M v
19. YOUNG STARS IN AN OLD BULGE: A NATURAL OUTCOME OF INTERNAL EVOLUTION IN THE MILKY WAY
International Nuclear Information System (INIS)
Ness, M.; Debattista, Victor P.; Cole, D. R.; Bensby, T.; Feltzing, S.; Roškar, R.; Johnson, J. A.; Freeman, K.
2014-01-01
The center of our disk galaxy, the Milky Way, is dominated by a boxy/peanut-shaped bulge. Numerous studies of the bulge based on stellar photometry have concluded that the bulge stars are exclusively old. The perceived lack of young stars in the bulge strongly constrains its likely formation scenarios, providing evidence that the bulge is a unique population that formed early and separately from the disk. However, recent studies of individual bulge stars using the microlensing technique have reported that they span a range of ages, emphasizing that the bulge may not be a monolithic structure. In this Letter we demonstrate that the presence of young stars that are located predominantly nearer to the plane is expected for a bulge that has formed from the disk via dynamical instabilities. Using an N-body+ smoothed particle hydrodynamics simulation of a disk galaxy forming out of gas cooling inside a dark matter halo and forming stars, we find a qualitative agreement between our model and the observations of younger metal-rich stars in the bulge. We are also able to partially resolve the apparent contradiction in the literature between results that argue for a purely old bulge population and those that show a population comprised of a range in ages; the key is where to look
20. Infrared dust emission from globular clusters
International Nuclear Information System (INIS)
Angeletti, L.; Capuzzo-Dolcetta, R.; Giannone, P.; Blanco, A.; Bussoletti, E.
1982-01-01
The implications of the presence of a central cloud in the cores of globular clusters were investigated recently. A possible mechanism of confinement of dust in the central region of our cluster models was also explored. The grain temperature and infrared emission have now been computed for rather realistic grain compositions. The grain components were assumed to be graphite and/or silicates. The central clouds turned out to be roughly isothermal. The wavelengths of maximum emission came out to be larger than 20 μm in all studied cases. An application of the theoretical results to five globular clusters showed that the predictable infrared emission for 47 Tuc, M4 and M22 should be detectable by means of present instrumentation aboard flying platforms. (author)
1. Infrared dust emission from globular clusters
Energy Technology Data Exchange (ETDEWEB)
Angeletti, L; Capuzzo-Dolcetta, R; Giannone, P. (Rome Univ. (Italy). Osservatorio Astronomico); Blanco, A; Bussoletti, E [Lecce Univ. (Italy). Ist. di Fisica
1982-05-01
The implications of the presence of a central cloud in the cores of globular clusters were investigated recently. A possible mechanism of confinement of dust in the central region of our cluster models was also explored. The grain temperature and infrared emission have now been computed for rather realistic grain compositions. The grain components were assumed to be graphite and/or silicates. The central clouds turned out to be roughly isothermal. The wavelengths of maximum emission came out to be larger than 20 ..mu..m in all studied cases. An application of the theoretical results to five globular clusters showed that the predictable infrared emission for 47 Tuc, M4 and M22 should be detectable by means of present instrumentation aboard flying platforms.
2. Chemical Abundances of Giants in Globular Clusters
Science.gov (United States)
Gratton, Raffaele G.; Bragaglia, Angela; Carretta, Eugenio; D'Orazi, Valentina; Lucatello, Sara
A large fraction of stars form in clusters. According to a widespread paradigma, stellar clusters are prototypes of single stellar populations. According to this concept, they formed on a very short time scale, and all their stars share the same chemical composition. Recently it has been understood that massive stellar clusters (the globular clusters) rather host various stellar populations, characterized by different chemical composition: these stellar populations have also slightly different ages, stars of the second generations being formed from the ejecta of part of those of an earlier one. Furthermore, it is becoming clear that the efficiency of the process is quite low: many more stars formed within this process than currently present in the clusters. This implies that a significant, perhaps even dominant fraction of the ancient population of galaxies formed within the episodes that lead to formation the globular clusters.
3. Pyroelectricity in globular protein lysozyme films
Science.gov (United States)
Stapleton, A.; Noor, M. R.; Haq, E. U.; Silien, C.; Soulimane, T.; Tofail, S. A. M.
2018-03-01
Pyroelectricity is the ability of certain non-centrosymmetric materials to generate an electric charge in response to a change in temperature and finds use in a range of applications from burglar alarms to thermal imaging. Some biological materials also exhibit pyroelectricity but the examples of the effect are limited to fibrous proteins, polypeptides, and tissues and organs of animals and plants. Here, we report pyroelectricity in polycrystalline aggregate films of lysozyme, a globular protein.
4. Statistical interior properties of globular proteins
International Nuclear Information System (INIS)
Zhou-Ting, Jiang; Tai-Quan, Wu; Lin-Xi, Zhang; Ting-Ting, Sun
2009-01-01
The character of forming long-range contacts affects the three-dimensional structure of globular proteins deeply. As the different ability to form long-range contacts between 20 types of amino acids and 4 categories of globular proteins, the statistical properties are thoroughly discussed in this paper. Two parameters N C and N D are defined to confine the valid residues in detail. The relationship between hydrophobicity scales and valid residue percentage of each amino acid is given in the present work and the linear functions are shown in our statistical results. It is concluded that the hydrophobicity scale defined by chemical derivatives of the amino acids and nonpolar phase of large unilamellar vesicle membranes is the most effective technique to characterise the hydrophobic behavior of amino acid residues. Meanwhile, residue percentage P i and sequential residue length L i of a certain protein i are calculated under different conditions. The statistical results show that the average value of P i as well as L i of all-α proteins has a minimum among these 4 classes of globular proteins, indicating that all-α proteins are hardly capable of forming long-range contacts one by one along their linear amino acid sequences. All-β proteins have a higher tendency to construct long-range contacts along their primary sequences related to the secondary configurations, i.e. parallel and anti-parallel configurations of β sheets. The investigation of the interior properties of globular proteins give us the connection between the three-dimensional structure and its primary sequence data or secondary configurations, and help us to understand the structure of protein and its folding process well. (cross-disciplinary physics and related areas of science and technology)
5. Close stellar encounters in globular clusters
International Nuclear Information System (INIS)
Bailyn, C.D.
1989-01-01
Stellar encounters are expected to produce a variety of interesting objects in the cores of globular clusters, either through the formation of binaries by tidal capture, or direct collisions. Here, I describe several attempts to observe the products of stellar encounters. In particular, the use of color maps has demonstrated the existence of a color gradient in the core of M15, which seems to be caused by a population of faint blue objects concentrated towards the cluster center. (author)
6. Exploring the Internal Dynamics of Globular Clusters
Science.gov (United States)
Watkins, Laura L.; van der Marel, Roeland; Bellini, Andrea; Luetzgendorf, Nora; HSTPROMO Collaboration
2018-01-01
Exploring the Internal Dynamics of Globular ClustersThe formation histories and structural properties of globular clusters are imprinted on their internal dynamics. Energy equipartition results in velocity differences for stars of different mass, and leads to mass segregation, which results in different spatial distributions for stars of different mass. Intermediate-mass black holes significantly increase the velocity dispersions at the centres of clusters. By combining accurate measurements of their internal kinematics with state-of-the-art dynamical models, we can characterise both the velocity dispersion and mass profiles of clusters, tease apart the different effects, and understand how clusters may have formed and evolved.Using proper motions from the Hubble Space Telescope Proper Motion (HSTPROMO) Collaboration for a set of 22 Milky Way globular clusters, and our discrete dynamical modelling techniques designed to work with large, high-quality datasets, we are studying a variety of internal cluster properties. We will present the results of theoretical work on simulated clusters that demonstrates the efficacy of our approach, and preliminary results from application to real clusters.
7. WASP-36b: A NEW TRANSITING PLANET AROUND A METAL-POOR G-DWARF, AND AN INVESTIGATION INTO ANALYSES BASED ON A SINGLE TRANSIT LIGHT CURVE
Energy Technology Data Exchange (ETDEWEB)
Smith, A. M. S.; Anderson, D. R.; Hellier, C.; Maxted, P. F. L.; Smalley, B.; Southworth, J. [Astrophysics Group, Keele University, Staffordshire, ST5 5BG (United Kingdom); Collier Cameron, A. [SUPA, School of Physics and Astronomy, University of St Andrews, North Haugh, Fife, KY16 9SS (United Kingdom); Gillon, M.; Jehin, E. [Institut d' Astrophysique et de Geophysique, Universite de Liege, Allee du 6 Aout, 17 Bat. B5C, Liege 1 (Belgium); Lendl, M.; Queloz, D.; Triaud, A. H. M. J.; Pepe, F.; Segransan, D.; Udry, S. [Observatoire de Geneve, Universite de Geneve, 51 Chemin des Maillettes, 1290 Sauverny (Switzerland); West, R. G. [Department of Physics and Astronomy, University of Leicester, Leicester, LE1 7RH (United Kingdom); Barros, S. C. C.; Pollacco, D. [Astrophysics Research Centre, School of Mathematics and Physics, Queen' s University, University Road, Belfast, BT7 1NN (United Kingdom); Street, R. A., E-mail: [email protected] [Las Cumbres Observatory, 6740 Cortona Drive Suite 102, Goleta, CA 93117 (United States)
2012-04-15
We report the discovery, from WASP and CORALIE, of a transiting exoplanet in a 1.54 day orbit. The host star, WASP-36, is a magnitude V = 12.7, metal-poor G2 dwarf (T{sub eff} = 5959 {+-} 134 K), with [Fe/H] =-0.26 {+-} 0.10. We determine the planet to have mass and radius, respectively, 2.30 {+-} 0.07 and 1.28 {+-} 0.03 times that of Jupiter. We have eight partial or complete transit light curves, from four different observatories, which allow us to investigate the potential effects on the fitted system parameters of using only a single light curve. We find that the solutions obtained by analyzing each of these light curves independently are consistent with our global fit to all the data, despite the apparent presence of correlated noise in at least two of the light curves.
8. METAL-POOR STARS OBSERVED WITH THE MAGELLAN TELESCOPE. I. CONSTRAINTS ON PROGENITOR MASS AND METALLICITY OF AGB STARS UNDERGOING s-PROCESS NUCLEOSYNTHESIS
Energy Technology Data Exchange (ETDEWEB)
Placco, Vinicius M.; Rossi, Silvia [Departamento de Astronomia-Instituto de Astronomia, Geofisica e Ciencias Atmosfericas, Universidade de Sao Paulo, Sao Paulo, SP 05508-900 (Brazil); Frebel, Anna [Massachusetts Institute of Technology and Kavli Institute for Astrophysics and Space Research, 77 Massachusetts Avenue, Cambridge, MA 02139 (United States); Beers, Timothy C. [National Optical Astronomy Observatory, Tucson, AZ 85719 (United States); Karakas, Amanda I.; Kennedy, Catherine R. [Research School of Astronomy and Astrophysics, The Australian National University, Cotter Road, Weston, ACT 2611 (Australia); Christlieb, Norbert [Zentrum fuer Astronomie der Universitaet Heidelberg, Landessternwarte, Koenigstuhl 12, D-69117 Heidelberg (Germany); Stancliffe, Richard J. [Argelander-Institut fuer Astronomie der Universitaet Bonn, Auf dem Huegel 71, D-53121 Bonn (Germany)
2013-06-20
We present a comprehensive abundance analysis of two newly discovered carbon-enhanced metal-poor (CEMP) stars. HE 2138-3336 is a s-process-rich star with [Fe/H] = -2.79, and has the highest [Pb/Fe] abundance ratio measured thus far, if non-local thermodynamic equilibrium corrections are included ([Pb/Fe] = +3.84). HE 2258-6358, with [Fe/H] = -2.67, exhibits enrichments in both s- and r-process elements. These stars were selected from a sample of candidate metal-poor stars from the Hamburg/ESO objective-prism survey, and followed up with medium-resolution (R {approx} 2000) spectroscopy with GEMINI/GMOS. We report here on derived abundances (or limits) for a total of 34 elements in each star, based on high-resolution (R {approx} 30, 000) spectroscopy obtained with Magellan-Clay/MIKE. Our results are compared to predictions from new theoretical asymptotic giant branch (AGB) nucleosynthesis models of 1.3 M{sub Sun} with [Fe/H] = -2.5 and -2.8, as well as to a set of AGB models of 1.0 to 6.0 M{sub Sun} at [Fe/H] = -2.3. The agreement with the model predictions suggests that the neutron-capture material in HE 2138-3336 originated from mass transfer from a binary companion star that previously went through the AGB phase, whereas for HE 2258-6358, an additional process has to be taken into account to explain its abundance pattern. We find that a narrow range of progenitor masses (1.0 {<=} M(M{sub Sun }) {<=} 1.3) and metallicities (-2.8 {<=} [Fe/H] {<=}-2.5) yield the best agreement with our observed elemental abundance patterns.
9. On the necessity of composition-dependent low-temperature opacity in models of metal-poor asymptotic giant branch stars
Energy Technology Data Exchange (ETDEWEB)
Constantino, Thomas; Campbell, Simon; Lattanzio, John [Monash Centre for Astrophysics, School of Mathematical Sciences, Monash University, Victoria 3800 (Australia); Gil-Pons, Pilar, E-mail: [email protected] [Department of Applied Physics, Polytechnic University of Catalonia, 08860 Barcelona (Spain)
2014-03-20
The vital importance of composition-dependent low-temperature opacity in low-mass (M ≤ 3 M {sub ☉}) asymptotic giant branch (AGB) stellar models of metallicity Z ≥ 0.001 has recently been demonstrated. Its significance to more metal-poor, intermediate-mass (M ≥ 2.5 M {sub ☉}) models has yet to be investigated. We show that its inclusion in lower-metallicity models ([Fe/H] ≤–2) is essential and that there exists no threshold metallicity below which composition-dependent molecular opacity may be neglected. We find it to be crucial in all intermediate-mass models investigated ([Fe/H] ≤–2 and 2.5 ≤ M/M {sub ☉} ≤ 5), because of the evolution of the surface chemistry, including the orders of magnitude increase in the abundance of molecule-forming species. Its effect on these models mirrors that previously reported for higher-metallicity models—increase in radius, decrease in T {sub eff}, faster mass loss, shorter thermally pulsing AGB lifetime, reduced enrichment in third dredge-up products (by a factor of 3-10), and an increase in the mass limit for hot bottom burning. We show that the evolution of low-metallicity models with composition-dependent low-temperature opacity is relatively independent of initial metal abundance because its contribution to the opacity is far outweighed by changes resulting from dredge-up. Our results imply a significant reduction in the expected number of nitrogen-enhanced metal-poor stars, which may help explain their observed paucity. We note that these findings are partially a product of the macrophysics adopted in our models, in particular, the Vassiliadis and Wood mass loss rate which is strongly dependent on radius.
10. Metal-Poor, Strongly Star-Forming Galaxies in the DEEP2 Survey: The Relationship Between Stellar Mass, Temperature-Based Metallicity, and Star Formation Rate
Science.gov (United States)
Ly, Chun; Rigby, Jane R.; Cooper, Michael; Yan, Renbin
2015-01-01
We report on the discovery of 28 redshift (z) approximately equal to 0.8 metal-poor galaxies in DEEP2. These galaxies were selected for their detection of the weak [O (sub III)] lambda 4363 emission line, which provides a "direct" measure of the gas-phase metallicity. A primary goal for identifying these rare galaxies is to examine whether the fundamental metallicity relation (FMR) between stellar mass, gas metallicity, and star formation rate (SFR) holds for low stellar mass and high SFR galaxies. The FMR suggests that higher SFR galaxies have lower metallicity (at fixed stellar mass). To test this trend, we combine spectroscopic measurements of metallicity and dust-corrected SFR with stellar mass estimates from modeling the optical photometry. We find that these galaxies are 1.05 plus or minus 0.61 dex above the redshift (z) approximately 1 stellar mass-SFR relation and 0.23 plus or minus 0.23 dex below the local mass-metallicity relation. Relative to the FMR, the latter offset is reduced to 0.01 dex, but significant dispersion remains dex with 0.16 dex due to measurement uncertainties). This dispersion suggests that gas accretion, star formation, and chemical enrichment have not reached equilibrium in these galaxies. This is evident by their short stellar mass doubling timescale of approximately equal to 100 (sup plus 310) (sub minus 75) million years which suggests stochastic star formation. Combining our sample with other redshift (z) of approximately 1 metal-poor galaxies, we find a weak positive SFR-metallicity dependence (at fixed stellar mass) that is significant at 94.4 percent confidence. We interpret this positive correlation as recent star formation that has enriched the gas but has not had time to drive the metal-enriched gas out with feedback mechanisms.
11. Pregalactic formation of globular clusters in cold dark matter
International Nuclear Information System (INIS)
Faber, S.M.; Blumenthal, G.R.; Rosenblatt, E.I.
1988-01-01
The pregalactic hypothesis for the formation of globular clusters is reconsidered in the light of Zinn's (1985) discovery of a two-component globular population in the Milky Way. For a cold dark matter spectrum, high-sigma fluctuations of 10 to the 5th - 10 to the 6th solar masses are assumed to be the progenitors of the spheroidal population of globular clusters. The mass fraction of globular clusters in galaxies then requires that perturbations above roughly 2.8 sigma survive as globulars, and their observed radii require baryonic collapse factors of order 10. Such an absolute density threshold for globular cluster formation achieves adequate fits to observed cluster radii and densities, the mass fraction of globulars versus Hubble type, the radial density profile of globulars within galaxies, and the globular luminosity function. However, a fixed density threshold criterion for cluster survival lacks convincing physical justification and does not by itself explain the homogeneous metallicities within clusters or the large metallicity variations from cluster to cluster and from galaxy to galaxy. 33 references
12. CVs and millisecond pulsar progenitors in globular clusters
Science.gov (United States)
Grindlay, J. E.; Cool, A. M.; Bailyn, C. D.
1991-01-01
The recent discovery of a large population of millisecond pulsars in globular clusters, together with earlier studies of both low luminosity X-ray sources and LMXBs in globulars, suggest there should be significant numbers of CVs in globulars. Although they have been searched for without success in selected cluster X-ray source fields, systematic surveys are lacking and would constrain binary production and both stellar and dynamical evolution in globular clusters. We describe the beginnings of such a search, using narrow band H-alpha imaging, and the sensitivities it might achieve.
13. Bulging of pressure tubes at hot spots under LOCA conditions
International Nuclear Information System (INIS)
Manu, C.; Shewfelt, R.S.W.; Wright, A.C.D.; Aboud, R.; Lau, J.H.K.; Sanderson, D.B.
1996-01-01
During certain postulated loss-of-coolant accidents (LOCA) in a CANDU reactor, some fuel channels can become highly voided within a very short time. Although the pressure tubes are heated mainly by convection and thermal radiation during the LOCA transient, additional heat flow occurs through the bearing pads that are in contact with the pressure tribe. This contact can lead to local hot spots and associated thermal stresses in the pressure tube wall. The two factors that affects the behavior of the pressure tubes during LOCA conditions are the internal pressure and the local heating. Although the effect of internal pressure and of axially uniform temperature has been studied elsewhere, the effect of the local heating on the pressure tube behavior has not been modelled before. This paper shows that the bulging of a pressure tube at a hot spot is the result of the thermal stresses that are developed in a pressure tube during a LOCA transient. To isolate the local heating effect from the internal pressure, a series of single-effect experiments was performed. In these experiments, sections of a CANDU pressure tube were subjected to local heating only. The thermal profile and the local deformation were measured function of time. To quantify the effect of the thermal stresses on the bulging of pressure tubes at hot spots and to develop numerical tools that can predict such bulging, finite element analyses were performed rising the ABAQUS finite element computer code. Use of the measured thermal profiles in the ABAQUS finite element analysis, resulted in very good agreement between the predicted and measured displacements. (author)
14. Chemical abundances of giant stars in NGC 5053 and NGC 5634, two globular clusters associated with the Sagittarius dwarf spheroidal galaxy?
Science.gov (United States)
Sbordone, L.; Monaco, L.; Moni Bidin, C.; Bonifacio, P.; Villanova, S.; Bellazzini, M.; Ibata, R.; Chiba, M.; Geisler, D.; Caffau, E.; Duffau, S.
2015-07-01
Context. The tidal disruption of the Sagittarius dwarf spheroidal galaxy (Sgr dSph) is producing the most prominent substructure in the Milky Way (MW) halo, the Sagittarius Stream. Aside from field stars, it is suspected that the Sgr dSph has lost a number of globular clusters (GC). Many Galactic GC are thought to have originated in the Sgr dSph. While for some candidates an origin in the Sgr dSph has been confirmed owing to chemical similarities, others exist whose chemical composition has never been investigated. Aims: NGC 5053 and NGC 5634 are two of these scarcely studied Sgr dSph candidate-member clusters. To characterize their composition we analyzed one giant star in NGC 5053, and two in NGC 5634. Methods: We analyze high-resolution and signal-to-noise spectra by means of the MyGIsFOS code, determining atmospheric parameters and abundances for up to 21 species between O and Eu. The abundances are compared with those of MW halo field stars, of unassociated MW halo globulars, and of the metal-poor Sgr dSph main body population. Results: We derive a metallicity of [Fe ii/H] = -2.26 ± 0.10 for NGC 5053, and of [Fe i/H] = -1.99 ± 0.075 and -1.97 ± 0.076 for the two stars in NGC 5634. This makes NGC 5053 one of the most metal-poor globular clusters in the MW. Both clusters display an α enhancement similar to the one of the halo at comparable metallicity. The two stars in NGC 5634 clearly display the Na-O anticorrelation widespread among MW globulars. Most other abundances are in good agreement with standard MW halo trends. Conclusions: The chemistry of the Sgr dSph main body populations is similar to that of the halo at low metallicity. It is thus difficult to discriminate between an origin of NGC 5053 and NGC 5634 in the Sgr dSph, and one in the MW. However, the abundances of these clusters do appear closer to that of Sgr dSph than of the halo, favoring an origin in the Sgr dSph system. Appendix A is available in electronic form at http
15. The hair follicle bulge: a niche for adult stem cells.
Science.gov (United States)
Pasolli, Hilda Amalia
2011-08-01
Adult stem cells (SCs) are essential for tissue homeostasis and wound repair. They have the ability to both self-renew and differentiate into multiple cell types. They often reside in specialized microenvironments or niches that preserve their proliferative and tissue regenerative capacity. The murine hair follicle (HF) has a specialized and permanent compartment--the bulge, which safely lodges SCs and provides the necessary molecular cues to regulate their function. The HF undergoes cyclic periods of destruction, regeneration, and rest, making it an excellent system to study SC biology.
16. Incidence and Risk Factors for Parastomal Bulging in Patients with Ileostomy or Colostomy
DEFF Research Database (Denmark)
Andersen, Rune M; Klausen, Tobias W; Danielsen, Anne K
2018-01-01
AIM: To investigate incidence and risk factors for parastomal bulging, a clinically important complication, in patients with an ileostomy or colostomy. METHOD: The Danish Stoma Database Capital Region prospectively collects data on patients with a stoma up to a year after surgery. Stoma care nurses...... an exploratory approach. RESULTS: In a study population of 5019, the cumulative incidence (with competing risks) of parastomal bulging was 36.2% at 400 days after surgery. Age, colostomy, male gender, alcohol consumption, and laparoscopy were associated with an increased risk of parastomal bulging. Compared...... for age and colostomy as being risk factors for parastomal bulging. This article is protected by copyright. All rights reserved....
17. A Comparison of Galaxy Bulge+Disk Decomposition Between Pan-STARRS and SDSS
Science.gov (United States)
Lokken, Martine Elena; McPartland, Conor; Sanders, David B.
2018-01-01
Measurements of the size and shape of bulges in galaxies provide key constraints for models of galaxy evolution. A comprehensive catalog of bulge measurements for Sloan Digital Sky Survey (SDSS) DR7 galaxies is currently available to the public. However, the Pan-STARRS1 (PS1) 3π survey now covers the same region with ~1-2 mag deeper photometry, a ~10-30% smaller PSF, and additional coverage in y-band. To test how much improvement in galaxy parameter measurements (e.g. bulge + disk) can be achieved using the new PS1 data, we make use of ultra-deep imaging data from the Hyper Suprime-Cam (HSC) Subaru Strategic Program (SSP). We fit bulge+disk models to images of 372 bright (mi SSP images shows a tighter correlation between PS1 and SSP measurements for both bulge and disk parameters. Bulge parameters, such as bulge-to-total fraction and bulge radius, show the strongest improvement. However, measurements of all parameters degrade for galaxies with total r-band magnitude below the SDSS spectroscopic limit, mr = 17.7. We plan to use the PS1 3π survey data to produce an updated catalog of bulge+disk decomposition measurements for the entire SDSS DR7 spectroscopic galaxy sample.
18. Observational constraints to boxy/peanut bulge formation time
Science.gov (United States)
Pérez, I.; Martínez-Valpuesta, I.; Ruiz-Lara, T.; de Lorenzo-Caceres, A.; Falcón-Barroso, J.; Florido, E.; González Delgado, R. M.; Lyubenova, M.; Marino, R. A.; Sánchez, S. F.; Sánchez-Blázquez, P.; van de Ven, G.; Zurita, A.
2017-09-01
Boxy/peanut bulges are considered to be part of the same stellar structure as bars and both could be linked through the buckling instability. The Milky Way is our closest example. The goal of this Letter is to determine if the mass assembly of the different components leaves an imprint in their stellar populations allowing the estimation the time of bar formation and its evolution. To this aim, we use integral field spectroscopy to derive the stellar age distributions, SADs, along the bar and disc of NGC 6032. The analysis clearly shows different SADs for the different bar areas. There is an underlying old (≥12 Gyr) stellar population for the whole galaxy. The bulge shows star formation happening at all times. The inner bar structure shows stars of ages older than 6 Gyr with a deficit of younger populations. The outer bar region presents an SAD similar to that of the disc. To interpret our results, we use a generic numerical simulation of a barred galaxy. Thus, we constrain, for the first time, the epoch of bar formation, the buckling instability period and the posterior growth from disc material. We establish that the bar of NGC 6032 is old, formed around 10 Gyr ago while the buckling phase possibly happened around 8 Gyr ago. All these results point towards bars being long-lasting even in the presence of gas.
19. HUBBLE PINPOINTS WHITE DWARFS IN GLOBULAR CLUSTER
Science.gov (United States)
2002-01-01
Peering deep inside a cluster of several hundred thousand stars, NASA's Hubble Space Telescope uncovered the oldest burned-out stars in our Milky Way Galaxy. Located in the globular cluster M4, these small, dying stars - called white dwarfs - are giving astronomers a fresh reading on one of the biggest questions in astronomy: How old is the universe? The ancient white dwarfs in M4 are about 12 to 13 billion years old. After accounting for the time it took the cluster to form after the big bang, astronomers found that the age of the white dwarfs agrees with previous estimates for the universe's age. In the top panel, a ground-based observatory snapped a panoramic view of the entire cluster, which contains several hundred thousand stars within a volume of 10 to 30 light-years across. The Kitt Peak National Observatory's 0.9-meter telescope took this picture in March 1995. The box at left indicates the region observed by the Hubble telescope. The Hubble telescope studied a small region of the cluster. A section of that region is seen in the picture at bottom left. A sampling of an even smaller region is shown at bottom right. This region is only about one light-year across. In this smaller region, Hubble pinpointed a number of faint white dwarfs. The blue circles pinpoint the dwarfs. It took nearly eight days of exposure time over a 67-day period to find these extremely faint stars. Globular clusters are among the oldest clusters of stars in the universe. The faintest and coolest white dwarfs within globular clusters can yield a globular cluster's age. Earlier Hubble observations showed that the first stars formed less than 1 billion years after the universe's birth in the big bang. So, finding the oldest stars puts astronomers within arm's reach of the universe's age. M4 is 7,000 light-years away in the constellation Scorpius. Hubble's Wide Field and Planetary Camera 2 made the observations from January through April 2001. These optical observations were combined to
20. Synthetic properties of models of globular clusters
Energy Technology Data Exchange (ETDEWEB)
Angeletti, L; Dolcetta, R; Giannone, P. (Rome Univ. (Italy). Osservatorio Astronomico)
1980-05-01
Synthetic and projected properties of models of globular clusters have been computed on the basis of stellar evolution and time changes of the dynamical cluster structure. Clusters with five and eight stellar groups (each group consisting of stars with the same mass) were studied. Mass loss from evolved stars was taken into account. Observational features were obtained at ages of 10-19 x 10/sup 9/ yr. The basic importance of the horizontal- and asymptotic-branch stars was pointed out. A comparison of the results with observed data of M3 is discussed with the purpose of obtaining general indications rather than a specific fit.
1. Synthetic properties of models of globular clusters
International Nuclear Information System (INIS)
Angeletti, L.; Dolcetta, R.; Giannone, P.
1980-01-01
Synthetic and projected properties of models of globular clusters have been computed on the basis of stellar evolution and time changes of the dynamical cluster structure. Clusters with five and eight stellar groups (each group consisting of stars with the same mass) were studied. Mass loss from evolved stars was taken into account. Observational features were obtained at ages of 10-19 x 10 9 yr. The basic importance of the horizontal- and asymptotic-branch stars was pointed out. A comparison of the results with observed data of M3 is discussed with the purpose of obtaining general indications rather than a specific fit. (orig.)
2. Deep radio synthesis images of globular clusters
International Nuclear Information System (INIS)
Kulkarni, S.R.; Goss, W.M.; Wolszczan, A.; Middleditch, J.
1990-01-01
Results are reported from a program of high-resolution and high-sensitivity imaging of globular clusters at 20 cm. The findings indicate that there is not a large number of pulsars in compact binaries which have escaped detection in single-dish pulse searches. Such binaries have been postulated to result from tidal captures of single main-sequence stars. It is suggested that most tidal captures involving neutron stars ultimately result in the formation of a spun-up single pulsar and the complete disruption of the main-sequence star. 27 refs
3. Near infrared photometry of globular clusters
International Nuclear Information System (INIS)
Evans, T.L.; Menzies, J.W.
1977-01-01
Photographic photometry on the V, Isub(K) system has been obtained for giant stars in the metal-rich globular clusters NGC 5927, 6171, 6352, 6356, 6388, 6522, 6528, 6712 and 6723. Colour-magnitude diagrams are presented. These data, with earlier observations of NGC 104 (47 Tuc), yield new parameters to describe the giant branch. These are the colour of the red variables, represented by their mean colour (V - Isub(K)) 0 or by the colour (V - Isub(K))sub(BO) of the bluest red variable on the giant branch of a cluster, and ΔV' which is the magnitude difference between the horizontal branch and the highest point on the giant branch. The latter is independent of reddening, since the giant branch of the most metal-rich clusters passes through a maximum in the V, V - Isub(K) plane. These parameters are correlated with the metal content, deduced from integrated photometry: the red variables are redder and the giant branch fainter the higher the metal content. Comparison with theoretical evolutionary tracks suggests that the range in metal content of these clusters is at most a factor of 10, the most metal-rich clusters possibly approaching the solar value. The cluster giant branches and those of open clusters, groups and field stars of the old disk population are compared. The assumption that all the globular clusters have an absolute magnitude on the horizontal branch of Msub(v) = + 0.9, as found recently for 47 Tuc, gives good agreement between the magnitudes of giant stars in the most metal rich of the globular clusters and those of field stars deduced from statistical parallaxes and moving group parallaxes. The values of the parameters ΔV' and (V - Isub(k))sub(BO) also approach those in the moving groups. The globular clusters have a longer horizontal branch, however, and the subgiants are bluer even when the values of ) 7Fe/H{ appear to be the same. (author). )
4. Why are classical bulges more common in S0 galaxies than in spiral galaxies?
Science.gov (United States)
2018-05-01
In this paper, we try to understand why the classical bulge fraction observed in S0 galaxies is significantly higher than that in spiral galaxies. We carry out a comparative study of the bulge and global properties of a sample of spiral and S0 galaxies in a fixed environment. Our sample is flux limited and contains 262 spiral and 155 S0 galaxies drawn from the Sloan Digital Sky Survey. We have classified bulges into classical and pseudobulge categories based on their position on the Kormendy diagram. Dividing our sample into bins of galaxy stellar mass, we find that the fraction of S0 galaxies hosting a classical bulge is significantly higher than the classical bulge fraction seen in spirals even at fixed stellar mass. We have compared the bulge and the global properties of spirals and S0 galaxies in our sample and find indications that spiral galaxies which host a classical bulge, preferentially get converted into S0 population as compared to pseudobulge hosting spirals. By studying the star formation properties of our galaxies in the NUV - r color-mass diagram, we find that the pseudobulge hosting spirals are mostly star forming while the majority of classical bulge host spirals are in the green valley or in the passive sequence. We suggest that some internal process, such as AGN feedback or morphological quenching due to the massive bulge, quenches these classical bulge hosting spirals and transforms them into S0 galaxies, thus resulting in the observed predominance of the classical bulge in S0 galaxies.
5. THE PRODUCTION RATE OF SN Ia EVENTS IN GLOBULAR CLUSTERS
International Nuclear Information System (INIS)
Washabaugh, Pearce C.; Bregman, Joel N.
2013-01-01
In globular clusters, dynamical evolution produces luminous X-ray emitting binaries at a rate about 200 times greater than in the field. If globular clusters also produce SN Ia at a high rate, it would account for many of the SN Ia production in early-type galaxies and provide insight into their formation. Here we use archival Hubble Space Telescope (HST) images of nearby galaxies that have hosted an SN Ia to examine the rate at which globular clusters produce these events. The location of the SN Ia is registered on an HST image obtained before the event or after the supernova (SN) faded. Of the 36 nearby galaxies examined, 21 had sufficiently good data to search for globular cluster hosts. None of the 21 SNe have a definite globular cluster counterpart, although there are some ambiguous cases. This places an upper limit to the enhancement rate of SN Ia production in globular clusters of about 42 at the 95% confidence level, which is an order of magnitude lower than the enhancement rate for luminous X-ray binaries. Even if all of the ambiguous cases are considered as having a globular cluster counterpart, the upper bound for the enhancement rate is 82 at the 95% confidence level, still a factor of several below that needed to account for half of the SN Ia events. Barring unforeseen selection effects, we conclude that globular clusters are not responsible for producing a significant fraction of the SN Ia events in early-type galaxies.
6. GLOBULAR CLUSTERS AND SPUR CLUSTERS IN NGC 4921, THE BRIGHTEST SPIRAL GALAXY IN THE COMA CLUSTER
Energy Technology Data Exchange (ETDEWEB)
Lee, Myung Gyoon; Jang, In Sung, E-mail: [email protected], E-mail: [email protected] [Astronomy Program, Department of Physics and Astronomy, Seoul National University, Gwanak-gu, Seoul 151-742 (Korea, Republic of)
2016-03-01
We resolve a significant fraction of globular clusters (GCs) in NGC 4921, the brightest spiral galaxy in the Coma cluster. We also find a number of extended bright star clusters (star complexes) in the spur region of the arms. The latter are much brighter and bluer than those in the normal star-forming region, being as massive as 3 × 10{sup 5} M{sub ⊙}. The color distribution of the GCs in this galaxy is found to be bimodal. The turnover magnitudes of the luminosity functions of the blue (metal-poor) GCs (0.70 < (V − I) ≤ 1.05) in the halo are estimated V(max) = 27.11 ± 0.09 mag and I(max) = 26.21 ± 0.11 mag. We obtain similar values for NGC 4923, a companion S0 galaxy, and two Coma cD galaxies (NGC 4874 and NGC 4889). The mean value for the turnover magnitudes of these four galaxies is I(max) = 26.25 ± 0.03 mag. Adopting M{sub I} (max) = −8.56 ± 0.09 mag for the metal-poor GCs, we determine the mean distance to the four Coma galaxies to be 91 ± 4 Mpc. Combining this with the Coma radial velocity, we derive a value of the Hubble constant, H{sub 0} = 77.9 ± 3.6 km s{sup −1} Mpc{sup −1}. We estimate the GC specific frequency of NGC 4921 to be S{sub N} = 1.29 ± 0.25, close to the values for early-type galaxies. This indicates that NGC 4921 is in the transition phase to S0s.
7. Chemical study of the metal-rich globular cluster NGC 5927
Science.gov (United States)
Mura-Guzmán, A.; Villanova, S.; Muñoz, C.; Tang, B.
2018-03-01
Globular clusters (GCs) are natural laboratories where stellar and chemical evolution can be studied in detail. In addition, their chemical patterns and kinematics can tell us to which Galactic structure (disc, bulge, halo or extragalactic) the cluster belongs to. NGC 5927 is one of most metal-rich GCs in the Galaxy and its kinematics links it to the thick disc. We present abundance analysis based on high-resolution spectra of seven giant stars. The data were obtained using Fibre Large Array Multi Element Spectrograph/Ultraviolet Echelle Spectrograph (UVES) spectrograph mounted on UT2 telescope of the European Southern Observatory. The principal objective of this work is to perform a wide and detailed chemical abundance analysis of the cluster and look for possible Multiple Populations (MPs). We determined stellar parameters and measured 22 elements corresponding to light (Na, Al), alpha (O, Mg, Si, Ca, Ti), iron-peak (Sc, V, Cr, Mn, Fe, Co, Ni, Cu, Zn), and heavy elements (Y, Zr, Ba, Ce, Nd, Eu). We found a mean iron content of [Fe/H] = -0.47 ± 0.02 (error on the mean). We confirm the existence of MPs in this GC with an O-Na anti-correlation, and moderate spread in Al abundances. We estimate a mean [α/Fe] = 0.25 ± 0.08. Iron-peak elements show no significant spread. The [Ba/Eu] ratios indicate a predominant contribution from SNeII for the formation of the cluster.
8. The Diverse Origins of Neutron-capture Elements in the Metal-poor Star HD 94028: Possible Detection of Products of I-Process Nucleosynthesis
Science.gov (United States)
Roederer, Ian U.; Karakas, Amanda I.; Pignatari, Marco; Herwig, Falk
2016-04-01
We present a detailed analysis of the composition and nucleosynthetic origins of the heavy elements in the metal-poor ([Fe/H] = -1.62 ± 0.09) star HD 94028. Previous studies revealed that this star is mildly enhanced in elements produced by the slow neutron-capture process (s process; e.g., [Pb/Fe] = +0.79 ± 0.32) and rapid neutron-capture process (r process; e.g., [Eu/Fe] = +0.22 ± 0.12), including unusually large molybdenum ([Mo/Fe] = +0.97 ± 0.16) and ruthenium ([Ru/Fe] = +0.69 ± 0.17) enhancements. However, this star is not enhanced in carbon ([C/Fe] = -0.06 ± 0.19). We analyze an archival near-ultraviolet spectrum of HD 94028, collected using the Space Telescope Imaging Spectrograph on board the Hubble Space Telescope, and other archival optical spectra collected from ground-based telescopes. We report abundances or upper limits derived from 64 species of 56 elements. We compare these observations with s-process yields from low-metallicity AGB evolution and nucleosynthesis models. No combination of s- and r-process patterns can adequately reproduce the observed abundances, including the super-solar [As/Ge] ratio (+0.99 ± 0.23) and the enhanced [Mo/Fe] and [Ru/Fe] ratios. We can fit these features when including an additional contribution from the intermediate neutron-capture process (I process), which perhaps operated through the ingestion of H in He-burning convective regions in massive stars, super-AGB stars, or low-mass AGB stars. Currently, only the I process appears capable of consistently producing the super-solar [As/Ge] ratios and ratios among neighboring heavy elements found in HD 94028. Other metal-poor stars also show enhanced [As/Ge] ratios, hinting that operation of the I process may have been common in the early Galaxy. These data are associated with Program 072.B-0585(A), PI. Silva. Some data presented in this paper were obtained from the Barbara A. Mikulski Archive for Space Telescopes (MAST). The Space Telescope Science Institute is
9. Edades relativas de cúmulos globulares
Science.gov (United States)
Miller Bertolami, M.; Forte, J. C.
10. X-ray bursters and the X-ray sources of the galactic bulge
International Nuclear Information System (INIS)
Lewin, W.H.G.; Joss, P.C.; Massachusetts Inst. of Tech., Cambridge; Massachusetts Inst. of Tech., Cambridge
1981-01-01
In this article we shall discuss the observed X-ray, optical, infrared and radio properties of the galactic bulge sources, with an emphasis on those that produce type I X-ray bursts. There is persuasive evidence that these burst sources and many other galactic bulge sources are neutron stars in low-mass, close-binary stellar systems. (orig./WL)
11. The INTEGRAL Galactic bulge monitoring program: the first 1.5 years
NARCIS (Netherlands)
Kuulkers, E.; Shaw, S.E.; Paizis, A.; Chenevez, J.; Brandt, S.; Courvoisier, T.J.L.; Domingo, A.; Ebisawa, K.; Kretschmar, P.; Markwardt, C.B.; Mowlavi, N.; Oosterbroek, T.; Orr, A.; Rísquez, D.; Sanchez-Fernandez, C.; Wijnands, R.
2007-01-01
Aims.The Galactic bulge region is a rich host of variable high-energy point sources. Since 2005, February 17 we are monitoring the source activity in the Galactic bulge region regularly and frequently, i.e., about every three days, with the instruments onboard INTEGRAL. Thanks to the large field of
12. Chemical abundances and dust in planetary nebulae in the Galactic bulge
NARCIS (Netherlands)
Gutenkunst, S.; Bernard-Salas, J.; Pottasch, S. R.; Sloan, G. C.; Houck, J. R.
2008-01-01
We present mid-infrared Spitzer spectra of 11 planetary nebulae in the Galactic bulge. We derive argon, neon, sulfur, and oxygen abundances for them using mainly infrared line fluxes combined with some optical line fluxes from the literature. Due to the high extinction toward the bulge, the infrared
13. No bulging of floor heating pipes to be expected in case of incomplete floor plastering
Energy Technology Data Exchange (ETDEWEB)
1983-02-01
According to advertising slogans floor heating pipes are said to be damaged prematurely by bulges if they are not completely surrounded by flooring plaster. The author has thoroughly dealt with this problem and made the respective measurements. He found out that there are so few bulges occurring that they cannot lead to damages.
14. The chemical evolution of the Galactic Bulge seen through micro-lensing events
Directory of Open Access Journals (Sweden)
Lucatello S.
2012-02-01
Full Text Available Galactic bulges are central to understanding galaxy formation and evolution. Here we report on recent studies using micro-lensing events to obtain spectra of high resolution and moderately high signal-to-noise ratios of dwarf stars in the Galactic bulge. Normally this is not feasible for the faint turn-off stars in the Galactic bulge, but micro-lensing offers this possibility. Elemental abundance trends in the Galactic bulge as traced by dwarf stars are very similar to those seen for dwarf stars in the solar neighbourhood. We discuss the implications of the ages and metallicity distribution function derived for the micro-lensed dwarf stars in the Galactic bulge.
15. Numerical Simulation of Bulging Deformation for Wide-Thick Slab Under Uneven Cooling Conditions
Science.gov (United States)
Wu, Chenhui; Ji, Cheng; Zhu, Miaoyong
2018-02-01
In the present work, the bulging deformation of a wide-thick slab under uneven cooling conditions was studied using finite element method. The non-uniform solidification was first calculated using a 2D heat transfer model. The thermal material properties were derived based on a microsegregation model, and the water flux distribution was measured and applied to calculate the cooling boundary conditions. Based on the solidification results, a 3D bulging model was established. The 2D heat transfer model was verified by the measured shell thickness and the slab surface temperature, and the 3D bulging model was verified by the calculated maximum bulging deflections using formulas. The bulging deformation behavior of the wide-thick slab under uneven cooling condition was then determined, and the effect of uneven solidification, casting speed, and roll misalignment were investigated.
16. Formation of the Lunar Fossil Bulges and Its Implication for the Early Earth and Moon
Science.gov (United States)
Qin, Chuan; Zhong, Shijie; Phillips, Roger
2018-02-01
First recognized by Laplace over two centuries ago, the Moon's present tidal-rotational bulges are significantly larger than hydrostatic predictions. They are likely relics of a former hydrostatic state when the Moon was closer to the Earth and had larger bulges, and they were established when stresses in a thickening lunar lithosphere could maintain the bulges against hydrostatic adjustment. We formulate the first dynamically self-consistent model of this process and show that bulge formation is controlled by the relative timing of lithosphere thickening and lunar orbit recession. Viable solutions indicate that lunar bulge formation was a geologically slow process lasting several hundred million years, that the process was complete about 4 Ga when the Moon-Earth distance was less than 32 Earth radii, and that the Earth in Hadean was significantly less dissipative to lunar tides than during the last 4 Gyr, possibly implying a frozen hydrosphere due to the fainter young Sun.
17. Numerical Simulation of Bulging Deformation for Wide-Thick Slab Under Uneven Cooling Conditions
Science.gov (United States)
Wu, Chenhui; Ji, Cheng; Zhu, Miaoyong
2018-06-01
In the present work, the bulging deformation of a wide-thick slab under uneven cooling conditions was studied using finite element method. The non-uniform solidification was first calculated using a 2D heat transfer model. The thermal material properties were derived based on a microsegregation model, and the water flux distribution was measured and applied to calculate the cooling boundary conditions. Based on the solidification results, a 3D bulging model was established. The 2D heat transfer model was verified by the measured shell thickness and the slab surface temperature, and the 3D bulging model was verified by the calculated maximum bulging deflections using formulas. The bulging deformation behavior of the wide-thick slab under uneven cooling condition was then determined, and the effect of uneven solidification, casting speed, and roll misalignment were investigated.
18. A New View of the Dwarf Spheroidal Satellites of the Milky Way From VLT/FLAMES: Where are the Very Metal Poor Stars?
Energy Technology Data Exchange (ETDEWEB)
Helmi, Amina; Irwin, M.J.; Tolstoy, E.; Battaglia, G.; Hill, V.; Jablonka, P.; Venn, K.; Shetrone, M.; Letarte, B.; Arimoto, N.; Abel, T.; Francois, P.; Kaufer, A.; Primas, F.; Sadakane, K.; Szeifert, T.; /Kapteyn Astron. Inst., Groningen /Cambridge U., Inst. of Astron. /Meudon Observ. /LASTRO Observ. /Victoria U. /Texas U., McDonald Observ.
2006-11-20
As part of the Dwarf galaxies Abundances and Radial-velocities Team (DART) Programme, we have measured the metallicities of a large sample of stars in four nearby dwarf spheroidal galaxies (dSph): Sculptor, Sextans, Fornax and Carina. The low mean metal abundances and the presence of very old stellar populations in these galaxies have supported the view that they are fossils from the early Universe. However, contrary to naive expectations, we find a significant lack of stars with metallicities below [Fe/H] {approx} -3 dex in all four systems. This suggests that the gas that made up the stars in these systems had been uniformly enriched prior to their formation. Furthermore, the metal-poor tail of the dSph metallicity distribution is significantly different from that of the Galactic halo. These findings show that the progenitors of nearby dSph appear to have been fundamentally different from the building blocks of the Milky Way, even at the earliest epochs.
19. Abundance analysis of SDSS J134338.67+484426.6; an extremely metal-poor star from the MARVELS pre-survey
Science.gov (United States)
Susmitha Rani, A.; Sivarani, T.; Beers, T. C.; Fleming, S.; Mahadevan, S.; Ge, J.
2016-05-01
We present an elemental-abundance analysis of an extremely metal-poor (EMP; [Fe/H] <-3.0) star, SDSS J134338.67+484426.6, identified during the course of the Multi-object Apache Point Observatory Radial Velocity Exoplanet Large-area Survey spectroscopic pre-survey of some 20 000 stars to identify suitable candidates for exoplanet searches. This star, with an apparent magnitude V = 12.14, is the lowest metallicity star found in the pre-survey, and is one of only ˜20 known EMP stars that are this bright or brighter. Our high-resolution spectroscopic analysis shows that this star is a subgiant with [Fe/H] = -3.42, having normal' carbon and no enhancement of neutron-capture abundances. Strontium is underabundant, [Sr/Fe] = -0.47, but the derived lower limit on [Sr/Ba] indicates that Sr is likely enhanced relative to Ba. This star belongs to the sparsely populated class of α-poor EMP stars that exhibit low ratios of [Mg/Fe], [Si/Fe], and [Ca/Fe] compared to typical halo stars at similar metallicity. The observed variations in radial velocity from several epochs of (low- and high-resolution) spectroscopic follow-up indicate that SDSS J134338.67+484426.6 is a possible long-period binary. We also discuss the abundance trends in EMP stars for r-process elements, and compare with other magnesium-poor stars.
20. A TWO-PHASE SCENARIO FOR BULGE ASSEMBLY IN {Lambda}CDM COSMOLOGIES
Energy Technology Data Exchange (ETDEWEB)
Obreja, A.; Dominguez-Tenreiro, R.; Brook, C. [Departamento de Fisica Teorica, Universidad Autonoma de Madrid, E-28049 Cantoblanco Madrid (Spain); Martinez-Serrano, F. J.; Domenech-Moral, M.; Serna, A. [Departamento de Fisica y Arquitectura de Computadores, Universidad Miguel Hernandez, E-03202 Elche (Spain); Molla, M. [Departamento de Investigacion Basica, CIEMAT, E-28040 Madrid (Spain); Stinson, G., E-mail: [email protected] [Max-Planck-Institut fuer Astronomie, Koenigstuhl 17, D-69117, Heidelberg (Germany)
2013-01-20
We analyze and compare the bulges of a sample of L {sub *} spiral galaxies in hydrodynamical simulations in a cosmological context, using two different codes, P-DEVA and GASOLINE. The codes regulate star formation in very different ways, with P-DEVA simulations inputting low star formation efficiency under the assumption that feedback occurs on subgrid scales, while the GASOLINE simulations have feedback that drives large-scale outflows. In all cases, the marked knee shape in mass aggregation tracks, corresponding to the transition from an early phase of rapid mass assembly to a later slower one, separates the properties of two populations within the simulated bulges. The bulges analyzed show an important early starburst resulting from the collapse-like fast phase of mass assembly, followed by a second phase with lower star formation, driven by a variety of processes such as disk instabilities and/or mergers. Classifying bulge stellar particles identified at z = 0 into old and young according to these two phases, we found bulge stellar sub-populations with distinct kinematics, shapes, stellar ages, and metal contents. The young components are more oblate, generally smaller, more rotationally supported, with higher metallicity and less alpha-element enhanced than the old ones. These results are consistent with the current observational status of bulges, and provide an explanation for some apparently paradoxical observations, such as bulge rejuvenation and metal-content gradients observed. Our results suggest that bulges of L {sub *} galaxies will generically have two bulge populations that can be likened to classical and pseudo-bulges, with differences being in the relative proportions of the two, which may vary due to galaxy mass and specific mass accretion and merger histories.
1. Monitoring and Mapping the Galactic Bulge (core Program)
Science.gov (United States)
Both neutron star and black hole binary transients are providing some of the most exciting RXTE science, and fortunately many are concentrated in the galactic bulge region. We propose to continue our twice weekly PCA scans of the region, which cover about 500 sq deg. The observations will be sensitive to new sources at the ~1 mCrab level (a factor of 10-60 more sensitive than the ASM in the region). We have had success finding new sources and new types of variability, including three millisecond pulsars, and new increased solid angle will improve the chances of finding more in the final RXTE years. We will continue efforts to search for variability in new and known sources. Companion follow-up proposals would be triggered by the results.
2. The Hubble Space Telescope UV Legacy Survey of Galactic Globular Clusters - XII. The RGB bumps of multiple stellar populations
Science.gov (United States)
Lagioia, E. P.; Milone, A. P.; Marino, A. F.; Cassisi, S.; Aparicio, A. J.; Piotto, G.; Anderson, J.; Barbuy, B.; Bedin, L. R.; Bellini, A.; Brown, T.; D'Antona, F.; Nardiello, D.; Ortolani, S.; Pietrinferni, A.; Renzini, A.; Salaris, M.; Sarajedini, A.; van der Marel, R.; Vesperini, E.
2018-04-01
The Hubble Space Telescope UV Legacy Survey of Galactic Globular Clusters is providing a major breakthrough in our knowledge of globular clusters (GCs) and their stellar populations. Among the main results, we discovered that all the studied GCs host two main discrete groups consisting of first generation (1G) and second generation (2G) stars. We exploit the multiwavelength photometry from this project to investigate, for the first time, the Red Giant Branch Bump (RGBB) of the two generations in a large sample of GCs. We identified, with high statistical significance, the RGBB of 1G and 2G stars in 26 GCs and found that their magnitude separation as a function of the filter wavelength follows comparable trends. The comparison of observations to synthetic spectra reveals that the RGBB luminosity depends on the stellar chemical composition and that the 2G RGBB is consistent with stars enhanced in He and N and depleted in C and O with respect to 1G stars. For metal-poor GCs the 1G and 2G RGBB relative luminosity in optical bands mostly depends on helium content, Y. We used the RGBB observations in F606W and F814W bands to infer the relative helium abundance of 1G and 2G stars in 18 GCs, finding an average helium enhancement ΔY = 0.011 ± 0.002 of 2G stars with respect to 1G stars. This is the first determination of the average difference in helium abundance of multiple populations in a large number of clusters and provides a lower limit to the maximum internal variation of helium in GCs.
3. Search for optical millisecond pulsars in globular clusters
International Nuclear Information System (INIS)
Middleditch, J.H.; Imamura, J.N.; Steiman-Cameron, T.Y.
1988-01-01
A search for millisecond optical pulsars in several bright, compact globular clusters was conducted. The sample included M28, and the X-ray clusters 47 Tuc, NGC 6441, NGC 6624, M22, and M15. The globular cluster M28 contains the recently discovered 327 Hz radio pulsar. Upper limits of 4 sigma to pulsed emission of (1-20) solar luminosities were found for the globular clusters tested, and 0.3 solar luminosity for the M28 pulsar for frequencies up to 500 Hz. 8 references
4. LISA Sources in Milky Way Globular Clusters
Science.gov (United States)
Kremer, Kyle; Chatterjee, Sourav; Breivik, Katelyn; Rodriguez, Carl L.; Larson, Shane L.; Rasio, Frederic A.
2018-05-01
We explore the formation of double-compact-object binaries in Milky Way (MW) globular clusters (GCs) that may be detectable by the Laser Interferometer Space Antenna (LISA). We use a set of 137 fully evolved GC models that, overall, effectively match the properties of the observed GCs in the MW. We estimate that, in total, the MW GCs contain ˜21 sources that will be detectable by LISA. These detectable sources contain all combinations of black hole (BH), neutron star, and white dwarf components. We predict ˜7 of these sources will be BH-BH binaries. Furthermore, we show that some of these BH-BH binaries can have signal-to-noise ratios large enough to be detectable at the distance of the Andromeda galaxy or even the Virgo cluster.
5. Constraints on H(0) from globular clusters
International Nuclear Information System (INIS)
Vandenberg, D.A.
1988-01-01
On the basis of canonical stellar evolutionary computations, the maximum age of the globular clusters is suggested to be near either 14 Gyr or 18 Gyr, depending on how (O/Fe) varies with (Fe/H) in the cluster stars. The lower estimate requires that H(0) = 65 km/s/Mpc or less, for all Omega(0) = O or greater, if the standard Big-Bang cosmological theory is correct - while the higher age value similarly constrains the Hubble constant to be smaller than 46 km/s/Mpc. Some reduction in the upper limit to cluster ages and a consequent increase in H(0) may be expected if helium diffusion is important in Population II stars; nevertheless, values of H(0) greater than 75 km/s/Mpc still appear to be precluded unless the cosmological constant is nonzero. 51 refs
6. Extragalactic globular clusters. I. The metallicity calibration
International Nuclear Information System (INIS)
Brodie, J.P.; Huchra, J.P.
1990-01-01
The ability of absorption-line strength indices, measured from integrated globular cluster spectra, to predict mean cluster metallicity is explored. Statistical criteria, are used to identify the six best indices out of about 20 measured in a large sample of Galactic and M31 cluster spectra. Linear relations between index and metallicity have been derived along with new calibrations of infrared colors (V - K, J - K, and CO) versus Fe/H. Estimates of metallicity from the six spectroscopic index-metallicity relations have been combined in three different ways to identify the most efficient estimator and the minimum bias estimator of Fe/H - the weighted mean. This provides an estimate of Fe/H accurate to about 15 percent. 37 refs
7. Gemini/GMOS Spectroscopy of Globular Clusters in the Merger Remnant Galaxy M85
Science.gov (United States)
Ko, Youkyung; Lee, Myung Gyoon; Park, Hong Soo; Sohn, Jubee; Lim, Sungsoon; Hwang, Narae
2018-06-01
M85 is a peculiar S0 galaxy in Virgo and a well-known merger remnant. We present the first spectroscopic study of globular clusters (GCs) in M85. We obtain spectra for 21 GC candidates and the nucleus of M85 using the Gemini Multi-Object Spectrograph on the Gemini North 8.1 m telescope. From their radial velocities, 20 of the GCs are found to be members of M85. We find a strong rotation signal of the M85 GC system with a rotation amplitude of 235 km s‑1. The rotation axis of the GC system has a position angle of about 161°, which is 51.°5 larger than that of the stellar light. The rotation-corrected radial velocity dispersion of the GC system is estimated to be {σ }{{r},{cor}}=160 km s‑1. The rotation parameter {{Ω }}{R}icor}/{σ }{{r},{cor}} of the GC system is derived to be {1.47}-0.48+1.05, which is one of the largest among known early-type galaxies. The ages and metallicities of the GCs, which show the same trend as the results based on Lick indices, are derived from full spectrum fitting (ULySS). About half of the GCs are an intermediate-age population whose mean age is ∼3.7 ± 1.9 Gyr, having a mean [Fe/H] value of ‑0.26. The other half are old and metal-poor. These results suggest that M85 experienced a wet merging event about 4 Gyr ago, forming a significant population of star clusters. The strong rotational feature of the GC system can be explained by an off-center major merging.
8. LITHIUM-RICH GIANTS IN GLOBULAR CLUSTERS
Energy Technology Data Exchange (ETDEWEB)
Kirby, Evan N.; Cohen, Judith G. [California Institute of Technology, 1200 E. California Boulevard, MC 249-17, Pasadena, CA 91125 (United States); Guhathakurta, Puragra [UCO/Lick Observatory and Department of Astronomy and Astrophysics, University of California, 1156 High Street, Santa Cruz, CA 95064 (United States); Zhang, Andrew J. [The Harker School, 500 Saratoga Avenue, San Jose, CA 95129 (United States); Hong, Jerry [Palo Alto High School, 50 Embarcadero Road, Palo Alto, CA, 94301 (United States); Guo, Michelle [Stanford University, 450 Serra Mall, Stanford, CA 94305 (United States); Guo, Rachel [Irvington High School, 41800 Blacow Road, Fremont, CA 94538 (United States); Cunha, Katia [Observatório Nacional, São Cristóvão Rio de Janeiro (Brazil)
2016-03-10
Although red giants deplete lithium on their surfaces, some giants are Li-rich. Intermediate-mass asymptotic giant branch (AGB) stars can generate Li through the Cameron–Fowler conveyor, but the existence of Li-rich, low-mass red giant branch (RGB) stars is puzzling. Globular clusters are the best sites to examine this phenomenon because it is straightforward to determine membership in the cluster and to identify the evolutionary state of each star. In 72 hours of Keck/DEIMOS exposures in 25 clusters, we found four Li-rich RGB and two Li-rich AGB stars. There were 1696 RGB and 125 AGB stars with measurements or upper limits consistent with normal abundances of Li. Hence, the frequency of Li-richness in globular clusters is (0.2 ± 0.1)% for the RGB, (1.6 ± 1.1)% for the AGB, and (0.3 ± 0.1)% for all giants. Because the Li-rich RGB stars are on the lower RGB, Li self-generation mechanisms proposed to occur at the luminosity function bump or He core flash cannot explain these four lower RGB stars. We propose the following origin for Li enrichment: (1) All luminous giants experience a brief phase of Li enrichment at the He core flash. (2) All post-RGB stars with binary companions on the lower RGB will engage in mass transfer. This scenario predicts that 0.1% of lower RGB stars will appear Li-rich due to mass transfer from a recently Li-enhanced companion. This frequency is at the lower end of our confidence interval.
9. Comparative Analysis of Bulge Deformation between 2D and 3D Finite Element Models
Directory of Open Access Journals (Sweden)
Qin Qin
2014-02-01
Full Text Available Bulge deformation of the slab is one of the main factors that affect slab quality in continuous casting. This paper describes an investigation into bulge deformation using ABAQUS to model the solidification process. A three-dimensional finite element analysis model of the slab solidification process has been first established because the bulge deformation is closely related to slab temperature distributions. Based on slab temperature distributions, a three-dimensional thermomechanical coupling model including the slab, the rollers, and the dynamic contact between them has also been constructed and applied to a case study. The thermomechanical coupling model produces outputs such as the rules of bulge deformation. Moreover, the three-dimensional model has been compared with a two-dimensional model to discuss the differences between the two models in calculating the bulge deformation. The results show that the platform zone exists in the wide side of the slab and the bulge deformation is affected strongly by the ratio of width-to-thickness. The indications are also that the difference of the bulge deformation for the two modeling ways is little when the ratio of width-to-thickness is larger than six.
10. Effect of an upstream bulge configuration on film cooling with and without mist injection.
Science.gov (United States)
Wang, Jin; Li, Qianqian; Sundén, Bengt; Ma, Ting; Cui, Pei
2017-12-01
To meet the economic requirements of power output, the increased inlet temperature of modern gas turbines is above the melting point of the material. Therefore, high-efficient cooling technology is needed to protect the blades from the hot mainstream. In this study, film cooling was investigated in a simplified channel. A bulge located upstream of the film hole was numerically investigated by analysis of the film cooling effectiveness distribution downstream of the wall. The flow distribution in the plate channel is first presented. Comparing with a case without bulge, different cases with bulge heights of 0.1d, 0.3d and 0.5d were examined with blowing ratios of 0.5 and 1.0. Cases with 1% mist injection were also included in order to obtain better cooling performance. Results show that the bulge configuration located upstream the film hole makes the cooling film more uniform, and enhanceslateral cooling effectiveness. Unlike other cases, the configuration with a 0.3d-height bulge shows a good balance in improving the downstream and lateral cooling effectiveness. Compared with the case without mist at M = 0.5, the 0.3d-height bulge with 1% mist injection increases lateral average effectiveness by 559% at x/d = 55. In addition, a reduction of the thermal stress concentration can be obtained by increasing the height of the bulge configuration. Copyright © 2017 Elsevier Ltd. All rights reserved.
11. Bulge testing of copper and niobium tubes for hydroformed RF cavities
Energy Technology Data Exchange (ETDEWEB)
Kim, H.S., E-mail: [email protected] [Department of Materials Science and Engineering, The Ohio State University, Columbus, OH (United States); Sumption, M.D. [Department of Materials Science and Engineering, The Ohio State University, Columbus, OH (United States); Susner, M.A. [Department of Materials Science and Engineering, The Ohio State University, Columbus, OH (United States); Oak Ridge National Laboratory, Oak Ridge, TN (United States); Lim, H. [Department of Materials Science and Engineering, The Ohio State University, Columbus, OH (United States); Sandia National Laboratories, Albuquerque, NM (United States); Collings, E.W. [Department of Materials Science and Engineering, The Ohio State University, Columbus, OH (United States)
2016-01-27
The heat treatment, tensile testing, and bulge testing of Cu and Nb tubes has been carried out to gain experience for the subsequent hydroforming of Nb tube into seamless superconducting radio frequency (SRF) cavities for high energy particle acceleration. In the experimental part of the study samples removed from representative tubes were prepared for heat treatment, tensile testing, residual resistance ratio measurement, and orientation imaging electron microscopy (OIM). After being optimally heat treated Cu and Nb tubes were subjected to hydraulic bulge testing and the results analyzed. In the final part of the study finite-element models (FEM) incorporating constitutive (stress–strain) relationships analytically derived from the tensile and bulge tests, respectively, were used to replicate the bulge test. As expected, agreement was obtained between the experimental bulge parameters and the FEM model based on the bulge-derived constitutive relationship. Not so for the FEM model based on tensile-test data. It is concluded that a constitutive relationship based on bulge testing is necessary to predict a material's performance under hydraulic deformation.
12. Bulge testing of copper and niobium tubes for hydroformed RF cavities
International Nuclear Information System (INIS)
Kim, H.S.; Sumption, M.D.; Susner, M.A.; Lim, H.; Collings, E.W.
2016-01-01
The heat treatment, tensile testing, and bulge testing of Cu and Nb tubes has been carried out to gain experience for the subsequent hydroforming of Nb tube into seamless superconducting radio frequency (SRF) cavities for high energy particle acceleration. In the experimental part of the study samples removed from representative tubes were prepared for heat treatment, tensile testing, residual resistance ratio measurement, and orientation imaging electron microscopy (OIM). After being optimally heat treated Cu and Nb tubes were subjected to hydraulic bulge testing and the results analyzed. In the final part of the study finite-element models (FEM) incorporating constitutive (stress–strain) relationships analytically derived from the tensile and bulge tests, respectively, were used to replicate the bulge test. As expected, agreement was obtained between the experimental bulge parameters and the FEM model based on the bulge-derived constitutive relationship. Not so for the FEM model based on tensile-test data. It is concluded that a constitutive relationship based on bulge testing is necessary to predict a material's performance under hydraulic deformation.
13. QUENCHED COLD ACCRETION OF A LARGE-SCALE METAL-POOR FILAMENT DUE TO VIRIAL SHOCKING IN THE HALO OF A MASSIVE z = 0.7 GALAXY
Energy Technology Data Exchange (ETDEWEB)
Churchill, Christopher W.; Holtzman, Jon; Nielsen, Nikole M.; Trujillo-Gomez, Sebastian [Department of Astronomy, New Mexico State University, MSC 4500, Las Cruces, NM 88003 (United States); Kacprzak, Glenn G.; Spitler, Lee R. [Centre for Astrophysics and Supercomputing, Swinburne University of Technology, P.O. Box 218, Hawthorn, Victoria 3122 (Australia); Steidel, Charles C. [Department of Astronomy, California Institute of Technology, MS 105-24, Pasadena, CA 91125 (United States)
2012-11-20
Using HST/COS/STIS and HIRES/Keck high-resolution spectra, we have studied a remarkable H I absorbing complex at z = 0.672 toward the quasar Q1317+277. The H I absorption has a velocity spread of {Delta}v = 1600 km s{sup -1}, comprises 21 Voigt profile components, and resides at an impact parameter of D = 58 kpc from a bright, high-mass (log M {sub vir}/M {sub Sun} {approx_equal} 13.7) elliptical galaxy that is deduced to have a 6 Gyr old, solar metallicity stellar population. Ionization models suggest the majority of the structure is cold gas surrounding a shock-heated cloud that is kinematically adjacent to a multi-phase group of clouds with detected C III, C IV, and O VI absorption, suggestive of a conductive interface near the shock. The deduced metallicities are consistent with the moderate in situ enrichment relative to the levels observed in the z {approx} 3 Ly{alpha} forest. We interpret the H I complex as a metal-poor filamentary structure being shock heated as it accretes into the halo of the galaxy. The data support the scenario of an early formation period (z > 4) in which the galaxy was presumably fed by cold-mode gas accretion that was later quenched via virial shocking by the hot halo such that, by intermediate redshift, the cold filamentary accreting gas is continuing to be disrupted by shock heating. Thus, continued filamentary accretion is being mixed into the hot halo, indicating that the star formation of the galaxy will likely remain quenched. To date, the galaxy and the H I absorption complex provide some of the most compelling observational data supporting the theoretical picture in which accretion is virial shocked in the hot coronal halos of high-mass galaxies.
14. Carbon-enhanced metal-poor stars in SDSS/Segue. II. Comparison of CEMP-star frequencies with binary population-synthesis models
Energy Technology Data Exchange (ETDEWEB)
Lee, Young Sun [Department of Astronomy, New Mexico State University, Las Cruces, NM 88003 (United States); Suda, Takuma [National Astronomical Observatory of Japan, Osawa 2-21-1, Mitaka, Tokyo 181-8588 (Japan); Beers, Timothy C. [National Optical Astronomy Observatory, Tucson, AZ 85719 (United States); Stancliffe, Richard J., E-mail: [email protected] [Argelander-Institut für Astronomie, Auf dem Hügel 71, D-53121 Bonn (Germany)
2014-06-20
We present a comparison of the frequencies of carbon-enhanced metal-poor (CEMP) giant and main-sequence turnoff (MSTO) stars with predictions from binary population-synthesis models involving asymptotic giant-branch (AGB) mass transfer. The giant and MSTO stars are selected from the Sloan Digital Sky Survey and the Sloan Extension for Galactic Understanding and Exploration. We consider two initial mass functions (IMFs)—a Salpeter IMF, and a mass function with a characteristic mass of 10 M {sub ☉}. For giant stars, the comparison indicates a good agreement between the observed CEMP frequencies and the AGB binary model using a Salpeter IMF for [Fe/H] > – 1.5, and a characteristic mass of 10 M {sub ☉} for [Fe/H] < – 2.5. This result suggests that the IMF shifted from high- to low-mass dominated in the early history of the Milky Way, which appears to have occurred at a 'chemical time' between [Fe/H] =–2.5 and [Fe/H] =–1.5. The CEMP frequency for the turnoff stars with [Fe/H] < – 3.0 is much higher than the AGB model prediction from the high-mass IMF, supporting the previous assertion that one or more additional mechanisms, not associated with AGB stars, are required for the production of carbon-rich material below [Fe/H] =–3.0. We also discuss possible effects of first dredge-up and extra mixing in red giants and internal mixing in turnoff stars on the derived CEMP frequencies.
15. Evidence of enrichment by individual SN from elemental abundance ratios in the very metal-poor dSph galaxy Boötes I
Science.gov (United States)
Feltzing, S.; Eriksson, K.; Kleyna, J.; Wilkinson, M. I.
2009-12-01
Aims. We establish the mean metallicity from high-resolution spectroscopy for the recently found dwarf spheroidal galaxy Boötes I and test whether it is a common feature for ultra-faint dwarf spheroidal galaxies to show signs of inhomogeneous chemical evolution (e.g. as found in the Hercules dwarf spheroidal galaxy). Methods: We analyse high-resolution, moderate signal-to-noise spectra for seven red giant stars in the Boötes I dSph galaxy using standard abundance analysis techniques. In particular, we assume local thermodynamic equilibrium and employ spherical model atmospheres and codes that take the sphericity of the star into account when calculating the elemental abundances. Results: We confirm previous determinations of the mean metallicity of the Boötes I dwarf spheroidal galaxy to be -2.3 dex. Whilst five stars are clustered around this metallicity, one is significantly more metal-poor, at -2.9 dex, and one is more metal-rich at, -1.9 dex. Additionally, we find that one of the stars, Boo-127, shows an atypically high [Mg/Ca] ratio, indicative of stochastic enrichment processes within the dSph galaxy. Similar results have previously only been found in the Hercules and Draco dSph galaxies and appear, so far, to be unique to this type of galaxy. The data presented herein were obtained at the W.M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W.M. Keck Foundation.
16. Classification of extremely metal-poor stars: absent region in A(C)-[Fe/H] plane and the role of dust cooling
Science.gov (United States)
Chiaki, Gen; Tominaga, Nozomu; Nozawa, Takaya
2017-11-01
Extremely metal-poor (EMP) stars are the living fossils with records of chemical enrichment history at the early epoch of galaxy formation. By the recent large observation campaigns, statistical samples of EMP stars have been obtained. This motivates us to reconsider their classification and formation conditions. From the observed lower limits of carbon and iron abundances of Acr(C) ∼ 6 and [Fe/H]cr ∼ -5 for C-enhanced EMP (CE-EMP) and C-normal EMP (CN-EMP) stars, we confirm that gas cooling by dust thermal emission is indispensable for the fragmentation of their parent clouds to form such low mass, i.e. long-lived stars, and that the dominant grain species are carbon and silicate, respectively. We constrain the grain radius r_i^cool of a species i and condensation efficiency fij of a key element j as r_C^cool / f_C,C = 10 {μ m} and r_Sil^cool / f_Sil,Mg = 0.1 {μ m} to reproduce Acr(C) and [Fe/H]cr, which give a universal condition 10[C/H] - 2.30 + 10[Fe/H] > 10-5.07 for the formation of every EMP star. Instead of the conventional boundary [C/Fe] = 0.7 between CE-EMP and CN-EMP stars, this condition suggests a physically meaningful boundary [C/Fe]b = 2.30 above and below which carbon and silicate grains are dominant coolants, respectively.
17. Morpho-kinematic properties of field S0 bulges in the CALIFA survey
Science.gov (United States)
Méndez-Abreu, J.; Aguerri, J. A. L.; Falcón-Barroso, J.; Ruiz-Lara, T.; Sánchez-Menguiano, L.; de Lorenzo-Cáceres, A.; Costantin, L.; Catalán-Torrecilla, C.; Zhu, L.; Sánchez-Blazquez, P.; Florido, E.; Corsini, E. M.; Wild, V.; Lyubenova, M.; van de Ven, G.; Sánchez, S. F.; Bland-Hawthorn, J.; Galbany, L.; García-Benito, R.; García-Lorenzo, B.; González Delgado, R. M.; López-Sánchez, A. R.; Marino, R. A.; Márquez, I.; Ziegler, B.; Califa Collaboration
2018-02-01
We study a sample of 28 S0 galaxies extracted from the integral field spectroscopic (IFS) survey Calar Alto Legacy Integral Field Area. We combine an accurate two-dimensional (2D) multicomponent photometric decomposition with the IFS kinematic properties of their bulges to understand their formation scenario. Our final sample is representative of S0s with high stellar masses (M⋆/M⊙ > 1010). They lay mainly on the red sequence and live in relatively isolated environments similar to that of the field and loose groups. We use our 2D photometric decomposition to define the size and photometric properties of the bulges, as well as their location within the galaxies. We perform mock spectroscopic simulations mimicking our observed galaxies to quantify the impact of the underlying disc on our bulge kinematic measurements (λ and v/σ). We compare our bulge corrected kinematic measurements with the results from Schwarzschild dynamical modelling. The good agreement confirms the robustness of our results and allows us to use bulge deprojected values of λ and v/σ. We find that the photometric (n and B/T) and kinematic (v/σ and λ) properties of our field S0 bulges are not correlated. We demonstrate that this morpho-kinematic decoupling is intrinsic to the bulges and it is not due to projection effects. We conclude that photometric diagnostics to separate different types of bulges (disc-like versus classical) might not be useful for S0 galaxies. The morpho-kinematics properties of S0 bulges derived in this paper suggest that they are mainly formed by dissipational processes happening at high redshift, but dedicated high-resolution simulations are necessary to better identify their origin.
18. Stellar populations of bulges in galaxies with a low surface-brightness disc
Science.gov (United States)
Morelli, L.; Corsini, E. M.; Pizzella, A.; Dalla Bontà, E.; Coccato, L.; Méndez-Abreu, J.
2015-03-01
The radial profiles of the Hβ, Mg, and Fe line-strength indices are presented for a sample of eight spiral galaxies with a low surface-brightness stellar disc and a bulge. The correlations between the central values of the line-strength indices and velocity dispersion are consistent to those known for early-type galaxies and bulges of high surface-brightness galaxies. The age, metallicity, and α/Fe enhancement of the stellar populations in the bulge-dominated region are obtained using stellar population models with variable element abundance ratios. Almost all the sample bulges are characterized by a young stellar population, on-going star formation, and a solar α/Fe enhancement. Their metallicity spans from high to sub-solar values. No significant gradient in age and α/Fe enhancement is measured, whereas only in a few cases a negative metallicity gradient is found. These properties suggest that a pure dissipative collapse is not able to explain formation of all the sample bulges and that other phenomena, like mergers or acquisition events, need to be invoked. Such a picture is also supported by the lack of a correlation between the central value and gradient of the metallicity in bulges with very low metallicity. The stellar populations of the bulges hosted by low surface-brightness discs share many properties with those of high surface-brightness galaxies. Therefore, they are likely to have common formation scenarios and evolution histories. A strong interplay between bulges and discs is ruled out by the fact that in spite of being hosted by discs with extremely different properties, the bulges of low and high surface-brightness discs are remarkably similar.
19. ALMA observations of molecular absorption in four directions toward the Galactic bulge
Science.gov (United States)
Liszt, H.; Gerin, M.
2018-02-01
Context. Alma Cycle 3 observations serendipitously showed strong absorption from diffuse molecular gas in the Galactic bulge at -200 km s-1 51(3σ) for the bulge gas toward J1744 and 58 ± 9 and 64 ± 4 for the disk gas toward J1717 and J1744, respectively, all well above the value of 20-25 typical of the central molecular zone. Conclusions: The kinematics and chemistry of the bulge gas observed toward J1744 more nearly resemble that of gas in the Milky Way disk than in the central molecular zone.
20. Diffraction and Smith-Purcell radiation on the hemispherical bulges in a metal plate
Science.gov (United States)
Syshchenko, V. V.; Larikova, E. A.; Gladkih, Yu. P.
2017-12-01
The radiation resulting from the uniform motion of a charged particle near a hemispheric bulge on a metal plane is considered. The description of the radiation process based on the method of images is developed for the case of non-relativistic particle and a perfectly conducting target. The spectral-angular and spectral densities of the diffraction radiation on the single bulge (as well as the Smith-Purcell radiation on the periodic string of bulges) are computed. The possibility of application of the developed approach to the case of relativistic incident particle is discussed.
1. A catalog of polychromatic bulge-disc decompositions of ˜17.600 galaxies in CANDELS
Science.gov (United States)
Dimauro, Paola; Huertas-Company, Marc; Daddi, Emanuele; Pérez-González, Pablo G.; Bernardi, Mariangela; Barro, Guillermo; Buitrago, Fernando; Caro, Fernando; Cattaneo, Andrea; Dominguez-Sánchez, Helena; Faber, Sandra M.; Häußler, Boris; Kocevski, Dale D.; Koekemoer, Anton M.; Koo, David C.; Lee, Christoph T.; Mei, Simona; Margalef-Bentabol, Berta; Primack, Joel; Rodriguez-Puebla, Aldo; Salvato, Mara; Shankar, Francesco; Tuccillo, Diego
2018-05-01
Understanding how bulges grow in galaxies is critical step towards unveiling the link between galaxy morphology and star-formation. To do so, it is necessary to decompose large sample of galaxies at different epochs into their main components (bulges and discs). This is particularly challenging, especially at high redshifts, where galaxies are poorly resolved. This work presents a catalog of bulge-disc decompositions of the surface brightness profiles of ˜17.600 H-band selected galaxies in the CANDELS fields (F160W https://lerma.obspm.fr/huertas/form_CANDELS and will be used for scientific analysis in forthcoming works.
2. Most Massive Globular Cluster in Our Galaxy
Science.gov (United States)
1994-05-01
Far down in the southern sky, in the constellation of Centaurus, a diffuse spot of light can be perceived with the unaided eye. It may be unimpressive, but when seen through a telescope, it turns out to be a beautiful, dense cluster of innumerable stars [1]. Omega Centauri, as this object is called, is the brightest of its type in the sky. We refer to it as a "globular cluster", due to its symmetric form. It belongs to our Milky Way galaxy and astrophysical investigations have shown that it is located at a distance of about 16,500 light-years (1 light-year = 9,460,000,000,000 km). Nobody knows for sure how many individual stars it contains, but recent estimates run into the millions. Most of these stars are more than 10,000 million years old and it is generally agreed that Omega Centauri has a similar age. Measurements of its motion indicate that Omega Centauri plows through the Milky Way in an elongated orbit. It is not easy to understand how it has managed to keep its stars together during such an extended period. MEASURING STELLAR VELOCITIES IN OMEGA CENTAURI A group of astronomers [2] have recently carried through a major investigation of Omega Centauri. After many nights of observations at the ESO La Silla observatory, they now conclude that not only is this globular cluster the brightest, it is indeed by far the most massive known in the Milky Way. The very time-consuming observations were made during numerous observing sessions over a period of no less than 13 years (1981-1993), with the photoelectric spectrometer CORAVEL mounted on the 1.5-m Danish telescope at La Silla. The CORAVEL instrument (COrelation RAdial VELocities) was built in a joint effort between the Geneva (Switzerland) and Marseilles (France) observatories. It functions according to the cross-correlation technique, by means of which the spectrum of the observed star is compared with a "standard stellar spectrum" [3]. HOW HEAVY IS OMEGA CENTAURI? In the present study, a total of 1701
3. MASS-TO-LIGHT RATIOS FOR M31 GLOBULAR CLUSTERS: AGE DATING AND A SURPRISING METALLICITY TREND
International Nuclear Information System (INIS)
Strader, Jay; Huchra, John P.; Smith, Graeme H.; Brodie, Jean P.; Larsen, Soeren
2009-01-01
We have obtained velocity dispersions from Keck high-resolution integrated spectroscopy of 10 M31 globular clusters (GCs), including three candidate intermediate-age GCs. We show that these candidates have the same V-band mass-to-light (M/L V ) ratios as the other GCs, implying that they are likely to be old. We also find a trend of derived velocity dispersion with wavelength, but cannot distinguish between a systematic error and a physical effect. Our new measurements are combined with photometric and spectroscopic data from the literature in a re-analysis of all M31 GC M/L V values. In a combined sample of 27 GCs, we show that the metal-rich GCs have lower M/L V than the metal-poor GCs, in conflict with predictions from stellar population models. Fragmentary data for other galaxies support this observation. The M31 GC fundamental plane is extremely tight, and we follow up an earlier suggestion by Djorgovski to show that the fundamental plane can be used to estimate accurate distances (potentially 10% or better).
4. Formation of globular cluster candidates in merging proto-galaxies at high redshift: a view from the FIRE cosmological simulations
Science.gov (United States)
Kim, Ji-hoon; Ma, Xiangcheng; Grudić, Michael Y.; Hopkins, Philip F.; Hayward, Christopher C.; Wetzel, Andrew; Faucher-Giguère, Claude-André; Kereš, Dušan; Garrison-Kimmel, Shea; Murray, Norman
2018-03-01
Using a state-of-the-art cosmological simulation of merging proto-galaxies at high redshift from the FIRE project, with explicit treatments of star formation and stellar feedback in the interstellar medium, we investigate the formation of star clusters and examine one of the formation hypotheses of present-day metal-poor globular clusters. We find that frequent mergers in high-redshift proto-galaxies could provide a fertile environment to produce long-lasting bound star clusters. The violent merger event disturbs the gravitational potential and pushes a large gas mass of ≳ 105-6 M⊙ collectively to high density, at which point it rapidly turns into stars before stellar feedback can stop star formation. The high dynamic range of the reported simulation is critical in realizing such dense star-forming clouds with a small dynamical time-scale, tff ≲ 3 Myr, shorter than most stellar feedback time-scales. Our simulation then allows us to trace how clusters could become virialized and tightly bound to survive for up to ˜420 Myr till the end of the simulation. Because the cluster's tightly bound core was formed in one short burst, and the nearby older stars originally grouped with the cluster tend to be preferentially removed, at the end of the simulation the cluster has a small age spread.
5. Observations of CO and OI in stars in globular clusters
International Nuclear Information System (INIS)
Wallerstein, G.; Pilachowski, C.
1978-01-01
Since studies at classification dispersion and early analyses of high dispersion spectra have yielded little quantitative data on the abundances of C, N, and O in globular clusters the authors have been endeavoring to establish their abundances in stars in several clusters. The problem has been approached in two ways, by observing the 2.3 micron CO bands and the 6300 A [OI] line in individual stars in globular clusters. (Auth.)
Energy Technology Data Exchange (ETDEWEB)
Stefano, R. Di [Harvard-Smithsonian Center for Astrophysics (United States); Ray, A., E-mail: [email protected], E-mail: [email protected] [Tata Institute of Fundamental Research (India)
2016-08-10
Globular clusters are ancient stellar populations in compact dense ellipsoids. There is no star formation and there are no core-collapse supernovae, but several lines of evidence suggest that globular clusters are rich in planets. If so, and if advanced civilizations can develop there, then the distances between these civilizations and other stars would be far smaller than typical distances between stars in the Galactic disk, facilitating interstellar communication and travel. The potent combination of long-term stability and high stellar densities provides a globular cluster opportunity. Yet the very proximity that promotes interstellar travel also brings danger, as stellar interactions can destroy planetary systems. We find, however, that large portions of many globular clusters are “sweet spots,” where habitable-zone planetary orbits are stable for long times. Globular clusters in our own and other galaxies are, therefore, among the best targets for searches for extraterrestrial intelligence (SETI). We use the Drake equation to compare the likelihood of advanced civilizations in globular clusters to that in the Galactic disk. We also consider free-floating planets, since wide-orbit planets can be ejected to travel through the cluster. Civilizations spawned in globular clusters may be able to establish self-sustaining outposts, reducing the probability that a single catastrophic event will destroy the civilization. Although individual civilizations may follow different evolutionary paths, or even be destroyed, the cluster may continue to host advanced civilizations once a small number have jumped across interstellar space. Civilizations residing in globular clusters could therefore, in a sense, be immortal.
7. Effect of radiation pressure in the cores of globular clusters
Energy Technology Data Exchange (ETDEWEB)
Angeletti, L; Capuzzo-Dolcetta, R; Giannone
1981-10-01
The possible effects of a presence of a dust cloud in the cores of globular clusters was investigated. Two cluster models were considered together with various models of clouds. The problem of radiation transfer was solved under some simplifying assumptions. Owing to a differential absorption of the star light in the cloud, radiation pressure turned out be inward-directed in some cloud models. This fact may lead to a confinement of some dust in the central regions of globular clusters.
8. Globular cluster metallicity scale: evidence from stellar models
International Nuclear Information System (INIS)
Demarque, P.; King, C.R.; Diaz, A.
1982-01-01
Theoretical giant branches have been constructed to determine their relative positions for metallicities in the range -2.3 0 )/sub 0,g/ based on these models is presented which yields good agreement over the observed range of metallicities for galactic globular clusters and old disk clusters. The metallicity of 47 Tuc and M71 given by this calibration is about -0.8 dex. Subject headings: clusters, globular: stars: abundances: stars: interiors
International Nuclear Information System (INIS)
Stefano, R. Di; Ray, A.
2016-01-01
Globular clusters are ancient stellar populations in compact dense ellipsoids. There is no star formation and there are no core-collapse supernovae, but several lines of evidence suggest that globular clusters are rich in planets. If so, and if advanced civilizations can develop there, then the distances between these civilizations and other stars would be far smaller than typical distances between stars in the Galactic disk, facilitating interstellar communication and travel. The potent combination of long-term stability and high stellar densities provides a globular cluster opportunity. Yet the very proximity that promotes interstellar travel also brings danger, as stellar interactions can destroy planetary systems. We find, however, that large portions of many globular clusters are “sweet spots,” where habitable-zone planetary orbits are stable for long times. Globular clusters in our own and other galaxies are, therefore, among the best targets for searches for extraterrestrial intelligence (SETI). We use the Drake equation to compare the likelihood of advanced civilizations in globular clusters to that in the Galactic disk. We also consider free-floating planets, since wide-orbit planets can be ejected to travel through the cluster. Civilizations spawned in globular clusters may be able to establish self-sustaining outposts, reducing the probability that a single catastrophic event will destroy the civilization. Although individual civilizations may follow different evolutionary paths, or even be destroyed, the cluster may continue to host advanced civilizations once a small number have jumped across interstellar space. Civilizations residing in globular clusters could therefore, in a sense, be immortal.
10. Isolated ellipticals and their globular cluster systems. III. NGC 2271, NGC 2865, NGC 3962, NGC 4240, and IC 4889
Science.gov (United States)
Salinas, R.; Alabi, A.; Richtler, T.; Lane, R. R.
2015-05-01
As tracers of star formation, galaxy assembly, and mass distribution, globular clusters have provided important clues to our understanding of early-type galaxies. But their study has been mostly constrained to galaxy groups and clusters where early-type galaxies dominate, leaving the properties of the globular cluster systems (GCSs) of isolated ellipticals as a mostly uncharted territory. We present Gemini-South/GMOS g'i' observations of five isolated elliptical galaxies: NGC 3962, NGC 2865, IC 4889, NGC 2271, and NGC 4240. Photometry of their GCSs reveals clear color bimodality in three of them, but remains inconclusive for the other two. All the studied GCSs are rather poor with a mean specific frequency SN ~ 1.5, independently of the parent galaxy luminosity. Considering information from previous work as well, it is clear that bimodality and especially the presence of a significant, even dominant, population of blue clusters occurs at even the most isolated systems, which casts doubts on a possible accreted origin of metal-poor clusters, as suggested by some models. Additionally, we discuss the possible existence of ultra-compact dwarfs around the isolated elliptical NGC 3962. Based on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), the Science and Technology Facilities Council (United Kingdom), the National Research Council (Canada), CONICYT (Chile), the Australian Research Council (Australia), Ministério da Ciência, Tecnologia e Inovação (Brazil) and Ministerio de Ciencia, Tecnología e Innovación Productiva (Argentina).Globular cluster photometry is available in electronic form at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/577/A59Appendices are available in
11. Dynamical manifestations of quantum chaos: correlation hole and bulge
Science.gov (United States)
Torres-Herrera, E. J.; Santos, Lea F.
2017-10-01
A main feature of a chaotic quantum system is a rigid spectrum where the levels do not cross. We discuss how the presence of level repulsion in lattice many-body quantum systems can be detected from the analysis of their time evolution instead of their energy spectra. This approach is advantageous to experiments that deal with dynamics, but have limited or no direct access to spectroscopy. Dynamical manifestations of avoided crossings occur at long times. They correspond to a drop, referred to as correlation hole, below the asymptotic value of the survival probability and to a bulge above the saturation point of the von Neumann entanglement entropy and the Shannon information entropy. By contrast, the evolution of these quantities at shorter times reflects the level of delocalization of the initial state, but not necessarily a rigid spectrum. The correlation hole is a general indicator of the integrable-chaos transition in disordered and clean models and as such can be used to detect the transition to the many-body localized phase in disordered interacting systems. This article is part of the themed issue 'Breakdown of ergodicity in quantum systems: from solids to synthetic matter'.
12. VARIABILITY OF OPTICAL COUNTERPARTS IN THE CHANDRA GALACTIC BULGE SURVEY
Energy Technology Data Exchange (ETDEWEB)
Britt, C. T.; Hynes, R. I.; Johnson, C. B.; Baldwin, A.; Collazzi, A.; Gossen, L. [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, LA 70803-4001 (United States); Jonker, P. G.; Torres, M. A. P. [SRON, Netherlands Institute for Space Research, Sorbonnelaan 2, 3584 CA Utrecht (Netherlands); Nelemans, G. [Department of Astrophysics, IMAPP, Radboud University Nijmegen, Heyendaalseweg 135, 6525 AJ, Nijmegen (Netherlands); Maccarone, T. [Department of Physics, Texas Tech University, Box 41051, Science Building, Lubbock, TX 79409-1051 (United States); Steeghs, D.; Greiss, S. [Astronomy and Astrophysics, Department of Physics, University of Warwick, Coventry, CV4 7AL (United Kingdom); Heinke, C. [Department of Physics, University of Alberta, CCIS 4-183, Edmonton, AB T6G 2E1 (Canada); Bassa, C. G. [Jodrell Bank Centre for Astrophysics, School of Physics and Astronomy, University of Manchester, Manchester M13 9PL (United Kingdom); Villar, A. [Department of Physics, Massachussettes Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139-4307 (United States); Gabb, M. [Department of Physics, Florida Atlantic University, 777 Glades Road, Boca Raton, FL 33431-0991 (United States)
2014-09-01
We present optical light curves of variable stars consistent with the positions of X-ray sources identified with the Chandra X-ray Observatory for the Chandra Galactic Bulge Survey (GBS). Using data from the Mosaic-II instrument on the Blanco 4 m Telescope at CTIO, we gathered time-resolved photometric data on timescales from ∼2 hr to 8 days over the 3/4 of the X-ray survey containing sources from the initial GBS catalog. Among the light curve morphologies we identify are flickering in interacting binaries, eclipsing sources, dwarf nova outbursts, ellipsoidal variations, long period variables, spotted stars, and flare stars. Eighty-seven percent of X-ray sources have at least one potential optical counterpart. Twenty-seven percent of these candidate counterparts are detectably variable; a much greater fraction than expected for randomly selected field stars, which suggests that most of these variables are real counterparts. We discuss individual sources of interest, provide variability information on candidate counterparts, and discuss the characteristics of the variable population.
13. Positron Transport and Annihilation in the Galactic Bulge
Directory of Open Access Journals (Sweden)
Fiona Helen Panther
2018-03-01
Full Text Available The annihilation of positrons in the Milky Way Galaxy has been observed for ∼50 years; however, the production sites of these positrons remains hard to identify. The observed morphology of positron annihilation gamma-rays provides information on the annihilation sites of these Galactic positrons. It is understood that the positrons responsible for the annihilation signal originate at MeV energies. The majority of sources of MeV positrons occupy the star-forming thin disk of the Milky Way. If positrons propagate far from their sources, we must develop accurate models of positron propagation through all interstellar medium (ISM phases in order to reveal the currently uncertain origin of these Galactic positrons. On the other hand, if positrons annihilate close to their sources, an alternative source of MeV positrons with a distribution that matches the annihilation morphology must be identified. In this work, I discuss the various models that have been developed to understand the origin of the 511 keV line from the direction of the Galactic bulge, and the propagation of positrons in the ISM.
14. EXPLORING THE UNUSUALLY HIGH BLACK-HOLE-TO-BULGE MASS RATIOS IN NGC 4342 AND NGC 4291: THE ASYNCHRONOUS GROWTH OF BULGES AND BLACK HOLES
International Nuclear Information System (INIS)
Bogdán, Ákos; Forman, William R.; Kraft, Ralph P.; Li, Zhiyuan; Vikhlinin, Alexey; Nulsen, Paul E. J.; Jones, Christine; Zhuravleva, Irina; Churazov, Eugene; Mihos, J. Christopher; Harding, Paul; Guo, Qi; Schindler, Sabine
2012-01-01
We study two nearby early-type galaxies, NGC 4342 and NGC 4291, that host unusually massive black holes relative to their low stellar mass. The observed black-hole-to-bulge mass ratios of NGC 4342 and NGC 4291 are 6.9 +3.8 –2.3 % and 1.9% ± 0.6%, respectively, which significantly exceed the typical observed ratio of ∼0.2%. As a consequence of the exceedingly large black-hole-to-bulge mass ratios, NGC 4342 and NGC 4291 are ≈5.1σ and ≈3.4σ outliers from the M . -M bulge scaling relation, respectively. In this paper, we explore the origin of the unusually high black-hole-to-bulge mass ratio. Based on Chandra X-ray observations of the hot gas content of NGC 4342 and NGC 4291, we compute gravitating mass profiles, and conclude that both galaxies reside in massive dark matter halos, which extend well beyond the stellar light. The presence of dark matter halos around NGC 4342 and NGC 4291 and a deep optical image of the environment of NGC 4342 indicate that tidal stripping, in which ∼> 90% of the stellar mass was lost, cannot explain the observed high black-hole-to-bulge mass ratios. Therefore, we conclude that these galaxies formed with low stellar masses, implying that the bulge and black hole did not grow in tandem. We also find that the black hole mass correlates well with the properties of the dark matter halo, suggesting that dark matter halos may play a major role in regulating the growth of the supermassive black holes.
15. Material characterization of Inconel 718 from free bulging test at high temperature
Energy Technology Data Exchange (ETDEWEB)
Yoo, Joon Tae; Yoon, Jong Hoon; Lee, Ho Sung [Korea Aerospace Research Institute, Daejeon (Korea, Republic of); Youn, Sung Kie [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)
2012-07-15
Macroscopic superplastic behavior of metallic or non metallic materials is usually represented by the strain rate sensitivity, and it can be determined by tensile tests in uniaxial stress state and bulging tests in multi axial stress state, which is the actual hot forming process. And macroscopic behavior of Non SPF grade materials could be described in a similar way as that of superplastic materials, including strain hardening, cavity and so on. In this study, the material characterization of non SPF grade Inconel 718 has been carried out to determine the material parameters for flow stress throughout free bulging test under constant temperature. The measured height of bulged plate during the test was used for estimation of strain rate sensitivity, strain hardening index and cavity volume fraction with the help of numerical analysis. The bulged height obtained from the simulation showed good agreement with the experimental findings. The effects of strain hardening and cavity volume fraction factor for flow stress were also compared.
16. Galactic bulge preferred over dark matter for the Galactic centre gamma-ray excess
Science.gov (United States)
Macias, Oscar; Gordon, Chris; Crocker, Roland M.; Coleman, Brendan; Paterson, Dylan; Horiuchi, Shunsaku; Pohl, Martin
2018-05-01
An anomalous gamma-ray excess emission has been found in the Fermi Large Area Telescope data1 covering the centre of the Galaxy2,3. Several theories have been proposed for this Galactic centre excess'. They include self-annihilation of dark-matter particles4, an unresolved population of millisecond pulsars5, an unresolved population of young pulsars6, or a series of burst events7. Here, we report on an analysis that exploits hydrodynamical modelling to register the position of interstellar gas associated with diffuse Galactic gamma-ray emission. We find evidence that the Galactic centre excess gamma rays are statistically better described by the stellar over-density in the Galactic bulge and the nuclear stellar bulge, rather than a spherical excess. Given its non-spherical nature, we argue that the Galactic centre excess is not a dark-matter phenomenon but rather associated with the stellar population of the Galactic bulge and the nuclear bulge.
17. Bulge growth and quenching since z = 2.5 in CANDELS/3D-HST
International Nuclear Information System (INIS)
Lang, Philipp; Wuyts, Stijn; Schreiber, Natascha M. Förster; Genzel, Reinhard; Lutz, Dieter; Rosario, David J.; Somerville, Rachel S.; Bell, Eric F.; Brammer, Gabe; Dekel, Avishai; Faber, Sandra M.; Momcheva, Ivelina; Ferguson, Henry C.; Grogin, Norman A.; Koekemoer, Anton M.; Kocevski, Dale D.; McGrath, Elizabeth J.; Nelson, Erica J.; Primack, Joel R.; Skelton, Rosalind E.
2014-01-01
Exploiting the deep high-resolution imaging of all five CANDELS fields, and accurate redshift information provided by 3D-HST, we investigate the relation between structure and stellar populations for a mass-selected sample of 6764 galaxies above 10 10 M ☉ , spanning the redshift range 0.5 < z < 2.5. For the first time, we fit two-dimensional models comprising a single Sérsic fit and two-component (i.e., bulge + disk) decompositions not only to the H-band light distributions, but also to the stellar mass maps reconstructed from resolved stellar population modeling. We confirm that the increased bulge prominence among quiescent galaxies, as reported previously based on rest-optical observations, remains in place when considering the distributions of stellar mass. Moreover, we observe an increase of the typical Sérsic index and bulge-to-total ratio (with median B/T reaching 40%-50%) among star-forming galaxies above 10 11 M ☉ . Given that quenching for these most massive systems is likely to be imminent, our findings suggest that significant bulge growth precedes a departure from the star-forming main sequence. We demonstrate that the bulge mass (and ideally knowledge of the bulge and total mass) is a more reliable predictor of the star-forming versus quiescent state of a galaxy than the total stellar mass. The same trends are predicted by the state-of-the-art, semi-analytic model by Somerville et al. In this model, bulges and black holes grow hand in hand through merging and/or disk instabilities, and feedback from active galactic nuclei shuts off star formation. Further observations will be required to pin down star formation quenching mechanisms, but our results imply that they must be internal to the galaxies and closely associated with bulge growth.
18. Bulge Growth and Quenching since z = 2.5 in CANDELS/3D-HST
Science.gov (United States)
Lang, Philipp; Wuyts, Stijn; Somerville, Rachel S.; Förster Schreiber, Natascha M.; Genzel, Reinhard; Bell, Eric F.; Brammer, Gabe; Dekel, Avishai; Faber, Sandra M.; Ferguson, Henry C.; Grogin, Norman A.; Kocevski, Dale D.; Koekemoer, Anton M.; Lutz, Dieter; McGrath, Elizabeth J.; Momcheva, Ivelina; Nelson, Erica J.; Primack, Joel R.; Rosario, David J.; Skelton, Rosalind E.; Tacconi, Linda J.; van Dokkum, Pieter G.; Whitaker, Katherine E.
2014-06-01
Exploiting the deep high-resolution imaging of all five CANDELS fields, and accurate redshift information provided by 3D-HST, we investigate the relation between structure and stellar populations for a mass-selected sample of 6764 galaxies above 1010 M ⊙, spanning the redshift range 0.5 < z < 2.5. For the first time, we fit two-dimensional models comprising a single Sérsic fit and two-component (i.e., bulge + disk) decompositions not only to the H-band light distributions, but also to the stellar mass maps reconstructed from resolved stellar population modeling. We confirm that the increased bulge prominence among quiescent galaxies, as reported previously based on rest-optical observations, remains in place when considering the distributions of stellar mass. Moreover, we observe an increase of the typical Sérsic index and bulge-to-total ratio (with median B/T reaching 40%-50%) among star-forming galaxies above 1011 M ⊙. Given that quenching for these most massive systems is likely to be imminent, our findings suggest that significant bulge growth precedes a departure from the star-forming main sequence. We demonstrate that the bulge mass (and ideally knowledge of the bulge and total mass) is a more reliable predictor of the star-forming versus quiescent state of a galaxy than the total stellar mass. The same trends are predicted by the state-of-the-art, semi-analytic model by Somerville et al. In this model, bulges and black holes grow hand in hand through merging and/or disk instabilities, and feedback from active galactic nuclei shuts off star formation. Further observations will be required to pin down star formation quenching mechanisms, but our results imply that they must be internal to the galaxies and closely associated with bulge growth.
19. The Chemical Composition of the Galactic Bulge and Implications for its Evolution
Science.gov (United States)
McWilliam, Andrew
2016-08-01
At a bulge latitude of b = -4°, the average [Fe/H] and [Mg/H] values are +0.06 and +0.17 dex, roughly 0.2 and 0.7 dex higher than the local thin and thick disk values, respectively, suggesting a large bulge effective yield, perhaps due to efficient retention of supernova ejecta. The bulge vertical [Fe/H] gradient, at ∼0.5 dex/kpc, appears to be due to a changing mixture of sub-populations (near +0.3 dex and -0.3 dex and one possibly near -0.7 dex) with latitude. At solar [Fe/H], the bulge [Al/Fe] and [α/Fe] ratios are ∼ +0.15 dex. Below [Fe/H] ∼ -0.5 dex, the bulge and local thick disk compositions are very similar; but the measured [Mg/Fe], [/Fe], [La/Eu] and dramatic [Cu/Fe] ratios suggest higher SFR in the bulge. However, these composition differences with the thick disk could be due to measurement errors and non-LTE effects. Unusual zig-zag trends of [Cu/Fe] and [Na/Fe] suggest metallicity-dependent nucleosynthesis by core-collapse supernovae in the Type Ia supernova time-delay scenario. The bulge sub-population compositions resemble the local thin and thick disks, but at higher [Fe/H], suggesting a radial [Fe/H] gradient of -0.04 to -0.05 dex/kpc for both the thin and thick disks. If the bulge formed through accretion of inner thin and thick disk stars, it appears that these stars retained vertical scale heights characteristic of their kinematic origin, resulting in the vertical [Fe/H] gradient and [α/Fe] trends seen today.
20. Nuclear planetology: understanding habitable planets as Galactic bulge stellar remnants (black dwarfs) in a Hertzsprung-Russell (HR) diagram
Science.gov (United States)
Roller, Goetz
2016-04-01
model constraining the evolution of a rocky planet like Earth or Mercury from a stellar precursor of the oldest population to a Fe-C BLD, shifting through different spectral classes in a HR diagram after massive decompression and tremendous energy losses. In the light of WD/BLD cosmochronology [1], solar system bodies like Earth, Mercury and Moon are regarded as captured interlopers from the Galactic bulge, Earth and Moon possibly representing remnants of an old binary system. Such a preliminary scenario is supported by similar ages obtained from WD's for the Galactic halo [1] and, independently, by means of 187Re-232Th-238U nuclear geochronometry [2, 4, 5], together with recent observations extremely metal-poor stars from the cosmic dawn in the bulge of the Milky Way [6]. This might be further elucidated in the near future by Th/U cosmochronometry based upon a nuclear production ratio Th/U = 0.96 [5] and additionally by means of a newly developed nucleogeochronometric age dating method for stellar spectroscopy, which will be presented in a forthcoming paper. The model shall stimulate geochemical data interpretation from a different perspective to constrain the (thermal) evolution of a habitable planet as to its geo-, bio-, hydro- and atmosphere. [1] Fontaine et al. (2001), Public. Astron. Soc. of the Pacific 113, 409-435. [2] Roller (2015), Abstract T34B-0407, AGU Spring Meeting 2015. [3] Arevalo et al. (2010), Chem. Geol. 271, 70-85. [4] Roller (2015), Geophys. Res. Abstr. 17, EGU2015-2399. [5] Roller (2015), 78th Annu. Meeting Met. Soc., Abstract #5041. [6] Howes et al. (2015), Nature 527, 484-487.
1. THE NUMBER OF TIDAL DWARF SATELLITE GALAXIES IN DEPENDENCE OF BULGE INDEX
International Nuclear Information System (INIS)
López-Corredoira, Martín; Kroupa, Pavel
2016-01-01
We show that a significant correlation (up to 5σ) emerges between the bulge index, defined to be larger for a larger bulge/disk ratio, in spiral galaxies with similar luminosities in the Galaxy Zoo 2 of the Sloan Digital Sky Survey and the number of tidal-dwarf galaxies in the catalog by Kaviraj et al. In the standard cold or warm dark matter cosmological models, the number of satellite galaxies correlates with the circular velocity of the dark matter host halo. In generalized gravity models without cold or warm dark matter, such a correlation does not exist, because host galaxies cannot capture infalling dwarf galaxies due to the absence of dark-matter-induced dynamical friction. However, in such models, a correlation is expected to exist between the bulge mass and the number of satellite galaxies because bulges and tidal-dwarf satellite galaxies form in encounters between host galaxies. This is not predicted by dark matter models in which bulge mass and the number of satellites are a priori uncorrelated because higher bulge/disk ratios do not imply higher dark/luminous ratios. Hence, our correlation reproduces the prediction of scenarios without dark matter, whereas an explanation is not found readily from the a priori predictions of the standard scenario with dark matter. Further research is needed to explore whether some application of the standard theory may explain this correlation
2. Role of the CCA bulge of prohead RNA of bacteriophage ø29 in DNA packaging.
Science.gov (United States)
Zhao, Wei; Morais, Marc C; Anderson, Dwight L; Jardine, Paul J; Grimes, Shelley
2008-11-14
The oligomeric ring of prohead RNA (pRNA) is an essential component of the ATP-driven DNA packaging motor of bacteriophage ø29. The A-helix of pRNA binds the DNA translocating ATPase gp16 (gene product 16) and the CCA bulge in this helix is essential for DNA packaging in vitro. Mutation of the bulge by base substitution or deletion showed that the size of the bulge, rather than its sequence, is primary in DNA packaging activity. Proheads reconstituted with CCA bulge mutant pRNAs bound the packaging ATPase gp16 and the packaging substrate DNA-gp3, although DNA translocation was not detected with several mutants. Prohead/bulge-mutant pRNA complexes with low packaging activity had a higher rate of ATP hydrolysis per base pair of DNA packaged than proheads with wild-type pRNA. Cryoelectron microscopy three-dimensional reconstruction of proheads reconstituted with a CCA deletion pRNA showed that the protruding pRNA spokes of the motor occupy a different position relative to the head when compared to particles with wild-type pRNA. Therefore, the CCA bulge seems to dictate the orientation of the pRNA spokes. The conformational changes observed for this mutant pRNA may affect gp16 conformation and/or subsequent ATPase-DNA interaction and, consequently, explain the decreased packaging activity observed for CCA mutants.
3. Improved Model for Predicting the Free Energy Contribution of Dinucleotide Bulges to RNA Duplex Stability.
Science.gov (United States)
Tomcho, Jeremy C; Tillman, Magdalena R; Znosko, Brent M
2015-09-01
Predicting the secondary structure of RNA is an intermediate in predicting RNA three-dimensional structure. Commonly, determining RNA secondary structure from sequence uses free energy minimization and nearest neighbor parameters. Current algorithms utilize a sequence-independent model to predict free energy contributions of dinucleotide bulges. To determine if a sequence-dependent model would be more accurate, short RNA duplexes containing dinucleotide bulges with different sequences and nearest neighbor combinations were optically melted to derive thermodynamic parameters. These data suggested energy contributions of dinucleotide bulges were sequence-dependent, and a sequence-dependent model was derived. This model assigns free energy penalties based on the identity of nucleotides in the bulge (3.06 kcal/mol for two purines, 2.93 kcal/mol for two pyrimidines, 2.71 kcal/mol for 5'-purine-pyrimidine-3', and 2.41 kcal/mol for 5'-pyrimidine-purine-3'). The predictive model also includes a 0.45 kcal/mol penalty for an A-U pair adjacent to the bulge and a -0.28 kcal/mol bonus for a G-U pair adjacent to the bulge. The new sequence-dependent model results in predicted values within, on average, 0.17 kcal/mol of experimental values, a significant improvement over the sequence-independent model. This model and new experimental values can be incorporated into algorithms that predict RNA stability and secondary structure from sequence.
4. Metallicity Spreads in M31 Globular Clusters
Science.gov (United States)
Bridges, Terry
2003-07-01
Our recent deep HST photometry of the M31 halo globular cluster {GC} Mayall II, also called G1, has revealed a red-giant branch with a clear spread that we attribute to an intrinsic metallicity dispersion of at least 0.4 dex in [Fe/H]. The only other GC exhibiting such a metallicity dispersion is Omega Centauri, the brightest and most massive Galactic GC, whose range in [Fe/H] is about 0.5 dex. These observations are obviously linked to the fact that both G1 and Omega Cen are bright and massive GC, with potential wells deep enough to keep part of their gas, which might have been recycled, producing a metallicity scatter among cluster stars. These observations dramatically challenge the notion of chemical homogeneity as a defining characteristic of GCs. It is critically important to find out how common this phenomenon is and how it can constrain scenarios/models of GC formation. The obvious targets are other bright and massive GCs, which exist in M31 but not in our Galaxy where Omega Cen is an isolated giant. We propose to acquire, with ACS/HRC, deep imaging of 3 of the brightest M31 GCs for which we have observed velocity dispersion values similar to those observed in G1 and Omega Cen. A sample of GCs with chemical abundance dispersions will provide essential information about their formation mechanism. This would represent a major step for the studies of the origin and evolution of stellar populations.
5. CENTRAL ROTATIONS OF MILKY WAY GLOBULAR CLUSTERS
International Nuclear Information System (INIS)
Fabricius, Maximilian H.; Rukdee, Surangkhana; Saglia, Roberto P.; Bender, Ralf; Hopp, Ulrich; Thomas, Jens; Williams, Michael J.; Noyola, Eva; Opitsch, Michael
2014-01-01
Most Milky Way globular clusters (GCs) exhibit measurable flattening, even if on a very low level. Both cluster rotation and tidal fields are thought to cause this flattening. Nevertheless, rotation has only been confirmed in a handful of GCs, based mostly on individual radial velocities at large radii. We are conducting a survey of the central kinematics of Galactic GCs using the new Integral Field Unit instrument VIRUS-W. We detect rotation in all 11 GCs that we have observed so far, rendering it likely that a large majority of the Milky Way GCs rotate. We use published catalogs of GCs to derive central ellipticities and position angles. We show that in all cases where the central ellipticity permits an accurate measurement of the position angle, those angles are in excellent agreement with the kinematic position angles that we derive from the VIRUS-W velocity fields. We find an unexpected tight correlation between central rotation and outer ellipticity, indicating that rotation drives flattening for the objects in our sample. We also find a tight correlation between central rotation and published values for the central velocity dispersion, most likely due to rotation impacting the old dispersion measurements
6. Central Rotations of Milky Way Globular Clusters
Science.gov (United States)
Fabricius, Maximilian H.; Noyola, Eva; Rukdee, Surangkhana; Saglia, Roberto P.; Bender, Ralf; Hopp, Ulrich; Thomas, Jens; Opitsch, Michael; Williams, Michael J.
2014-06-01
Most Milky Way globular clusters (GCs) exhibit measurable flattening, even if on a very low level. Both cluster rotation and tidal fields are thought to cause this flattening. Nevertheless, rotation has only been confirmed in a handful of GCs, based mostly on individual radial velocities at large radii. We are conducting a survey of the central kinematics of Galactic GCs using the new Integral Field Unit instrument VIRUS-W. We detect rotation in all 11 GCs that we have observed so far, rendering it likely that a large majority of the Milky Way GCs rotate. We use published catalogs of GCs to derive central ellipticities and position angles. We show that in all cases where the central ellipticity permits an accurate measurement of the position angle, those angles are in excellent agreement with the kinematic position angles that we derive from the VIRUS-W velocity fields. We find an unexpected tight correlation between central rotation and outer ellipticity, indicating that rotation drives flattening for the objects in our sample. We also find a tight correlation between central rotation and published values for the central velocity dispersion, most likely due to rotation impacting the old dispersion measurements. This Letter includes data taken at The McDonald Observatory of The University of Texas at Austin.
7. Polycyclic Aromatic Hydrocarbon Emission Toward the Galactic Bulge
Science.gov (United States)
Shannon, M. J.; Peeters, E.; Cami, J.; Blommaert, J. A. D. L.
2018-03-01
We examine polycyclic aromatic hydrocarbon (PAH), dust, and atomic/molecular emission toward the Galactic bulge using Spitzer Space Telescope observations of four fields: C32, C35, OGLE, and NGC 6522. These fields are approximately centered on (l, b) = (0.°0, 1.°0), (0.°0, ‑1.°0), (0.°4, ‑2.°4), and (1.°0, ‑3.°8), respectively. Far-infrared photometric observations complement the Spitzer/IRS spectroscopic data and are used to construct spectral energy distributions. We find that the dust and PAH emission are exceptionally similar between C32 and C35 overall, in part explained due to their locations—they reside on or near boundaries of a 7 Myr old Galactic outflow event and are partly shock-heated. Within the C32 and C35 fields, we identify a region of elevated Hα emission that is coincident with elevated fine-structure and [O IV] line emission and weak PAH feature strengths. We are likely tracing a transition zone of the outflow into the nascent environment. PAH abundances in these fields are slightly depressed relative to typical ISM values. In the OGLE and NGC 6522 fields, we observe weak features on a continuum dominated by zodiacal dust. SED fitting indicates that thermal dust grains in C32 and C35 have temperatures comparable to those of diffuse, high-latitude cirrus clouds. Little variability is detected in the PAH properties between C32 and C35, indicating that a stable population of PAHs dominates the overall spectral appearance. In fact, their PAH features are exceptionally similar to that of the M82 superwind, emphasizing that we are probing a local Galactic wind environment.
8. THE INTRIGUING STELLAR POPULATIONS IN THE GLOBULAR CLUSTERS NGC 6388 AND NGC 6441
International Nuclear Information System (INIS)
Bellini, A.; Anderson, J.; Piotto, G.; Nardiello, D.; Milone, A. P.; King, I. R.; Renzini, A.; Bedin, L. R.; Cassisi, S.; Pietrinferni, A.; Sarajedini, A.
2013-01-01
NGC 6388 and NGC 6441 are two massive Galactic bulge globular clusters that share many properties, including the presence of an extended horizontal branch (HB), quite unexpected because of their high metal content. In this paper we use Hubble Space Telescope's WFPC2, ACS, and WFC3 images and present a broad multicolor study of their stellar content, covering all main evolutionary branches. The color-magnitude diagrams (CMDs) give compelling evidence that both clusters host at least two stellar populations, which manifest themselves in different ways. NGC 6388 has a broadened main sequence (MS), a split sub-giant branch (SGB), and a split red giant branch (RGB) that becomes evident above the HB in our data set; its red HB is also split into two branches. NGC 6441 has a split MS, but only an indication of two SGB populations, while the RGB clearly splits in two from the SGB level upward, and no red HB structure. The multicolor analysis of the CMDs confirms that the He difference between the two main stellar populations in the two clusters must be similar. This is observationally supported by the HB morphology, but also confirmed by the color distribution of the stars in the MS optical band CMDs. However, a MS split becomes evident in NGC 6441 using UV colors, but not in NGC 6388, indicating that the chemical patterns of the different populations are different in the two clusters, with C, N, and O abundance differences likely playing a major role. We also analyze the radial distribution of the two populations.
9. MERGERS AND BULGE FORMATION IN ΛCDM: WHICH MERGERS MATTER?
International Nuclear Information System (INIS)
Hopkins, Philip F.; Bundy, Kevin; Wetzel, Andrew; Croton, Darren; Hernquist, Lars; Keres, Dusan; Younger, Joshua D.; Khochfar, Sadegh; Stewart, Kyle
2010-01-01
We use a suite of semi-empirical models to predict the galaxy-galaxy merger rate and relative contributions to bulge growth as a function of mass (both halo and stellar), redshift, and mass ratio. The models use empirical constraints on the halo occupation distribution, evolved forward in time, to robustly identify where and when galaxy mergers occur. Together with the results of high-resolution merger simulations, this allows us to quantify the relative contributions of mergers with different properties (e.g., mass ratios, gas fractions, redshifts) to the bulge population. We compare with observational constraints, and find good agreement. We also provide useful fitting functions and make public a code to reproduce the predicted merger rates and contributions to bulge mass growth. We identify several robust conclusions. (1) Major mergers dominate the formation and assembly of ∼L * bulges and the total spheroid mass density, but minor mergers contribute a non-negligible ∼30%. (2) This is mass dependent: bulge formation and assembly is dominated by more minor mergers in lower-mass systems. In higher-mass systems, most bulges originally form in major mergers near ∼L * , but assemble in increasingly minor mergers. (3) The minor/major contribution is also morphology dependent: higher B/T systems preferentially form in more major mergers, with B/T roughly tracing the mass ratio of the largest recent merger; lower B/T systems preferentially form in situ from minor mergers. (4) Low-mass galaxies, being gas-rich, require more mergers to reach the same B/T as high-mass systems. Gas-richness dramatically suppresses the absolute efficiency of bulge formation, but does not strongly influence the relative contribution of major versus minor mergers. (5) Absolute merger rates at fixed mass ratio increase with galaxy mass. (6) Predicted merger rates agree well with those observed in pair and morphology-selected samples, but there is evidence that some morphology
10. THE AGE OF THE YOUNG BULGE-LIKE POPULATION IN THE STELLAR SYSTEM TERZAN 5: LINKING THE GALACTIC BULGE TO THE HIGH- Z UNIVERSE
Energy Technology Data Exchange (ETDEWEB)
Ferraro, F. R.; Dalessandro, E.; Lanzoni, B.; Mucciarelli, A. [Dipartimento di Fisica e Astronomia, Università degli Studi di Bologna, Viale Berti Pichat 6/2, I–40127 Bologna (Italy); Massari, D. [INAF-Osservatorio Astronomico di Bologna, Via Ranzani, 1, I-40127 Bologna (Italy); Origlia, L. [Kapteyn Astronomical Institute, University of Gröningen, Kapteyn Astron Institute, NL-9747 AD Gröningen (Netherlands); Rich, R. M. [Department of Physics and Astronomy, 430 Portola Plaza, Box 951547, Los Angeles, CA 90095-1547 (United States)
2016-09-10
The Galactic bulge is dominated by an old, metal-rich stellar population. The possible presence and the amount of a young (a few gigayears old) minor component is one of the major issues debated in the literature. Recently, the bulge stellar system Terzan 5 was found to harbor three sub-populations with iron content varying by more than one order of magnitude (from 0.2 up to two times the solar value), with chemical abundance patterns strikingly similar to those observed in bulge field stars. Here we report on the detection of two distinct main-sequence turnoff points in Terzan 5, providing the age of the two main stellar populations: 12 Gyr for the (dominant) sub-solar component and 4.5 Gyr for the component at super-solar metallicity. This discovery classifies Terzan 5 as a site in the Galactic bulge where multiple bursts of star formation occurred, thus suggesting a quite massive progenitor possibly resembling the giant clumps observed in star-forming galaxies at high redshifts. This connection opens a new route of investigation into the formation process and evolution of spheroids and their stellar content.
11. Photometric studies of globular clusters in the Andromeda nebula
International Nuclear Information System (INIS)
Sharov, A.S.; Lyutyj, V.M.
1983-01-01
The comparison of the frequency distribution of Bergh Q and Racine R metallicity parameters for globular clusters in the Galaxy and M31 is given. Mean values of the parameters are: in the Galaxy anti Q=-0.31 and anti R=0.40, in M31 anti Q=-0.32 and anti R=0.42. Hence the mean metallicity of globular clusters in two galaxies is identical. The differences in the observed frequency distribution of the parameters, in particular in the limits of general metallicity, are related to the random errors of photometrical measurements of globular clusters, considerably greater in the case of M31. Thereby the preference should be given to Hanes conclusion that globular clusters form a uniform population at least in two close systems. It should not be excepted that in other galaxies mean colour characteristics and hence metallicity of clusters may be of other type. Thus globular clusters related to the M31-NGC 205 satellite have somewhat minor B-V colour factors
12. Young globular clusters in NGC 1316
Science.gov (United States)
Sesto, Leandro A.; Faifer, Favio R.; Smith Castelli, Analía V.; Forte, Juan C.; Escudero, Carlos G.
2018-05-01
We present multi-object spectroscopy of the inner zone of the globular cluster (GC) system associated with the intermediate-age merger remnant NGC 1316. Using the multi-object mode of the GMOS camera, we obtained spectra for 35 GCs. We find pieces of evidence that the innermost GCs of NGC 1316 rotate almost perpendicular to the stellar component of the galaxy. In a second stage, we determined ages, metallicities and α-element abundances for each GC present in the sample, through the measurement of different Lick/IDS indices and their comparison with simple stellar population models. We confirmed the existence of multiple GC populations associated with NGC 1316, where the presence of a dominant subpopulation of very young GCs, with an average age of 2.1 Gyr, metallicities between -0.5 < [Z/H] < 0.5 dex and α-element abundances in the range -0.2 < [α/Fe] < 0.3 dex, stands out. Several objects in our sample present subsolar values of [α/Fe] and a large spread of [Z/H] and ages. Some of these objects could actually be stripped nuclei, possibly accreted during minor merger events. Finally, the results have been analyzed with the aim of describing the different episodes of star formation and thus provide a more complete picture about the evolutionary history of the galaxy. We conclude that these pieces of evidence could indicate that this galaxy has cannibalized one or more gas-rich galaxies, where the last fusion event occurred about 2 Gyr ago.
13. Computational and theoretical studies of globular proteins
Science.gov (United States)
Pagan, Daniel L.
Protein crystallization is often achieved in experiment through a trial and error approach. To date, there exists a dearth of theoretical understanding of the initial conditions necessary to promote crystallization. While a better understanding of crystallization will help to create good crystals suitable for structure analysis, it will also allow us to prevent the onset of certain diseases. The core of this thesis is to model and, ultimately, understand the phase behavior of protein particles in solution. Toward this goal, we calculate the fluid-fluid coexistence curve in the vicinity of the metastable critical point of the modified Lennard-Jones potential, where it has been shown that nucleation is increased by many orders of magnitude. We use finite-size scaling techniques and grand canonical Monte Carlo simulation methods. This has allowed us to pinpoint the critical point and subcritical region with high accuracy in spite of the critical fluctuations that hinder sampling using other Monte Carlo techniques. We also attempt to model the phase behavior of the gamma-crystallins, mutations of which have been linked to genetic cataracts. The complete phase behavior of the square well potential at the ranges of attraction lambda = 1.15 and lambda = 1.25 is calculated and compared with that of the gammaII-crystallin. The role of solvent is also important in the crystallization process and affects the phase behavior of proteins in solution. We study a model that accounts for the contribution of the solvent free-energy to the free-energy of globular proteins. This model allows us to model phase behavior that includes solvent.
14. Nuclear fuel assembly grid sleeve/guide thimble bulge orientation gage and inspection method
International Nuclear Information System (INIS)
Widener, W.H.
1988-01-01
This patent describes a method of inspecting a fuel assembly to determine the orientation of externally-projecting mated bulges connecting a grid sleeve to a guide thimble of the assembly, the method comprising the steps of: (a) inserting a radially-expandable tubular member within the guide thimble, the tubular member having externally-projecting embossments thereon spaced circumferentially from one another about the tubular member, the embossments being the same in number as the bulges of the guide thimble and configured to fit therewithin; (b) axially moving an elongated expansion member, which extends through and rotatably mounts the tubular member, relative to the tubular member from a first position in which the expansion member permits inward contraction of the tubular member and displacement of embossments thereon away from the interior of the guide thimble bulges for removing the embossments from registry therewith and a second position in which the expansion member produces radial expansion of the tubular member and displacement of the embossments thereon toward the interior of the guide thimble bulges for placing the embossments in registry therewith; (c) rotating the tubular member relative to the expansion member so as to bring the embossments on the tubular member into alignment with the guide thimble bulges as the embossments on the tubular member are being displaced toward and into registry with the interior of the bulges; and (d) responsive to rotation of the tubular member away from a reference position, providing an indication of the orientation of the guide thimble bulges relative to a reference point upon displacement of the embossments into registry therewith
15. Supermassive Black Holes and Their Host Galaxies. I. Bulge Luminosities from Dedicated Near-infrared Data
Science.gov (United States)
Läsker, Ronald; Ferrarese, Laura; van de Ven, Glenn
2014-01-01
In an effort to secure, refine, and supplement the relation between central supermassive black hole masses, M •, and the bulge luminosities of their host galaxies, L bul, we obtained deep, high spatial resolution K-band images of 35 nearby galaxies with securely measured M •, using the wide-field WIRCam imager at the Canada-France-Hawaii-Telescope. A dedicated data reduction and sky subtraction strategy was adopted to estimate the brightness and structure of the sky, a critical step when tracing the light distribution of extended objects in the near-infrared. From the final image product, bulge and total magnitudes were extracted via two-dimensional profile fitting. As a first order approximation, all galaxies were modeled using a simple Sérsic-bulge+exponential-disk decomposition. However, we found that such models did not adequately describe the structure that we observed in a large fraction of our sample galaxies which often include cores, bars, nuclei, inner disks, spiral arms, rings, and envelopes. In such cases, we adopted profile modifications and/or more complex models with additional components. The derived bulge magnitudes are very sensitive to the details and number of components used in the models, although total magnitudes remain almost unaffected. Usually, but not always, the luminosities and sizes of the bulges are overestimated when a simple bulge+disk decomposition is adopted in lieu of a more complex model. Furthermore, we found that some spheroids are not well fit when the ellipticity of the Sérsic model is held fixed. This paper presents the details of the image processing and analysis, while we discuss how model-induced biases and systematics in bulge magnitudes impact the M •-L bul relation in a companion paper.
16. Pulsar-irradiated stars in dense globular clusters
Science.gov (United States)
Tavani, Marco
1992-01-01
We discuss the properties of stars irradiated by millisecond pulsars in 'hard' binaries of dense globular clusters. Irradiation by a relativistic pulsar wind as in the case of the eclipsing millisecond pulsar PSR 1957+20 alter both the magnitude and color of the companion star. Some of the blue stragglers (BSs) recently discovered in dense globular clusters can be irradiated stars in binaries containing powerful millisecond pulsars. The discovery of pulsar-driven orbital modulations of BS brightness and color with periods of a few hours together with evidence for radio and/or gamma-ray emission from BS binaries would valuably contribute to the understanding of the evolution of collapsed stars in globular clusters. Pulsar-driven optical modulation of cluster stars might be the only observable effect of a new class of binary pulsars, i.e., hidden millisecond pulsars enshrouded in the evaporated material lifted off from the irradiated companion star.
17. The influence of changes in cervical lordosis on bulging disk and spinal stenosis: functional MR imaging
International Nuclear Information System (INIS)
Lee, Young Joon; Eun, Choong Ki
2001-01-01
To assess the effect of lordotic curve change of the cervical spine on disk bulging and spinal stenosis by means of functional cervical MR imaging at the flexion and extension position. Using a 1.5T imager, kinematic MR examinations of 25 patients with degenerative spondylosis (average age, 41 years) were performed at the neutral, flexed and extended position of the cervical spine. Sagittal T2-weighted turbo spin-echo images were obtained during each of the three phases. Lordotic angle, bulging thickness of the disk, AP diameter of the spinal canal, and distance between the disk and spinal cord were measured on the workstation at each disk level. After qualitative independent observation of disk bulging, one of four grades(0, normal; 1, mild; 2, moderate; 3, marked) was assigned at each phase, and after further comparative observation, one of five scores (-2, prominent decrease; -1, mild decrease; 0, no change; 1, notable increase; 2 prominent increase) was also assigned. In addition, bulging thickness of the disk was measured and compared at the neutral, flexed, and extended positions. Average angles of the cervical spine were 160.5±5.9 deg (neutral position, lordotic angle); 185.4±8.5 deg (flexion, kyphotic angle); and 143.7±6.7 deg (extension, lordotic angle). Average grades of disk bulging were 0.55 at the neutral position. 0.16 at flexion, and 0.7 at extension. Comparative observation showed that average scores of disk bulging were -0.39 at flexion and 0.31 at extension. The bulging thickness of the disk decreased by 24.2% at flexion and increased by 30.3% at extension, while the diameter of the spinal canal increased by 4.5% at flexion and decreased by 3.6% at extension. The distance from the posterior margin of the disk to the anterior margin of the spinal cord decreased at both flexion(6.6%) and extension(19.1%). Functional MRI showed that compared with the neutral position, disk bulging and spinal stenosis are less prominent at flexion and accentuated
18. The influence of changes in cervical lordosis on bulging disk and spinal stenosis: functional MR imaging
Energy Technology Data Exchange (ETDEWEB)
Lee, Young Joon; Eun, Choong Ki [Pusan Paik Hospital, Inje Univ. College of Medicine, Pusan (Korea, Republic of)
2001-05-01
To assess the effect of lordotic curve change of the cervical spine on disk bulging and spinal stenosis by means of functional cervical MR imaging at the flexion and extension position. Using a 1.5T imager, kinematic MR examinations of 25 patients with degenerative spondylosis (average age, 41 years) were performed at the neutral, flexed and extended position of the cervical spine. Sagittal T2-weighted turbo spin-echo images were obtained during each of the three phases. Lordotic angle, bulging thickness of the disk, AP diameter of the spinal canal, and distance between the disk and spinal cord were measured on the workstation at each disk level. After qualitative independent observation of disk bulging, one of four grades(0, normal; 1, mild; 2, moderate; 3, marked) was assigned at each phase, and after further comparative observation, one of five scores (-2, prominent decrease; -1, mild decrease; 0, no change; 1, notable increase; 2 prominent increase) was also assigned. In addition, bulging thickness of the disk was measured and compared at the neutral, flexed, and extended positions. Average angles of the cervical spine were 160.5{+-}5.9 deg (neutral position, lordotic angle); 185.4{+-}8.5 deg (flexion, kyphotic angle); and 143.7{+-}6.7 deg (extension, lordotic angle). Average grades of disk bulging were 0.55 at the neutral position. 0.16 at flexion, and 0.7 at extension. Comparative observation showed that average scores of disk bulging were -0.39 at flexion and 0.31 at extension. The bulging thickness of the disk decreased by 24.2% at flexion and increased by 30.3% at extension, while the diameter of the spinal canal increased by 4.5% at flexion and decreased by 3.6% at extension. The distance from the posterior margin of the disk to the anterior margin of the spinal cord decreased at both flexion(6.6%) and extension(19.1%). Functional MRI showed that compared with the neutral position, disk bulging and spinal stenosis are less prominent at flexion and
19. THE ACS FORNAX CLUSTER SURVEY. X. COLOR GRADIENTS OF GLOBULAR CLUSTER SYSTEMS IN EARLY-TYPE GALAXIES
International Nuclear Information System (INIS)
Liu Chengze; Peng, Eric W.; Jordan, Andres; Ferrarese, Laura; Blakeslee, John P.; Cote, Patrick; Mei, Simona
2011-01-01
20. Seeing deconvolution of globular clusters in M31
International Nuclear Information System (INIS)
Bendinelli, O.; Zavatti, F.; Parmeggiani, G.; Djorgovski, S.
1990-01-01
The morphology of six M31 globular clusters is examined using seeing-deconvolved CCD images. The deconvolution techniques developed by Bendinelli (1989) are reviewed and applied to the M31 globular clusters to demonstrate the methodology. It is found that the effective resolution limit of the method is about 0.1-0.3 arcsec for CCD images obtained in FWHM = 1 arcsec seeing, and sampling of 0.3 arcsec/pixel. Also, the robustness of the method is discussed. The implications of the technique for future studies using data from the Hubble Space Telescope are considered. 68 refs
1. THE SPLIT RED CLUMP OF THE GALACTIC BULGE FROM OGLE-III
International Nuclear Information System (INIS)
Nataf, D. M.; Gould, A.; Stanek, K. Z.; Udalski, A.; Fouque, P.
2010-01-01
The red clump (RC) is found to be split into two components along several sightlines toward the Galactic bulge. This split is detected with high significance toward the areas (-3.5 < l < 1, b < -5) and (l, b) = (0, + 5.2), i.e., along the bulge minor axis and at least 5 deg off the plane. The fainter (hereafter 'main') component is the one that more closely follows the distance-longitude relation of the bulge RC. The main component is ∼0.5 mag fainter than the secondary component and with an overall approximately equal population. For sightlines further from the plane, the difference in brightness increases, and more stars are found in the secondary component than in the main component. The two components have very nearly equal (V - I) color.
2. A "bulged" double helix in a RNA-protein contact site
DEFF Research Database (Denmark)
Peattie, D A; Douthwaite, S; Garrett, R A
1981-01-01
as a singly bulged nucleotide extending the Fox and Woese central helix by two base pairs in the E. coli sequence (to positions 16-23/60-68) as well as in each of 61 (prokaryotic and eukaryotic) aligned 5S RNA sequences. In each case, the single bulged nucleotide is at the relative position of adenosine-66...... in the RNA sequences. The presence of this putative bulged nucleotide appears to have been conserved in 5S RNA sequences throughout evolution, and its identity varies with major phylogenetic divisions. This residue is likely involved in specific 5S RNA-protein recognition or interaction in prokaryotic...... and eukaryotic ribosomes. The uridine-65 to adenosine-66 internucleotide bond is protected from RNase A digestion in the complex, and carbethoxylation of E. coli adenosine-66 prior to L18 binding affects formation of a stable RNA-protein complex. Thus, we identify a region of E. coli 5S RNA protected...
3. EXPLORING ANTICORRELATIONS AND LIGHT ELEMENT VARIATIONS IN NORTHERN GLOBULAR CLUSTERS OBSERVED BY THE APOGEE SURVEY
Energy Technology Data Exchange (ETDEWEB)
Mészáros, Szabolcs [ELTE Gothard Astrophysical Observatory, H-9704 Szombathely, Szent Imre Herceg st. 112 (Hungary); Martell, Sarah L. [Department of Astrophysics, School of Physics, University of New South Wales, Sydney, NSW 2052 (Australia); Shetrone, Matthew [University of Texas at Austin, McDonald Observatory, Fort Davis, TX 79734 (United States); Lucatello, Sara [INAF-Osservatorio Astronomico di Padova, vicolo dell Osservatorio 5, I-35122 Padova (Italy); Troup, Nicholas W.; Pérez, Ana E. García; Majewski, Steven R. [Department of Astronomy, University of Virginia, Charlottesville, VA 22904-4325 (United States); Bovy, Jo [Institute for Advanced Study, Einstein Drive, Princeton, NJ 08540 (United States); Cunha, Katia [University of Arizona, Tucson, AZ 85719 (United States); García-Hernández, Domingo A.; Prieto, Carlos Allende [Instituto de Astrofísica de Canarias (IAC), E-38200 La Laguna, Tenerife (Spain); Overbeek, Jamie C. [Department of Astronomy, Indiana University, Bloomington, IN 47405 (United States); Beers, Timothy C. [Department of Physics and JINA Center for the Evolution of the Elements, University of Notre Dame, Notre Dame, IN 46556 (United States); Frinchaboy, Peter M. [Texas Christian University, Fort Worth, TX 76129 (United States); Hearty, Fred R.; Schneider, Donald P. [Department of Astronomy and Astrophysics, The Pennsylvania State University, University Park, PA 16802 (United States); Holtzman, Jon [New Mexico State University, Las Cruces, NM 88003 (United States); Nidever, David L. [Department of Astronomy, University of Michigan, Ann Arbor, MI 48109 (United States); Schiavon, Ricardo P. [Astrophysics Research Institute, IC2, Liverpool Science Park, Liverpool John Moores University, 146 Brownlow Hill, Liverpool, L3 5RF (United Kingdom); and others
2015-05-15
We investigate the light-element behavior of red giant stars in northern globular clusters (GCs) observed by the SDSS-III Apache Point Observatory Galactic Evolution Experiment. We derive abundances of 9 elements (Fe, C, N, O, Mg, Al, Si, Ca, and Ti) for 428 red giant stars in 10 GCs. The intrinsic abundance range relative to measurement errors is examined, and the well-known C–N and Mg–Al anticorrelations are explored using an extreme-deconvolution code for the first time in a consistent way. We find that Mg and Al drive the population membership in most clusters, except in M107 and M71, the two most metal-rich clusters in our study, where the grouping is most sensitive to N. We also find a diversity in the abundance distributions, with some clusters exhibiting clear abundance bimodalities (for example M3 and M53) while others show extended distributions. The spread of Al abundances increases significantly as cluster average metallicity decreases as previously found by other works, which we take as evidence that low metallicity, intermediate mass AGB polluters were more common in the more metal-poor clusters. The statistically significant correlation of [Al/Fe] with [Si/Fe] in M15 suggests that {sup 28}Si leakage has occurred in this cluster. We also present C, N, and O abundances for stars cooler than 4500 K and examine the behavior of A(C+N+O) in each cluster as a function of temperature and [Al/Fe]. The scatter of A(C+N+O) is close to its estimated uncertainty in all clusters and independent of stellar temperature. A(C+N+O) exhibits small correlations and anticorrelations with [Al/Fe] in M3 and M13, but we cannot be certain about these relations given the size of our abundance uncertainties. Star-to-star variations of α-element (Si, Ca, Ti) abundances are comparable to our estimated errors in all clusters.
4. Possible systematic decreases in the age of globular clusters
Energy Technology Data Exchange (ETDEWEB)
Shi, X. [Univ. of Chicago, Chicago, IL (United States); Schramm, D. N. [Fermi National Accelerator Laboratory (FNAL), Batavia, IL (United States); Univ. of Chicago, Chicago, IL (United States); Dearborn, D. S.P. [Lawrence Livermore National Laboratory (LLNL), Livermore, CA (United States); Truran, J. W. [Univ. of Chicago, Chicago, IL (United States)
1994-03-01
The ages of globular clusters inferred from observations depends sensitively on assumptions like the initial helium abundance and the mass loss rate. A high helium abundance (e.g., Y\\approx0.28) or a mass loss rate of \\sim10^{-11}M_\\odot yr^{-1} near the main sequence turn-off region lowers the current age estimate from 14 Gyr to about 10--12 Gyr, significantly relaxing the constraints on the Hubble constant, allowing values as high as 60km/sec/Mpc for a universe with the critical density and 90km/sec/Mpc for a baryon-only universe. Possible mechanisms for the helium enhancement in globular clusters are discussed, as are arguments for an instability strip induced mass loss near the turn-off. Ages lower than 10 Gyr are not possible even with the operation of both of these mechanisms unless the initial helium abundance in globular clusters is >0.30, which would conflict with indirect measurements of helium abundances in globular clusters.
5. Trazando la materia oscura con cúmulos globulares
Science.gov (United States)
Forte, J. C.
Se describe la estrategia adoptada para mapear la distribución de materia oscura y bariónica en galaxias elípticas cuyos cúmulos globulares están siendo observados con los telescopios VLT y Gemini. Se ejemplifican los resultados con los datos obtenidos en el cúmulo de Fornax.
6. BVRI CCD photometry of the globular cluster NGC 2808
International Nuclear Information System (INIS)
Alcaino, G.; Liller, W.; Alvarado, F.; Wenderoth, E.
1990-01-01
As a part of a continuing program, CCD color-magnitude diagrams are presented for the bright globular cluster NGC 2808 in the four colors comprising BVRI. From a comparison of four different CMDs with theoretical isochrones, an age of 16 + or - 2 Gyr is obtained, assuming a value for Fe/H near -1.3. 28 refs
7. Supra-galactic colour patterns in globular cluster systems
Science.gov (United States)
Forte, Juan C.
2017-07-01
An analysis of globular cluster systems associated with galaxies included in the Virgo and Fornax Hubble Space Telescope-Advanced Camera Surveys reveals distinct (g - z) colour modulation patterns. These features appear on composite samples of globular clusters and, most evidently, in galaxies with absolute magnitudes Mg in the range from -20.2 to -19.2. These colour modulations are also detectable on some samples of globular clusters in the central galaxies NGC 1399 and NGC 4486 (and confirmed on data sets obtained with different instruments and photometric systems), as well as in other bright galaxies in these clusters. After discarding field contamination, photometric errors and statistical effects, we conclude that these supra-galactic colour patterns are real and reflect some previously unknown characteristic. These features suggest that the globular cluster formation process was not entirely stochastic but included a fraction of clusters that formed in a rather synchronized fashion over large spatial scales, and in a tentative time lapse of about 1.5 Gy at redshifts z between 2 and 4. We speculate that the putative mechanism leading to that synchronism may be associated with large scale feedback effects connected with violent star-forming events and/or with supermassive black holes.
8. Integrated photometry of globular star clusters in the Vilnius system
International Nuclear Information System (INIS)
Zdanavichyus, K.V.
1983-01-01
Integrated colour indices in the Vilnius photometric system and newly determined colour excesses Esub(B-V) for 39 globular clusters are presented. It is shown that the coincidence of integrated spectral types are not a sufficient criterion for the identity of intrinsic colour indices of globular clusters. Relation of integrated colour indices with the slope of the giant branch S and with the horizontal branch morphological type D is investigated. Integrated colour indices of clusters with a blue horizontal branch show no correlation with either D or S. The increase of colour indices of the clusters of types D >= 4 correlates with the distribution of stars along the horizontal branch. Integrated photometry of globular star clusters in the Vilnius multicoloured photometric system permits to determine their colour excesses from some Q diagrams and normal colour index. Integral normal colour indexes and Q parameters for I globular star clusters of the Mironov group display small changes as compared to clusters of group 2. Colour indexes among star clusters having only red horizontal branches (D=7) change most considerably
9. The Split Red Clump of the Galactic Bulge from OGLE-III
Science.gov (United States)
Nataf, D. M.; Udalski, A.; Gould, A.; Fouqué, P.; Stanek, K. Z.
2010-09-01
The red clump (RC) is found to be split into two components along several sightlines toward the Galactic bulge. This split is detected with high significance toward the areas (-3.5 plane. The fainter (hereafter "main") component is the one that more closely follows the distance-longitude relation of the bulge RC. The main component is ~0.5 mag fainter than the secondary component and with an overall approximately equal population. For sightlines further from the plane, the difference in brightness increases, and more stars are found in the secondary component than in the main component. The two components have very nearly equal (V - I) color.
10. X-ray spectral models of Galactic bulge sources - the emission-line factor
International Nuclear Information System (INIS)
Vrtilek, S.D.; Swank, J.H.; Kallman, T.R.
1988-01-01
Current difficulties in finding unique and physically meaningful models for the X-ray spectra of Galactic bulge sources are exacerbated by the presence of strong, variable emission and absorption features that are not resolved by the instruments observing them. Nine Einstein solid state spectrometer (SSS) observations of five Galactic bulge sources are presented for which relatively high resolution objective grating spectrometer (OGS) data have been published. It is found that in every case the goodness of fit of simple models to SSS data is greatly improved by adding line features identified in the OGS that cannot be resolved by the SSS but nevertheless strongly influence the spectra observed by SSS. 32 references
11. Strömgren uvby photometry of the peculiar globular cluster NGC 2419
Science.gov (United States)
Frank, Matthias J.; Koch, Andreas; Feltzing, Sofia; Kacharov, Nikolay; Wilkinson, Mark I.; Irwin, Mike
2015-09-01
-enhanced second generation accounting for 53 ± 5 per cent of stars. Despite its known peculiarities, NGC 2419 appears to be very similar to other metal-poor Galactic globular clusters with a similarly nitrogen-enhanced second generation and little or no variation in [Fe/H], which sets it apart from other suspected accreted nuclei such as ωCen. The photometric catalogue is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/581/A72Based on observations made with the Isaac Newton Telescope operated on the island of La Palma by the Isaac Newton Group in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias.
12. DARK MATTER HALOS IN GALAXIES AND GLOBULAR CLUSTER POPULATIONS
Energy Technology Data Exchange (ETDEWEB)
Hudson, Michael J.; Harris, Gretchen L. [Department of Physics and Astronomy, University of Waterloo, Waterloo, ON N2L 3G1 (Canada); Harris, William E., E-mail: [email protected] [Department of Physics and Astronomy, McMaster University, Hamilton, ON L8S 4M1 (Canada)
2014-05-20
We combine a new, comprehensive database for globular cluster populations in all types of galaxies with a new calibration of galaxy halo masses based entirely on weak lensing. Correlating these two sets of data, we find that the mass ratio η ≡ M {sub GCS}/M {sub h} (total mass in globular clusters, divided by halo mass) is essentially constant at (η) ∼ 4 × 10{sup –5}, strongly confirming earlier suggestions in the literature. Globular clusters are the only known stellar population that formed in essentially direct proportion to host galaxy halo mass. The intrinsic scatter in η appears to be at most 0.2 dex; we argue that some of this scatter is due to differing degrees of tidal stripping of the globular cluster systems between central and satellite galaxies. We suggest that this correlation can be understood if most globular clusters form at very early stages in galaxy evolution, largely avoiding the feedback processes that inhibited the bulk of field-star formation in their host galaxies. The actual mean value of η also suggests that about one-fourth of the initial gas mass present in protogalaxies collected into giant molecular clouds large enough to form massive, dense star clusters. Finally, our calibration of (η) indicates that the halo masses of the Milky Way and M31 are (1.2 ± 0.5) × 10{sup 12} M {sub ☉} and (3.9 ± 1.8) × 10{sup 12} M {sub ☉}, respectively.
13. RETENTION OF STELLAR-MASS BLACK HOLES IN GLOBULAR CLUSTERS
International Nuclear Information System (INIS)
Morscher, Meagan; Umbreit, Stefan; Farr, Will M.; Rasio, Frederic A.
2013-01-01
Globular clusters should be born with significant numbers of stellar-mass black holes (BHs). It has been thought for two decades that very few of these BHs could be retained through the cluster lifetime. With masses ∼10 M ☉ , BHs are ∼20 times more massive than an average cluster star. They segregate into the cluster core, where they may eventually decouple from the remainder of the cluster. The small-N core then evaporates on a short timescale. This is the so-called Spitzer instability. Here we present the results of a full dynamical simulation of a globular cluster containing many stellar-mass BHs with a realistic mass spectrum. Our Monte Carlo simulation code includes detailed treatments of all relevant stellar evolution and dynamical processes. Our main finding is that old globular clusters could still contain many BHs at present. In our simulation, we find no evidence for the Spitzer instability. Instead, most of the BHs remain well mixed with the rest of the cluster, with only the innermost few tens of BHs segregating significantly. Over the 12 Gyr evolution, fewer than half of the BHs are dynamically ejected through strong binary interactions in the cluster core. The presence of BHs leads to long-term heating of the cluster, ultimately producing a core radius on the high end of the distribution for Milky Way globular clusters (and those of other galaxies). A crude extrapolation from our model suggests that the BH-BH merger rate from globular clusters could be comparable to the rate in the field.
14. The zCOSMOS redshift survey : evolution of the light in bulges and discs since z ~ 0.8
NARCIS (Netherlands)
Tasca, L. A. M.; Tresse, L.; Le Fevre, O.; Ilbert, O.; Lilly, S. J.; Zamorani, G.; Lopez-Sanjuan, C.; Ho, L. C.; Bardelli, S.; Cattaneo, A.; Cucciati, O.; Farrah, D.; Iovino, A.; Koekemoer, A. M.; Liu, C. T.; Massey, R.; Renzini, A.; Taniguchi, Y.; Welikala, N.; Zucca, E.; Carollo, C. M.; Contini, T.; Kneib, J. -P.; Mainieri, V.; Scodeggio, M.; Bolzonella, M.; Bongiorno, A.; Caputi, K.; de la Torre, S.; Franzetti, P.; Garilli, B.; Guzzo, L.; Kampczyk, P.; Knobel, C.; Kovac, K.; Lamareille, F.; Le Borgne, J. -F.; Le Brun, V.; Maier, C.; Mignoli, M.; Pello, R.; Peng, Y.; Perez Montero, E.; Rich, R. M.; Tanaka, M.; Vergani, D.; Bordoloi, R.; Cappi, A.; Cimatti, A.; Coppa, G.; McCracken, H. J.; Moresco, M.; Pozzetti, L.; Sanders, D.; Sheth, K.
We studied the chronology of galactic bulge and disc formation by analysing the relative contributions of these components to the B-band rest-frame luminosity density at different epochs. We present the first estimate of the evolution of the fraction of rest-frame B-band light in galactic bulges and
15. The zCOSMOS redshift survey: evolution of the light in bulges and discs since z ~ 0.8
NARCIS (Netherlands)
Tasca, L. A. M.; Tresse, L.; Le Fèvre, O.; Ilbert, O.; Lilly, S. J.; Zamorani, G.; López-Sanjuan, C.; Ho, L. C.; Bardelli, S.; Cattaneo, A.; Cucciati, O.; Farrah, D.; Iovino, A.; Koekemoer, A. M.; Liu, C. T.; Massey, R.; Renzini, A.; Taniguchi, Y.; Welikala, N.; Zucca, E.; Carollo, C. M.; Contini, T.; Kneib, J.-P.; Mainieri, V.; Scodeggio, M.; Bolzonella, M.; Bongiorno, A.; Caputi, K.; de la Torre, S.; Franzetti, P.; Garilli, B.; Guzzo, L.; Kampczyk, P.; Knobel, C.; Kovač, K.; Lamareille, F.; Le Borgne, J.-F.; Le Brun, V.; Maier, C.; Mignoli, M.; Pello, R.; Peng, Y.; Perez Montero, E.; Rich, R. M.; Tanaka, M.; Vergani, D.; Bordoloi, R.; Cappi, A.; Cimatti, A.; Coppa, G.; McCracken, H. J.; Moresco, M.; Pozzetti, L.; Sanders, D.; Sheth, K.
We studied the chronology of galactic bulge and disc formation by analysing the relative contributions of these components to the B-band rest-frame luminosity density at different epochs. We present the first estimate of the evolution of the fraction of rest-frame B-band light in galactic bulges and
16. Ages of galaxy bulges and disks from optical and near-infrared colours
NARCIS (Netherlands)
Peletier, RF; Balcells, M; Bender, R; Davies, RL
1996-01-01
For a sample of bright nearby early-type galaxies we have obtained surface photometry in bands ranging from U to K. Since the galaxies have inclinations larger than 50 degrees it is easy to separate bulges and disks. By measuring the colours in special regions, we minimize the effects of extinction,
17. Photoionization modelling of planetary nebulae - II. Galactic bulge nebulae, a comparison with literature results
NARCIS (Netherlands)
van Hoof, PAM; Van de Steene, GC
1999-01-01
We have constructed photoionization models of five galactic bulge planetary nebulae using our automatic method, which enables a fully self-consistent determination of the physical parameters of a planetary nebula. The models are constrained using the spectrum, the IRAS and radio fluxes and the
18. Planetary nebula velocities in the disc and bulge of M31
NARCIS (Netherlands)
Halliday, C.; Carter, D.; Bridges, T. J.; Jackson, Z. C.; Wilkinson, M. I.; Quinn, D. P.; Evans, N. W.; Douglas, N. G.; Merrett, H. R.; Merrifield, M. R.; Romanowsky, A. J.; Kuijken, K.; Irwin, M. J.
2006-01-01
We present radial velocities for a sample of 723 planetary nebulae in the disc and bulge of M31, measured using the WYFFOS fibre spectrograph on the William Herschel Telescope. Velocities are determined using the [OIII] lambda 5007 emission line. Rotation and velocity dispersion are measured to a
19. Finite Element Analysis of Bulge Forming of Laser Welding Dimple Jacket
Directory of Open Access Journals (Sweden)
Peisi ZHONG
2015-11-01
Full Text Available The stress-strain states of the model of laser welded dimple jacket is analyzed using ANSYS/LS-DYNA in order to determine the relation between bulging height and pressure and to achieve the controllability of pressure distension of the jacket. It is shown that in the same conditions, the bulging height increases with the increasing of the bulging pressure and the space of honeycomb. And it will decrease when the thickness of jacket plate changing larger. A table showing the relation between bulging height and pressure is obtained. An experiment using a test panel is conducted to certify the reliability of finite element analysis. It turns out that the data of finite element analysis is coincident with experimental data, which support finite element method based ANSYS/LS-DYNA can be an efficient way to research the laser welded dimple jacket. The relation table is useful as guidance for the fabrication process.DOI: http://dx.doi.org/10.5755/j01.ms.21.4.9704
20. Stellar Sources in the ISOGAL Inner Galactic Bulge Field D. Κ. Ojha1 ...
tribpo
to study the stellar populations and the structure of the bulge. Multicolor mid infrared data ... Section 3 describes the cross identification of ISOGAL and ... observations for this field with a gap of 2 years (Table 1), which were used to check the.
1. SDSS-IV MaNGA: bulge-disc decomposition of IFU data cubes (BUDDI)
Science.gov (United States)
Johnston, Evelyn J.; Häußler, Boris; Aragón-Salamanca, Alfonso; Merrifield, Michael R.; Bamford, Steven; Bershady, Matthew A.; Bundy, Kevin; Drory, Niv; Fu, Hai; Law, David; Nitschelm, Christian; Thomas, Daniel; Roman Lopes, Alexandre; Wake, David; Yan, Renbin
2017-02-01
With the availability of large integral field unit (IFU) spectral surveys of nearby galaxies, there is now the potential to extract spectral information from across the bulges and discs of galaxies in a systematic way. This information can address questions such as how these components built up with time, how galaxies evolve and whether their evolution depends on other properties of the galaxy such as its mass or environment. We present bulge-disc decomposition of IFU data cubes (BUDDI), a new approach to fit the two-dimensional light profiles of galaxies as a function of wavelength to extract the spectral properties of these galaxies' discs and bulges. The fitting is carried out using GALFITM, a modified form of GALFIT which can fit multiwaveband images simultaneously. The benefit of this technique over traditional multiwaveband fits is that the stellar populations of each component can be constrained using knowledge over the whole image and spectrum available. The decomposition has been developed using commissioning data from the Sloan Digital Sky Survey-IV Mapping Nearby Galaxies at APO (MaNGA) survey with redshifts z 22 arcsec, but can be applied to any IFU data of a nearby galaxy with similar or better spatial resolution and coverage. We present an overview of the fitting process, the results from our tests, and we finish with example stellar population analyses of early-type galaxies from the MaNGA survey to give an indication of the scientific potential of applying bulge-disc decomposition to IFU data.
2. Ages of galaxy bulges and disks from optical and near-infrared colors
NARCIS (Netherlands)
Peletier, RF; Balcells, M
We compare optical and near-infrared colors of disks and bulges in a diameter-limited sample of inclined, bright, nearby, early-type spirals. Color profiles along wedge apertures at 15 degrees from the major axis and on the minor axis on the side of the galaxy opposite to the dust lane are used to
3. Black Holes and Galactic Density Cusps I Radial Orbit Cusps and Bulges
CERN Document Server
Henriksen, Richard N; Macmillan, Joseph D
2011-01-01
Aims. In this paper we study density cusps made from radial orbits that may contain central black holes. The actual co-eval self-similar growth would not distinguish between the central object and the surroundings. Methods. To study the environment of an existing black hole we seek distribution functions that may contain a black hole and that retain at least a memory of self-similarity. We refer to the environment in brief as the 'bulge' or sometimes the 'halo'. This depends on whether the black hole is a true singularity dominating its halo or rather a core mass concentration that dominates a larger bulge. The hierarchy might extend to include galactic bulge and halo. Results.We find simple descriptions of simulated collisionless matter in the process of examining the presence of central masses. The Fridmann & Polyachenko distribution function describes co-eval growth of a bulge and black hole that might explain the observed mass correlation. Conclusions. We derive our results from first principles assum...
4. Genetically induced cell death in bulge stem cells reveals their redundancy for hair and epidermal regeneration.
Science.gov (United States)
Driskell, Iwona; Oeztuerk-Winder, Feride; Humphreys, Peter; Frye, Michaela
2015-03-01
Adult mammalian epidermis contains multiple stem cell populations in which quiescent and more proliferative stem and progenitor populations coexist. However, the precise interrelation of these populations in homeostasis remains unclear. Here, we blocked the contribution of quiescent keratin 19 (K19)-expressing bulge stem cells to hair follicle formation through genetic ablation of the essential histone methyltransferase Setd8 that is required for the maintenance of adult skin. Deletion of Setd8 eliminated the contribution of bulge cells to hair follicle regeneration through inhibition of cell division and induction of cell death, but the growth and morphology of hair follicles were unaffected. Furthermore, ablation of Setd8 in the hair follicle bulge blocked the contribution of K19-postive stem cells to wounded epidermis, but the wound healing process was unaltered. Our data indicate that quiescent bulge stem cells are dispensable for hair follicle regeneration and epidermal injury in the short term and support the hypothesis that quiescent and cycling stem cell populations are equipotent. © 2014 AlphaMed Press.
5. An application of the tensor virial theorem to hole + vortex + bulge systems
Science.gov (United States)
Caimmi, R.
2009-04-01
The tensor virial theorem for subsystems is formulated for three-component systems and further effort is devoted to a special case where the inner subsystems and the central region of the outer one are homogeneous, the last surrounded by an isothermal homeoid. The virial equations are explicitly written under the additional restrictions: (i) similar and similarly placed inner subsystems, and (ii) spherical outer subsystem. An application is made to hole + vortex + bulge systems, in the limit of flattened inner subsystems, which implies three virial equations in three unknowns. Using the Faber-Jackson relation, R∝σ02, the standard M- σ0 form (M∝σ04) is deduced from qualitative considerations. The projected bulge velocity dispersion to projected vortex velocity ratio, η=(σ)33/{[(v)qq]2+[(σ)qq]2}, as a function of the fractional radius, y=R/R, and the fractional masses, m=M/M and m=M/M, is studied in the range of interest, 0⩽m=M/M⩽5 [Escala, A., 2006. ApJ, 648, L13] and 229⩽m⩽795 [Marconi, A., Hunt, L.H., 2003. ApJ 589, L21], consistent with observations. The related curves appear to be similar to Maxwell velocity distributions, which implies a fixed value of η below the maximum corresponds to two different configurations: a compact bulge on the left of the maximum, and an extended bulge on the right. All curves lie very close one to the other on the left of the maximum, and parallel one to the other on the right. On the other hand, fixed m or m, and y, are found to imply more massive bulges passing from bottom to top along a vertical line on the (Oyη) plane, and vice versa. The model is applied to NGC 4374 and NGC 4486, taking the fractional mass, m, and the fractional radius, y, as unknowns, and the bulge mass is inferred from the knowledge of the hole mass, and compared with results from different methods. In presence of a massive vortex (m=5), the hole mass has to be reduced by a factor 2-3 with respect to the case of a massless vortex, to get
6. First detection of the white dwarf cooling sequence of the galactic bulge
Energy Technology Data Exchange (ETDEWEB)
Calamida, A.; Sahu, K. C.; Anderson, J.; Casertano, S.; Brown, T.; Sokol, J.; Bond, H. E.; Ferguson, H.; Livio, M.; Valenti, J. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Cassisi, S.; Buonanno, R.; Pietrinferni, A. [Osservatorio Astronomico di Teramo—INAF, Via M. Maggini, I-64100 Teramo (Italy); Salaris, M. [Astrophysics Research Institute, Liverpool John Moores University, 146 Brownlow Hill, Liverpool L3 5RF (United Kingdom); Ferraro, I. [Osservatorio Astronomico di Roma—INAF, Via Frascati 33, I-00040 Monte Porzio Catone (Italy); Clarkson, W., E-mail: [email protected] [University of Michigan-Dearborn, 4901 Evergreen Road, Dearborn, MI 48128 (United States)
2014-08-01
We present Hubble Space Telescope data of the low-reddening Sagittarius window in the Galactic bulge. The Sagittarius Window Eclipsing Extrasolar Planet Search field (∼3'× 3'), together with three more Advanced Camera for Surveys and eight Wide-Field Camera 3 fields, were observed in the F606W and F814W filters, approximately every two weeks for 2 yr, with the principal aim of detecting a hidden population of isolated black holes and neutron stars through astrometric microlensing. Proper motions were measured with an accuracy of ≈0.1 mas yr{sup –1} (≈4 km s{sup –1}) at F606W ≈ 25.5 mag, and better than ≈0.5 mas yr{sup –1} (≈20 km s{sup –1}) at F606W ≈ 28 mag, in both axes. Proper-motion measurements allowed us to separate disk and bulge stars and obtain a clean bulge color-magnitude diagram. We then identified for the first time a white dwarf (WD) cooling sequence in the Galactic bulge, together with a dozen candidate extreme horizontal branch stars. The comparison between theory and observations shows that a substantial fraction of the WDs (≈30%) are systematically redder than the cooling tracks for CO-core H-rich and He-rich envelope WDs. This evidence would suggest the presence of a significant number of low-mass WDs and WD-main-sequence binaries in the bulge. This hypothesis is further supported by the finding of two dwarf novae in outburst, two short-period (P ≲ 1 day) ellipsoidal variables, and a few candidate cataclysmic variables in the same field.
7. Construction and functional characterization of double and triple mutants of parallel beta-bulge of ubiquitin.
Science.gov (United States)
Sharma, Mrinal; Prabha, C Ratna
2011-12-01
Ubiquitin, a small eukaryotic protein serving as a post-translational modification on many important proteins, plays central role in cellular homeostasis and cell cycle regulation. Ubiquitin features two beta-bulges, the second beta-bulge, located at the C-terminal region of the protein along with type II turn, holds 3 residues Glu64(1), Ser65(2) and Gln2(X). Percent frequency of occurrence of such a sequence in parallel beta-bulge is very low. However, the sequence and structure have been conserved in ubiquitin through out the evolution. Present study involves replacement of residues in unusual beta-bulge of ubiquitin by introducing mutations in combination through site directed mutagenesis, generating double and triple mutants and their functional characterization. Mutant ubiquitins cloned in yeast expression vector YEp96 tested for growth profile, viability assay and heat stress complementation study have revealed significant decrease in growth rate, loss of viability and non-complementation of heat sensitive phenotype with UbE64G-S65D and UbQ2N-E64G-S65D mutations. However, UbQ2N-S65D did not show any negative effects in the above assays. Present results show that, replacement of residues in beta-bulge of ubiquitin exerts severe effects on growth and viability in Saccharomyces cerevisiae due to functional failure of the mutant ubiquitins UbE64G-S65D and UbQ2N-E64G-S65D.
8. First detection of the white dwarf cooling sequence of the galactic bulge
International Nuclear Information System (INIS)
Calamida, A.; Sahu, K. C.; Anderson, J.; Casertano, S.; Brown, T.; Sokol, J.; Bond, H. E.; Ferguson, H.; Livio, M.; Valenti, J.; Cassisi, S.; Buonanno, R.; Pietrinferni, A.; Salaris, M.; Ferraro, I.; Clarkson, W.
2014-01-01
We present Hubble Space Telescope data of the low-reddening Sagittarius window in the Galactic bulge. The Sagittarius Window Eclipsing Extrasolar Planet Search field (∼3'× 3'), together with three more Advanced Camera for Surveys and eight Wide-Field Camera 3 fields, were observed in the F606W and F814W filters, approximately every two weeks for 2 yr, with the principal aim of detecting a hidden population of isolated black holes and neutron stars through astrometric microlensing. Proper motions were measured with an accuracy of ≈0.1 mas yr –1 (≈4 km s –1 ) at F606W ≈ 25.5 mag, and better than ≈0.5 mas yr –1 (≈20 km s –1 ) at F606W ≈ 28 mag, in both axes. Proper-motion measurements allowed us to separate disk and bulge stars and obtain a clean bulge color-magnitude diagram. We then identified for the first time a white dwarf (WD) cooling sequence in the Galactic bulge, together with a dozen candidate extreme horizontal branch stars. The comparison between theory and observations shows that a substantial fraction of the WDs (≈30%) are systematically redder than the cooling tracks for CO-core H-rich and He-rich envelope WDs. This evidence would suggest the presence of a significant number of low-mass WDs and WD-main-sequence binaries in the bulge. This hypothesis is further supported by the finding of two dwarf novae in outburst, two short-period (P ≲ 1 day) ellipsoidal variables, and a few candidate cataclysmic variables in the same field.
9. THE EFFECT OF SECOND-GENERATION POPULATIONS ON THE INTEGRATED COLORS OF METAL-RICH GLOBULAR CLUSTERS IN EARLY-TYPE GALAXIES
International Nuclear Information System (INIS)
Chung, Chul; Lee, Sang-Yoon; Yoon, Suk-Jin; Lee, Young-Wook
2013-01-01
The mean color of globular clusters (GCs) in early-type galaxies is in general bluer than the integrated color of halo field stars in host galaxies. Metal-rich GCs often appear more associated with field stars than metal-poor GCs, yet show bluer colors than their host galaxy light. Motivated by the discovery of multiple stellar populations in Milky Way GCs, we present a new scenario in which the presence of second-generation (SG) stars in GCs is responsible for the color discrepancy between metal-rich GCs and field stars. The model assumes that the SG populations have an enhanced helium abundance as evidenced by observations, and it gives a good explanation of the bluer optical colors of metal-rich GCs than field stars as well as strong Balmer lines and blue UV colors of metal-rich GCs. Ours may be complementary to the recent scenario suggesting the difference in stellar mass functions (MFs) as an origin for the GC-to-star color offset. A quantitative comparison is given between the SG and MF models.
10. Is age really the second parameter in globular clusters?
International Nuclear Information System (INIS)
Vandenberg, D.A.; Durrell, P.R.
1990-01-01
From the close similarity of the magnitude difference between the tip of the red giant branch and the turnoff in the Fe/H = about -1.3 globular cluster NGC 288, NGC 362, and M5, it is inferred that the ages of these three systems (and Palomar 5, whose horizonal branch is used to define its distance relative to the others) are not detectably different. An identical conclusion, by similar means, is reached for the Fe/H = about -2.1 globular clusters M15, M30, M68, and M92. Several recent claims that age is responsible for the wide variation in horizontal-branch morphology among clusters of the same metal abundance are not supported. 73 refs
11. Globular cluster seeding by primordial black hole population
Energy Technology Data Exchange (ETDEWEB)
Dolgov, A. [ITEP, Bol. Cheremushkinsaya ul., 25, 117218 Moscow (Russian Federation); Postnov, K., E-mail: [email protected], E-mail: [email protected] [Sternberg Astronomical Institute, Moscow M.V. Lomonosov State University, Universitetskij pr., 13, Moscow 119234 (Russian Federation)
2017-04-01
Primordial black holes (PBHs) that form in the early Universe in the modified Affleck-Dine (AD) mechanism of baryogenesis should have intrinsic log-normal mass distribution of PBHs. We show that the parameters of this distribution adjusted to provide the required spatial density of massive seeds (≥ 10{sup 4} M {sub ⊙}) for early galaxy formation and not violating the dark matter density constraints, predict the existence of the population of intermediate-mass PBHs with a number density of 0∼ 100 Mpc{sup −3}. We argue that the population of intermediate-mass AD PBHs can also seed the formation of globular clusters in galaxies. In this scenario, each globular cluster should host an intermediate-mass black hole with a mass of a few thousand solar masses, and should not obligatorily be immersed in a massive dark matter halo.
12. On tidal radius determination for a globular cluster
International Nuclear Information System (INIS)
Ninkovic, S.
1985-01-01
A tidal radius determination for a globular cluster based on its density minimum, which is caused by the galactic tidal forces and derivable from a model of the Galaxy, is proposed. Results obtained on the basis of the Schmidt model for two clusters are in a satisfactory agreement with those obtained earlier by means of other methods. A mass determination for the clusters through the tidal radius, when the latter one is identified with the cluster perigalactic distance, yields unusually large mass values. Probably, the tidal radius should be identified with the instantaneous galactocentric distance. Use of models more recent than the Schmidt one indicates that a globular cluster may contain a significant portion of an invisible interstellar matter. (author)
13. Color maps of X-ray globular clusters
International Nuclear Information System (INIS)
Bailyn, C.D.; Grindlay, J.E.; Cohn, H.; Lugger, P.M.
1988-01-01
The results of a search for optical counterparts to X-ray sources in six globular clusters, 47 Tuc, NGC 1851, NGC 6441, NGC 6624, NGC 6712, and M15, are reported. Maps of the U-B color of the central regions of the clusters were prepared. A candidate for the optical counterpart of the source in NGC 6712 was found, along with a blue region near the X-ray source in 47 Tuc. Upper limits on the colors and magnitudes of possible optical counterparts are reported for the other three clusters. The use of color maps to determine color gradients in globular clusters is explored. It is found that, while such gradients do exist and vary from cluster to cluster, they can be explained by crowding effects. Crude limits are placed on the excess populations of blue objects such as CVs, which have been postulated to be concentrated in the centers of dense clusters. 32 references
14. GEMINI/GeMS Observations Unveil the Structure of the Heavily Obscured Globular Cluster Liller 1.
Science.gov (United States)
Saracino, S.; Dalessandro, E.; Ferraro, F. R.; Lanzoni, B.; Geisler, D.; Mauro, F.; Villanova, S.; Moni Bidin, C.; Miocchi, P.; Massari, D.
2015-06-01
By exploiting the exceptional high-resolution capabilities of the near-IR camera GSAOI combined with the Gemini Multi-Conjugate Adaptive System at the GEMINI South Telescope, we investigated the structural and physical properties of the heavily obscured globular cluster Liller 1 in the Galactic bulge. We have obtained the deepest and most accurate color-magnitude diagram published so far for this cluster, reaching {{K}s}˜ 19 (below the main-sequence turnoff level). We used these data to redetermine the center of gravity of the system, finding that it is located about 2.″2 southeast from the literature value. We also built new star density and surface brightness profiles for the cluster and rederived its main structural and physical parameters (scale radii, concentration parameter, central mass density, total mass). We find that Liller 1 is significantly less concentrated (concentration parameter c=1.74) and less extended (tidal radius {{r}t}=298\\prime\\prime and core radius {{r}c}=5\\buildrel{\\prime\\prime}\\over{.} 39) than previously thought. By using these newly determined structural parameters, we estimated the mass of Liller 1 to be {{M}tot}=2.3+0.3-0.1× {{10}6} {{M}⊙ } ({{M}tot}=1.5+0.2-0.1× {{10}6} {{M}⊙ } for a Kroupa initial mass function), which is comparable to that of the most massive clusters in the Galaxy (ω Centari and Terzan 5). Also, Liller 1 has the second-highest collision rate (after Terzan 5) among all star clusters in the Galaxy, thus confirming that it is an ideal environment for the formation of collisional objects (such as millisecond pulsars). Based on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), the National Research Council (Canada), CONICYT (Chile), the Australian Research Council (Australia), Ministério da
15. Evolution of redback radio pulsars in globular clusters
Science.gov (United States)
Benvenuto, O. G.; De Vito, M. A.; Horvath, J. E.
2017-02-01
Context. We study the evolution of close binary systems composed of a normal, intermediate mass star and a neutron star considering a chemical composition typical of that present in globular clusters (Z = 0.001). Aims: We look for similarities and differences with respect to solar composition donor stars, which we have extensively studied in the past. As a definite example, we perform an application on one of the redbacks located in a globular cluster. Methods: We performed a detailed grid of models in order to find systems that represent the so-called redback binary radio pulsar systems with donor star masses between 0.6 and 2.0 solar masses and orbital periods in the range 0.2-0.9 d. Results: We find that the evolution of these binary systems is rather similar to those corresponding to solar composition objects, allowing us to account for the occurrence of redbacks in globular clusters, as the main physical ingredient is the irradiation feedback. Redback systems are in the quasi-RLOF state, that is, almost filling their corresponding Roche lobe. During the irradiation cycle the system alternates between semi-detached and detached states. While detached the system appears as a binary millisecond pulsar, called a redback. Circumstellar material, as seen in redbacks, is left behind after the previous semi-detached phase. Conclusions: The evolution of binary radio pulsar systems considering irradiation successfully accounts for, and provides a way for, the occurrence of redback pulsars in low-metallicity environments such as globular clusters. This is the case despite possible effects of the low metal content of the donor star that could drive systems away from redback configuration.
16. [Study of beta-turns in globular proteins].
Science.gov (United States)
Amirova, S R; Milchevskiĭ, Iu V; Filatov, I V; Esipova, N G; Tumanian, V G
2005-01-01
The formation of beta-turns in globular proteins has been studied by the method of molecular mechanics. Statistical method of discriminant analysis was applied to calculate energy components and sequences of oligopeptide segments, and after this prediction of I type beta-turns has been drawn. The accuracy of true positive prediction is 65%. Components of conformational energy considerably affecting beta-turn formation were delineated. There are torsional energy, energy of hydrogen bonds, and van der Waals energy.
17. Globular Cluster Candidates for Hosting a Central Black Hole
Science.gov (United States)
Noyola, Eva
2009-07-01
We are continuing our study of the dynamical properties of globular clusters and we propose to obtain surface brightness profiles for high concentration clusters. Our results to date show that the distribution of central surface brightness slopes do not conform to standard models. This has important implications for how they form and evolve, and suggest the possible presence of central intermediate-mass black holes. From our previous archival proposals {AR-9542 and AR-10315}, we find that many high concentration globular clusters do not have flat cores or steep central cusps, instead they show weak cusps. Numerical simulations suggest that clusters with weak cusps may harbor intermediate-mass black holes and we have one confirmation of this connection with omega Centauri. This cluster shows a shallow cusp in its surface brightness profile, while kinematical measurements suggest the presence of a black hole in its center. Our goal is to extend these studies to a sample containing 85% of the Galactic globular clusters with concentrations higher than 1.7 and look for objects departing from isothermal behavior. The ACS globular cluster survey {GO-10775} provides enough objects to have an excellent coverage of a wide range of galactic clusters, but it contains only a couple of the ones with high concentration. The proposed sample consists of clusters whose light profile can only be adequately measured from space-based imaging. This would take us close to completeness for the high concentration cases and therefore provide a more complete list of candidates for containing a central black hole. The dataset will also be combined with our existing kinematic measurements and enhanced with future kinematic studies to perform detailed dynamical modeling.
18. Imprint of galaxy formation and evolution on globular cluster properties
OpenAIRE
Bekki, Kenji
2006-01-01
We discuss the origin of physical properties of globular cluster systems (GCSs) in galaxies in terms of galaxy formation and evolution processes. Based on numerical simulations of dynamical evolution of GCSs in galaxies, we particularly discuss (1) the origin of radial density profiles of GCSs, (2) kinematics of GCSs in elliptical galaxies, (3) transformation from nucleated dwarf galaxies into GCs (e.g., omega Centauri), and (4) the origin of GCSs in the Large Magellanic Cloud (LMC).
19. Theoretical stellar luminosity functions and globular cluster ages and compositions
International Nuclear Information System (INIS)
Ratcliff, S.J.
1985-01-01
The ages and chemical compositions of the stars in globular clusters are of great interest, particularly because age estimates from the well-known exercise of fitting observed color-magnitude diagrams to theoretical predictions tend to yield ages in excess of the Hubble time (an estimate to the age of the Universe) in standard cosmological models, for currently proposed high values of Hubble's constant (VandenBerg 1983). Relatively little use has been made of stellar luminosity functions of the globular clusters, for which reliable observations are now becoming available, to constrain the ages or compositions. The comparison of observed luminosity functions to theoretical ones allows one to take advantage of information not usually used, and has the advantage of being relatively insensitive to our lack of knowledge of the detailed structure of stellar envelopes and atmospheres. A computer program was developed to apply standard stellar evolutionary theory, using the most recently available input physics (opacities, nuclear reaction rates), to the calculation of the evolution of low-mass Population II stars. An algorithm for computing luminosity functions from the evolutionary tracks was applied to sets of tracks covering a broad range of chemical compositions and ages, such as may be expected for globular clusters
20. MOCK OBSERVATIONS OF BLUE STRAGGLERS IN GLOBULAR CLUSTER MODELS
International Nuclear Information System (INIS)
Sills, Alison; Glebbeek, Evert; Chatterjee, Sourav; Rasio, Frederic A.
2013-01-01
We created artificial color-magnitude diagrams of Monte Carlo dynamical models of globular clusters and then used observational methods to determine the number of blue stragglers in those clusters. We compared these blue stragglers to various cluster properties, mimicking work that has been done for blue stragglers in Milky Way globular clusters to determine the dominant formation mechanism(s) of this unusual stellar population. We find that a mass-based prescription for selecting blue stragglers will select approximately twice as many blue stragglers than a selection criterion that was developed for observations of real clusters. However, the two numbers of blue stragglers are well-correlated, so either selection criterion can be used to characterize the blue straggler population of a cluster. We confirm previous results that the simplified prescription for the evolution of a collision or merger product in the BSE code overestimates their lifetimes. We show that our model blue stragglers follow similar trends with cluster properties (core mass, binary fraction, total mass, collision rate) as the true Milky Way blue stragglers as long as we restrict ourselves to model clusters with an initial binary fraction higher than 5%. We also show that, in contrast to earlier work, the number of blue stragglers in the cluster core does have a weak dependence on the collisional parameter Γ in both our models and in Milky Way globular clusters
1. Interdependence of the rad50 hook and globular domain functions.
Science.gov (United States)
Hohl, Marcel; Kochańczyk, Tomasz; Tous, Cristina; Aguilera, Andrés; Krężel, Artur; Petrini, John H J
2015-02-05
2. Milky Way demographics with the VVV survey. I. The 84-million star colour-magnitude diagram of the Galactic bulge
Science.gov (United States)
Saito, R. K.; Minniti, D.; Dias, B.; Hempel, M.; Rejkuba, M.; Alonso-García, J.; Barbuy, B.; Catelan, M.; Emerson, J. P.; Gonzalez, O. A.; Lucas, P. W.; Zoccali, M.
2012-08-01
Context. The Milky Way (MW) bulge is a fundamental Galactic component for understanding the formation and evolution of galaxies, in particular our own. The ESO Public Survey VISTA Variables in the Vía Láctea is a deep near-IR survey mapping the Galactic bulge and southern plane. Particularly for the bulge area, VVV is covering ~315 deg2. Data taken during 2010 and 2011 covered the entire bulge area in the JHKs bands. Aims: We used VVV data for the whole bulge area as a single and homogeneous data set to build for the first time a single colour - magnitude diagram (CMD) for the entire Galactic bulge. Methods: Photometric data in the JHKs bands were combined to produce a single and huge data set containing 173 150 467 sources in the three bands, for the ~315 deg2 covered by VVV in the bulge. Selecting only the data points flagged as stellar, the total number of sources is 84 095 284. Results: We built the largest colour-magnitude diagrams published up to date, containing 173.1+ million sources for all data points, and more than 84.0 million sources accounting for the stellar sources only. The CMD has a complex shape, mostly owing to the complexity of the stellar population and the effects of extinction and reddening towards the Galactic centre. The red clump (RC) giants are seen double in magnitude at b ~ -8° -10°, while in the inner part (b ~ -3°) they appear to be spreading in colour, or even splitting into a secondary peak. Stellar population models show the predominance of main-sequence and giant stars. The analysis of the outermost bulge area reveals a well-defined sequence of late K and M dwarfs, seen at (J - Ks) ~ 0.7-0.9 mag and Ks ≳ 14 mag. Conclusions: The interpretation of the CMD yields important information about the MW bulge, showing the fingerprint of its structure and content. We report a well-defined red dwarf sequence in the outermost bulge, which is important for the planetary transit searches of VVV. The double RC in magnitude seen in the
3. The Blue Hook Populations of Massive Globular Clusters
Science.gov (United States)
Brown, Thomas
2006-07-01
Blue hook stars are a class of hot { 35,000 K} subluminous horizontal branch stars that have been recently discovered using HST ultraviolet images of the globular clusters omega Cen and NGC 2808. These stars occupy a region of the HR diagram that is unexplained by canonical stellar evolution theory. Using new theoretical evolutionary and atmospheric models, we have shown that the blue hook stars are very likely the progeny of stars that undergo extensive internal mixing during a late helium core flash on the white dwarf cooling curve. This "flash mixing" produces an enormous enhancement of the surface helium and carbon abundances, which suppresses the flux in the far ultraviolet. Although flash mixing is more likely to occur in stars that are born with high helium abundances, a high helium abundance, by itself, does not explain the presence of a blue hook population - flash mixing of the envelope is required. We propose ACS ultraviolet {SBC/F150LP and HRC/F250W} observations of the five additional globular clusters for which the presence of blue hook stars is suspected from longer wavelength observations. Like omega Cen and NGC 2808, these five targets are also among the most massive globular clusters, because less massive clusters show no evidence for blue hook stars. Because our targets span 1.5 dex in metallicity, we will be able to test our prediction that flash-mixing should be less drastic in metal-rich blue hook stars. In addition, our observations will test the hypothesis that blue hook stars only form in globular clusters massive enough to retain the helium-enriched ejecta from the first stellar generation. If this hypothesis is correct, then our observations will yield important constraints on the chemical evolution and early formation history in globular clusters, as well as the role of helium self-enrichment in producing blue horizontal branch morphologies and multiple main sequence turnoffs. Finally, our observations will provide new insight into the
4. Interactions between globular proteins and F-actin in isotonic saline solution.
Science.gov (United States)
Lakatos, S; Minton, A P
1991-10-05
Solutions of each of three different globular proteins (cytochrome c, chromophorically labeled serum albumin, and chromophorically labeled aldolase), mixed with another unlabeled globular protein or with fibrous actin, were prepared in pH 8.0 Tris-HCl buffer containing 0.15 M NaCl. Each solution was centrifuged at low speed, at 5 degrees C, until unassociated globular protein in solution achieved sedimentation equilibrium. Individual absorbance gradients of both macrosolutes in the mixtures subsequent to centrifugation were obtained via optical scans of the centrifuge tubes at two wavelengths. The gradients of each macrosolute in mixtures of two globular proteins revealed no association of globular proteins under the conditions of these experiments, but perturbation of the gradients of serum albumin, aldolase, and cytochrome c in the presence of F-actin indicated association of all three globular proteins with F-actin. Perturbation of actin gradients in the presence of serum albumin and aldolase suggested partial depolymerization of the F-actin by the globular protein. Analysis of the data with a simple phenomenological model relating free globular protein, bound globular protein, and total actin concentration provided estimates of the respective equilibrium constants for association of serum albumin and aldolase with F-actin, under the conditions of these experiments, of the order of 0.1 microM-1.
5. X-ray bursters and the X-ray sources of the galactic bulge
Science.gov (United States)
Lewin, W. H. G.; Joss, P. C.
An attempt is made to distill from observational and theoretical information on the galactic bulge X-ray sources in general, and on the X-ray burst sources in particular, those aspects which seem to have the greatest relevance to the understanding of these sources. Galactic bulge sources appear to be collapsed objects of roughly solar mass, in most cases neutron stars, which are accreting matter from low-mass stellar companions. Type I bursts seem to result from thermonuclear flashes in the surface layers of some of these neutron stars, while the type II bursts from the Rapid Burster are almost certainly due to an instability in the accretion flow onto a neutron star. It is concluded that the studies cited offer a new and powerful observational handle on the fundamental properties of neutron stars and of the interacting binary systems in which they are often contained.
6. The BRAVE Program. I. Improved Bulge Stellar Velocity Dispersion Estimates for a Sample of Active Galaxies
Energy Technology Data Exchange (ETDEWEB)
Batiste, Merida; Bentz, Misty C.; Manne-Nicholas, Emily R. [Department of Physics and Astronomy, Georgia State University, 25 Park Place, Atlanta, GA 30303 (United States); Onken, Christopher A. [Research School of Astronomy and Astrophysics, The Australian National University, Canberra, ACT 2611 (Australia); Bershady, Matthew A., E-mail: [email protected] [Department of Astronomy, University of Wisconsin, 475 N. Charter Street, Madison, WI 53706 (United States)
2017-02-01
We present new bulge stellar velocity dispersion measurements for 10 active galaxies with secure M {sub BH} determinations from reverberation mapping. These new velocity dispersion measurements are based on spatially resolved kinematics from integral-field (IFU) spectroscopy. In all but one case, the field of view of the IFU extends beyond the effective radius of the galaxy, and in the case of Mrk 79 it extends to almost one half the effective radius. This combination of spatial resolution and field of view allows for secure determinations of stellar velocity dispersion within the effective radius for all 10 target galaxies. Spatially resolved maps of the first ( V ) and second ( σ {sub ⋆}) moments of the line of sight velocity distribution indicate the presence of kinematic substructure in most cases. In future projects we plan to explore methods of correcting for the effects of kinematic substructure in the derived bulge stellar velocity dispersion measurements.
7. The SWELLS survey - VI. Hierarchical inference of the initial mass functions of bulges and discs
DEFF Research Database (Denmark)
Brewer, Brendon J.; Marshal, Philip J.; Auger, Matthew W.
2014-01-01
) and stellar masses (constrained by optical and near-infrared colours in the context of a stellar population synthesis model, up to an IMF normalization parameter). Using minimal assumptions apart from the physical constraint that the total stellar mass m* within any aperture must be less than the total mass...... mtot with in the aperture, we find that the bulges of the galaxies cannot have IMFs heavier (i.e. implying high mass per unit luminosity) than Salpeter, while the disc IMFs are not well constrained by this data set.We also discuss the necessity for hierarchical modelling when combining incomplete...... information about multiple astronomical objects. This modelling approach allows us to place upper limits on the size of any departures from universality. More data, including spatially resolved kinematics (as in Paper V) and stellar population diagnostics over a range of bulge and disc masses, are needed...
8. Forming limit diagram of aluminum AA6063 tubes at high temperatures by bulge tests
International Nuclear Information System (INIS)
Hashemi, Seyed Jalal; Naeini, Hassan Moslemi; Liaghat, Gholamhossein; Tafti, Rooholla Azizi; Rahmani, Farzad
2014-01-01
A free bulge test and ductile fracture criteria were used to obtain the forming limit diagrams (FLD) of aluminum alloy AA6063 tubes at high temperatures. Ductile fracture criteria were calibrated using the results of uniaxial tension tests at various elevated temperatures and different strain rates through adjusting the Zener-Holloman parameter. High temperature free bulge test of tubes was simulated in finite element software Abaqus, and tube bursting was predicted using ductile fracture criteria under different loading paths. FLDs which were obtained from finite element simulation were compared to experimental results to select the most accurate criterion for prediction of forming limit diagram. According to the results, all studied ductile fracture criteria predict similarly when forming condition is close to the uniaxial tension, while Ayada criterion predicts the FLD at 473 K and 573 K very well.
9. Forming limit diagram of aluminum AA6063 tubes at high temperatures by bulge tests
Energy Technology Data Exchange (ETDEWEB)
Hashemi, Seyed Jalal; Naeini, Hassan Moslemi; Liaghat, Gholamhossein [Tarbiat Modares University, Tehran (Iran, Islamic Republic of); Tafti, Rooholla Azizi [Yazd University, Yazd (Iran, Islamic Republic of); Rahmani, Farzad [Kar Higher Education Institute, Qazvin (Iran, Islamic Republic of)
2014-11-15
A free bulge test and ductile fracture criteria were used to obtain the forming limit diagrams (FLD) of aluminum alloy AA6063 tubes at high temperatures. Ductile fracture criteria were calibrated using the results of uniaxial tension tests at various elevated temperatures and different strain rates through adjusting the Zener-Holloman parameter. High temperature free bulge test of tubes was simulated in finite element software Abaqus, and tube bursting was predicted using ductile fracture criteria under different loading paths. FLDs which were obtained from finite element simulation were compared to experimental results to select the most accurate criterion for prediction of forming limit diagram. According to the results, all studied ductile fracture criteria predict similarly when forming condition is close to the uniaxial tension, while Ayada criterion predicts the FLD at 473 K and 573 K very well.
10. Chemical evolution of the Galactic bulge as traced by microlensed dwarf and subgiant stars
OpenAIRE
Bensby, T.; Johnson, J. A.; Cohen, J.; Feltzing, S.; Udalski, A.; Gould, A.; Huang, W.; Thompson, I.; Simmerer, J.; Adén, D.
2009-01-01
Aims. Our aims are twofold. First we aim to evaluate the robustness and accuracy of stellar parameters and detailed elemental abundances that can be derived from high-resolution spectroscopic observations of microlensed dwarf and subgiant stars. We then aim to use microlensed dwarf and subgiant stars to investigate the abundance structure and chemical evolution of the Milky Way Bulge. Contrary to the cool giant stars, with their extremely crowded spectra, the dwarf stars are hotter, their spe...
11. An Optimal Strategy for Accurate Bulge-to-disk Decomposition of Disk Galaxies
Energy Technology Data Exchange (ETDEWEB)
Gao Hua [Department of Astronomy, School of Physics, Peking University, Beijing 100871 (China); Ho, Luis C. [Kavli Institute for Astronomy and Astrophysics, Peking University, Beijing 100871 (China)
2017-08-20
The development of two-dimensional (2D) bulge-to-disk decomposition techniques has shown their advantages over traditional one-dimensional (1D) techniques, especially for galaxies with non-axisymmetric features. However, the full potential of 2D techniques has yet to be fully exploited. Secondary morphological features in nearby disk galaxies, such as bars, lenses, rings, disk breaks, and spiral arms, are seldom accounted for in 2D image decompositions, even though some image-fitting codes, such as GALFIT, are capable of handling them. We present detailed, 2D multi-model and multi-component decomposition of high-quality R -band images of a representative sample of nearby disk galaxies selected from the Carnegie-Irvine Galaxy Survey, using the latest version of GALFIT. The sample consists of five barred and five unbarred galaxies, spanning Hubble types from S0 to Sc. Traditional 1D decomposition is also presented for comparison. In detailed case studies of the 10 galaxies, we successfully model the secondary morphological features. Through a comparison of best-fit parameters obtained from different input surface brightness models, we identify morphological features that significantly impact bulge measurements. We show that nuclear and inner lenses/rings and disk breaks must be properly taken into account to obtain accurate bulge parameters, whereas outer lenses/rings and spiral arms have a negligible effect. We provide an optimal strategy to measure bulge parameters of typical disk galaxies, as well as prescriptions to estimate realistic uncertainties of them, which will benefit subsequent decomposition of a larger galaxy sample.
12. An Optimal Strategy for Accurate Bulge-to-disk Decomposition of Disk Galaxies
Science.gov (United States)
Gao, Hua; Ho, Luis C.
2017-08-01
The development of two-dimensional (2D) bulge-to-disk decomposition techniques has shown their advantages over traditional one-dimensional (1D) techniques, especially for galaxies with non-axisymmetric features. However, the full potential of 2D techniques has yet to be fully exploited. Secondary morphological features in nearby disk galaxies, such as bars, lenses, rings, disk breaks, and spiral arms, are seldom accounted for in 2D image decompositions, even though some image-fitting codes, such as GALFIT, are capable of handling them. We present detailed, 2D multi-model and multi-component decomposition of high-quality R-band images of a representative sample of nearby disk galaxies selected from the Carnegie-Irvine Galaxy Survey, using the latest version of GALFIT. The sample consists of five barred and five unbarred galaxies, spanning Hubble types from S0 to Sc. Traditional 1D decomposition is also presented for comparison. In detailed case studies of the 10 galaxies, we successfully model the secondary morphological features. Through a comparison of best-fit parameters obtained from different input surface brightness models, we identify morphological features that significantly impact bulge measurements. We show that nuclear and inner lenses/rings and disk breaks must be properly taken into account to obtain accurate bulge parameters, whereas outer lenses/rings and spiral arms have a negligible effect. We provide an optimal strategy to measure bulge parameters of typical disk galaxies, as well as prescriptions to estimate realistic uncertainties of them, which will benefit subsequent decomposition of a larger galaxy sample.
13. An Optimal Strategy for Accurate Bulge-to-disk Decomposition of Disk Galaxies
International Nuclear Information System (INIS)
Gao Hua; Ho, Luis C.
2017-01-01
The development of two-dimensional (2D) bulge-to-disk decomposition techniques has shown their advantages over traditional one-dimensional (1D) techniques, especially for galaxies with non-axisymmetric features. However, the full potential of 2D techniques has yet to be fully exploited. Secondary morphological features in nearby disk galaxies, such as bars, lenses, rings, disk breaks, and spiral arms, are seldom accounted for in 2D image decompositions, even though some image-fitting codes, such as GALFIT, are capable of handling them. We present detailed, 2D multi-model and multi-component decomposition of high-quality R -band images of a representative sample of nearby disk galaxies selected from the Carnegie-Irvine Galaxy Survey, using the latest version of GALFIT. The sample consists of five barred and five unbarred galaxies, spanning Hubble types from S0 to Sc. Traditional 1D decomposition is also presented for comparison. In detailed case studies of the 10 galaxies, we successfully model the secondary morphological features. Through a comparison of best-fit parameters obtained from different input surface brightness models, we identify morphological features that significantly impact bulge measurements. We show that nuclear and inner lenses/rings and disk breaks must be properly taken into account to obtain accurate bulge parameters, whereas outer lenses/rings and spiral arms have a negligible effect. We provide an optimal strategy to measure bulge parameters of typical disk galaxies, as well as prescriptions to estimate realistic uncertainties of them, which will benefit subsequent decomposition of a larger galaxy sample.
14. TWO RED CLUMPS AND THE X-SHAPED MILKY WAY BULGE
International Nuclear Information System (INIS)
McWilliam, Andrew; Zoccali, Manuela
2010-01-01
From Two Micron All Sky Survey infrared photometry, we find two red clump (RC) populations coexisting in fields toward the Galactic bulge at latitudes |b|>5. 0 5, ranging over ∼13 0 in longitude and 20 0 in latitude. These RC peaks indicate two stellar populations separated by ∼2.3 kpc; at (l, b) = (+1, - 8) the two RCs are located at 6.5 and 8.8 ± 0.2 kpc. The double-peaked RC is inconsistent with a tilted bar morphology. Most of our fields show the two RCs at roughly constant distance with longitude, also inconsistent with a tilted bar; however, an underlying bar may be present. Stellar densities in the two RCs change dramatically with longitude: on the positive longitude side the foreground RC is dominant, while the background RC dominates negative longitudes. A line connecting the maxima of the foreground and background populations is tilted to the line of sight by ∼20 0 ±4 0 , similar to claims for the tilt of a Galactic bar. The distance between the two RCs decreases toward the Galactic plane; seen edge-on the bulge is X-shaped, resembling some extragalactic bulges and the results of N-body simulations. The center of this X is consistent with the distance to the Galactic center, although better agreement would occur if the bulge is 2-3 Gyr younger than 47 Tuc. Our observations may be understood if the two RC populations emanate, nearly tangentially, from the Galactic bar ends, in a funnel shape. Alternatively, the X, or double funnel, may continue to the Galactic center. From the Sun, this would appear peanut/box shaped, but X-shaped when viewed tangentially.
15. Two Red Clumps and the X-shaped Milky Way Bulge
Science.gov (United States)
McWilliam, Andrew; Zoccali, Manuela
2010-12-01
From Two Micron All Sky Survey infrared photometry, we find two red clump (RC) populations coexisting in fields toward the Galactic bulge at latitudes |b|>5fdg5, ranging over ~13° in longitude and 20° in latitude. These RC peaks indicate two stellar populations separated by ~2.3 kpc at (l, b) = (+1, - 8) the two RCs are located at 6.5 and 8.8 ± 0.2 kpc. The double-peaked RC is inconsistent with a tilted bar morphology. Most of our fields show the two RCs at roughly constant distance with longitude, also inconsistent with a tilted bar; however, an underlying bar may be present. Stellar densities in the two RCs change dramatically with longitude: on the positive longitude side the foreground RC is dominant, while the background RC dominates negative longitudes. A line connecting the maxima of the foreground and background populations is tilted to the line of sight by ~20°±4°, similar to claims for the tilt of a Galactic bar. The distance between the two RCs decreases toward the Galactic plane; seen edge-on the bulge is X-shaped, resembling some extragalactic bulges and the results of N-body simulations. The center of this X is consistent with the distance to the Galactic center, although better agreement would occur if the bulge is 2-3 Gyr younger than 47 Tuc. Our observations may be understood if the two RC populations emanate, nearly tangentially, from the Galactic bar ends, in a funnel shape. Alternatively, the X, or double funnel, may continue to the Galactic center. From the Sun, this would appear peanut/box shaped, but X-shaped when viewed tangentially.
16. Research on Al-alloy sheet forming formability during warm/hot sheet hydroforming based on elliptical warm bulging test
Science.gov (United States)
Cai, Gaoshen; Wu, Chuanyu; Gao, Zepu; Lang, Lihui; Alexandrov, Sergei
2018-05-01
An elliptical warm/hot sheet bulging test under different temperatures and pressure rates was carried out to predict Al-alloy sheet forming limit during warm/hot sheet hydroforming. Using relevant formulas of ultimate strain to calculate and dispose experimental data, forming limit curves (FLCS) in tension-tension state of strain (TTSS) area are obtained. Combining with the basic experimental data obtained by uniaxial tensile test under the equivalent condition with bulging test, complete forming limit diagrams (FLDS) of Al-alloy are established. Using a quadratic polynomial curve fitting method, material constants of fitting function are calculated and a prediction model equation for sheet metal forming limit is established, by which the corresponding forming limit curves in TTSS area can be obtained. The bulging test and fitting results indicated that the sheet metal FLCS obtained were very accurate. Also, the model equation can be used to instruct warm/hot sheet bulging test.
17. The population of single and binary white dwarfs of the Galactic bulge
Science.gov (United States)
Torres, S.; García-Berro, E.; Cojocaru, R.; Calamida, A.
2018-05-01
Recent Hubble Space Telescope observations have unveiled the white dwarf cooling sequence of the Galactic bulge. Although the degenerate sequence can be well fitted employing the most up-to-date theoretical cooling sequences, observations show a systematic excess of red objects that cannot be explained by the theoretical models of single carbon-oxygen white dwarfs of the appropriate masses. Here, we present a population synthesis study of the white dwarf cooling sequence of the Galactic bulge that takes into account the populations of both single white dwarfs and binary systems containing at least one white dwarf. These calculations incorporate state-of-the-art cooling sequences for white dwarfs with hydrogen-rich and hydrogen-deficient atmospheres, for both white dwarfs with carbon-oxygen and helium cores, and also take into account detailed prescriptions of the evolutionary history of binary systems. Our Monte Carlo simulator also incorporates all the known observational biases. This allows us to model with a high degree of realism the white dwarf population of the Galactic bulge. We find that the observed excess of red stars can be partially attributed to white dwarf plus main sequence binaries, and to cataclysmic variables or dwarf novae. Our best fit is obtained with a higher binary fraction and an initial mass function slope steeper than standard values, as well as with the inclusion of differential reddening and blending. Our results also show that the possible contribution of double degenerate systems or young and thick-discbulge stars is negligible.
18. Nonlinear Local Bending Response and Bulging Factors for Longitudinal and Circumferential Cracks in Pressurized Cylindrical Shells
Science.gov (United States)
Young, Richard D.; Rose, Cheryl A.; Starnes, James H., Jr.
2000-01-01
Results of a geometrically nonlinear finite element parametric study to determine curvature correction factors or bulging factors that account for increased stresses due to curvature for longitudinal and circumferential cracks in unstiffened pressurized cylindrical shells are presented. Geometric parameters varied in the study include the shell radius, the shell wall thickness, and the crack length. The major results are presented in the form of contour plots of the bulging factor as a function of two nondimensional parameters: the shell curvature parameter, lambda, which is a function of the shell geometry, Poisson's ratio, and the crack length; and a loading parameter, eta, which is a function of the shell geometry, material properties, and the applied internal pressure. These plots identify the ranges of the shell curvature and loading parameters for which the effects of geometric nonlinearity are significant. Simple empirical expressions for the bulging factor are then derived from the numerical results and shown to predict accurately the nonlinear response of shells with longitudinal and circumferential cracks. The numerical results are also compared with analytical solutions based on linear shallow shell theory for thin shells, and with some other semi-empirical solutions from the literature, and limitations on the use of these other expressions are suggested.
19. POWERFUL RADIO EMISSION FROM LOW-MASS SUPERMASSIVE BLACK HOLES FAVORS DISK-LIKE BULGES
Energy Technology Data Exchange (ETDEWEB)
Wang, J.; Xu, Y.; Xu, D. W.; Wei, J. Y., E-mail: [email protected] [CAS Key Laboratory of Space Astronomy and Technology, National Astronomical Observatories, Chinese Academy of Sciences, Beijing (China)
2016-12-10
The origin of spin of low-mass supermassive black holes (SMBHs) is still a puzzle at present. We report here a study on the host galaxies of a sample of radio-selected nearby ( z < 0.05) Seyfert 2 galaxies with a BH mass of 10{sup 6–7} M{sub ⊙}. By modeling the SDSS r -band images of these galaxies through a two-dimensional bulge+disk decomposition, we identify a new dependence of SMBH's radio power on host bulge surface brightness profiles, in which more powerful radio emission comes from an SMBH associated with a more disk-like bulge. This result means low-mass and high-mass SMBHs are spun up by two entirely different modes that correspond to two different evolutionary paths. A low-mass SMBH is spun up by a gas accretion with significant disk-like rotational dynamics of the host galaxy in the secular evolution, while a high-mass one by a BH–BH merger in the merger evolution.
20. Star formation history of the Galactic bulge from deep HST imaging of low reddening windows
Science.gov (United States)
Bernard, Edouard J.; Schultheis, Mathias; Di Matteo, Paola; Hill, Vanessa; Haywood, Misha; Calamida, Annalisa
2018-04-01
Despite the huge amount of photometric and spectroscopic efforts targetting the Galactic bulge over the past few years, its age distribution remains controversial owing to both the complexity of determining the age of individual stars and the difficult observing conditions. Taking advantage of the recent release of very deep, proper-motion-cleaned colour-magnitude diagrams (CMDs) of four low reddening windows obtained with the Hubble Space Telescope (HST), we used the CMD-fitting technique to calculate the star formation history (SFH) of the bulge at -2° > b > -4° along the minor axis. We find that over 80 percent of the stars formed before 8 Gyr ago, but that a significant fraction of the super-solar metallicity stars are younger than this age. Considering only the stars that are within reach of the current generation of spectrographs (i.e. V≲ 21), we find that 10 percent of the bulge stars are younger than 5 Gyr, while this fraction rises to 20-25 percent in the metal-rich peak. The age-metallicity relation is well parametrized by a linear fit implying an enrichment rate of dZ/dt ˜ 0.005 Gyr-1. Our metallicity distribution function accurately reproduces that observed by several spectroscopic surveys of Baade's window, with the bulk of stars having metal-content in the range [Fe/H]˜-0.7 to ˜0.6, along with a sparse tail to much lower metallicities.
1. DISCOVERY OF A PAIR OF CLASSICAL CEPHEIDS IN AN INVISIBLE CLUSTER BEYOND THE GALACTIC BULGE
Energy Technology Data Exchange (ETDEWEB)
Dékány, I.; Palma, T. [Millennium Institute of Astrophysics, Santiago (Chile); Minniti, D. [Departamento de Ciencias Físicas, Universidad Andres Bello, República 220, Santiago (Chile); Hajdu, G.; Alonso-García, J.; Hempel, M.; Catelan, M. [Instituto de Astrofísica, Facultad de Física, Pontificia Universidad Católica de Chile, Av. Vicuña Mackenna 4860, Santiago (Chile); Gieren, W. [Departamento de Astronomía, Universidad de Concepción, Casilla 160 C, Concepción (Chile); Majaess, D. [Department of Astronomy and Physics, Saint Mary’s University, Halifax, NS B3H 3C3 (Canada)
2015-01-20
We report the discovery of a pair of extremely reddened classical Cepheid variable stars located in the Galactic plane behind the bulge, using near-infrared (NIR) time-series photometry from the VISTA Variables in the Vía Láctea Survey. This is the first time that such objects have ever been found in the opposite side of the Galactic plane. The Cepheids have almost identical periods, apparent brightnesses, and colors. From the NIR Leavitt law, we determine their distances with ∼1.5% precision and ∼8% accuracy. We find that they have a same total extinction of A(V)≃32 mag, and are located at the same heliocentric distance of 〈d〉=11.4±0.9 kpc, and less than 1 pc from the true Galactic plane. Their similar periods indicate that the Cepheids are also coeval, with an age of ∼48±3 Myr, according to theoretical models. They are separated by an angular distance of only 18.″3, corresponding to a projected separation of ∼1 pc. Their position coincides with the expected location of the Far 3 kpc Arm behind the bulge. Such a tight pair of similar classical Cepheids indicates the presence of an underlying young open cluster that is both hidden behind heavy extinction and disguised by the dense stellar field of the bulge. All our attempts to directly detect this “invisible cluster” have failed, and deeper observations are needed. (letters)
2. Plasmasphere dynamics in the duskside bulge region: A new look at old topic
Science.gov (United States)
Carpenter, D. L.; Giles, B. L.; Chappell, C. R.; Decreau, P. M. E.; Anderson, R. R.; Persoon, A. M.; Smith, A. J.; Corcuff, Y.; Canu, P.
1993-01-01
Data acquired during several multiday periods in 1982 at ground stations Siple, Halley, and Kerguelen and on satellites Dynamics Explorer 1, International Sun Earth Explorer 1, and GEOS 2 have been used to investigate thermal plasma structure and dynamics in the duskside plasmasphere bulge region of the Earth. The distribution of thermal plasma in the dusk bulge sector is difficult to describe realistically, in part because of the time integral manner in which the thermal plasma distribution depends upon on the effects of bulk cross-B flow and interchange plasma flows along B. While relatively simple MHD models can be useful for qualitatively predicting certain effects of enhanced convection on a quiet plasmasphere, such as an initial sunward entrainment of the outer regions, they are of limited value in predicting the duskside thermal plasma structures that are observed. Furthermore, use of such models can be misleading if one fails to realize that they do not address the question of the formation of the steep plasmapause profile or provide for a possible role of instabilities or other irreversible processes in plasmapause formation. Our specific findings, which are based both upon the present case studies and upon earlier work, include the following: (1) during active periods the plasmasphere appears to become divided into two entities, a main plasmasphere and a duskside bulge region. (2) in the aftermath of an increase in convection activity, the main plasmasphere tends (from a statistical point of view) to become roughly circular in equatorial cross section, with only a slight bulge at dusk; (3) the abrupt westward edge of the duskside bulge observed from whistlers represents a state in the evolution of sunward extending streamers; (4) in the aftermath of a weak magnetic storm, 10 to 30% of the plasma 'removed' from the outer plasmasphere appears to remain in the afternoon-dusk sector beyond the main plasmasphere. (5) outlying dense plasma structures may
3. An AO-assisted Variability Study of Four Globular Clusters
Science.gov (United States)
Salinas, R.; Contreras Ramos, R.; Strader, J.; Hakala, P.; Catelan, M.; Peacock, M. B.; Simunovic, M.
2016-09-01
The image-subtraction technique applied to study variable stars in globular clusters represented a leap in the number of new detections, with the drawback that many of these new light curves could not be transformed to magnitudes due to severe crowding. In this paper, we present observations of four Galactic globular clusters, M 2 (NGC 7089), M 10 (NGC 6254), M 80 (NGC 6093), and NGC 1261, taken with the ground-layer adaptive optics module at the SOAR Telescope, SAM. We show that the higher image quality provided by SAM allows for the calibration of the light curves of the great majority of the variables near the cores of these clusters as well as the detection of new variables, even in clusters where image-subtraction searches were already conducted. We report the discovery of 15 new variables in M 2 (12 RR Lyrae stars and 3 SX Phe stars), 12 new variables in M 10 (11 SX Phe and 1 long-period variable), and 1 new W UMa-type variable in NGC 1261. No new detections are found in M 80, but previous uncertain detections are confirmed and the corresponding light curves are calibrated into magnitudes. Additionally, based on the number of detected variables and new Hubble Space Telescope/UVIS photometry, we revisit a previous suggestion that M 80 may be the globular cluster with the richest population of blue stragglers in our Galaxy. Based on observations obtained at the Southern Astrophysical Research (SOAR) telescope, which is a joint project of the Ministério da Ciência, Tecnologia, e Inovação (MCTI) da República Federativa do Brasil, the U.S. National Optical Astronomy Observatory (NOAO), the University of North Carolina at Chapel Hill (UNC), and Michigan State University (MSU).
4. The Age of the Inner Halo Globular Cluster NGC 6652
OpenAIRE
Chaboyer, Brian; Sarajedini, Ata; Armandroff, Taft E.
2000-01-01
HST (V,I) photometry has been obtained for the inner halo globular cluster NGC 6652. The photometry reaches approximately 4 mag below the turn-off and includes a well populated horizontal branch. This cluster is located close to the Galactic center at a galactocentric distance of approximately 2.0 kpc with a reddening of E(V-I) = 0.15 +/- 0.02 and has a metallicity of [Fe/H] approximately -0.85. Based upon Delta(V) between the point on the sub-giant branch which is 0.05 mag redder than the tu...
5. The properties of the disk system of globular clusters
Science.gov (United States)
Armandroff, Taft E.
1989-01-01
A large refined data sample is used to study the properties and origin of the disk system of globular clusters. A scale height for the disk cluster system of 800-1500 pc is found which is consistent with scale-height determinations for samples of field stars identified with the Galactic thick disk. A rotational velocity of 193 + or - 29 km/s and a line-of-sight velocity dispersion of 59 + or - 14 km/s have been found for the metal-rich clusters.
6. Properties of the disk system of globular clusters
International Nuclear Information System (INIS)
Armandroff, T.E.
1989-01-01
A large refined data sample is used to study the properties and origin of the disk system of globular clusters. A scale height for the disk cluster system of 800-1500 pc is found which is consistent with scale-height determinations for samples of field stars identified with the Galactic thick disk. A rotational velocity of 193 + or - 29 km/s and a line-of-sight velocity dispersion of 59 + or - 14 km/s have been found for the metal-rich clusters. 70 references
7. The globular cluster ω Centauri and its RR Lyrae variables
International Nuclear Information System (INIS)
Dickens, R.J.
1989-07-01
The significance of some of the unusual characteristics of the globular cluster ωCentauri in various fundamental problems is explored. Interest is centred on the properties of the cluster RR Lyraes, and what they can contribute to studies of early cluster chemical enrichment, stellar pulsation, the distance scale, stellar evolution, stellar ages and the Oosterhoff period-shift problem. This article, which is intended to highlight problems and progress rather than give a comprehensive review, includes new results based on photometry of the RR Lyraes, red giants, subgiants, horizontal-branch and main sequence stars in the cluster. (author)
8. Microwave-enhanced folding and denaturation of globular proteins
DEFF Research Database (Denmark)
Bohr, Henrik; Bohr, Jakob
2000-01-01
It is shown that microwave irradiation can affect the kinetics of the folding process of some globular proteins, especially beta-lactoglobulin. At low temperature the folding from the cold denatured phase of the protein is enhanced, while at a higher temperature the denaturation of the protein from...... its folded state is enhanced. In the latter case, a negative temperature gradient is needed for the denaturation process, suggesting that the effects of the microwaves are nonthermal. This supports the notion that coherent topological excitations can exist in proteins. The application of microwaves...
9. Dynamics of proteins at low temperatures: fibrous vs. globular
Science.gov (United States)
Foucat, L.; Renou, J.-P.; Tengroth, C.; Janssen, S.; Middendorf, H. D.
We have measured quasielastic neutron scattering from H2O-hydrated collagen and haemoglobin at Tτ>10 ps. Relative to haemoglobin, the 200-K dynamic transition is shifted upward by 20-25 K in collagen, and the T-dependence of m.-sq. displacements derived from Sqe(Q;T) suggests that in triple-helical systems there are three rather than two regimes: one up to around 120K (probably purely harmonic), an intermediate quasiharmonic region with a linear dependence up to 240K, followed by a steeper nonlinear rise similar to that in globular proteins.
10. Blue straggler stars in the globular cluster NGC 5053
International Nuclear Information System (INIS)
Nemec, J.M.; Cohen, J.G.
1989-01-01
A study of the low central concentration globular cluster NGC 5053 based on photometry to 23 mag is reported. Deep C-M diagrams are presented, a mean metal abundance for the cluster is derived from the color of the RGB at the level of the horizontal branch, and theoretical isochrones are used to derive a distance modulus of (m - M0) = 16.05 + or - 0.14 mag and an age of 18 + or - 3 Gyr. A luminosity function based on subgiant and upper main-sequence stars is also constructed. A total of 24 blue stragglers in NGC 5053 are identified and their properties are studied. 65 references
11. The gravitational waveforms of white dwarf collisions in globular clusters
International Nuclear Information System (INIS)
Loren-Aguilar, P; Garcia-Berro, E; Lobo, J A; Isern, J
2009-01-01
In the dense central regions of globular clusters close encounters of two white dwarfs are relatively frequent. The estimated frequency is one or more strong encounters per star in the lifetime of the cluster. Such encounters should be then potential sources of gravitational wave radiation. Thus, it is foreseeable that these collisions could be either individually detected by LISA or they could contribute significantly to the background noise of the detector. We compute the pattern of gravitational wave emission from these encounters for a sufficiently broad range of system parameters, namely the masses, the relative velocities and the distances of the two white dwarfs involved in the encounter.
12. A fast pulsar candidate in the globular cluster M28
International Nuclear Information System (INIS)
Mahoney, M.J.; Erickson, W.C.
1985-01-01
Recent work on radio sources in globular clusters, using the very large Array telescope at 1,465 MHz, revealed a source within the core of M28. Observations of this source at 30.9 and 57.5 MHz have also been carried out, by the authors, using the Clark lake TPT synthesis telescope. The observations show that the source has a spectral index of -2.44. Only pulsars have well-documented spectra which are as steep as this. (U.K.)
13. Structure and Dynamics of the Globular Cluster Palomar 13
Science.gov (United States)
Bradford, J. D.; Geha, M.; Muñoz, R. R.; Santana, F. A.; Simon, J. D.; Côté, P.; Stetson, P. B.; Kirby, E.; Djorgovski, S. G.
2011-12-01
We present Keck/DEIMOS spectroscopy and Canada-France-Hawaii Telescope/MegaCam photometry for the Milky Way globular cluster Palomar 13. We triple the number of spectroscopically confirmed members, including many repeat velocity measurements. Palomar 13 is the only known globular cluster with possible evidence for dark matter, based on a Keck/High Resolution Echelle Spectrometer 21 star velocity dispersion of σ = 2.2 ± 0.4 km s-1. We reproduce this measurement, but demonstrate that it is inflated by unresolved binary stars. For our sample of 61 stars, the velocity dispersion is σ = 0.7+0.6 -0.5 km s-1. Combining our DEIMOS data with literature values, our final velocity dispersion is σ = 0.4+0.4 -0.3 km s-1. We determine a spectroscopic metallicity of [Fe/H] = -1.6 ± 0.1 dex, placing a 1σ upper limit of σ[Fe/H] ~ 0.2 dex on any internal metallicity spread. We determine Palomar 13's total luminosity to be MV = -2.8 ± 0.4, making it among the least luminous known globular clusters. The photometric isophotes are regular out to the half-light radius and mildly irregular outside this radius. The outer surface brightness profile slope is shallower than typical globular clusters (Σvpropr η, η = -2.8 ± 0.3). Thus at large radius, tidal debris is likely affecting the appearance of Palomar 13. Combining our luminosity with the intrinsic velocity dispersion, we find a dynamical mass of M 1/2 = 1.3+2: 7 -1.3 × 103 M ⊙ and a mass-to-light ratio of M/LV = 2.4+5.0 -2.4 M ⊙/L ⊙. Within our measurement errors, the mass-to-light ratio agrees with the theoretical predictions for a single stellar population. We conclude that, while there is some evidence for tidal stripping at large radius, the dynamical mass of Palomar 13 is consistent with its stellar mass and neither significant dark matter, nor extreme tidal heating, is required to explain the cluster dynamics. The data presented herein were obtained at the W. M. Keck Observatory, which is operated as a
14. Discovery of a ~205 Hz X-ray pulsar in the globular cluster NGC 6440
NARCIS (Netherlands)
Altamirano, D.; Strohmayer, T.E.; Heinke, C.O.; Markwardt, C.B.; Swank, J.H.; Pereira, D.; Smith, E.; Wijnands, R.; Linares, M.; Patruno, A.; Casella, P.; van der Klis, M.
2009-01-01
Discovery of a 205 Hz X-ray pulsar in the globular cluster NGC 6440 The globular cluster NGC 6440 was observed by the PCA instrument aboard RXTE on August 30, 2009 at 01:42 (UTC). The observation lasted for approximately 3000 seconds and the source was detected with an intensity of ~7 mCrab (2-10
15. The WAGGS project - I. The WiFeS Atlas of Galactic Globular cluster Spectra
Science.gov (United States)
Usher, Christopher; Pastorello, Nicola; Bellstedt, Sabine; Alabi, Adebusola; Cerulo, Pierluigi; Chevalier, Leonie; Fraser-McKelvie, Amelia; Penny, Samantha; Foster, Caroline; McDermid, Richard M.; Schiavon, Ricardo P.; Villaume, Alexa
2017-07-01
We present the WiFeS Atlas of Galactic Globular cluster Spectra, a library of integrated spectra of Milky Way and Local Group globular clusters. We used the WiFeS integral field spectrograph on the Australian National University 2.3 m telescope to observe the central regions of 64 Milky Way globular clusters and 22 globular clusters hosted by the Milky Way's low-mass satellite galaxies. The spectra have wider wavelength coverage (3300-9050 Å) and higher spectral resolution (R = 6800) than existing spectral libraries of Milky Way globular clusters. By including Large and Small Magellanic Cloud star clusters, we extend the coverage of parameter space of existing libraries towards young and intermediate ages. While testing stellar population synthesis models and analysis techniques is the main aim of this library, the observations may also further our understanding of the stellar populations of Local Group globular clusters and make possible the direct comparison of extragalactic globular cluster integrated light observations with well-understood globular clusters in the Milky Way. The integrated spectra are publicly available via the project website.
16. A VST and VISTA study of globular clusters in NGC 253
Science.gov (United States)
Cantiello, Michele; Grado, Aniello; Rejkuba, Marina; Arnaboldi, Magda; Capaccioli, Massimo; Greggio, Laura; Iodice, Enrica; Limatola, Luca
2018-03-01
of bright GCs. Part of the bright GCs missing might be at very large galactocentric distances or along the line of sight of the galaxy dusty disk. As an alternative possibility, we speculate that a fraction of low luminosity GC candidates might instead be metal-rich, intermediate age clusters, but fall in a similar color interval of old, metal-poor GCs. Conclusions: Defining a contaminant-free sample of GCs in extragalactic systems is not a straight forward exercise. Using optical and near-IR photometry we purged the list of GCs with spectroscopic membership and photometric GC candidates in NGC 253. Our results show that the use of either spectroscopic or photometric data only does not generally ensure a contaminant-free sample and a combination of both spectroscopy and photometry is preferred. Table 3 is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/611/A21This work is based on observations taken at the ESO La Silla Paranal Observatory within the VST Science Verification Programme ID 60.A-9286(A) and VISTA Science Verification Programme ID 60.A-9285(A).
17. On the evolution of globular clusters and the origin of galactic halo stars
International Nuclear Information System (INIS)
Surdin, V.G.
1978-01-01
Evolution of globular clusters of galactic halo is considered. It is shown that evolution of massive globular clusters with a greater degree of probability takes place under the effect of dynamic friction, which brings about the cluster fall on the center of galactic and their destruction by tidal forces. Evolution of small massive cluster takes place under the effect of dissipation. All the other reasons, causing the destruction of globular clusters (gravitational tidal forces, mutual cluster collision, outflow of gas from red gigant atmospheres, the change of the radius of the cluster orbit at the expense of the change of the galaxy mass inside the cluster orbit) play a secondary role. The whole mass of the stars lost by globular clusters does not exceed 10 7 M sun. It is concluded that the origin of the star population of galactic halo field can not be explained by destruction of already formed only astral globular clusters
18. SEARCH FOR RED DWARF STARS IN GLOBULAR CLUSTER NGC 6397
Science.gov (United States)
2002-01-01
Left A NASA Hubble Space Telescope image of a small region (1.4 light-years across) in the globular star cluster NGC 6397. Simulated stars (diamonds) have been added to this view of the same region of the cluster to illustrate what astronomers would have expected to see if faint red dwarf stars were abundant in the Milky Way Galaxy. The field would then contain 500 stars, according to theoretical calculations. Right The unmodified HST image shows far fewer stars than would be expected, according to popular theories of star formation. HST resolves about 200 stars. The stellar density is so low that HST can literally see right through the cluster and resolve far more distant background galaxies. From this observation, scientists have identified the surprising cutoff point below which nature apparently doesn't make many stars smaller that 1/5 the mass of our Sun. These HST findings provide new insights into star formation in our Galaxy. Technical detail:The globular cluster NGC 6397, one of the nearest and densest agglomerations of stars, is located 7,200 light-years away in the southern constellation Ara. This visible-light picture was taken on March 3, 1994 with the Wide Field Planetary Camera 2, as part the HST parallel observing program. Credit: F. Paresce, ST ScI and ESA and NASA
19. The Age of the Inner Halo Globular Cluster NGC 6652
Science.gov (United States)
Chaboyer, Brian; Sarajedini, Ata; Armandroff, Taft E.
2000-01-01
Hubble Space Telescope (HST) (V,I) photometry has been obtained for the inner halo globular cluster NGC 6652. The photometry reaches approximately 4 mag below the turn-off and includes a well populated horizontal branch (HB). This cluster is located close to the Galactic center at RGC approximately equal to 2.0 kpc with a reddening of E(V-I) = 0.15 +/- 0.02 and has a metallicity of [Fe/H] approximately equal to -0.85. Based upon DELTA V (sup SGB) (sub HB), NGC 6652 is 11.7 plus or minus 1.6 Gyr old. Using A HB precise differential ages for 47 Tuc (a thick disk globular), M107 and NGC 1851 (both halo clusters) were obtained. NGC 6652 appears to be the same age as 47 Tuc and NGC 1851 (within +/- 1.2 Gyr), while there is a slight suggestion that M107 is older than NGC 6652 by 2.3 +/- 1.5 Gyr. As this is a less than 2 sigma result, this issue needs to be investigated further before a definitive statement regarding the relative age of M107 and NGC 6652 may be made.
20. An Archival Search For Young Globular Clusters in Galaxies
Science.gov (United States)
1995-07-01
One of the most intriguing results from HST has been the discovery of ultraluminous star clusters in interacting and merging galaxies. These clusters have the luminosities, colors, and sizes that would be expected of young globular clusters produced by the interaction. We propose to use the data in the HST Archive to determine how prevalent this phenomena is, and to determine whether similar clusters are produced in other environments. Three samples will be extracted and studied in a systematic and consistent manner: 1} interacting and merging galaxies, 2} starburst galaxies, 3} a control sample of normal'' galaxies. A preliminary search of the archives shows that there are at least 20 galaxies in each of these samples, and the number will grow by about 50 observations become available. The data will be used to determine the luminosity function, color histogram , spatial distribution, and structural properties of the clusters using the same techniques employed in our study of NGC 7252 {Atoms -for-Peace'' galaxy} and NGC 4038/4039 {The Antennae''}. Our ultimate goals are: 1} to understand how globular clusters form, and 2} to use the clusters as evolutionary tracers to unravel the histories of interacting galaxies.
1. Dynamical Friction in Multi-component Evolving Globular Clusters
Science.gov (United States)
Alessandrini, Emiliano; Lanzoni, Barbara; Miocchi, Paolo; Ciotti, Luca; Ferraro, Francesco R.
2014-11-01
2. Deep CCD photometry in globular clusters III. M15
International Nuclear Information System (INIS)
Fahlman, G.G.; Richer, H.B.; Vandenberg, D.A.
1985-01-01
CCD photometry in U, B, and V is presented for a 5' x 3' field in the globular cluster M15. The location of the main sequence in the color-magnitude diagram is found here to be significantly bluer than previous studies have indicated. The luminosity function of the cluster is studied down to V = 22.8 (Mroughly-equal7.5) and shown to be consistent with a power-law mass function, n(M) = QM/sup -alpha/ with α = 2.5 +- 1.0, to the limit of our data. The field star population brighter than V = 21.5, is examined in some detail. There appears to be about 50% more stars belonging to the disk in the field as compared with the Bahcall-Soneira standard galaxy model. The reddening to the cluster is found to be E(B-V) = 0.11 +- 0.04 from nine bright field stars. A new value for the ultraviolet excess of the cluster main-sequence stars is obtained, delta(0.6) = 0.25 +- 0.02, and confirms the well-known fact that M15 is among the metal poorest of the globular clusters
3. Brain asymmetry in the white matter making and globularity
Directory of Open Access Journals (Sweden)
Constantina eTheofanopoulou
2015-09-01
Full Text Available Recent studies from the field of language genetics and evolutionary anthropology have put forward the hypothesis that the emergence of our species-specific brain is to be understood not in terms of size, but in light of developmental changes that gave rise to a more globular braincase configuration after the split from Neanderthals-Denisovans. On the grounds that (i white matter myelination is delayed relative to other brain structures and in humans is protracted compared with other primates and (ii neural connectivity is linked genetically to our brain/skull morphology and language-ready brain, I take it that one significant evolutionary change in Homo sapiens’ lineage is the interhemispheric connectivity mediated by the Corpus Callosum. The size, myelination and fiber caliber of the Corpus Callosum presents an anterior-to-posterior increase, in a way that inter-hemispheric connectivity is more prominent in the sensory motor areas, whereas high- order areas are more intra-hemispherically connected. Building on evidence from language-processing studies that account for this asymmetry (‘lateralization’ in terms of brain rhythms, I present an evo-devo hypothesis according to which the myelination of the Corpus Callosum, Brain Asymmetry and Globularity are conjectured to make up the angles of a co-evolutionary triangle that gave rise to our language-ready brain.
4. Medium Resolution Spectroscopy and Chemical Composition of Galactic Globular Clusters
Directory of Open Access Journals (Sweden)
Khamidullina D. A.
2014-12-01
Full Text Available We used integrated-light medium-resolution spectra of six Galactic globular clusters and model stellar atmospheres to carry out population synthesis and to derive chemical composition and age of the clusters. We used medium-resolution spectra of globular clusters published by Schiavon et al. (2005, as well as our long-slit observations with the 1.93 m telescope of the Haute Provence Observatory. The observed spectra were fitted to the theoretical ones interactively. As an initial approach, we used masses, radii and log g of stars in the clusters corresponding to the best fitting isochrones in the observed color-magnitude diagrams. The computed synthetic blanketed spectra of stars were summed according to the Chabrier mass function. To improve the determination of age and helium content, the shape and depth of the Balmer absorption lines was analysed. The abundances of Mg, Ca, C and several other elements were derived. A reasonable agreement with the literature data both in chemical composition and in age of the clusters is found. Our method might be useful for the development of stellar population models and for a better understanding of extragalactic star clusters.
5. The fragmentation of proto-globular clusters. I. Thermal instabilities
International Nuclear Information System (INIS)
Murray, S.D.; Lin, D.N.C.
1989-01-01
The metal abundances among the stars within a typical globular cluster are remarkably homogeneous. This indicates that star formation in these systems was a globally coordinated event which occurred over a time span less than or comparable to the collapse time scale of the cluster. This issue is addressed by assuming that the fragmentation of a proto-globular cluster cloud proceeded in two steps. In the first step, thermal instability led to the rapid growth of initial fluctuations. This led to a large contrast in the dynamical time scales between the perturbations and the parent cloud, and the perturbations then underwent gravitational instabilities on short time scales. This process is modeled using one-dimensional hydrodynamic simulations of clouds both with and without external heat sources and self-gravity. The models include the effects of a non-equilibrium H2 abundance. The results indicate that fragmentation can occur on time scales significantly less than the dynamical time scale of the parent cloud. 21 refs
6. Comparing Chemical Compositions of Dwarf Elliptical Galaxies and Globular Clusters
Science.gov (United States)
Chu, Jason; Sparkman, Lea; Toloba, Elisa; Guhathakurta, Puragra
2015-01-01
Because of their abundance in cluster environments and fragility due to their low mass, dwarf elliptical galaxies (dEs) are excellent specimens for studying the physical processes that occur inside galaxy clusters. These studies can be used to expand our understanding of the process of galaxy (specifically dE) formation and the role of dark matter in the Universe. To move closer to better understanding these topics, we present a study of the relationship between dEs and globular clusters (GCs) by using the largest sample of dEs and GC satellites to date. We focus on comparing the ages and chemical compositions of dE nuclei with those of satellite GCs by analyzing absorption lines in their spectra. To better view the spectral features of these relatively dim objects, we employ a spectral co-addition process, where we add the fluxes of several objects to produce a single spectrum with high signal-to-noise ratio. Our finding that dE nuclei are younger and more metal rich than globular clusters establishes important benchmarks that future dE formation theories will consider. We also establish a means to identify GCs whose parent galaxies are uncertain, which allows us to make comparisons between this GC group and the satellite GCs.
7. Variable Stars in Large Magellanic Cloud Globular Clusters. III. Reticulum
Science.gov (United States)
Kuehn, Charles A.; Dame, Kyra; Smith, Horace A.; Catelan, Márcio; Jeon, Young-Beom; Nemec, James M.; Walker, Alistair R.; Kunder, Andrea; Pritzl, Barton J.; De Lee, Nathan; Borissova, Jura
2013-06-01
This is the third in a series of papers studying the variable stars in old globular clusters in the Large Magellanic Cloud. The primary goal of this series is to look at how the characteristics and behavior of RR Lyrae stars in Oosterhoff-intermediate systems compare to those of their counterparts in Oosterhoff-I/II systems. In this paper we present the results of our new time-series BVI photometric study of the globular cluster Reticulum. We found a total of 32 variables stars (22 RRab, 4 RRc, and 6 RRd stars) in our field of view. We present photometric parameters and light curves for these stars. We also present physical properties, derived from Fourier analysis of light curves, for some of the RR Lyrae stars. We discuss the Oosterhoff classification of Reticulum and use our results to re-derive the distance modulus and age of the cluster. Based on observations taken with the SMARTS 1.3 m telescope operated by the SMARTS Consortium and observations taken at the Southern Astrophysical Research (SOAR) telescope, which is a joint project of the Ministério da Ciência, Tecnologia, e Inovação (MCTI) da República Federativa do Brasil, the U.S. National Optical Astronomy Observatory (NOAO), the University of North Carolina at Chapel Hill (UNC), and Michigan State University (MSU).
8. Color Gradients Within Globular Clusters: Restricted Numerical Simulation
Directory of Open Access Journals (Sweden)
Young-Jong Sohn
1997-06-01
Full Text Available The results of a restricted numerical simulation for the color gradients within globular clusters have been presented. The standard luminosity function of M3 and Salpeter's initial mass functions were used to generate model clusters as a fundamental population. Color gradients with the sample clusters for both King and power law cusp models of surface brightness distributions are discussed in the case of using the standard luminosity function. The dependence of color gradients on several parameters for the simulations with Salpeter's initial mass functions, such as slope of initial mass functions, cluster ages, metallicities, concentration parameters of King model, and slopes of power law, are also discussed. No significant radial color gradients are shown to the sample clusters which are regenerated by a random number generation technique with various parameters in both of King and power law cusp models of surface brightness distributions. Dynamical mass segregation and stellar evolution of horizontal branch stars and blue stragglers should be included for the general case of model simulations to show the observed radial color gradients within globular clusters.
9. Medium resolution spectroscopy and chemical composition of Galactic globular clusters
Science.gov (United States)
Khamidullina, D. A.; Sharina, M. E.; Shimansky, V. V.; Davoust, E.
We used integrated-light medium-resolution spectra of six Galactic globular clusters and model stellar atmospheres to carry out population synthesis and to derive chemical composition and age of the clusters. We used medium-resolution spectra of globular clusters published by Schiavon et al. (2005), as well as our long-slit observations with the 1.93 m telescope of the Haute Provence Observatory. The observed spectra were fitted to the theoretical ones interactively. As an initial approach, we used masses, radii and log g of stars in the clusters corresponding to the best fitting isochrones in the observed color-magnitude diagrams. The computed synthetic blanketed spectra of stars were summed according to the Chabrier mass function. To improve the determination of age and helium content, the shape and depth of the Balmer absorption lines was analysed. The abundances of Mg, Ca, C and several other elements were derived. A reasonable agreement with the literature data both in chemical composition and in age of the clusters is found. Our method might be useful for the development of stellar population models and for a better understanding of extragalactic star clusters.
10. Ages of Globular Clusters from HIPPARCOS Parallaxes of Local Subdwarfs
Science.gov (United States)
Gratton, Raffaele G.; Fusi Pecci, Flavio; Carretta, Eugenio; Clementini, Gisella; Corsi, Carlo E.; Lattanzi, Mario
1997-12-01
We report here initial but strongly conclusive results for absolute ages of Galactic globular clusters (GGCs). This study is based on high-precision trigonometric parallaxes from the HIPPARCOS satellite coupled with accurate metal abundances ([Fe/H], [O/Fe], and [α/Fe]) from high-resolution spectroscopy for a sample of about thirty subdwarfs. Systematic effects due to star selection (Lutz-Kelker corrections to parallaxes) and the possible presence of undetected binaries in the sample of bona fide single stars are examined, and appropriate corrections are estimated. They are found to be small for our sample. The new data allow us to reliably define the absolute location of the main sequence (MS) as a function of metallicity. These results are then used to derive distances and ages for a carefully selected sample of nine globular clusters having metallicities determined from high-dispersion spectra of individual giants according to a procedure totally consistent with that used for the field subdwarfs. Very precise and homogeneous reddening values have also been independently determined for these clusters. Random errors for our distance moduli are +/-0.08 mag, and systematic errors are likely of the same order of magnitude. These very accurate distances allow us to derive ages with internal errors of ~12% (+/-1.5 Gyr). The main results are: 1. HIPPARCOS parallaxes are smaller than corresponding ground-based measurements, leading, in turn, to longer distance moduli (~0.2 mag) and younger ages (~2.8 Gyr). 2. The distance to NGC 6752 derived from our MS fitting is consistent with that determined using the white dwarf cooling sequence. 3. The relation between the zero-age HB (ZAHB) absolute magnitude and metallicity for the nine program clusters is MV(ZAHB)=(0.22+/-0.09)([Fe/H]+1.5)+(0.49+/-0.04) . This relation is fairly consistent with some of the most recent theoretical models. Within quoted errors, the slope is in agreement with that given by the Baade-Wesselink (BW
11. The next generation Virgo cluster survey. VIII. The spatial distribution of globular clusters in the Virgo cluster
Energy Technology Data Exchange (ETDEWEB)
Durrell, Patrick R.; Accetta, Katharine [Department of Physics and Astronomy, Youngstown State University, Youngstown, OH 44555 (United States); Côté, Patrick; Blakeslee, John P.; Ferrarese, Laura; McConnachie, Alan; Gwyn, Stephen [Herzberg Astronomy and Astrophysics, National Research Council, 5071 West Saanich Road, Victoria, BC V9E 2E7 (Canada); Peng, Eric W.; Zhang, Hongxin [Department of Astronomy, Peking University, Beijing 100871 (China); Mihos, J. Christopher [Department of Astronomy, Case Western Reserve University, Cleveland, OH 44106 (United States); Puzia, Thomas H.; Jordán, Andrés [Institute of Astrophysics, Pontificia Universidad Catolica, Av. Vicu' a Mackenna 4860, Macul 7820436, Santiago (Chile); Lançon, Ariane [Observatoire astronomique de Strasbourg, Université de Strasbourg, CNRS, UMR 7550, 11 rue de l' Université, F-67000 Strasbourg (France); Liu, Chengze [Center for Astronomy and Astrophysics, Department of Physics and Astronomy, Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai 200240 (China); Cuillandre, Jean-Charles [Canada-France-Hawaii Telescope Corporation, Kamuela, HI 96743 (United States); Boissier, Samuel; Boselli, Alessandro [Aix Marseille Université, CNRS, LAM (Laboratoire d' Astrophysique de Marseille) UMR 7326, F-13388 Marseille (France); Courteau, Stéphane [Department of Physics, Engineering Physics and Astronomy, Queen' s University, Kingston, ON K7L 3N6 (Canada); Duc, Pierre-Alain [AIM Paris Saclay, CNRS/INSU, CEA/Irfu, Université Paris Diderot, Orme des Merisiers, F-91191 Gif sur Yvette cedex (France); Emsellem, Eric [Université de Lyon 1, CRAL, Observatoire de Lyon, 9 av. Charles André, F-69230 Saint-Genis Laval (France); CNRS, UMR 5574, ENS de Lyon (France); and others
2014-10-20
We report on a large-scale study of the distribution of globular clusters (GCs) throughout the Virgo cluster, based on photometry from the Next Generation Virgo Cluster Survey (NGVS), a large imaging survey covering Virgo's primary subclusters (Virgo A = M87 and Virgo B = M49) out to their virial radii. Using the g{sub o}{sup ′}, (g' – i') {sub o} color-magnitude diagram of unresolved and marginally resolved sources within the NGVS, we have constructed two-dimensional maps of the (irregular) GC distribution over 100 deg{sup 2} to a depth of g{sub o}{sup ′} = 24. We present the clearest evidence to date showing the difference in concentration between red and blue GCs over the full extent of the cluster, where the red (more metal-rich) GCs are largely located around the massive early-type galaxies in Virgo, while the blue (metal-poor) GCs have a much more extended spatial distribution with significant populations still present beyond 83' (∼215 kpc) along the major axes of both M49 and M87. A comparison of our GC maps to the diffuse light in the outermost regions of M49 and M87 show remarkable agreement in the shape, ellipticity, and boxiness of both luminous systems. We also find evidence for spatial enhancements of GCs surrounding M87 that may be indicative of recent interactions or an ongoing merger history. We compare the GC map to that of the locations of Virgo galaxies and the X-ray intracluster gas, and find generally good agreement between these various baryonic structures. We calculate the Virgo cluster contains a total population of N {sub GC} = 67, 300 ± 14, 400, of which 35% are located in M87 and M49 alone. For the first time, we compute a cluster-wide specific frequency S {sub N,} {sub CL} = 2.8 ± 0.7, after correcting for Virgo's diffuse light. We also find a GC-to-baryonic mass fraction ε {sub b} = 5.7 ± 1.1 × 10{sup –4} and a GC-to-total cluster mass formation efficiency ε {sub t} = 2.9 ± 0.5 × 10{sup –5
12. CO J = 2-1 EMISSION FROM EVOLVED STARS IN THE GALACTIC BULGE
Energy Technology Data Exchange (ETDEWEB)
Sargent, Benjamin A.; Meixner, M. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Patel, N. A. [Harvard-Smithsonian Center for Astrophysics, Cambridge, MA 02138 (United States); Otsuka, M.; Srinivasan, S. [Institute of Astronomy and Astrophysics, Academia Sinica, P.O. Box 23-141, Taipei 10617, Taiwan (China); Riebel, D., E-mail: [email protected] [Department of Physics and Astronomy, The Johns Hopkins University, 3400 North Charles Street, Baltimore, MD 21218 (United States)
2013-03-01
We observe a sample of eight evolved stars in the Galactic bulge in the CO J = 2-1 line using the Submillimeter Array with angular resolution of 1''-4''. These stars have been detected previously at infrared wavelengths, and several of them have OH maser emission. We detect CO J = 2-1 emission from three of the sources in the sample: OH 359.943 +0.260, [SLO2003] A12, and [SLO2003] A51. We do not detect the remaining five stars in the sample because of heavy contamination from the galactic CO emission. Combining CO data with observations at infrared wavelengths constraining dust mass loss from these stars, we determine the gas-to-dust ratios of the Galactic bulge stars for which CO emission is detected. For OH 359.943 +0.260, we determine a gas mass-loss rate of 7.9 ({+-}2.2) Multiplication-Sign 10{sup -5} M {sub Sun} yr{sup -1} and a gas-to-dust ratio of 310 ({+-}89). For [SLO2003] A12, we find a gas mass-loss rate of 5.4 ({+-}2.8) Multiplication-Sign 10{sup -5} M {sub Sun} yr{sup -1} and a gas-to-dust ratio of 220 ({+-}110). For [SLO2003] A51, we find a gas mass-loss rate of 3.4 ({+-}3.0) Multiplication-Sign 10{sup -5} M {sub Sun} yr{sup -1} and a gas-to-dust ratio of 160 ({+-}140), reflecting the low quality of our tentative detection of the CO J = 2-1 emission from A51. We find that the CO J = 2-1 detections of OH/IR stars in the Galactic bulge require lower average CO J = 2-1 backgrounds.
13. DRAFTS: A DEEP, RAPID ARCHIVAL FLARE TRANSIENT SEARCH IN THE GALACTIC BULGE
International Nuclear Information System (INIS)
Osten, Rachel A.; Sahu, Kailash; Kowalski, Adam; Hawley, Suzanne L.
2012-01-01
We utilize the Sagittarius Window Eclipsing Extrasolar Planet Search Hubble Space Telescope/Advanced Camera for Surveys data set for a Deep Rapid Archival Flare Transient Search to constrain the flare rate toward the older stellar population in the Galactic bulge. During seven days of monitoring 229,293 stars brighter than V = 29.5, we find evidence for flaring activity in 105 stars between V = 20 and V = 28. We divided the sample into non-variable stars and variable stars whose light curves contain large-scale variability. The flare rate on variable stars is ∼700 times that of non-variable stars, with a significant correlation between the amount of underlying stellar variability and peak flare amplitude. The flare energy loss rates are generally higher than those of nearby well-studied single dMe flare stars. The distribution of proper motions is consistent with the flaring stars being at the distance and age of the Galactic bulge. If they are single dwarfs, then they span a range of ≈1.0-0.25 M ☉ . A majority of the flaring stars exhibit periodic photometric modulations with P < 3 days. If these are tidally locked magnetically active binary systems, then their fraction in the bulge is enhanced by a factor of ∼20 compared to the local value. These stars may be useful for placing constraints on the angular momentum evolution of cool close binary stars. Our results expand the type of stars studied for flares in the optical band, and suggest that future sensitive optical time-domain studies will have to contend with a larger sample of flaring stars than the M dwarf flare stars usually considered.
14. Eyes - bulging
Science.gov (United States)
... different ages. In: Lambert SR, Lyons CJ, eds. Taylor and Hoyt's Pediatric Ophthalmology and Strabismus . 5th ed. Philadelphia, PA: Elsevier; 2017:chap 96. Orge FH, Grigorian F. Examination and common problems of the neonatal eye. ...
15. Characterization of Friction Stir Welded Tubes by Means of Tube Bulge Test
International Nuclear Information System (INIS)
D'Urso, G.; Longo, M.; Giardini, C.
2011-01-01
Mechanical properties of friction stir welded joints are generally evaluated by means of conventional tensile test. This testing method might provide insufficient information because maximum strain obtained in tensile test before necking is small; moreover, the application of tensile test is limited when the joint path is not linear or even when the welds are executed on curved surfaces. Therefore, in some cases, it would be preferable to obtain the joints properties from other testing methods. Tube bulge test can be a valid solution for testing circumferential or longitudinal welds executed on tubular workpieces. The present work investigates the mechanical properties and the formability of friction stir welded tubes by means of tube bulge tests. The experimental campaign was performed on tubular specimens having a thickness of 3 mm and an external diameter of 40 mm, obtained starting from two semi-tubes longitudinally friction stir welded. The first step, regarding the fabrication of tubes, was performed combining a conventional forming process and friction stir welding. Sheets in Al-Mg-Si-Cu alloy AA6060 T6 were adopted for this purpose. Plates having a dimension of 225x60 mm were bent (with a bending axis parallel to the main dimension) in order to obtain semi-tubes. A particular care was devoted to the fabrication of forming devices (punch and die) in order to minimize the springback effects. Semi-tubes were then friction stir welded by means of a CNC machine tool. Some preliminary tests were carried out by varying the welding parameters, namely feed rate and rotational speed. A very simple tool having flat shoulder and cylindrical pin was used. The second step of the research was based on testing the welded tubes by means of tube bulge test. A specific equipment having axial actuators with a conical shape was adopted for this study. Some analyses were carried out on the tubes bulged up to a certain pressure level. In particular, the burst pressure and the
16. The gamma-ray pulsar population of globular clusters: implications for the GeV excess
Energy Technology Data Exchange (ETDEWEB)
Hooper, Dan [Fermi National Accelerator Laboratory, Center for Particle Astrophysics, Batavia, IL 60510 (United States); Linden, Tim, E-mail: [email protected], E-mail: [email protected] [Ohio State University, Center for Cosmology and AstroParticle Physcis (CCAPP), Columbus, OH 43210 (United States)
2016-08-01
It has been suggested that the GeV excess, observed from the region surrounding the Galactic Center, might originate from a population of millisecond pulsars that formed in globular clusters. With this in mind, we employ the publicly available Fermi data to study the gamma-ray emission from 157 globular clusters, identifying a statistically significant signal from 25 of these sources (ten of which are not found in existing gamma-ray catalogs). We combine these observations with the predicted pulsar formation rate based on the stellar encounter rate of each globular cluster to constrain the gamma-ray luminosity function of millisecond pulsars in the Milky Way's globular cluster system. We find that this pulsar population exhibits a luminosity function that is quite similar to those millisecond pulsars observed in the field of the Milky Way (i.e. the thick disk). After pulsars are expelled from a globular cluster, however, they continue to lose rotational kinetic energy and become less luminous, causing their luminosity function to depart from the steady-state distribution. Using this luminosity function and a model for the globular cluster disruption rate, we show that millisecond pulsars born in globular clusters can account for only a few percent or less of the observed GeV excess. Among other challenges, scenarios in which the entire GeV excess is generated from such pulsars are in conflict with the observed mass of the Milky Way's Central Stellar Cluster.
17. A catalogue of masses, structural parameters and velocity dispersion profiles of 112 Milky Way globular clusters
Science.gov (United States)
Baumgardt, H.; Hilker, M.
2018-05-01
We have determined masses, stellar mass functions and structural parameters of 112 Milky Way globular clusters by fitting a large set of N-body simulations to their velocity dispersion and surface density profiles. The velocity dispersion profiles were calculated based on a combination of more than 15,000 high-precision radial velocities which we derived from archival ESO/VLT and Keck spectra together with ˜20, 000 published radial velocities from the literature. Our fits also include the stellar mass functions of the globular clusters, which are available for 47 clusters in our sample, allowing us to self-consistently take the effects of mass segregation and ongoing cluster dissolution into account. We confirm the strong correlation between the global mass functions of globular clusters and their relaxation times recently found by Sollima & Baumgardt (2017). We also find a correlation of the escape velocity from the centre of a globular cluster and the fraction of first generation stars (FG) in the cluster recently derived for 57 globular clusters by Milone et al. (2017), but no correlation between the FG star fraction and the global mass function of a globular cluster. This could indicate that the ability of a globular cluster to keep the wind ejecta from the polluting star(s) is the crucial parameter determining the presence and fraction of second generation stars and not its later dynamical mass loss.
18. EVIDENCE FOR AN ACCRETION ORIGIN FOR THE OUTER HALO GLOBULAR CLUSTER SYSTEM OF M31
International Nuclear Information System (INIS)
Mackey, A. D.; Huxor, A. P.; Ferguson, A. M. N.; Irwin, M. J.; Chapman, S. C.; Tanvir, N. R.; McConnachie, A. W.; Ibata, R. A.; Lewis, G. F.
2010-01-01
We use a sample of newly discovered globular clusters from the Pan-Andromeda Archaeological Survey (PAndAS) in combination with previously cataloged objects to map the spatial distribution of globular clusters in the M31 halo. At projected radii beyond ∼30 kpc, where large coherent stellar streams are readily distinguished in the field, there is a striking correlation between these features and the positions of the globular clusters. Adopting a simple Monte Carlo approach, we test the significance of this association by computing the probability that it could be due to the chance alignment of globular clusters smoothly distributed in the M31 halo. We find that the likelihood of this possibility is low, below 1%, and conclude that the observed spatial coincidence between globular clusters and multiple tidal debris streams in the outer halo of M31 reflects a genuine physical association. Our results imply that the majority of the remote globular cluster system of M31 has been assembled as a consequence of the accretion of cluster-bearing satellite galaxies. This constitutes the most direct evidence to date that the outer halo globular cluster populations in some galaxies are largely accreted.
19. Systematic main sequence photometry of globular cluster stars for age determination
International Nuclear Information System (INIS)
Alcaino, G.; Liller, W.
1984-01-01
The individual photometric study of the coeval stars in globular clusters presents one of the best observational tests of the stellar evolution theory. Our own globular cluster system provides fundamental clues to the dynamical and chemical evolutionary history of the galaxy, and the study of their ages give a lower limit to the age of the galaxy as well as to that of the universe. The authors have undertaken a systematic research program, and discuss the ages deduced by fitting main sequence photometry to theoretical isochrones of six galactic globular clusters: M4, M22, M30, NGC 288, NGC 3201 and NGC 6397. (Auth.)
20. Transiently disordered tails accelerate folding of globular proteins.
Science.gov (United States)
Mallik, Saurav; Ray, Tanaya; Kundu, Sudip
2017-07-01
Numerous biological proteins exhibit intrinsic disorder at their termini, which are associated with multifarious functional roles. Here, we show the surprising result that an increased percentage of terminal short transiently disordered regions with enhanced flexibility (TstDREF) is associated with accelerated folding rates of globular proteins. Evolutionary conservation of predicted disorder at TstDREFs and drastic alteration of folding rates upon point-mutations suggest critical regulatory role(s) of TstDREFs in shaping the folding kinetics. TstDREFs are associated with long-range intramolecular interactions and the percentage of native secondary structural elements physically contacted by TstDREFs exhibit another surprising positive correlation with folding kinetics. These results allow us to infer probable molecular mechanisms behind the TstDREF-mediated regulation of folding kinetics that challenge protein biochemists to assess by direct experimental testing. © 2017 Federation of European Biochemical Societies.
1. Cyanogen strengths of globular cluster post-main-sequence stars
International Nuclear Information System (INIS)
Hesser, J.E.; Hartwick, F.D.A.; McClure, R.D.
1976-01-01
CN strengths in the peculiar clusters ω Cen and M22 and the metal-rich clusters 47 Tuc, M71, and NGC 6352 are found to vary markedly from star to star. The strong variations in CN strength found earlier for ω Cen by Norris and Bessell and by Dickens and Bell are shown to extend to fainter stars, although expected correlations of CN strength with position in the color-magnitude (C-M) diagram are less evident in our sample. Several CN and metal-strong stars were also observed in M22. We conclude that CN, once it appears in globular clusters, can vary much more than it does in equivalent Population I samples, a result we briefly examine in light of current understanding regarding physical processes in the stars themselves and of models of galactic chemical evolution
2. Chemical abundances of globular clusters in NGC 5128 (Centaurus A)
Science.gov (United States)
Hernandez, Svea; Larsen, Søren; Trager, Scott; Kaper, Lex; Groot, Paul
2018-06-01
We perform a detailed abundance analysis on integrated-light spectra of 20 globular clusters (GCs) in the early-type galaxy NGC 5128 (Centaurus A). The GCs were observed with X-Shooter on the Very Large Telescope (VLT). The cluster sample spans a metallicity range of -1.92 poor GCs in NGC 5128 is genuine, it could hint at a chemical enrichment history different than that experienced by the MW. We also measure Na abundances in 9 out of 20 GCs. We find evidence for intracluster abundance variations in six of these clusters where we see enhanced [Na/Fe] > +0.25 dex. We obtain the first abundance measurements of Cr, Mn, and Ni for a sample of the GC population in NGC 5128 and find consistency with the overall trends observed in the MW, with a slight enhancement (<0.1 dex) in the Fe-peak abundances measured in the NGC 5128.
3. LISA Sources in Milky Way Globular Clusters.
Science.gov (United States)
Kremer, Kyle; Chatterjee, Sourav; Breivik, Katelyn; Rodriguez, Carl L; Larson, Shane L; Rasio, Frederic A
2018-05-11
We explore the formation of double-compact-object binaries in Milky Way (MW) globular clusters (GCs) that may be detectable by the Laser Interferometer Space Antenna (LISA). We use a set of 137 fully evolved GC models that, overall, effectively match the properties of the observed GCs in the MW. We estimate that, in total, the MW GCs contain ∼21 sources that will be detectable by LISA. These detectable sources contain all combinations of black hole (BH), neutron star, and white dwarf components. We predict ∼7 of these sources will be BH-BH binaries. Furthermore, we show that some of these BH-BH binaries can have signal-to-noise ratios large enough to be detectable at the distance of the Andromeda galaxy or even the Virgo cluster.
4. Globular conformation of some ribosomal proteins in solution
International Nuclear Information System (INIS)
Serdyuk, I.N.; Spirin, A.S.
1978-01-01
The possibility that such RNA-binding proteins of the 30 S subparticle as S4, S7, S8 and S16 exist in the form of compact globules in solution has been explored experimentally. These proteins have been studied in D 2 O solution by neutron scattering to measure their radii of gyration. This type of radiation using D 2 O as a solvent provides the maximum 'contrast', that is the maximum difference between the scattering of the protein and the solvent. It allowed measurements to be made using protein at <= 1.5 mg/ml. The radii of gyration for the ribosomal proteins S4, S7, S8 and S16 were found to be relatively small corresponding to the radii of gyration of compact globular proteins of the same molecular weights. (Auth.)
5. SHRINKING THE BRANEWORLD: BLACK HOLE IN A GLOBULAR CLUSTER
International Nuclear Information System (INIS)
Gnedin, Oleg Y.; Maccarone, Thomas J.; Psaltis, Dimitrios; Zepf, Stephen E.
2009-01-01
Large extra dimensions have been proposed as a possible solution to the hierarchy problem in physics. In one of the suggested models, the RS2 braneworld model, black holes may evaporate by Hawking radiation faster than in general relativity, on a timescale that depends on the black hole mass and on the asymptotic radius of curvature of the extra dimensions. Thus the size of the extra dimensions can be constrained by astrophysical observations. Here we point out that the black hole, recently discovered in an extragalactic globular cluster, places the strongest upper limit on the size of the extra dimensions in the RS2 model, L ∼< 0.003 mm. This black hole has the virtues of old age and relatively small mass. The derived upper limit is within an order of magnitude of the absolute limit afforded by astrophysical observations of black holes.
6. Search for Formation Criteria for Globular Cluster Systems
Science.gov (United States)
2005-01-01
Star cluster formation is a major mode of star formation in the extreme conditions of interacting galaxies and violent starbursts. By studying ages and metallicities of young metal-enhanced star clusters in mergers / merger remnants we can learn about the violent star formation history of these galaxies and eventually about galaxy formation and evolution. We will present a new set of evolutionary synthesis models of our GALEV code specially developed to account for the gaseous emission of presently forming star clusters and an advanced tool to compare large model grids with multi-color broad-band observations becoming presently available in large amounts. Such observations are an ecomonic way to determine the parameters of young star clusters as will be shown in the presentation. First results of newly-born clusters in mergers and starburst galaxies are presented and compared to the well-studied old globulars and interpreted in the framework of galaxy formation / evolution.
7. Neutron star/red giant encounters in globular clusters
International Nuclear Information System (INIS)
Bailyn, C.D.
1988-01-01
The author presents a simple expression for the amount by which xsub(crit) is diminished as a star evolves xsub(crit) Rsub(crit)/R*, where Rsub(crit) is the maximum distance of closest approach between two stars for which the tidal energy is sufficient to bind the system, and R* is the radius of the star on which tides are being raised. Also it is concluded that tidal capture of giants by neutron stars resulting in binary systems is unlikely in globular clusters. However, collisions between neutron stars and red giants, or an alternative process involving tidal capture of a main-sequence star into an initially detached binary system, may result either in rapidly rotating neutron stars or in white dwarf/neutron star binaries. (author)
8. The direct piezoelectric effect in the globular protein lysozyme
Science.gov (United States)
Stapleton, A.; Noor, M. R.; Sweeney, J.; Casey, V.; Kholkin, A. L.; Silien, C.; Gandhi, A. A.; Soulimane, T.; Tofail, S. A. M.
2017-10-01
Here, we present experimental evidence of the direct piezoelectric effect in the globular protein, lysozyme. Piezoelectric materials are employed in many actuating and sensing applications because they can convert mechanical energy into electrical energy and vice versa. Although originally studied in inorganic materials, several biological materials including amino acids and bone, also exhibit piezoelectricity. The exact mechanisms supporting biological piezoelectricity are not known, nor is it known whether biological piezoelectricity conforms strictly to the criteria of classical piezoelectricity. The observation of piezoelectricity in protein crystals presented here links biological piezoelectricity with the classical theory of piezoelectricity. We quantify the direct piezoelectric effect in monoclinic and tetragonal aggregate films of lysozyme using conventional techniques based on the Berlincourt Method. The largest piezoelectric effect measured in a crystalline aggregate film of lysozyme was approximately 6.5 pC N-1. These findings raise fundamental questions as to the possible physiological significance of piezoelectricity in lysozyme and the potential for technical applications.
9. The Hubble Space Telescope UV Legacy Survey of Galactic globular clusters - XIV. Multiple stellar populations within M 15 and their radial distribution
Science.gov (United States)
Nardiello, D.; Milone, A. P.; Piotto, G.; Anderson, J.; Bedin, L. R.; Bellini, A.; Cassisi, S.; Libralato, M.; Marino, A. F.
2018-06-01
In the context of the Hubble Space Telescope UV Survey of Galactic globular clusters (GCs), we derived high-precision, multi-band photometry to investigate the multiple stellar populations in the massive and metal-poor GC M 15. By creating for red-giant branch (RGB) stars of the cluster a `chromosome map', which is a pseudo two-colour diagram made with appropriate combination of F275W, F336W, F438W, and F814W magnitudes, we revealed colour spreads around two of the three already known stellar populations. These spreads cannot be produced by photometric errors alone and could hide the existence of (two) additional populations. This discovery increases the complexity of the multiple-population phenomenon in M 15. Our analysis shows that M 15 exhibits a faint sub-giant branch (SGB), which is also detected in colour-magnitude diagrams (CMDs) made with optical magnitudes only. This poorly populated SGB includes about 5 per cent of the total number of SGB stars and evolves into a red RGB in the mF336W versus mF336W - mF814W CMD, suggesting that M 15 belongs to the class of Type II GCs. We measured the relative number of stars in each population at various radial distances from the cluster centre, showing that all of these populations share the same radial distribution within statistic uncertainties. These new findings are discussed in the context of the formation and evolution scenarios of the multiple populations.
10. CHARACTERIZING THE HEAVY ELEMENTS IN GLOBULAR CLUSTER M22 AND AN EMPIRICAL s-PROCESS ABUNDANCE DISTRIBUTION DERIVED FROM THE TWO STELLAR GROUPS
International Nuclear Information System (INIS)
Roederer, I. U.; Marino, A. F.; Sneden, C.
2011-01-01
We present an empirical s-process abundance distribution derived with explicit knowledge of the r-process component in the low-metallicity globular cluster M22. We have obtained high-resolution, high signal-to-noise spectra for six red giants in M22 using the Magellan Inamori Kyocera Echelle spectrograph on the Magellan-Clay Telescope at Las Campanas Observatory. In each star we derive abundances for 44 species of 40 elements, including 24 elements heavier than zinc (Z = 30) produced by neutron-capture reactions. Previous studies determined that three of these stars (the 'r+s group') have an enhancement of s-process material relative to the other three stars (the 'r-only group'). We confirm that the r+s group is moderately enriched in Pb relative to the r-only group. Both groups of stars were born with the same amount of r-process material, but s-process material was also present in the gas from which the r+s group formed. The s-process abundances are inconsistent with predictions for asymptotic giant branch (AGB) stars with M ≤ 3 M ☉ and suggest an origin in more massive AGB stars capable of activating the 22 Ne(α,n) 25 Mg reaction. We calculate the s-process 'residual' by subtracting the r-process pattern in the r-only group from the abundances in the r+s group. In contrast to previous r- and s-process decompositions, this approach makes no assumptions about the r- and s-process distributions in the solar system and provides a unique opportunity to explore s-process yields in a metal-poor environment.
11. Characterizing the Heavy Elements in Globular Cluster M22 and an Empirical s-process Abundance Distribution Derived from the Two Stellar Groups
Science.gov (United States)
Roederer, I. U.; Marino, A. F.; Sneden, C.
2011-11-01
We present an empirical s-process abundance distribution derived with explicit knowledge of the r-process component in the low-metallicity globular cluster M22. We have obtained high-resolution, high signal-to-noise spectra for six red giants in M22 using the Magellan Inamori Kyocera Echelle spectrograph on the Magellan-Clay Telescope at Las Campanas Observatory. In each star we derive abundances for 44 species of 40 elements, including 24 elements heavier than zinc (Z = 30) produced by neutron-capture reactions. Previous studies determined that three of these stars (the "r+s group") have an enhancement of s-process material relative to the other three stars (the "r-only group"). We confirm that the r+s group is moderately enriched in Pb relative to the r-only group. Both groups of stars were born with the same amount of r-process material, but s-process material was also present in the gas from which the r+s group formed. The s-process abundances are inconsistent with predictions for asymptotic giant branch (AGB) stars with M <= 3 M ⊙ and suggest an origin in more massive AGB stars capable of activating the 22Ne(α,n)25Mg reaction. We calculate the s-process "residual" by subtracting the r-process pattern in the r-only group from the abundances in the r+s group. In contrast to previous r- and s-process decompositions, this approach makes no assumptions about the r- and s-process distributions in the solar system and provides a unique opportunity to explore s-process yields in a metal-poor environment. This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile.
12. Fracture toughness of silicon nitride thin films of different thicknesses as measured by bulge tests
International Nuclear Information System (INIS)
Merle, B.; Goeken, M.
2011-01-01
A bulge test setup was used to determine the fracture toughness of amorphous low-pressure chemical vapor deposited (LPCVD) silicon nitride films with various thicknesses in the range 40-108 nm. A crack-like slit was milled in the center of each free-standing film with a focused ion beam, and the membrane was deformed in the bulge test until failure occurred. The fracture toughness K IC was calculated from the pre-crack length and the stress at failure. It is shown that the membrane is in a transition state between pure plane-stress and plane-strain which, however, had a negligible influence on the measurement of the fracture toughness, because of the high brittleness of silicon nitride and its low Young's modulus over yield strength ratio. The fracture toughness K IC was found to be constant at 6.3 ± 0.4 MPa m 1/2 over the whole thickness range studied, which compares well with bulk values. This means that the fracture toughness, like the Young's modulus, is a size-independent quantity for LPCVD silicon nitride. This presumably holds true for all amorphous brittle ceramic materials.
13. Giant Rapid X-ray Flares in Extragalactic Globular Clusters
Science.gov (United States)
Irwin, Jimmy
2018-01-01
There is only one known class of non-destructive, highly energetic astrophysical object in the Universe whose energy emission varies by more than a factor of 100 on time scales of less than a minute -- soft gamma repeaters/anomalous X-ray pulsars, whose flares are believed to be caused by the energy release from the cracking of a neutron star's surface by very strong magnetic fields. All other known violent, rapid explosions, including gamma-ray bursts and supernovae, are believed to destroy the object in the process. Here, we report the discovery of a second class of non-destructive, highly energetic rapidly flaring X-ray object located within two nearby galaxies with fundamentally different properties than soft gamma repeaters/anomalous X-ray pulsars. One source is located within a suspected globular cluster of the host galaxy and flared one time, while the other source is located in either a globular cluster of the host galaxy or the core of a stripped dwarf companion galaxy that flared on six occasions over a seven year time span. When not flaring, the sources appear as normal accreting neutron star or black hole X-ray binaries, indicating that the flare event does not significantly disrupt the host system. While the nature of these sources is still unclear, the discovery of these sources in decade-old archival Chandra X-ray Observatory data illustrates the under-utilization of X-ray timing as a means to discover new classes of explosive events in the Universe.
14. The Black Hole Mass-Bulge Luminosity Relationship for Active Galactic Nuclei From Reverberation Mapping and Hubble Space Telescope Imaging
DEFF Research Database (Denmark)
Bentz, Misty C.; Peterson, Bradley M.; Pogge, Richard W.
2009-01-01
We investigate the relationship between black hole mass and bulge luminosity for active galactic nuclei (AGNs) with reverberation-based black hole mass measurements and bulge luminosities from two-dimensional decompositions of Hubble Space Telescope host galaxy images. We find that the slope...... of the relationship for AGNs is 0.76-0.85 with an uncertainty of ~0.1, somewhat shallower than the M BH vprop L 1.0±0.1 relationship that has been fit to nearby quiescent galaxies with dynamical black hole mass measurements. This difference is somewhat perplexing, as the AGN black hole masses include an overall...
15. Conformation and dynamics of nucleotides in bulges and symmetric internal loops in duplex DNA studied by EPR and fluorescence spectroscopies
International Nuclear Information System (INIS)
Cekan, Pavol; Sigurdsson, Snorri Th.
2012-01-01
Highlights: ► Bulges and loops were studied by both EPR and fluorescence spectroscopies using the probe Ç/Ç f . ► One-base bulge was in a temperature-dependent equilibrium between looped-out and stacked states. ► Bases in two- and three-base bulges were stacked at all temperatures, resulting in DNA bending. ► Bases were stacked in symmetrical two- to five-base internal loops, according to EPR data. ► Unexpectedly high fluorescence for the smaller loops indicated local structural perturbations. -- Abstract: The dynamics and conformation of base bulges and internal loops in duplex DNA were studied using the bifunctional spectroscopic probe Ç, which becomes fluorescent (Ç f ) upon reduction of the nitroxide functional group, along with EPR and fluorescence spectroscopies. A one-base bulge was in a conformational equilibrium between looped-out and stacked states, the former favored at higher temperature and the latter at lower temperature. Stacking of bulge bases was favored in two- and three-base bulges, independent of temperature, resulting in DNA bending as evidenced by increased fluorescence of Ç f . EPR spectra of Ç-labeled three-, four- and five-base symmetrical interior DNA bulges at 20 °C showed low mobility, indicating that the spin-label was stacked within the loop. The spin-label mobility at 37 °C increased as the loops became larger. A considerable variation in fluorescence between different loops was observed, as well as a temperature-dependence within constructs. Fluorescence unexpectedly increased as the size of the loop decreased at 2 °C. Fluorescence of the smallest loops, where a single T·T mismatch was located between the stem region and the probe, was even larger than for the single strand, indicating a considerable local structural deformation of these loops from regular B-DNA. These results show the value of combining EPR and fluorescence spectroscopy to study non-helical regions of nucleic acids.
16. Measuring age differences among globular clusters having similar metallicities - A new method and first results
International Nuclear Information System (INIS)
Vandenberg, D.A.; Bolte, M.; Stetson, P.B.
1990-01-01
A color-difference technique for estimating the relative ages of globular clusters with similar chemical compositions on the basis of their CM diagrams is described and demonstrated. The theoretical basis and implementation of the procedure are explained, and results for groups of globular clusters with m/H = about -2, -1.6, and -1.3, and for two special cases (Palomar 12 and NGC 5139) are presented in extensive tables and graphs and discussed in detail. It is found that the more metal-deficient globular clusters are nearly coeval (differences less than 0.5 Gyr), whereas the most metal-rich globular clusters exhibit significant age differences (about 2 Gyr). This result is shown to contradict Galactic evolution models postulating halo collapse in less than a few times 100 Myr. 77 refs
17. Chemical evolution of the Galactic bulge as traced by microlensed dwarf and subgiant stars: II. Ages, metallicities, detailed elemental abundances, and connections to the Galactic thick disc
NARCIS (Netherlands)
Bensby, T.; Feltzing, S.; Johnson, J.A.; Gould, A.; Adén, D.; Asplund, M.; Meléndez, J.; Gal-Yam, A.; Lucatello, S.; Sana, H.; Sumi, T.; Miyake, N.; Suzuki, D.; Han, C.; Bond, I.; Udalski, A.
2010-01-01
Context. The Bulge is the least understood major stellar population of the Milky Way. Most of what we know about the formation and evolution of the Bulge comes from bright giant stars. The underlying assumption that giants represent all the stars, and accurately trace the chemical evolution of a
18. A spectroscopic and photometric study of MSP companions in Galactic Globular Clusters
OpenAIRE
Cocozza, Gabriele
2008-01-01
This Thesis is devoted to the study of the optical companions of Millisecond Pulsars in Galactic Globular Clusters (GCs) as a part of a large project started at the Department of Astronomy of the Bologna University, in collaboration with other institutions (Astronomical Observatory of Cagliari and Bologna, University of Virginia), specifically dedicated to the study of the environmental effects on passive stellar evolution in galactic GCs. Globular Clusters are very efficien...
19. Globular clusters as a source of X-ray emission from the neighbourhood of M87
International Nuclear Information System (INIS)
Fabian, A.C.; Pringle, J.E.; Rees, M.J.
1976-01-01
It is stated that the X-ray emission from globular clusters may be attributable to accretion on to compact objects, the accreting material being supplied from binary companions, or gas trapped in the potential well of the cluster. Counts of objects in the vicinity of the M87 have revealed that it has an extensive halo of globular clusters, the number of which may exceed 10,000 within a radius of 23 arc min. Most of these clusters may be explicable as a population effect, and the similarity of their optical properties to those of cluster in our own Galaxy suggests that they may also contain X-ray sources. The brighter globular clusters in M87 may, however, be substantially more X-ray luminous, and there may be proportionally more gas available in globular clusters in M87 compared with our Galaxy. The average X-ray luminosity of individual globular clusters may be of the order of 10 38 erg/sec., which raises the possibility that the integrated globular cluster emission may account for a substantial fraction of the X-ray emission observed from the region of M87. In support of this it is noted that the extended X-ray emission from the Virgo cluster is centered on M87, which lies approximately 45 arc min from the cluster centroid, and it is expected that the general X-ray emission from the globular cluster will appear to be smoothly and symmetrically distributed about M87 at moderate spatial resolution. A similar situation may apply to the elliptical galaxy NGC 3311 in Abell 1060 which, as a cluster, has been suggested as the identification for the X-ray source 3 U 1044-40, and it seems possible that that galaxy is surrounded by a similar globular cluster population to that of M87. (U.K.)
20. THE PANCHROMATIC HUBBLE ANDROMEDA TREASURY. I. BRIGHT UV STARS IN THE BULGE OF M31
International Nuclear Information System (INIS)
Rosenfield, Philip; Johnson, L. Clifton; Dalcanton, Julianne J.; Williams, Benjamin F.; Gilbert, Karoline M.; Girardi, Léo; Bressan, Alessandro; Lang, Dustin; Guhathakurta, Puragra; Dorman, Claire E.; Howley, Kirsten M.; Lauer, Tod R.; Olsen, Knut A. G.; Bell, Eric F.; Bianchi, Luciana; Caldwell, Nelson; Dolphin, Andrew; Kalirai, Jason; Larsen, Søren S.; Rix, Hans-Walter
2012-01-01
As part of the Panchromatic Hubble Andromeda Treasury multi-cycle program, we observed a 12' × 6.'5 area of the bulge of M31 with the WFC3/UVIS filters F275W and F336W. From these data we have assembled a sample of ∼4000 UV-bright, old stars, vastly larger than previously available. We use updated Padova stellar evolutionary tracks to classify these hot stars into three classes: Post-AGB stars (P-AGB), Post-Early AGB (PE-AGB) stars, and AGB-manqué stars. P-AGB stars are the end result of the asymptotic giant branch (AGB) phase and are expected in a wide range of stellar populations, whereas PE-AGB and AGB-manqué (together referred to as the hot post-horizontal branch; HP-HB) stars are the result of insufficient envelope masses to allow a full AGB phase, and are expected to be particularly prominent at high helium or α abundances when the mass loss on the red giant branch is high. Our data support previous claims that most UV-bright sources in the bulge are likely hot (extreme) horizontal branch (EHB) stars and their progeny. We construct the first radial profiles of these stellar populations and show that they are highly centrally concentrated, even more so than the integrated UV or optical light. However, we find that this UV-bright population does not dominate the total UV luminosity at any radius, as we are detecting only the progeny of the EHB stars that are the likely source of the UV excess. We calculate that only a few percent of main-sequence stars in the central bulge can have gone through the HP-HB phase and that this percentage decreases strongly with distance from the center. We also find that the surface density of hot UV-bright stars has the same radial variation as that of low-mass X-ray binaries. We discuss age, metallicity, and abundance variations as possible explanations for the observed radial variation in the UV-bright population.
1. RR Lyrae star distance scale and kinematics from inner bulge to 50 kpc
Directory of Open Access Journals (Sweden)
Dambis Andrei
2017-01-01
Full Text Available We use the currently most complete sample of ∼ 3500 type ab RR Lyraes in our Galaxy with available radial-velocity and [Fe/H] measurements to perform a statisticalparallax analysis for a subsample of ∼ 600 type ab RR Lyraes located within 5 kpc from the Sun to refine the parameters of optical and WISE W1-band period-metallicityluminosity relations and adjust our preliminary distances. The new zero point implies the rescaled estimates for the solar Galactocentric distance (RG = 7.99 ± 0.37 kpc and the LMC distance modulus (DMLMC = 18.39 ±0.09. We use the kinematic data for the entire sample to explore the dependence of the halo and thick-disk RR Lyrae velocity ellipsoids on Galactocentric distance from the inner bulge out to R ∼ 50 kpc.
2. REDSHIFT EVOLUTION IN BLACK HOLE-BULGE RELATIONS: TESTING C IV-BASED BLACK HOLE MASSES
International Nuclear Information System (INIS)
Greene, Jenny E.; Peng, Chien Y.; Ludwig, Randi R.
2010-01-01
We re-examine claims for redshift evolution in black hole-bulge scaling relations based on lensed quasars. In particular, we refine the black hole (BH) mass estimates using measurements of Balmer lines from near-infrared spectroscopy obtained with Triplespec at Apache Point Observatory. In support of previous work, we find a large scatter between Balmer and UV line widths, both Mg IIλλ2796, 2803 and C IVλλ1548, 1550. There is tentative evidence that C III]λ1909, despite being a blend of multiple transitions, may correlate well with Mg II, although a larger sample is needed for a real calibration. Most importantly, we find no systematic changes in the estimated BH masses for the lensed sample based on Balmer lines, providing additional support to the interpretation that black holes were overly massive compared to their host galaxies at high redshift.
3. Selective Alkylation of C-Rich Bulge Motifs in Nucleic Acids by Quinone Methide Derivatives.
Science.gov (United States)
Lönnberg, Tuomas; Hutchinson, Mark; Rokita, Steven
2015-09-07
A quinone methide precursor featuring a bis-cyclen anchoring moiety has been synthesized and its capacity to alkylate oligonucleotide targets quantified in the presence and absence of divalent metal ions (Zn(2+) , Ni(2+) and Cd(2+) ). The oligonucleotides were designed for testing the sequence and secondary structure specificity of the reaction. Gel electrophoretic analysis revealed predominant alkylation of C-rich bulges, regardless of the presence of divalent metal ions or even the bis-cyclen anchor. This C-selectivity appears to be an intrinsic property of the quinone methide electrophile as reflected by its reaction with an equimolar mixture of the 2'-deoxynucleosides. Only dA-N1 and dC-N3 alkylation products were detected initially and only the dC adduct persisted for detection under conditions of the gel electrophoretic analysis. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
4. Limb darkening of a K giant in the galactic bulge : Planet photometry of MACHO 97-BLG-28
NARCIS (Netherlands)
Albrow, MD; Beaulieu, JP; Caldwell, JAR; Dominik, M; Greenhill, J; Hill, K; Kane, S; Martin, R; Menzies, J; Pel, JW; Pollard, K; Sackett, PD; Sahu, KC; Vermaak, P; Watson, R; Williams, A; Sahu, MS
1999-01-01
We present the PLANET photometric data set(10) for the binary-lens microlensing event MACHO 97-BLG-28, consisting of 696 I- and V-band measurements, and analyze it to determine the radial surface brightness profile of the Galactic bulge source star. The microlensed source, demonstrated to be a K
5. Study of mechanical-magnetic and electromagnetic properties of PZT/Ni film systems by a novel bulge technique
Energy Technology Data Exchange (ETDEWEB)
Liu, Q.; Zhou, W.; Ding, J.; Xiao, M. [School of Materials Science and Engineering, Xiangtan University, Hunan 411105 (China); Key Laboratory of Film Materials and Devices of Science and Technology Department of Hunan Province, Xiangtan University, Hunan 411105 (China); Yu, Z.J.; Xu, H. [State Key Lab for Turbulence and Complex Systems, Peking University, Beijing 100871 (China); Mao, W.G., E-mail: [email protected] [School of Materials Science and Engineering, Xiangtan University, Hunan 411105 (China); Key Laboratory of Film Materials and Devices of Science and Technology Department of Hunan Province, Xiangtan University, Hunan 411105 (China); Pei, Y.M.; Li, F.X. [State Key Lab for Turbulence and Complex Systems, Peking University, Beijing 100871 (China); Feng, X. [AML, Department of Engineering Mechanics, Tsinghua University, Beijing 100084 (China); Fang, D.N., E-mail: [email protected] [State Key Lab for Turbulence and Complex Systems, Peking University, Beijing 100871 (China); Institute of Advanced Structure Technology, Beijing Institute of Technology, Beijing 100081 (China)
2017-02-01
A novel multiple functional bulge apparatus was designed to study the mechanical-electronic-magnetic characteristics of electromagnetic materials. The elastic modulus difference effect of Ni thin film was observed and it was about 22.16% in the demagnetized and magnetization saturated states. The mechanical-magnetic behaviors of Ni and lead-titanate zirconate (PZT)/Ni films were in-situ measured by using the new bulge systems, respectively. The evolutions of three key material properties in hysteresis loop including saturation magnetization, remanent magnetization and coercive field were discussed in detail, respectively. The mechanisms of mechanical-magnetic coupled behaviors of Ni and PZT/Ni films were analyzed with the aid of the competitive relationship of stress and magnetization. Similarly, the electronic-magnetic characteristics of PZT/Ni films were in-situ measured by using this experimental system. The evolution of saturated magnetization, remanent magnetization and coercive field Kerr signals were discussed with the magneto-elastic anisotropy energy point. In this paper, a suitable mechanical-electronic-magnetic bulge measurement system was established, which would provide a good choice for further understanding the multi field coupling characteristics of electromagnetic film materials. - Highlights: • A novel bulge apparatus was designed to study electromagnetic materials. • The mechanical-magnetic features of Ni film were studied by this new apparatus. • The ΔE effect of Ni film was observed and analyzed. • The mechanical electronic-magnetic characteristics of PZT/Ni film were discussed.
6. Galactic Angular Momentum in Cosmological Zoom-in Simulations. I. Disk and Bulge Components and the Galaxy-Halo Connection
Science.gov (United States)
Sokołowska, Aleksandra; Capelo,
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8642176985740662, "perplexity": 5023.783999164429}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267158958.72/warc/CC-MAIN-20180923020407-20180923040807-00167.warc.gz"}
|
https://mathematica.stackexchange.com/questions/125929/optional-argument-that-can-be-completely-omitted?noredirect=1
|
# Optional argument that can be completely omitted?
I am writing a simple function return similar words.
Clear[similarWords]
similarWords[string_]:=Nearest[WordList[],string]
I want to add another argument n which is optional. When it presents, it controls the number of words returns.
Clear[similarWords]
similarWords[string_,n_:???]:=Nearest[WordList[],string,n]
But the problem is, what should I put it ???. I cannot figure it out.
The only way I can come up is
Clear[similarWords]
similarWords[string_,n_:-1]:=If[n==-1,Nearest[WordList[],string],Nearest[WordList[],string,n]]
Is there a neater way?
• n___? or more precisely but longer: n:(_|PatternSequence[]). – Kuba Sep 9 '16 at 11:12
• I might be not understanding something here, but why not just similarWords[string_, n_: 1] := Nearest[WordList[], string, n]? – corey979 Sep 9 '16 at 11:21
• @corey979 Well, because by default Nearest return nearest items. But nearest words could be more than 1, if they have same distance – matheorem Sep 9 '16 at 11:24
• Could you give an exemplary word that illustrates this? – corey979 Sep 9 '16 at 11:26
• @corey979 you can try Nearest[WordList[], "suprise"] – matheorem Sep 9 '16 at 11:27
See @AlbertRetey's answer for all but trivial cases.
Don't use Optional, nor If. Use two definitions.
similarWords[string_] := Nearest[WordList[], string]
similarWords[string_, n_] := Nearest[WordList[], string, n]
I prefer this over just using arg___ and passing all arguments into Nearest because it keeps the responsibility for argument checking with similarWords. But of course just passing down everything is easier and quicker to write, and it's what I'd do in an interactive session (as opposed to a package or a situation where reusability and reliability is more important).
• I was always wondering why there is no shorthand for (_|PatternSequence[]) – Kuba Sep 9 '16 at 12:01
• @Kuba There's Repeated[..., {0, 1}], but it's not exactly short. – Szabolcs Sep 9 '16 at 12:06
• Hi, @Kuba, so you just hand the opportunity over to Szabolcs? I am about to accpet this : ) – matheorem Sep 9 '16 at 12:08
• @matheorem If you want to accept because it fits your needs better then it doesn't matter when I'm going to answer :) – Kuba Sep 9 '16 at 12:08
• @Kuba Your solution is worth to be posted as an answer. I didn't know this way before and it is much more elegant than via Optional. – Alexey Popkov Sep 9 '16 at 12:53
for your simple example Szabolcs suggestions is certainly the best you can do. If for some reason in a less simple situation you want the behavior you described with just one definition this is what you could do:
similarWords[string_, n_: Automatic] := If[n === Automatic,
Nearest[WordList[], string],
Nearest[WordList[], string, n]
]
note that Automatic is just a symbol whose name is guaranteed to have no definition and seems to fit the intented behavior, it has no special functionality builtin. Technically you could just as well use any other "tag", including -1 as you suggested...
EDIT there has been some discussion if and when this or the two definition approach are to be prefered, and I think it is pretty clear that the two definition approach is best when it doesn't lead to code duplication. If it does, you might be better off with the single definition approach in this answer. Alternatively you could extract one or more functions which do what is common to both definitions and only call those and have the code which is different in the bodies of the two definitions...
• +1. I also do exactly this in such cases, and prefer this over having 2 definitions. I also believe that this is the recommended solution, used most frequently in internal code / development. – Leonid Shifrin Sep 9 '16 at 13:25
• @LeonidShifrin Ah, so my original solution is already as good as recommended :D, good to hear this. – matheorem Sep 9 '16 at 13:31
• +1. One situation where this may make this more difficult is if you decide to extend the argument pattern in the future, e.g. add options too. – Szabolcs Sep 9 '16 at 13:34
• @matheorem Feel free to change the accept. – Szabolcs Sep 9 '16 at 13:43
• @Szabolcs I frequently use functions with both. What I do is of course to restrict the type of the optional arguments, like e.g. f[arg:Except[_?OptionQ]:Automatic, opts:OptionsPattern[]] (or stricter types when possible), and it works just fine. The reason I dislike two definition - based solution is code duplication and the need to maintain that (keep them in sync as the code changes). Over the time, more often than not such places become the origin of bad regression bugs. – Leonid Shifrin Sep 9 '16 at 14:35
I'd love to have a short syntax form for that. I'd use it more often:
similarWords[string_, n:(_|PatternSequence[]) ]:= Nearest[WordList[],string,n]
• Hi, @Kuba. I just realized that n___ is not equivalent to n:(_|PatternSequence[]), and n:(_|PatternSequence[]) is actually equivalent to Szabolcs's suggestion, right? – matheorem Sep 9 '16 at 13:15
• @matheorem correct. n___ alows you to put there a sequence which will break Nearest. Thus not included in my answer. – Kuba Sep 9 '16 at 13:16
• Now, I truly understand. Your solution is as good as Szabolcs', even neater. Only one downside, it will confuse novice and make him(or her) look up the doc for quite a while : ) I wish I could accept both : ) – matheorem Sep 9 '16 at 13:22
• @matheorem I'd go with Szabolcs for readability. – Kuba Sep 9 '16 at 13:24
This seems to work:
similarWords[string_, n: Repeated[_,{0,1}]] :=
Nearest[WordList[], string, n]
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21582679450511932, "perplexity": 2035.0138247483478}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027316785.68/warc/CC-MAIN-20190822064205-20190822090205-00477.warc.gz"}
|
https://arxiv.org/abs/1204.4526
|
cs.DS
(what is this?)
# Title: A Tight Combinatorial Algorithm for Submodular Maximization Subject to a Matroid Constraint
Abstract: We present an optimal, combinatorial 1-1/e approximation algorithm for monotone submodular optimization over a matroid constraint. Compared to the continuous greedy algorithm (Calinescu, Chekuri, Pal and Vondrak, 2008), our algorithm is extremely simple and requires no rounding. It consists of the greedy algorithm followed by local search. Both phases are run not on the actual objective function, but on a related non-oblivious potential function, which is also monotone submodular. Our algorithm runs in randomized time O(n^8u), where n is the rank of the given matroid and u is the size of its ground set. We additionally obtain a 1-1/e-eps approximation algorithm running in randomized time O (eps^-3n^4u). For matroids in which n = o(u), this improves on the runtime of the continuous greedy algorithm. The improvement is due primarily to the time required by the pipage rounding phase, which we avoid altogether. Furthermore, the independence of our algorithm from pipage rounding techniques suggests that our general approach may be helpful in contexts such as monotone submodular maximization subject to multiple matroid constraints.
Our approach generalizes to the case where the monotone submodular function has restricted curvature. For any curvature c, we adapt our algorithm to produce a (1-e^-c)/c approximation. This result complements results of Vondrak (2008), who has shown that the continuous greedy algorithm produces a (1-e^-c)/c approximation when the objective function has curvature c. He has also proved that achieving any better approximation ratio is impossible in the value oracle model.
Subjects: Data Structures and Algorithms (cs.DS) MSC classes: 68W25 ACM classes: F.2.2 Cite as: arXiv:1204.4526 [cs.DS] (or arXiv:1204.4526v4 [cs.DS] for this version)
## Submission history
From: Justin Ward [view email]
[v1] Fri, 20 Apr 2012 03:42:03 GMT (34kb)
[v2] Sun, 1 Jul 2012 06:39:48 GMT (35kb)
[v3] Wed, 16 Oct 2013 17:56:02 GMT (31kb)
[v4] Tue, 19 Nov 2013 17:01:33 GMT (32kb)
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8325580954551697, "perplexity": 1675.793663763483}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607860.7/warc/CC-MAIN-20170524192431-20170524212431-00433.warc.gz"}
|
https://www.nature.com/articles/s41524-021-00525-5?error=cookies_not_supported&code=c2fd6d07-eaa5-4c12-bfaf-96d0c11c3c6f
|
Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript.
# Intersystem crossing and exciton–defect coupling of spin defects in hexagonal boron nitride
## Abstract
Despite the recognition of two-dimensional (2D) systems as emerging and scalable host materials of single-photon emitters or spin qubits, the uncontrolled, and undetermined chemical nature of these quantum defects has been a roadblock to further development. Leveraging the design of extrinsic defects can circumvent these persistent issues and provide an ultimate solution. Here, we established a complete theoretical framework to accurately and systematically design quantum defects in wide-bandgap 2D systems. With this approach, essential static and dynamical properties are equally considered for spin qubit discovery. In particular, many-body interactions such as defect–exciton couplings are vital for describing excited state properties of defects in ultrathin 2D systems. Meanwhile, nonradiative processes such as phonon-assisted decay and intersystem crossing rates require careful evaluation, which competes together with radiative processes. From a thorough screening of defects based on first-principles calculations, we identify promising single-photon emitters such as SiVV and spin qubits such as TiVV and MoVV in hexagonal boron nitride. This work provided a complete first-principles theoretical framework for defect design in 2D materials.
## Introduction
Optically addressable defect-based qubits offer a distinct advantage in their ability to operate with high fidelity under room temperature conditions1,2. Despite the tremendous progress made in years of research, systems that exist today remain inadequate for real-world applications. The identification of stable single-photon emitters (SPEs) in 2D materials has opened up a new playground for novel quantum phenomena and quantum technology applications, with improved scalability in device fabrication and leverage in doping spatial control, qubit entanglement, and qubit tuning3,4. In particular, hexagonal boron nitride (h-BN) has demonstrated that it can host stable defect-based SPEs5,6,7,8 and spin triplet defects9,10. However, persistent challenges must be resolved before 2D quantum defects can become the most promising quantum information platform. These challenges include the undetermined chemical nature of existing SPEs7,11, difficulties in the controlled generation of desired spin defects, and scarcity of reliable theoretical methods which can accurately predict critical physical parameters for defects in 2D materials due to their complex many-body interactions.
To circumvent these challenges, the design of promising spin defects by high-integrity theoretical methods is urgently needed. Introducing extrinsic defects can be unambiguously produced and controlled, which fundamentally solves the current issues of the undetermined chemical nature of existing SPEs in 2D systems. As highlighted by refs. 2,12, promising spin qubit candidates should satisfy several essential criteria: deep defect levels, stable high spin states, large zero-field splitting (ZFS), efficient radiative recombination, high intersystem crossing (ISC) rates, and long spin coherence and relaxation time. Using these criteria for theoretical screening can effectively identify promising candidates but requires theoretical development of first-principles methods, significantly beyond the static and mean-field level. For example, accurate defect charge transition levels in 2D materials necessitates careful treatment of defect charge corrections for removal of spurious charge interactions13,14,15 and electron correlations for non-neutral excitation, e.g. from GW approximations15,16 or Koopmans-compliant hybrid functionals17,18,19,20. Optical excitation and exciton radiative lifetime must account for defect–exciton interactions, e.g. by solving the Bethe–Salpeter equation (BSE), due to large exciton-binding energies in 2D systems21,22. Spin-phonon relaxation time calls for a general theoretical approach to treat complex symmetry and state degeneracy of defective systems, along the line of recent development based on ab-initio density matrix approach23. Spin coherence time due to the nuclei spin and electron spin coupling can be accurately predicted for defects in solids by combining first-principles and spin Hamiltonian approaches24,25. In the end, nonradiative processes, such as phonon-assisted nonradiative recombination, have been recently computed with first-principles electron–phonon couplings for defects in h-BN26, and resulted in less competitive rates than corresponding radiative processes. However, the spin–orbit-induced ISC as the key process for pure spin state initialization during qubit operation has not been investigated for spin defects in 2D materials from first-principles in-depth.
This work has developed a complete theoretical framework which enables the design of spin defects based on the critical physical parameters mentioned above and highlighted in Fig. 1a. We employed state-of-the-art first-principles methods, focusing on many-body interaction such as defect–exciton couplings and dynamical processes through radiative and nonradiative recombinations. We developed a methodology to compute nonradiative ISC rates with an explicit overlap of phonon wavefunctions beyond current implementations in the Huang–Rhys approximation27. We showcase the discovery of transition metal complexes such as Ti and Mo with a vacancy (TiVV and MoVV) to be spin triplet defects in h-BN, and the discovery of SiVV to be a bright SPE in h-BN. We predict TiVV and MoVV are stable triplet defects in h-BN (which is rare considering the only known such defect is $${\,\text{V}}_{\text{B}\,}^{-}$$28) with large ZFS and spin-selective decay, which will set 2D quantum defects at a competitive stage with NV center in diamond for quantum technology applications.
## Results
In the development of spin qubits in 3D systems (e.g. diamond, SiC, and AlN), defects beyond sp dangling bonds from N or C have been explored. In particular, large metal ions plus anion vacancy in AlN and SiC were found to have potential as qubits due to triplet ground states and large ZFS29. Similar defects may be explored in 2D materials30, such as the systems shown in Fig. 1b–d. This opens up the possibility of overcoming the current limitations of the uncontrolled and undetermined chemical nature of 2D defects, and unsatisfactory spin-dependent properties of existing defects. In the following, we will start the computational screening of spin defects with static properties of the ground state (spin state, defect formation energy, and ZFS) and the excited state (optical spectra), then we will discuss dynamical properties including radiative and nonradiative (phonon-assisted spin conserving and spin-flip) processes, as the flow chart shown in Fig. 1a. We will summarize the complete defect discovery procedure and discuss the outlook at the end.
### Screening triplet spin defects in h-BN
To identify stable qubits in h-BN, we start by screening neutral dopant-vacancy defects for a triplet ground state based on total energy calculations of different spin states at both semi-local Perdew–Burke–Ernzerhof (PBE) and hybrid functional levels. We considered the dopant substitution at a divacancy site in h-BN (Fig. 1b) for four different elemental groups. The results of this procedure are summarized in Supplementary Table 1 and Note 1. With additional supercell tests in Supplementary Table 2, our screening process finally yielded that only MoVV and TiVV have a stable triplet ground state. We further confirmed the thermodynamic charge stability of these defect candidates via calculations of defect formation energy and charge transition levels. As shown in Supplementary Fig. 1, both TiVV and MoVV defects have a stable neutral (q = 0) region for a large range of Fermi levels (εF), from 2.2 to 5.6 eV for MoVV and from 2.9 to 6.1 eV for TiVV. These neutral states will be stable in intrinsic h-BN systems or with weak p-type or n-type doping (see Supplementary Note 2).
With a confirmed triplet ground state, we next computed the two defects’ ZFS. A large ZFS is necessary to isolate the ms = ± 1 and ms = 0 levels even at zero magnetic field allowing for controllable preparation of the spin qubit. Here we computed the contribution of spin–spin interaction to ZFS by implementing the plane-wave-based method developed by Rayson et al. (see the “Methods” section for details of implementation and benchmark on NV center in diamond)31. Meanwhile, the spin–orbit contribution to ZFS was computed with the ORCA code. We find that both defects have sizable ZFS including both spin–spin and spin–orbit contributions (axial D parameter) of 19.4 GHz for TiVV and 5.5 GHz for MoVV, highlighting the potential for the basis of a spin qubit with optically detected magnetic resonance (ODMR) (see Supplementary Note 3 and Fig. 2). They are notably larger than previously reported values for ZFS of other known spin defects in solids29, although at a reasonable range considering large ZFS values (up to 1000 GHz) in transition-metal complex molecules32.
### Screening SPE defects in h-BN
To identify SPEs in h-BN, we considered a separate screening process of these dopant-vacancy defects, targeting those with desirable optical properties. Namely, an SPE efficiently emits a single photon at a time at room temperature. Physically this corresponds to identifying defects that have a single bright intra-defect transition with a high quantum efficiency (i.e. much faster radiative rates than nonradiative ones), for example current SPEs in h-BN have radiative lifetimes ~1–10 ns and quantum efficiency over 50%33,34.
Using these criteria we screened the defects by computing their optical transitions and radiative lifetime at random phase approximation (RPA) (see Supplementary Note 4, Fig. 3, and Table 3). This offers a cost-efficient first-pass to identify defects with bright transition and short radiative lifetime as potential candidates for SPEs. From this procedure, we found that CVV(T), SiVV(S), SiVV(T), SVV(S), GeVV(S), and $${{\rm{Sn}}}_{{\rm{VV}}}$$(S) could be promising SPE defects ((T) denotes triplet; (S) denotes singlet), with a bright intra-defect transition and radiative lifetimes on the order of 10 ns, at the same order of magnitude of the SPEs’ lifetime observed experimentally34. Among these, SiVV(S) has the shortest radiative lifetime, and in addition, Si has recently been experimentally detected in h-BN with samples grown in chemical vapor deposition (the ground state of SiVV is also singlet)35. Hence we will focus on SiVV as an SPE candidate in the following sections as we compute optical and electronic properties at higher level of theory from many-body perturbation theory including accurate electron correlation and electron–hole interactions. Note that CVV (commonly denoted CBVN) has also been suggested to be an SPE source in h-BN36.
The single-particle energy levels of TiVV, MoVV, and SiVV are shown in Fig. 2. These levels are computed by many-body perturbation theory (G0W0) for accurate electron correlation, with hybrid functional (PBE0(α), α = 0.41 based on the Koopmans’ condition17) as the starting point to address self-interaction errors for 3d transition metal defects37,38. For example, we find that both the wavefunction distribution and ordering of defect states can differ between PBE and PBE0(α) (see Supplementary Figs. 46). The convergence test of G0W0 can been found in Supplementary Fig. 7, Note 5, and Table 4. Importantly, the single particle levels in Fig. 2 show there are well-localized occupied and unoccupied defect states in the h-BN bandgap, which yield the potential for intra-defect transitions.
Obtaining reliable optical properties of these two-dimensional materials necessitates solving the BSE to include excitonic effects due to their strong defect–exciton coupling, which is not included in RPA calculations (see comparison in Supplementary Fig. 8 and Table 5)39,40,41,42. The BSE optical spectra are shown for each defect in Fig. 3a–c (the related convergence tests can be found in Supplementary Figs. 9 and 10). In each case, we find an allowed intra-defect optical transition (corresponding to the lowest energy peak as labeled in Fig. 3a–c, and red arrows in Fig. 2). From the optical spectra we can compute their radiative lifetimes as detailed in the “Methods” section on “Radiative recombination”. We find the transition metal defects’ radiative lifetimes (tabulated in Table 1) are long, exceeding μs. Therefore, they are not good candidates for SPE. In addition, while they still are potential spin qubits with optically allowed intra-defect transitions, optical readout of these defects will be difficult. Referring to Table 1 and the expression of radiative lifetime in Eq. (9) we can see this is due to their low excitation energies (E0, in the infrared region) and small dipole moment strength ($${\mu }_{\mathrm {e-h}}^{2}$$). The latter is related to the tight localization of the excitonic wavefunction for TiVV and MoVV (shown in Fig. 3d–f), as strong localization of the defect-bound exciton leads to weaker oscillator strength43.
On the other hand, the optical properties of the SiVV defect are quite promising for SPEs, as Fig. 3c shows it has a very bright optical transition in the ultraviolet region. As a consequence, we find that the radiative lifetime (Table 1) for SiVV is 22.8 ns at G0W0 + BSE@PBE0(α). We note that although the lifetime of SiVV at the level of BSE is similar to that obtained at RPA (13.7 ns), the optical properties of 2D defects at RPA are still unreliable, due to the lack of excitonic effects. For example, the excitation energy (E0) can deviate by ~1 eV and oscillator strengths ($${\mu }_{{\mathrm {e-h}}}^{2}$$) can deviate by an order of magnitude (more details can be found in Supplementary Table 5). Above all, the radiative lifetime of SiVV is comparable to experimentally observed SPE defects in h-BN34, showing that SiVV is a strong SPE defect candidate in h-BN.
### Multiplet structure and excited-state dynamics
Finally, we discuss the excited-state dynamics of the spin qubit candidates TiVV and MoVV defects in h-BN, where the possibility of ISC is crucial. This can allow for polarization of the system to a particular spin state by optical pumping, required for realistic spin qubit operation.
An overview of the multiplet structure and excited-state dynamics is given in Fig. 4 for the TiVV and MoVV defects. For both defects, the system will begin from a spin-conserved optical excitation from the triplet ground state to the triplet excited state, where next the excited state relaxation and recombination can go through several pathways. The excited state can directly return to the ground state via a radiative (red lines) or nonradiative process (dashed dark blue lines). For the TiVV defect shown in Fig. 4a, we find the system may relax to another excited state with lower symmetry through a pseudo-Jahn–Teller distortion (PJT; solid dark blue lines), and ultimately recombine back to the ground state nonradiatively. Most importantly, a third pathway is to nonradiatively relax to an intermediate singlet state through a spin–flip ISC and then again recombine back to the ground state (dashed light-blue lines). This ISC pathway is critical for the preparation of a pure spin state, similar to the NV center in diamond. Below, we will discuss our results for the lifetime of each radiative or nonradiative process, in order to determine the most competitive pathway under the operation condition.
First, we will consider the direct ground state recombination processes. Figure 5 shows the configuration diagram of the TiVV and MoVV defects. The zero-phonon line (ZPL) for direct recombination can be accurately computed by subtracting its vertical excitation energy computed at BSE (0.56 eV for TiVV and 1.08 eV for MoVV) by its relaxation energy in the excited state (i.e. Franck–Condon shift44, ΔEFC in Fig. 5). This yields ZPLs of 0.53 and 0.91 eV for TiVV and MoVV, respectively. Although this method accurately includes both many-body effects and Franck–Condon shifts, it is difficult to evaluate ZPLs for the triplet to singlet-state transition currently. Therefore, we compared it with the ZPLs computed by the constrained occupation DFT (CDFT) method at PBE. This yields ZPLs of 0.49 and 0.92 eV for TiVV and MoVV, respectively, which are in great agreement with the ones obtained from BSE excitation energies subtracting ΔEFC above. Lastly, the radiative lifetimes for these transitions are presented in Table 1 as discussed in the earlier section, which shows TiVV and MoVV have radiative lifetimes of 195 and 33 μs, respectively (red lines in Fig. 4).
In terms of nonradiative properties, the small Huang–Rhys (Sf) for the $$|_{1}^{3}A^{\prime\prime} \rangle$$ to $$|_{0}^{3}A^{\prime\prime} \rangle$$ the transition of the TiVV defect (0.91) implies extremely small electron–phonon coupling and potentially an even slower nonradiative process. On the other hand, Sf for the $$|_{1}^{3}A\rangle$$ to $$|_{0}^{3}A\rangle$$ the transition of the MoVV defect is sizable (22.05) and may indicate a possible nonradiative decay. Following the formalism presented in ref. 26, we computed the nonradiative lifetime of the ground state direct recombination (T = 10 K is chosen to compare with the measurement at cryogenic temperatures45). Consistent with their Huang–Rhys factors, the nonradiative lifetime of TiVV is found to be 10 s, while the nonradiative lifetime of the MoVV defect is found to be 0.02 μs. The former lifetime is indicative of a forbidden transition; however, the TiVV defect also possesses a PJT effect in the triplet excited state (red curve in Fig. 5a). Due to the PJT effect, the excited state (CS, $$|_{1}^{3}A^{\prime\prime}\rangle$$) can relax to lower symmetry (C1, $$|_{1}^{3}A\rangle$$) with a nonradiative lifetime of 394 ps (solid dark blue line in Fig. 4a, additional details see Supplementary Note 9 and Fig. 11). Afterward, nonradiative decay from $$|_{1}^{3}A\rangle$$ to the ground state ($$|_{0}^{3}A^{\prime\prime} \rangle$$) (dashed dark blue line in Fig. 4a) exhibits a lifetime of 0.044 ps due to a large Huang–Rhys factor (14.95).
### Spin–orbit coupling (SOC) and nonradiative ISC rate
Lastly, we considered the possibility of an ISC between the triplet excited state and the singlet ground state for each defect, which is critical for spin qubit application. In order for a triplet to singlet transition to occur, a spin-flip process must take place. For ISC, typically SOC can entangle triplet and singlet states yielding the possibility for a spin-flip transition. To validate our methods for computing SOC (see the “Methods” section), we first computed the SOC strengths for the NV center in diamond. We obtained SOC values of 4.0 GHz for the axial λz and 45 GHz for non-axial λ in fair agreement with previously computed values and experimentally measured values27,46. We then computed the SOC strength for the TiVV defect (λz = 149 GHz, λ = 312 GHz) and the MoVV defect (λz = 16 GHz, λ = 257 GHz). The value of λ in particular leads to the potential for a spin-selective pathway for both defects, analogous to NV center in diamond.
To compute the ISC rate, we developed an approach which is a derivative of the nonradiative recombination formalism presented in Eq. (11):
$${{{\Gamma }}}_{{\mathrm {ISC}}}=4\pi \hslash {\lambda }_{\perp }^{2}{\widetilde{X}}_{{\mathrm {if}}}(T)$$
(1)
$${\widetilde{X}}_{{\mathrm {if}}}(T)=\sum _{n,m}{p}_{{\mathrm {in}}}{\left|\left\langle {\phi }_{fm}({\bf{R}})\right|{\phi }_{{\mathrm {in}}}({\bf{R}})\rangle\right|}^{2}\delta (m\hslash {\omega }_{{\mathrm {f}}}-n\hslash {\omega }_{{\mathrm {i}}}+{{\Delta }}{E}_{{\mathrm {if}}})$$
(2)
Compared with previous formalism27, this method allows different values for initial state vibrational frequency (ωi) and final state one (ωf) through explicit calculations of phonon wavefunction overlap. Again to validate our methods we first computed the ISC rate for NV center in diamond. Using the experimental value for λ we obtain an ISC rate for NV center in diamond of 2.3 MHz which is in excellent agreement with the experimental value of 8 and 16 MHz45. In final, we obtain an ISC time of 83 ps for TiVV and 2.7 μs for MoVV as shown in Table 2 and light blue lines in Fig. 4.
The results of all the nonradiative pathways for the two spin defects are summarized in Table 2 and are displayed in Fig. 4 along with the radiative pathway. We begin by summarizing the results for TiVV first and then discuss MoVV below. In short, for TiVV the spin conserved optical excitation from the triplet ground state $$|_{0}^{3}A^{\prime\prime} \rangle$$ to the triplet excited state $$|_{1}^{3}A^{\prime\prime} \rangle$$ cannot directly recombine nonradiatively due to a weak electron–phonon coupling between these states. In contrast, a nonradiative decay is possible via its PJT state ($$|_{1}^{3}A\rangle$$) with a lifetime of 394 ps. Finally, the process of ISC from the triplet excited state $$|_{1}^{3}A^{\prime\prime}\rangle$$ to the singlet state ($$|_{0}^{1}A^{\prime} \rangle$$) is an order of magnitude faster (i.e. 83 ps) and is in-turn a dominant relaxation pathway. Therefore the TiVV defect in h-BN is predicted to have an expedient spin purification process due to a fast ISC with a rate of 12 GHz. We note that while the defect has a low optical quantum yield and is predicted to not be a good SPE candidate, it is still noteworthy, as to date the only discovered triplet defect in h-BN is the negatively charged boron vacancy, which also does not exhibit SPE and has similarly low quantum efficiency9. Meanwhile, the leveraged control of an extrinsic dopant can offer advantages in spatial and chemical nature of defects.
For the MoVV defect, its direct nonradiative recombination lifetime from the triplet excited state $$|_{1}^{3}A\rangle$$ to the ground state $$|_{0}^{3}A\rangle$$ is 0.02 μs. While the comparison with its radiative lifetime (33 μs) is improved compared to the TiVV defect, it still is predicted to have low quantum efficiency. However, again the ISC between $$|_{1}^{3}A\rangle$$ and $$|_{0}^{1}A\rangle$$ is competitive with a lifetime of 2.7 μs. This rate (around MHz) is similar to diamond and implies a feasible ISC. Owing to its more ideal ZPL position (~1eV) and improved quantum efficiency, optical control of the MoVV defect is seen as more likely and may be further improved by other methods such as coupling to optical cavities47,48 and applying strain5,26.
## Discussion
In summary, we proposed a general theoretical framework for identifying and designing optically addressable spin defects for the future development of quantum emitter and quantum qubit systems. We started by searching for defects with triplet ground state by DFT total energy calculations which allow for rapid identification of possible candidates. Here we found that the TiVV and MoVV defects in h-BN have a neutral triplet ground state. We then computed ZFS of secondary spin quantum sublevels and found they are sizable for both defects, larger than that of NV center in diamond, enabling possible control of these levels for qubit operation. In addition, we screened for potential SPEs in h-BN based on allowed intra-defect transitions and radiative lifetimes, leading to the discovery of SiVV. Next, the electronic structure and optical spectra of each defect were computed from many-body perturbation theory. Specifically, the SiVV defect is shown to possess an exciton radiative lifetime similar to experimentally observed SPEs in h-BN and is a potential SPE candidate. Finally, we analyzed all possible radiative and nonradiative dynamical processes with first-principles rate calculations. In particular, we identified a dominant spin-selective decay pathway via ISC at the TiVV defect which gives a key advantage for initial pure spin state preparation and qubit operation. Meanwhile, for the MoVV defect, we found that it has the benefit of improved quantum efficiency for more realistic optical control.
This work emphasizes that the theoretical discovery of spin defects requires careful treatment of many-body interactions and various radiative and nonradiative dynamical processes such as ISC. We demonstrate the high potential of extrinsic spin defects in 2D host materials as qubits for quantum information science. Future work will involve further examination of spin coherence time and its dominant decoherence mechanism, as well as other spectroscopic fingerprints from first-principles calculations to facilitate experimental validation of these defects.
## Methods
### First-principles calculations
In this study, we used the open source plane-wave code Quantum ESPRESSO49 to perform calculations on all structural relaxations and total energies with optimized norm-conserving Vanderbilt (ONCV) pseudopotentials50 and a wavefunction cutoff of 50 Ry. A supercell size of 6 × 6 or higher was used in our calculations with a 3 × 3 × 1 k-point mesh. Charged cell total energies were corrected to remove spurious charge interactions by employing the techniques developed in refs. 15,51,52 and implemented in the JDFTx code53. The total energies, charged defect formation energies and geometry were evaluated at the PBE level54. Single-point calculations with k-point meshes of 2 × 2 × 1 and 3 × 3 × 1 were performed using hybrid exchange-correlation functional PBE0(α), where the mixing parameter α = 0.41 was determined by the generalized Koopmans’ condition as discussed in refs. 17,20. Moreover, we used the YAMBO code55 to perform many-body perturbation theory with the GW approximation to compute the quasi-particle correction using PBE0(α) eigenvalues and wavefunctions as the starting point. The RPA and BSE calculations were further solved on top of the GW approximation for the electron–hole interaction to investigate the optical properties of the defects, including absorption spectra and radiative lifetime.
### Thermodynamic charge transition levels and defect formation energy
The defect formation energy (FEq) was computed for the TiVV and MoVV defects following:
$${\mathrm {F{E}}}_{q}({\varepsilon }_{\mathrm {{F}}})={E}_{\mathrm {{q}}}-{E}_{\mathrm {{pst}}}+\sum _{i}{\mu }_{i}{{\Delta }}{N}_{i}+q{\varepsilon }_{\mathrm {{F}}}+{{{\Delta }}}_{q}$$
(3)
where Eq is the total energy of the defect system with charge q, Epst is the total energy of the pristine system, μi and ΔNi are the chemical potential and change in the number of atomic species i, and εF is the Fermi energy. A charged defect correction Δq was computed for charged cell calculations by employing the techniques developed in refs. 15,51. The chemical potential references are computed as $${\mu }_{{\mathrm {Ti}}}={E}_{{\mathrm {Ti}}}^{{\mathrm {bulk}}}$$ (total energy of bulk Ti), $${\mu }_{{\mathrm {Mo}}}={E}_{{\mathrm {Mo}}}^{{\mathrm {bulk}}}$$ (total energy of bulk Mo), $${\mu }_{{\mathrm {BN}}}={E}_{{\mathrm {BN}}}^{{\mathrm {ML}}}$$ (total energy of monolayer h-BN). Meanwhile the corresponding charge transition levels of defects can be obtained from the value of εF where the stable charge state transitions from q to $$q^{\prime}$$.
$${\epsilon }_{q| q^{\prime} }=\frac{\mathrm {{F{E}}}_{q}-{\mathrm {F{E}}}_{q^{\prime} }}{q^{\prime} -q}$$
(4)
### Zero-field splitting
The first-order ZFS due to spin–spin interactions was computed for the dipole–dipole interactions of the electron spin:
$${H}_{{\mathrm {ss}}}=\frac{{\mu }_{0}}{4\pi }\frac{{({g}_{{\mathrm {e}}}\hbar )}^{2}}{{r}^{5}}\left[3({{\bf{s}}}_{1}\cdot {\bf{r}})({{\bf{s}}}_{2}\cdot {\bf{r}})-({{\bf{s}}}_{1}\cdot {{\bf{s}}}_{2}){r}^{2}\right].$$
(5)
Here, μ0 is the magnetic permeability of vacuum, ge is the electron gyromagnetic ratio, $${\hbar}$$ is the Planck’s constant, s1, s2 is the spin of first and second electron, respectively, and r is the displacement vector between these two electron. The spatial and spin dependence can be separated by introducing the effective total spin S = ∑isi. This yields a Hamiltonian of the form $${H}_{{\mathrm {ss}}}={{\bf{S}}}^{{\mathrm {T}}}\hat{{\bf{D}}}{\bf{S}}$$, which introduces the traceless ZFS tensor $$\hat{{\bf{D}}}$$. It is common to consider the axial and rhombic ZFS parameters D and E which can be acquired from the $$\hat{{\bf{D}}}$$ tensor:
$$D=\frac{3}{2}{D}_{zz} \quad {\text{and}} \quad E=({D}_{yy}-{D}_{xx})/2\,.$$
(6)
Following the formalism of Rayson et al. 31, the ZFS tensor $$\hat{{\bf{D}}}$$ can be computed with periodic boundary conditions as
$${D}_{ab}=\frac{1}{2}\frac{{\mu }_{0}}{4\pi }{({g}_{{\mathrm {e}}}\hslash )}^{2}\sum\limits_{i > j}{\chi }_{ij}\left\langle {{{\Psi }}}_{ij}({{\bf{r}}}_{1},{{\bf{r}}}_{2})\left| \frac{{{\bf{r}}}^{2}{\delta }_{ab}-3{{\bf{r}}}_{a}{{\bf{r}}}_{b}}{{r}^{5}}\right| {{{\Psi }}}_{ij}({{\bf{r}}}_{1},{{\bf{r}}}_{2})\right\rangle .$$
(7)
Here the summation on pairs of i, j runs over all occupied spin-up and spin-down states, with χij taking the value +1 for parallel spin and −1 for anti-parallel spin, and Ψij(r1, r2) is a two-particle Slater determinant constructed from the Kohn–Sham wavefunctions of the ith and jth states. This procedure was implemented as a post-processing code interfaced with Quantum ESPRESSO. To verify our implementation is accurate, we computed the ZFS of the NV center in diamond which has a well-established result. Using ONCV pseudopotentials, we obtained a ZFS of 3.0 GHz for NV center, in perfect agreement with previous reported results29. For heavy elements such as transition metals, spin–orbit (SO) coupling can have substantial contribution to ZFS. Here, we also computed the SO contribution of the ZFS as implemented in the ORCA code56,57 (additional details can be found in Supplementary Note 10, Fig. 12, and Table 6).
In order to quantitatively study radiative processes, we computed the radiative rate ΓR from Fermi’s Golden Rule and considered the excitonic effects by solving BSE58:
$${{{\Gamma }}}_{{\mathrm {R}}}({{\bf{Q}}}_{{\mathrm {ex}}})=\frac{2\pi }{\hslash }\sum _{{q}_{L},\lambda }{\left|\left\langle G,{1}_{{q}_{L},\lambda }| {H}^{{\mathrm {R}}}| S({{\bf{Q}}}_{{\mathrm {ex}}}),0\right\rangle \right|}^{2}\delta (E({{\bf{Q}}}_{{\mathrm {ex}}})-\hslash c{q}_{L}).$$
(8)
Here, the radiative recombination rate is computed between the ground state G and the two-particle excited state S(Qex), $${1}_{{q}_{L},\lambda }$$ and 0 denote the presence and absence of a photon, HR is the electron–photon coupling (electromagnetic) Hamiltonian, E(Qex) is the exciton energy, and c is the speed of light. The summation indices in Eq. (8) run over all possible wavevector (qL) and polarization (λ) of the photon. Following the approach described in ref. 58, the radiative rate (inverse of radiative lifetime τR) in SI unit at zero temperature can be computed for isolated defect–defect transitions as
$${{{\Gamma }}}_{{\mathrm {R}}}=\frac{{n}_{D}{e}^{2}}{3\pi {\epsilon }_{0}{\hslash }^{4}{c}^{3}}{E}_{0}^{3}{\mu }_{{\mathrm {e-h}}}^{2},$$
(9)
where e is the charge of an electron, ϵ0 is vacuum permittivity, E0 is the exciton energy at Qex = 0, nD is the reflective index of the host material and $${\mu }_{{\mathrm {e-h}}}^{2}$$ is the modulus square of exciton dipole moment with length2 unit. Note that Eq. (9) considers defect–defect transitions in the dilute limit; therefore the lifetime formula for zero-dimensional systems embedded in a host material is used8,59 (also considering nD is unity in isolated 2D systems at the long-wavelength limit). We did not consider the radiative lifetime of TiVV defect at a finite temperature because the first and second excitation energy separation is much larger than kT. Therefore a thermal average of the first and higher excited states is not necessary and the first excited state radiative lifetime is nearly the same at 10 K as zero temperature.
In this work, we compute the phonon-assisted nonradiative recombination rate via a Fermi’s golden rule approach:
$${{{\Gamma }}}_{{\mathrm {NR}}}=\frac{2\pi }{\hslash }g\sum _{n,m}{p}_{{\mathrm {in}}}| \left\langle fm| {H}^{{\mathrm {e-ph}}}| {\mathrm {in}}\right\rangle {| }^{2}\delta ({E}_{{\mathrm {in}}}-{E}_{{\mathrm {fm}}})$$
(10)
Here, ΓNR is the nonradiative recombination rate between electron state i in phonon state n and electron state f in phonon state m, pin is the thermal probability distribution of the initial state $$\left|{\mathrm {in}}\right\rangle$$, He−ph is the electron–phonon coupling Hamiltonian, g is the degeneracy factor and Ein is the energy of vibronic state $$\left|{\mathrm {in}}\right\rangle$$. Within the static coupling and one-dimensional (1D) effective phonon approximations, the nonradiative recombination can be reduced to:
$${{{\Gamma }}}_{{\mathrm {NR}}}=\frac{2\pi }{\hslash }g| {W}_{{\mathrm {if}}}{| }^{2}{X}_{{\mathrm {if}}}(T),$$
(11)
$${X}_{{\mathrm {if}}}(T)=\sum _{n,m}{p}_{{\mathrm {in}}}{\left|\left\langle {\phi }_{{\mathrm {fm}}}({\bf{R}})| Q-{Q}_{a}| {\phi }_{{\mathrm {in}}}({\bf{R}})\right\rangle \right|}^{2}\delta (m\hslash {\omega }_{\mathrm {{f}}}-n\hslash {\omega }_{{\mathrm {i}}}+{{\Delta }}{E}_{{\mathrm {if}}}),$$
(12)
$$\left.{W}_{{\mathrm {if}}}=\left\langle {\psi }_{{\mathrm {i}}}({\bf{r}},{\bf{R}})\left| \frac{\partial H}{\partial Q}\right| {\psi }_{{\mathrm {f}}}({\bf{r}},{\bf{R}})\right\rangle \right|_{{\bf{R}} = {{\bf{R}}}_{a}}.$$
(13)
Here, the static coupling approximation naturally separates the nonradiative recombination rate into phonon and electronic terms, Xif and Wif, respectively. The 1D phonon approximation introduces a generalized coordinate Q, with effective frequency ωi and ωf. The phonon overlap in Eq. (12) can be computed using the quantum harmonic oscillator wavefunctions with QQa from the configuration diagram (Fig. 5). Meanwhile the electronic overlap in Eq. (13) is computed by finite difference using the Kohn–Sham orbitals from DFT at the Γ point. The nonradiative lifetime τNR is given by taking the inverse of the rate ΓNR. Supercell convergence of phonon-assisted nonradiative lifetime is shown in Supplementary Note 11 and Table 7. We validated the 1D effective phonon approximation by comparing the Huang–Rhys factor with the full phonon calculations in Supplementary Table 8.
### SOC constant
SOC can entangle triplet and singlet states yielding the possibility for a spin–flip transition. The SOC operator is given to zero-order by60
$${H}_{{\mathrm {so}}}=\frac{1}{2}\frac{1}{{c}^{2}{m}_{{\mathrm {e}}}^{2}}\sum _{i}\left({\nabla }_{i}V\times {{\bf{p}}}_{i}\right){{\bf{S}}}_{i}$$
(14)
where c is the speed of light, me is the mass of an electron, p and S are the momentum and spin of electron i and V is the nuclear potential energy. The spin–orbit interaction can be rewritten in terms of the angular momentum L and the SOC strength λ as60
$${H}_{{\mathrm {so}}}=\sum _{i}{\lambda }_{\perp }({L}_{x,i}{S}_{x,i}+{L}_{y,i}{S}_{y,i})+{\lambda }_{z}{L}_{z,i}{S}_{z,i}.$$
(15)
where λ and λz denote the non-axial and axial SOC strength, respectively. The SOC strength was computed for the TiVV and MoVV defect in h-BN using the ORCA code by TD-DFT56,61. More computational details can be found in Supplementary Note 10.
## Data availability
The data that support the findings of this study and the code for the first-principles methods proposed in this study are available from the corresponding author (Yuan Ping) upon reasonable request.
## References
1. 1.
Koehl, W. F., Buckley, B. B., Heremans, F. J., Calusine, G. & Awschalom, D. D. Room temperature coherent control of defect spin qubits in silicon carbide. Nature 479, 84–87 (2011).
2. 2.
Weber, J. et al. Quantum computing with defects. Proc. Natl Acad. Sci. USA 107, 8513–8518 (2010).
3. 3.
Liu, X. & Hersam, M. C. 2D materials for quantum information science. Nat. Rev. Mater. 4, 669–684 (2019).
4. 4.
Aharonovich, I. & Toth, M. Quantum emitters in two dimensions. Science 358, 170–171 (2017).
5. 5.
Mendelson, N., Doherty, M., Toth, M., Aharonovich, I. & Tran, T. T. Strain-induced modification of the optical characteristics of quantum emitters in hexagonal boron nitride. Adv. Mater. 32, 1908316 (2020).
6. 6.
Feldman, M. A. et al. Phonon-induced multicolor correlations in hBN single-photon emitters. Phys. Rev. B 99, 020101 (2019).
7. 7.
Yim, D., Yu, M., Noh, G., Lee, J. & Seo, H. Polarization and localization of single-photon emitters in hexagonal boron nitride wrinkles. ACS Appl. Mater. Int. 12, 36362–36369 (2020).
8. 8.
Mackoit-Sinkevičienė, M., Maciaszek, M., Van de Walle, C. G. & Alkauskas, A. Carbon dimer defect as a source of the 4.1 eV luminescence in hexagonal boron nitride. Appl. Phys. Lett. 115, 212101 (2019).
9. 9.
Kianinia, M., White, S., Fröch, J. E., Bradac, C. & Aharonovich, I. Generation of spin defects in hexagonal boron nitride. ACS Photonics 7, 2147–2152 (2020).
10. 10.
Turiansky, M., Alkauskas, A. & Walle, C. Spinning up quantum defects in 2D materials. Nat. Mater. 19, 487–489 (2020).
11. 11.
Li, X. et al. Nonmagnetic quantum emitters in boron nitride with ultranarrow and sideband-free emission spectra. ACS Nano 11, 6652–6660 (2017).
12. 12.
Ivády, V., Abrikosov, I. A. & Gali, A. First principles calculation of spin-related quantities for point defect qubit research. npj Comput. Mater. 4, 1–13 (2018).
13. 13.
Komsa, H.-P., Berseneva, N., Krasheninnikov, A. V. & Nieminen, R. M. Charged point defects in the flatland: accurate formation energy calculations in two-dimensional materials. Phys. Rev. X 4, 031044 (2014).
14. 14.
Wang, D. et al. Determination of formation and ionization energies of charged defects in two-dimensional materials. Phys. Rev. Lett. 114, 196801 (2015).
15. 15.
Wu, F., Galatas, A., Sundararaman, R., Rocca, D. & Ping, Y. First-principles engineering of charged defects for two-dimensional quantum technologies. Phys. Rev. Mater. 1, 071001 (2017).
16. 16.
Govoni, M. & Galli, G. Large scale GW calculations. J. Chem. Theory Comput. 11, 2680–2696 (2015).
17. 17.
Smart, T. J., Wu, F., Govoni, M. & Ping, Y. Fundamental principles for calculating charged defect ionization energies in ultrathin two-dimensional materials. Phys. Rev. Mater. 2, 124002 (2018).
18. 18.
Nguyen, N. L., Colonna, N., Ferretti, A. & Marzari, N. Koopmans-compliant spectral functionals for extended systems. Phys. Rev. X 8, 021051 (2018).
19. 19.
Weng, M., Li, S., Zheng, J., Pan, F. & Wang, L.-W. Wannier Koopmans method calculations of 2D material band gaps. J. Chem. Phys. Lett. 9, 281–285 (2018).
20. 20.
Miceli, G., Chen, W., Reshetnyak, I. & Pasquarello, A. Nonempirical hybrid functionals for band gaps and polaronic distortions in solids. Phys. Rev. B 97, 121112 (2018).
21. 21.
Refaely-Abramson, S., Qiu, D. Y., Louie, S. G. & Neaton, J. B. Defect-induced modification of low-lying excitons and valley selectivity in monolayer transition metal dichalcogenides. Phys. Rev. Lett. 121, 167402 (2018).
22. 22.
Gao, S., Chen, H.-Y., Bernardi, M. Radiative properties and excitons of candidate defect emitters in hexagonal boron nitride. Preprint at arXiv:2007.10547 (2020).
23. 23.
Xu, J., Habib, A., Kumar, S., Wu, F., Sundararaman, R. & Ping, Y. Spin-phonon relaxation from a universal ab initio density-matrix approach. Nat. Commun. 11, 1–10 (2020).
24. 24.
Seo, H., Falk, A. L., Klimov, P. V., Miao, K. C., Galli, G. & Awschalom, D. D. Quantum decoherence dynamics of divacancy spins in silicon carbide. Nat. Commun. 7, 1–9 (2016).
25. 25.
Ye, M., Seo, H. & Galli, G. Spin coherence in two-dimensional materials. npj Comput. Mater. 5, 1–6 (2019).
26. 26.
Wu, F., Smart, T. J., Xu, J. & Ping, Y. Carrier recombination mechanism at defects in wide band gap two-dimensional materials from first principles. Phys. Rev. B 100, 081407 (2019).
27. 27.
Thiering, G. & Gali, A. Ab initio calculation of spin–orbit coupling for an NV center in diamond exhibiting dynamic Jahn–Teller effect. Phys. Rev. B 96, 081115 (2017).
28. 28.
Gottscholl, A. et al. Initialization and read-out of intrinsic spin defects in a van der Waals crystal at room temperature. Nat. Mater. 19, 540–545 (2020).
29. 29.
Seo, H., Ma, H., Govoni, M. & Galli, G. Designing defect-based qubit candidates in wide-gap binary semiconductors for solid-state quantum technologies. Phys. Rev. Mater. 1, 075002 (2017).
30. 30.
Turiansky, M. E., Alkauskas, A., Bassett, L. C. & Walle, C. G. Dangling bonds in hexagonal boron nitride as single-photon emitters. Phys. Rev. Lett. 123, 127401 (2019).
31. 31.
Rayson, M. & Briddon, P. First principles method for the calculation of zero-field splitting tensors in periodic systems. Phys. Rev. B 77, 035119 (2008).
32. 32.
Zolnhofer, E. M. et al. Electronic structure and magnetic properties of a titanium (II) coordination complex. Inorg. Chem. 59, 6187–6201 (2020).
33. 33.
Tran, T. T. et al. Robust multicolor single photon emission from point defects in hexagonal boron nitride. ACS Nano 10, 7331–7338 (2016).
34. 34.
Schell, A. W., Takashima, H., Tran, T. T., Aharonovich, I. & Takeuchi, S. Coupling quantum emitters in 2D materials with tapered fibers. ACS Photonics 4, 761–767 (2017).
35. 35.
Ahmadpour Monazam, M. R., Ludacka, U., Komsa, H.-P. & Kotakoski, J. Substitutional Si impurities in monolayer hexagonal boron nitride. Appl. Phys. Lett. 115, 071604 (2019).
36. 36.
Sajid, A. & Thygesen, K. S. VNCB defect as source of single photon emission from hexagonal boron nitride. 2D Mater. 7, 031007 (2020).
37. 37.
Fuchs, F., Bechstedt, F., Shishkin, M. & Kresse, G. Quasiparticle band structure based on a generalized Kohn–Sham scheme. Phys. Rev. B 76, 115109 (2007).
38. 38.
Bechstedt, F. Many-Body Approach to Electronic Excitations (Springer-Verlag, 2016).
39. 39.
Ping, Y., Rocca, D. & Galli, G. Electronic excitations in light absorbers for photoelectrochemical energy conversion: first principles calculations based on many body perturbation theory. Chem. Soc. Rev. 42, 2437–2469 (2013).
40. 40.
Rocca, D., Ping, Y., Gebauer, R. & Galli, G. Solution of the Bethe–Salpeter equation without empty electronic states: application to the absorption spectra of bulk systems. Phys. Rev. B 85, 045116 (2012).
41. 41.
Ping, Y., Rocca, D., Lu, D. & Galli, G. Ab initio calculations of absorption spectra of semiconducting nanowires within many-body perturbation theory. Phys. Rev. B 85, 035316 (2012).
42. 42.
Ping, Y., Rocca, D. & Galli, G. Optical properties of tungsten trioxide from first-principles calculations. Phys. Rev. B 87, 165203 (2013).
43. 43.
Hours, J., Senellart, P., Peter, E., Cavanna, A. & Bloch, J. Exciton radiative lifetime controlled by the lateral confinement energy in a single quantum dot. Phys. Rev. B 71, 161306 (2005).
44. 44.
Van de Walle, C. G. & Neugebauer, J. First-principles calculations for defects and impurities: applications to III-nitrides. J. Appl. Phys. 95, 3851–3879 (2004).
45. 45.
Goldman, M. L. et al. Phonon-induced population dynamics and intersystem crossing in nitrogen-vacancy centers. Phys. Rev. Lett. 114, 145502 (2015).
46. 46.
Bassett, L. C. et al. Ultrafast optical control of orbital and spin dynamics in a solid-state defect. Science 345, 1333–1337 (2014).
47. 47.
Kim, S. et al. Photonic crystal cavities from hexagonal boron nitride. Nat. Commun. 9, 1–8 (2018).
48. 48.
Zhong, T. et al. Optically addressing single rare-earth ions in a nanophotonic cavity. Phys. Rev. Lett. 121, 183603 (2018).
49. 49.
Giannozzi, P. et al. QUANTUM ESPRESSO: a modular and open-source software project for quantum simulations of materials. J. Phys.: Condens. Matter 21, 395502 (2009).
50. 50.
Hamann, D. R. Optimized norm-conserving Vanderbilt pseudopotentials. Phys. Rev. B 88, 085117 (2013).
51. 51.
Sundararaman, R. & Ping, Y. First-principles electrostatic potentials for reliable alignment at interfaces and defects. J. Chem. Phys. 146, 104109 (2017).
52. 52.
Wang, D. & Sundararaman, R. Layer dependence of defect charge transition levels in two-dimensional materials. Phys. Rev. B 101, 054103 (2020).
53. 53.
Sundararaman, R., Letchworth-Weaver, K., Schwarz, K. A., Gunceler, D., Ozhabes, Y. & Arias, T. JDFTx: software for joint density-functional theory. SoftwareX 6, 278–284 (2017).
54. 54.
Perdew, J. P., Burke, K. & Ernzerhof, M. Generalized gradient approximation made simple. Phys. Rev. Lett. 77, 3865 (1996).
55. 55.
Marini, A., Hogan, C., Grüning, M. & Varsano, D. Yambo: an ab initio tool for excited state calculations. Comput. Phys. Commun. 180, 1392–1403 (2009).
56. 56.
Neese, F. The ORCA program system. WIREs Comput. Mol. Sci. 2, 73–78 (2012).
57. 57.
Neese, F. Calculation of the zero-field splitting tensor on the basis of hybrid density functional and Hartree–Fock theory. J. Chem. Phys. 127, 164112 (2007).
58. 58.
Wu, F., Rocca, D. & Ping, Y. Dimensionality and anisotropicity dependence of radiative recombination in nanostructured phosphorene. J. Mater. Chem. C 7, 12891–12897 (2019).
59. 59.
Gupta, S., Yang, J.-H. & Yakobson, B. I. Two-level quantum systems in two-dimensional materials for single photon emission. Nano Lett. 19, 408–414 (2018).
60. 60.
Maze, J. R. et al. Properties of nitrogen-vacancy centers in diamond: the group theoretic approach. N. J. Phys. 13, 025025 (2011).
61. 61.
de Souza, B., Farias, G., Neese, F. & Izsák, R. Predicting phosphorescence rates of light organic molecules using time-dependent density functional theory and the path integral approach to dynamics. J. Chem. Theory Comput. 15, 1896–1904 (2019).
62. 62.
Towns, J. et al. XSEDE: accelerating scientific discovery. Comput. Sci. Eng. 16, 62–74 (2014).
## Acknowledgements
We acknowledge Susumu Takahashi for helpful discussions. This work is supported by the National Science Foundation under grant nos. DMR-1760260, DMR-1956015, and DMR-1747426. Part of this work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. T.J.S. acknowledges the LLNL Graduate Research Scholar Program and funding support from LLNL LDRD 20-SI-004. This research used resources of the Scientific Data and Computing center, a component of the Computational Science Initiative, at Brookhaven National Laboratory under Contract No. DE-SC0012704, the lux supercomputer at UC Santa Cruz, funded by NSF MRI grant AST 1828315, the National Energy Research Scientific Computing Center (NERSC) a U.S. Department of Energy Office of Science User Facility operated under Contract No. DE-AC02-05CH11231, the Extreme Science and Engineering Discovery Environment (XSEDE) which is supported by National Science Foundation Grant No. ACI-154856262.
## Author information
Authors
### Contributions
Y.P. established the theoretical models and supervised the project, T.J.S. and K.L. performed the calculations and data analysis, Y.P. and J.X. discussed the results, and all authors participated in the writing of this paper. T.J.S. and K.L. contributed equally to this work.
### Corresponding author
Correspondence to Yuan Ping.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions
Reprints and Permissions
Smart, T.J., Li, K., Xu, J. et al. Intersystem crossing and exciton–defect coupling of spin defects in hexagonal boron nitride. npj Comput Mater 7, 59 (2021). https://doi.org/10.1038/s41524-021-00525-5
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8172446489334106, "perplexity": 3801.086593963475}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487620971.25/warc/CC-MAIN-20210615084235-20210615114235-00426.warc.gz"}
|
https://www.doityourself.com/forum/rugs-carpets-carpeting/185596-how-can-we-do-baseboards-before-carpet-if-tack-strip-already-down.html
|
> >
>
# How can we do baseboards before carpet if tack strip is already down?
#1
10-24-04, 02:08 PM
txatmag
Visiting Guest
Posts: n/a
How can we do baseboards before carpet if tack strip is already down?
We tore up the carpet and the pad in our main room to do some remodeling. We originally had lawyer block panelling in our main room. We removed it and replaced it with drywall. The tack strip stayed down from where the original carpet laid.
We want to put the baseboards on before the carpet comes, which is in three days. However, the question is primarily about the tack strip. We would rather not remove it...but the distance from the new drywall is different than from the old panelling. How far is the tack strip supposed to be from the wall? Ours is 1/2 inch away from the wall and we haven't yet added the baseboard.
Now, do we add the baseboards first or after carpet? If we add them before the carpet? How far do we go up? (Going up a 1/2 inch like other messages have suggested doesn't allow any space between the baseboard and the tack strip.) So how do they tuck it in?
#2
10-24-04, 04:01 PM
Member
Join Date: Nov 2002
Location: Canton Ohio
Posts: 1,397
Likes: 0
Received 0 Likes on 0 Posts
Go ahead and take the old tackless out and install the baseboard 1/4" to 3/8" above the floor. 1/2 inch is a little high for my taste, and unless the chosen carpet is really thick there would be a void between the two.
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8422418236732483, "perplexity": 3636.4913822449757}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710829.5/warc/CC-MAIN-20221201153700-20221201183700-00011.warc.gz"}
|
https://iwaponline.com/aqua/article-abstract/62/3/155/29139/Bench-scale-evaluation-of-Fe-II-ions-on-haloacetic?redirectedFrom=fulltext
|
Cast iron pipes were installed broadly in North American water utilities, particularly in older cities such as Halifax, NS, and other cities in the northeastern portions of Canada and the USA. Many of these cast iron pipes are corroded and are continuous sources of Fe(II) ions in drinking water distribution systems. In this paper, the results of an experimental investigation into the factors influencing haloacetic acids (HAAs) formation in the presence of Fe(II) ions are presented. The experiments were conducted using NaHCO3 buffered synthetic water samples with different characteristics (i.e. pH, phosphate, stagnation time) simulating with water distribution systems. The results showed that Fe(II) ions significantly reduced HAAs formation in different reaction systems at a 95% confidence level. In control water systems, pH had no significant impact, however, in the presence of Fe(II) ions in water, pH had an obvious impact to increase HAAs formation (α = 0.05). In contrast, phosphate-based corrosion inhibitor significantly (α = 0.05) reduced HAAs formation in the presence of different dosages of Fe(II) ions in water samples for the reaction period of 24, 48, 84 and 130 h, respectively. Significant factors and their rank influencing HAAs formation and distribution were identified using a 24 full factorial design approach.
This content is only available as a PDF.
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8825547695159912, "perplexity": 3264.5017530397918}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203493.88/warc/CC-MAIN-20190324210143-20190324232143-00505.warc.gz"}
|
http://www.gradesaver.com/textbooks/math/other-math/basic-college-mathematics-9th-edition/chapter-6-percent-6-4-using-proportions-to-solve-percent-problems-6-4-exercises-page-420/30
|
## Basic College Mathematics (9th Edition)
$\frac{176}{x}=\frac{5.5}{100}$ Cross products: $5.5x=17600$ Solve for x: $x=3200$
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.487387090921402, "perplexity": 4805.71162721954}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818685129.23/warc/CC-MAIN-20170919112242-20170919132242-00561.warc.gz"}
|
https://itk.org/Wiki/index.php?title=TubeTK/Intra-operative_Ultrasound_Registration&oldid=53562
|
# TubeTK/Intra-operative Ultrasound Registration
(diff) ← Older revision | Latest revision (diff) | Newer revision → (diff)
# Intra-operative Ultrasound Registration
## The Problem
Tubes and image before registration.
Tubes and image after registration.
During preparation for a surgical operation such as liver tumor radiofrequecy abation or surgical resection of a brain tumor, pre-operative medical images are acquired to help inform both the radiologist and also the surgeon involved in the treatment. The radiologist can use this diagnostic image to identify and evaluate the extent of a lesion. A surgeon will use both lesion segmentation information provided by the radiologist along with patient-specific anatomical information found in the pre-operative image to plan her operation.
While the surgeon tries to hold a mental picture of the internal structures she will be encountering as she penetrates the body, but it would be very helpful if the surgeon could see where the structures examined in the pre-operative image lie with respect to knives, needles, caughterizing agents or other tools intended to deliver damage. In this way, damage is delivered precisely and completely to the problematic tissues with minimal collatoral damage to healthy tissue.
In a pre-operative situation, it is possible to collect well-sampled 3D volumes with modalities like X-ray computed tomography (CT) or magnetic resonance images (MRI) that can be critically examined. During an operation, this is not the case. It is essential that the patient spends as little time as possible under anesthesia and with the skin opened. Also, the imaging device cannot occlude surgical instrumentation and the actions of the surgical team. Ultrasound imaging is well-suited for this situation because of its real-time performance and small spatial footprint. Yet, the ultrasound images are usually only 2D, have a limited field of view, noisy, and do not have the annotated segmentations created in the pre-operative image. The grand challenge addressed here is the identification of the spatial transformation that maps the pre-operative image to the intra-operative real-time ultrasound image, i.e. the spatial registration problem. After this mapping is known, the pre-operative image can be overlayed on the intra-operative ultrasound to give the surgeon a more complete picture of where their instruments are located.
The confounding demands in this problem include:
### Real-time performance
Even though a computationally complex analysis is required, real-time performance is essential so feedback is presented to the surgeon as she is manipulating her tools and probing their location relative to the insight provided by the images.
### Sparse and incomplete datasets
As previously mentioned, while the pre-operative image may be a well-sampled volume, the intra-operative ultrasound is typically 2D or a limited 3D view of the area of interest. The ultrasound imaging plane cannot be directly in line with the plane of interest because the surgical tool is in this plane. Incomplete or imperfect coupling of the transducer with the skin results in signal dropout as does acoustic shadowing from intervening structures or surgical instruments.
### Multi-modality registration
We are often posed with the registration of a CT or MRI image against an ultrasound (US) image. Tissue's presentation in these modalities drastically vary, which makes it difficult to compare them.
### Noise introduced by surgical actions
Applied registration is normally complicated by changed occur between the fixed and moving image -- noise varies between the images, metabolic and other physiological processes and disease progression can change tissues between time points, etc. Surgery results in additional various between the fixed and moving image; topological changes occur with incisions and hemorrhage, and the tissue properties image change with ablation.
## TubeTK Approach: Model-to-Image Registration
### Model-to-Image Registration Philosophy
• Incorporate image understanding
• Improve results by reaching beyond a "naive" approach to registration
1. Use knowledge of abstract structures (models) expected in the image
2. Use knowledge of the the physics of image acquisition to decrease modality-specific artifacts
• Viewed as a optimization problem with discretely sampled data
• The initial estimate of the transformation is important
• Impact on optimization strategy?
• The solution depends on optimizer and metric used
• Discreteness impacts implementation method
• Transform from samples in the model space (feature points) to the image space
• Methods similar to the image resampling process -- transform from moving image to fixed image
• The model is sparse, image a dense field
• Speed
• Robustness
### New Research Areas
Tube points weighted by radius. The weights are ${\displaystyle w_{i}={\frac {2}{1+e^{-2r_{i}}}}}$ [Aylward2001].
• Identification and extraction of the image structures
• Simulation of the imaging system artifacts
• How should samples from the models be weighted?
• Multiresolution approach
• Local extrema vs precise localization
• Spatial uniformity
• Orientational uniformity
• Uniqueness
• Conspicuity
• Contrast
• Signal-to-noise
## Related Work
### Noise reduction
• Noise reduction presentations
• A View on Despeckling in Ultrasound Imaging
• http://www.doaj.org/doaj?func=abstract&id=590052
• S.Kalaivani Narayanan ; R.S.D.Wahidabanu
• Ultrasound imaging is a widely used and safe medical diagnostic technique, due to its noninvasive nature, low cost and capability of forming real time imaging. However the usefulness of ultrasound imaging is degraded by the presence of signal dependant noise knownas speckle. The speckle pattern depends on the structure of the image tissue and various imaging parameters. There are two main purposes for speckle reduction in medical ultrasound imaging (1) to improve the human interpretation of ultrasound images (2) despeckling is the preprocessing step for many ultrasound image processing tasks such as segmentation and registration. A number of methods have been proposed for speckle reduction in ultrasoundimaging. While incorporating speckle reduction techniques as an aid for visual diagnosis, it has to keep in mind that certain speckle contains diagnostic information and should be retained. The objective of this paper is to give an overview about types of speckle reduction techniques in ultrasound imaging.
• Noise suppression and motion estimation in medical ultrasound imaging
• https://dspacedev.rice.edu/handle/1911/20673
• Echocardiographic imaging is a primary modality in the diagnosis of heart disease. Compared to other imaging techniques, such as X-Ray, MRI, and PET, ultrasound imaging owes its great popularity to the fact that it is a safe and non-invasive procedure for visualizing the heart and vasculature. The ultrasound image however is corrupted by speckle, which is distinguished from Gaussian noise by its signal-dependent nature. This dissertation focuses on two important issues for the clinical applications of medical ultrasound images: speckle suppression and motion estimation. The dissertation first describes the statistics of speckle and ultrasound image models, which are important for performance evaluation and further algorithm development. Secondly, a novel speckle suppression approach is developed for the purpose of visualization enhancement and auto-segmentation improvement. This method is designed to utilize the favorable denoising properties of two frequently used techniques: wavelet and nonlinear diffusion. Speckle is iteratively reduced by the multiscale nonlinear diffusion via the framework of dyadic wavelet transform. With a noise adaptive feature, our algorithm is versatile for both envelop-detected and log-compressed ultrasound images. We validate our method using synthetic speckle images and real ultrasonic images. Performance improvement over other despeckling filters is quantified in terms of the quality indices. In summary, our algorithm provides very significant speckle suppression and edge enhancement for the purposes of visualization and automatic structure detection. We further extend the ultrasound statistical knowledge into the motion estimation, and develop a speckle tracking algorithm for myocardial wall motion estimation in intracardiac echocardiographic images. To achieve robust noise resistance, we employ maximum likelihood estimation while fully exploiting ultrasound speckle statistics, and treat the maximization of motion probability as the minimization of an energy function. Non-rigid myocardial deformation is estimated by optimizing this energy function within a framework of elastic registration. Accuracy of the method is evaluated by using a computer model and an animal model, which provides continuous intracardiac echocardiographic images as well as reference measurements for myocardial deformation. As a result, our approach achieves an accurate estimation of regional myocardial deformation from intracardiac echocardiography. This approach has important clinical implications for multimodal imaging during catheterization.
### Ultrasound simulation
• http://campar.in.tum.de/Chair/ProjectSimulatedUltrasound
• Real-Time Simulation of Medical Ultrasound from CT Images
• Lecture Notes In Computer Science archive. Proceedings of the 11th International Conference on Medical Image Computing and Computer-Assisted Intervention, Part II, New York, New York, Pages: 734 - 741
• Ramtin Shams, Richard Hartley
• RSISE, The Australian National University, Canberra, and NICTA, Canberra, Australia
• Nassir Navab, Computer Aided Medical Procedures (CAMP), TU München, Germany
• Medical ultrasound interpretation requires a great deal of experience. Real-time simulation of medical ultrasound provides a cost-effective tool for training and easy access to a variety of cases and exercises. However, fully synthetic and realistic simulation of ultrasound is complex and extremely time-consuming. In this paper, we present a novel method for simulation of ultrasound images from 3D CT scans by breaking down the computations into a preprocessing and a run-time phase. The preprocessing phase produces detailed fixed-view 3D scattering images and the run-time phase generates view-dependent ultrasonic artifacts for a given aperture geometry and position within a volume of interest. We develop a simple acoustic model of the ultrasound for the run-time phase, which produces realistic ultrasound images in real-time when combined with the previously computed scattering image.
• http://portal.acm.org/citation.cfm?id=1483392
• Advanced training methods using an Augmented Reality ultrasound simulator
• Blum, T.; Heining, S.M.; Kutter, O.; Navab, N.; Comput. Aided Med. Procedures & Augmented Reality (CAMP), Tech. Univ. Munchen, Munich, Germany
• This paper appears in: Mixed and Augmented Reality, 2009. ISMAR 2009. 8th IEEE International Symposium on: 177 - 178
• Ultrasound (US) is a medical imaging modality which is extremely difficult to learn as it is user-dependent, has low image quality and requires much knowledge about US physics and human anatomy. For training US we propose an Augmented Reality (AR) ultrasound simulator where the US slice is simulated from a CT volume. The location of the US slice inside the body is visualized using contextual in-situ techniques. We also propose advanced methods how to use an AR simulator for training.
• http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5336476
• Registration of 3D ultrasound to computed tomography images of the kidney
• http://circle.ubc.ca/handle/2429/27817?show=full
• The integration of 3D computed tomography (CT) and ultrasound (US) is of considerable interest because it can potentially improve many minimally invasive procedures such as robot-assisted laparoscopic partial nephrectomy. Partial nephrectomy patients often receive preoperative CT angiography for diagnosis. The 3D CT image is of high quality and has a large field of view. Intraoperatively, dynamic real-time images are acquired using ultrasound. While US is real-time and safe for frequent imaging, the images captured are noisy and only provide a limited perspective. Providing accurate registration between the two modalities would enhance navigation and image guidance for the surgeon because it can bring the pre-operative CT into a current view of the patient provided by US.
• The challenging aspect of this registration problem is that US and CT produce very different images. Thus, a recurring strategy is to use preprocessing techniques to highlight the similar elements between the images. The registration technique presented here goes further by dynamically simulating an US image from the CT, and registering the simulated image to the actual US. This is validated on US and CT volumes of porcine phantom data. Validation on realistic phantoms remains an ongoing problem in the development of registration methods. A detailed protocol is presented here for constructing tissue phantoms that incorporate contrast agent into the tissue such that the kidneys appear representative of in vivo human CT angiography. Registration with 3D CT is performed successfully on the reconstructed 3D US volumes, and the mean TREs ranged from 1.8 to 3.5 mm. In addition, the simulation-based algorithm was revised to consider the shape of the US beam by using pre-scan converted US data. The corresponding CT image is iteratively interpolated along the direction of the US beam during simulation. The mean TREs resulting from registering the pre-scan US data and CT data were between 1.4 to 2.6 mm. The results show that both methods yield similar results and are promising for clinical application. Finally, the method is tested on a set of in vivo CT and US images of a partial nephrectomy patient, and the registration results are discussed.
• http://portal.acm.org/citation.cfm?id=1844544&CFID=102173375&CFTOKEN=74693374
• Rigid registration of segmented volumes in frequency domain using spherical correlation
• Proceedings of the 12th WSEAS international conference on Mathematical methods, computational techniques and intelligent systems table of contents, Pages: 234-238
• An algorithm for the rigid registration of binary volumes is described in this paper. Binary volumes result from a segmentation of ovarian ultrasound volumes. Rigid registration is preformed in frequency domain, where the rotation and translation can be calculated separately. The calculation of rotation is done using the amplitude spectrum and with the help of sphere correlation. The method was tested on 100 synthetic ultrasonic volume pairs. Registration accuracy was estimated by a ratio ρ that compares the intersection volume of the two registered volumes to the final volume. The average ratio ρ between registered volumes was 0.50 (std 0.09) when final result of registration was used. For comparison we tested transformation, used in synthetic volumes creation. The average ratio ρ was 0.53 (std. 0.08) in that case.
### Registration
• http://wwwx.cs.unc.edu/~mn/sites/default/files/lee2010_physically-based-deformable-image-registration.pdf
• Physically-based deformable image registration with material properties and boundary conditions
• We propose a new deformable medical image registration method that uses a physically-based simulator and an iterative optimizer to estimate the simulation parameters determining the deformation field between the two images. Although a simulation-based registration method can enforce physical constraints exactly and considers different material properties, it requires hand adjustment of material properties, and boundary conditions cannot be acquired directly from the images. We treat the material properties and boundary conditions as parameters for the optimizer, and integrate the physically-based simulation into the optimization loop to generate a physically accurate deformation automatically.
• http://ukpmc.ac.uk/abstract/MED/18975707;jsessionid=5437290B6533DA7FFD4DBA261D257325.jvm4
• Mutual-information-based image to patient re-registration using intraoperative ultrasound in image-guided neurosurgery.
• An image-based re-registration scheme has been developed and evaluated that uses fiducial registration as a starting point to maximize the normalized mutual information (nMI) between intraoperative ultrasound (iUS) and preoperative magnetic resonance images (pMR). We show that this scheme significantly (p<0.001) reduces tumor boundary misalignment between iUS pre-durotomy and pMR from an average of 2.5 mm to 1.0 mm in six resection surgeries. The corrected tumor alignment before dural opening provides a more accurate reference for assessing subsequent intraoperative tumor displacement, which is important for brain shift compensation as surgery progresses. In addition, we report the translational and rotational capture ranges necessary for successful convergence of the nMI registration technique (5.9 mm and 5.2 deg, respectively). The proposed scheme is automatic, sufficiently robust, and computationally efficient (<2 min), and holds promise for routine clinical use in the operating room during image-guided neurosurgical procedures.
• http://ieeexplore.ieee.org/search/searchresult.jsp?newsearch=partialPref&queryText=ultrasound&x=0&y=0&filter=OR%28Publication+Number%3A5162127%29
• Bioinformatics and Biomedical Engineering , 2009. ICBBE 2009. 3rd International Conference on
1. Medical Ultrasound Image Segmentation Based on Improved Watershed Scheme
2. Dynamic Persistence of Ultrasound Images After Local Tissue Motion Tracking
3. Improved T-Snake Model Based Edge Detection of the Coronary Arterial Walls in Intravascular Ultrasound Images
4. Clutter Removal of Doppler Ultrasound Signal Using Double Density Discrete Wavelet Transform
• http://portal.acm.org/citation.cfm?id=1487504&dl=ACM&coll=portal
• Non-Rigid Ultrasound Image Registration Based on Intensity and Local Phase Information
• Jonghye Woo Electrical Engineering, University of Southern California, Los Angeles, USA 90089-2564
• Byung-Woo Hong School of Computer Science and Engineering, Chung-Ang University, Seoul, Korea 156-756
• Chang-Hong Hu Biomedical Engineering, University of Southern California, Los Angeles, USA 90089-1451
• K. Kirk Shung Biomedical Engineering, University of Southern California, Los Angeles, USA 90089-1451
• C. -C. Kuo Electrical Engineering, University of Southern California, Los Angeles, USA 90089-2564
• Piotr J. Slomka Department of Imaging, Cedars-Sinai Medical Center, Los Angeles, USA
• A non-rigid ultrasound image registration method is proposed in this work using the intensity as well as the local phase information under a variational framework. One application of this technique is to register two consecutive images in an ultrasound image sequence. Although intensity is the most widely used feature in traditional ultrasound image registration algorithms, speckle noise and lower image resolution make the registration process difficult. By integrating the intensity and the local phase information, we can find and track the non-rigid transformation of each pixel under diffeomorphism between the source and target images. Experiments using synthetic and cardiac images of in vivo mice and human subjects are conducted to demonstrate the advantages of the proposed method.
### RFA Monitoring
• http://onlinelibrary.wiley.com/doi/10.1111/j.1477-2574.2010.00247.x/full
• Intra-operative ultrasound elasticity imaging for monitoring of hepatic tumour thermal ablation
• Mark G. Van Vledder, Emad M. Boctor, Lia R. Assumpcao, Hassan Rivaz, Pezhman Foroughi, Gregory D. Hager, Ulrike M. Hamper, Timothy M. Pawlik, Michael A. Choti
• HPB (Hepato-Pancreato-Biliary ) Volume 12, Issue 10, pages 717–723, December 2010
• Abstract
• Background: Thermal ablation is an accepted therapy for selected hepatic malignancies. However, the reliability of thermal ablation is limited by the inability to accurately monitor and confirm completeness of tumour destruction in real time. We investigated the ability of ultrasound elasticity imaging (USEI) to monitor thermal ablation.
• Objectives: Capitalizing on the known increased stiffness that occurs with protein denaturation and dehydration during thermal therapy, we sought to investigate the feasibility and accuracy of USEI for monitoring of liver tumour ablation.
• Methods: A model for hepatic tumours was developed and elasticity images of liver ablation were acquired in in vivo animal studies, comparing the elasticity images to gross specimens. A clinical pilot study was conducted using USEI in nine patients undergoing open radiofrequency ablation for hepatic malignancies. The size and shape of thermal lesions on USEI were compared to B-mode ultrasound and post-ablation computed tomography (CT).
• Results: In both in vivo animal studies and in the clinical trial, the boundary of thermal lesions was significantly more conspicuous on USEI when compared with B-mode imaging. Animal studies demonstrated good correlation between the diameter of ablated lesions on USEI and the gross specimen (r = 0.81). Moreover, high-quality strain images were generated in real time during therapy. In patients undergoing tumour ablation, a good size correlation was observed between USEI and post-operative CT (r = 0.80).
• Conclusion: USEI can be a valuable tool for the accurate monitoring and real-time verification of successful thermal ablation of liver tumours.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32496556639671326, "perplexity": 3517.3349442707254}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943625.81/warc/CC-MAIN-20230321033306-20230321063306-00378.warc.gz"}
|
https://math.stackexchange.com/questions/3231079/decomposition-and-inertia-fields-in-the-factorization-of-3-in-mathbbq-zet
|
# Decomposition and inertia fields in the factorization of $3$ in $\mathbb{Q}(\zeta_{24})$
I've seen the following exercise from an old problem sheet:
For $$\zeta:=\zeta_{24}$$ a primitive $$24$$-th root of unity and $$\mathcal{O}:=\mathbb{Z}[\zeta]$$, determine the prime decomposition of $$3$$. Determine the decomposition and inertia fields of the primes above $$3$$.
[Hint: show that there is a unique $$4$$-subextension $$F$$ of $$\mathbb{Q}(\zeta)|\mathbb{Q}$$ in which $$3$$ does not ramify, and that $$F$$ is the inertia field. Describe $$F$$ explicitly, then determine all quadratic fields $$E$$ under $$F$$ and find one where $$3$$ splits]
Using a famous theorem on the decomposition of primes in cyclotomic fields, we find easily that $$3\mathcal{O}=(\mathfrak{p}\mathfrak{q})^2$$ for some primes $$\mathfrak{p}, \mathfrak{q}$$.
For $$G:=\text{Gal}(\mathbb{Q}(\zeta)|\mathbb{Q})$$, we have $$G\simeq(\mathbb{Z}/(24))^\times=\{\overline{1},\overline{5},\overline{7},\overline{11},\overline{13},\overline{17},\overline{19},\overline{23}\}$$. Since $$\overline{d}^2=\overline{1}$$ for all $$\overline{d}\neq \overline{1}$$, then all subgroups $$H with order $$2$$ are of the form $$\langle\overline{d}\rangle$$ with $$\overline{d}\in G\setminus\{\overline{1}\}$$. By the Galois correspondence, $$F$$ must have the form $$\mathbb{Q}(\zeta)^H$$ for some $$H$$ as above.
My questions are:
1) How do we know whether or not $$3$$ ramifies in $$\mathbb{Q}(\zeta)^H$$ for a given $$H$$?
2) Once we have $$F$$, how do we find $$E$$?
• By the way, you can find $\mathfrak{p}$ and $\mathfrak{q}$ explicitly in the factorization of $(3)$ using Proposition (8.3) in Neukirch's ANT book. The 24-th cyclotomic polynomial is $x^8 - x^4 + 1$, and its factorization mod $3$ is $(x^2 + x + 2)^2(x^2 + 2x + 2)^2$. The proposition then says that the primes are $\mathfrak{p} = \langle 3, \zeta_{24}^2 + \zeta_{24} + 2\rangle$ and $\mathfrak{q} = \langle 3, \zeta_{24}^2 + 2\zeta_{24} + 2\rangle$. – Tob Ernack May 18 at 22:04
• For the decomposition group, maybe you can find the explicit subgroup of $G$ (in terms of maps $\zeta_{24} \to \zeta_{24}^i, \gcd(i, 24) = 1$) that sends $\zeta_{24}^2 + \zeta_{24} + 2$ back to $\mathfrak{p}$. – Tob Ernack May 18 at 22:11
• To be honest, I just used WolframAlpha. But given the theorem that you mentioned about prime factorizations in cyclotomic fields, you can already guess the form of the factorization, and use a bit of brute force on the irreducible quadratics mod $3$. – Tob Ernack May 18 at 22:28
• I think the approach hinted at in the problem statement might be more elegant actually. I haven't thought it through yet but it might spare you these computations. – Tob Ernack May 18 at 22:33
• Ok looking at their approach, one idea could be that the fixed field of $\langle \overline{d}\rangle$ is $\mathbb{Q}\left(\zeta_{24} + \zeta_{24}^{d}\right)$ (I haven't proved that). The minimal polynomial of $\zeta_{24} + \zeta_{24}^d$ can be computed for each $d$ in $\{1, 5, 11, ..., 23\}$ (incidentally this would prove that the fixed fields really are what I said, by checking that the degree is $4$). Then you can check whether $3$ ramifies by checking whether it divides the discriminant. This approach should work although there might be a smarter way to avoid the computations. – Tob Ernack May 18 at 23:43
Use the fact that $$\mathbb{Q}(\zeta_{24}) = \mathbb{Q}(\zeta_3)\mathbb{Q}(\zeta_8)$$. Then $$3$$ won't ramify in $$\mathbb{Q}(\zeta_8)$$, as $$3$$ doesn't divide the discriminant of the field. This is your wanted subfield $$F$$. Obviously $$F$$ is the inertia field, as it's the biggest subfield in which ramification doesn't occur.
Moreover, using the fact that: $$\text{Gal}(\mathbb{Q}(\zeta_{24})/\mathbb{Q}) \cong \text{Gal}(\mathbb{Q}(\zeta_{8})/\mathbb{Q}) \times \text{Gal}(\mathbb{Q}(\zeta_{3})/\mathbb{Q})$$ we get that $$\mathbb{Q}(\zeta_8)$$ corresponds to $$H = \{1,17\}$$ in $$(\mathbb{Z}/(24))^\times$$
Now the quadratic subfields of $$F$$ are $$\mathbb{Q}(i), \mathbb{Q}(\sqrt{2})$$ and $$\mathbb{Q}(i\sqrt{2})$$. It's not hard to see that $$3$$ is inert in $$\mathbb{Q}(i)$$ and $$\mathbb{Q}(\sqrt{2})$$, while it splits in $$\mathbb{Q}(i\sqrt{2})$$. Hence the decomposition field is $$\mathbb{Q}(i\sqrt{2})$$.
• How did you find out that $\mathbb{Q}(\zeta_8)$ corresponts to $H=\{1,17\}$ from the fact that $\text{Gal}(\mathbb{Q}(\zeta_{24})|\mathbb{Q})\simeq \text{Gal}(\mathbb{Q}(\zeta_8)|\mathbb{Q})\times\text{Gal}(\mathbb{Q}(\zeta_3)|\mathbb{Q})$? – rmdmc89 May 31 at 0:00
• And how did you conclude that the quadratic subfields of $F$ are $\mathbb{Q}(i)$, $\mathbb{\sqrt{2}}$, $\mathbb{Q}(i\sqrt{2})$? I'm sure there are many ways to do it, but I'm curious to know how you did it – rmdmc89 May 31 at 0:09
• @rmdmc89 From the Chinese Remainder's Theorem we have that $(\mathbb{Z}/(24))^\times \cong (\mathbb{Z}/(8))^\times \times (\mathbb{Z}/(3))^\times$, where the isomorphism is given by $n \to (n \mod 8, n\mod 3)$. Now the group fixing $\mathbb{Q}(\zeta_8)$ is given by $\{1\} \times (\mathbb{Z}/(3))^\times$, which under the isomorphism corresponds to elements of $(\mathbb{Z}/(24))^\times$ having remainder $1$ modulo $8$. They are exactly $1$ and $17$. – Stefan4024 May 31 at 8:00
• @rmdmc89 One way is to use the Galois group of $\mathbb{Q}(\zeta_8)$ and see what elements are fixed by the subgroups of order $2$. However this method is tedious. The easier method would be to use the explicit form od $\zeta_8$, i.e. $\frac{1+i}{\sqrt{2}}$. We have $\zeta_8^2 = \frac{1+2i-1}{2} = i$. Thus $\mathbb{Q}(i) \subset F$. Also we have that $\zeta_8 + \zeta_8^{-1} = \frac{1+ i}{\sqrt{2}} + \frac{1-i}{\sqrt{2}} = \sqrt{2}$. Thus $\mathbb{Q}(\sqrt{2}) \subset F$. From above we also have that $\mathbb{Q}(i\sqrt{2}) \subset F$. Since there are 3 quadratic fields we have found them all. – Stefan4024 May 31 at 8:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 50, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9983834028244019, "perplexity": 1731.2308333332987}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999163.73/warc/CC-MAIN-20190620065141-20190620091141-00549.warc.gz"}
|
https://ewintang.com/blog/2019/06/13/some-settings-supporting-efficient-state-preparation/
|
I wrote most of this list to procrastinate on the flight back from TQC (which was great!). So, for my own reference: here’s some settings where efficient state preparation / data loading is possible, and classical versions of these protocols. Notes:
• There might be errors, especially in details of the quantum protocols, and some of the algorithms may be suboptimal (note the streaming setting, in particular). Let me know if you notice either of these.
• Some relevant complexity research here is in QSampling (Section 4).
• All these runtimes should have an extra $O(\log n)$ factor, since we assume that indices and entries take $\log n$ bits/qubits to specify. However, I’m going to follow the convention from classical computing and ignore these factors, hopefully with little resulting confusion.
For all that follows, we are given $v \in \mathbb{C}^n$ in some way and want to output
1. for the quantum case, a copy of the state $\ket{v} = \sum_{i=1}^n \frac{v_i}{\|v\|} \ket{i}$, and
2. for the classical case, the pair $(i,v_i)$ output with probability $\frac{\vert v_i\vert^2}{\|v\|^2}$.
type sparse uniform integrable QRAM streamed
quantum $O(s)$ $O(C\log\frac1\delta)$ $O(I \log n)$ $O(\log n)$ depth $O(1)$ space with 2 passes
classical $O(s)$ $O(C^2\log\frac1\delta)$ $O(I\log n)$ $O(\log n)$ $O(1)$ space with 1 pass
Recall that if we want to prepare an arbitrary quantum state, we need at least $\Omega(\sqrt{n})$ time by search lower bounds, so for some settings of the above constants, these protocols are exponentially faster than the naive strategy. Further recall that state preparation and sampling both have easy protocols running in $O(n)$ time.
## $v$ is sparse
We assume that $v$ has at most $s$ nonzero entries and we can access a list of the nonzero entries $((i_1,v_{i_1}),(i_2,v_{i_2}),\ldots,(i_s,v_{i_s}))$. Thus, we have the oracle $a \to (i_a, v_{i_a})$.
We can prepare the quantum state and classical sample by preparing the vector $v' \in \BB{C}^s$ where $v_a' = v_{i_a}$, and then using the oracle to swap out the index $a$ with $i_a$. This gives $O(s)$ classical and quantum time.
## $v$ is close-to-uniform
We assume that $\max\vert v_i\vert \leq C\frac{\|v\|}{\sqrt{n}}$ and we know $C, \|v\|$. Notice that we don’t give a lower bound on the size of entries, but we can’t have too many small entries, since this would lower the norm. Also notice that $C \geq 1$.
Quantumly, given the typical oracle $\ket{i}\ket{0} \to \ket{i}\ket{v_i}$ we can prepare the state
Measuring the ancilla and post-selecting on 0 gives $\ket{v}$. This happens with probability $\frac{1}{C^2}$, and with amplitude amplification this means we can get a copy of the state with probability $\geq 1-\delta$ in $O(C\log\frac1\delta)$ time.
Classically, we perform rejection sampling from the uniform distribution: pick an index uniformly at random, and keep it with probability $\frac{v_i^2n}{\|v\|^2C^2}$; otherwise, restart. This outputs the correct distribution and gives a sample in $O(C^2\log\frac1\delta)$ time.
## $v$ is efficiently integrable
We assume that, given $1 \leq a \leq b \leq n$, I can compute $\sqrt{\sum_{i=a}^b |v_i|^2}$ in $O(I)$ time. This assumption and the resulting quantum preparation routine comes from Grover-Rudolph.
The quantum algorithm uses one core subroutine: adding an extra qubit, sending $\ket{v^{(k)}} \to \ket{v^{(k+1)}}$, where
All that’s necessary is to apply it $O(\log n)$ times and add the phase at the end. I haven’t worked it out, but I think you can run the subroutine efficiently using three calls to the integration oracle, giving $O(I\log n)$ time.
Classically, we can do essentially the same thing: the integration oracle means that we can compute marginal probabilities; that is,
Thus, we can sample from the distribution on the first bit, then sample from the distribution on the second bit conditioned on our value of the first bit, and so on. This also gives $O(I\log n)$ time.
## $v$ is stored in a dynamic data structure
We assume that our vector can be stored in a data structure that supports efficient updating of entries. Namely, we use the standard binary search tree data structure (see, for example, Section 2.2.2 of Prakash’s thesis). This is a simple data structure with many nice properties, including $O(\log n)$ time updates. If you want to prepare many states corresponding to similar vectors, this is a good option.
There’s not much more to say, since the protocol is the same as the integrability protocol. The only difference is that, instead of assuming that we can compute interval sums efficiently, we instead precompute and store all of the integration oracle calls we need for the state preparation procedure in a data structure.
The classical runtime is $O(\log n)$, and the quantum circuit takes $O(n)$ gates but only $O(\log n)$ depth. The quantum algorithm is larger because here, we need to query a linear number of memory cells, as opposed to the integrabilility assumption, where we only needed to run the integration oracle in superposition.
While it may seem that the classical algorithm wins definitively here, the small depth leaves potential for this protocol to run in $O(\log n)$ time in practice, matching the classical algorithm.
## $v$ is streamed
We assume that we can receive a stream of the entries of $v$ in order; we wish to produce a state/sample using as little space as possible.
Classically, we can do this with reservoir sampling. The idea is that we maintain a sample $(s, v_s)$ from all of the entries we’ve seen before, along with their squared norm $\lambda = \sum_{i=1}^k \vert v_i\vert^2$. Then, when we receive a new entry $v_{k+1}$, we swap our sample to $(k+1,v_{k+1})$ with probability $\vert v_{k+1}\vert^2/(\lambda + \vert v_{k+1}\vert^2)$ and update our $\lambda$ to $\lambda + \vert v_{k+1}\vert^2$. After we go through all of $v$’s entries, we get a sample only using $O(1)$ space. (This is a particularly nice algorithm for sampling from a vector, since it has good locality and can be generalized to get $O(k)$ samples in $O(k)$ space and one pass.)
Quantumly, I only know how to prepare a state in one pass with sublinear space if the norm is known. If you know $\|v\|$, then you can prepare $\ket{n}$, and as entries come in, rotate to get $\frac{v_1}{\|v\|}\ket{1} + \sqrt{1-\frac{|v_1|^2}{\|v\|^2}}\ket{n}$, then $\frac{v_1}{\|v\|}\ket{1} + \frac{v_2}{\|v\|}\ket{2} + \sqrt{1-\frac{|v_1|^2+|v_2|^2}{\|v\|^2}}\ket{n}$, and so on. This uses only $O(\log n)$ qubits, which I notate here as $O(1)$ space.
You can relax this assumption to just having an estimate $\lambda$ of $\|v\|$ such that $\frac{1}{\text{poly}(n)} \leq \lambda/\|v\| \leq \text{poly}(n)$. Finally, if you like, you can remove the assumption that you know the norm just by requiring two passes instead of one; in the first pass, compute the norm, and in the second pass, prepare the state. But it’d be nice to remove the assumption entirely.
So, is it possible to prepare a quantum state corresponding to a generic $v \in \BB{C}^n$, given only one pass through it? Thanks to Chunhao Wang and Nai-Hui Chia for telling me about this problem.
|
{"extraction_info": {"found_math": true, "script_math_tex": 81, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9641268253326416, "perplexity": 337.16222548390766}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195530385.82/warc/CC-MAIN-20190724041048-20190724063048-00394.warc.gz"}
|
https://orbit.dtu.dk/en/publications/fluctuations-at-electrode-ysz-interfaces
|
# Fluctuations at Electrode-YSZ Interfaces
Research output: Contribution to conferenceConference abstract for conferenceResearch
## Abstract
\textbf{Fluctuations at Electrode -- YSZ Interfaces}[\topsep] T.\,Jacobsen$^a$, K.\,Vels Hansen$^b$, E.\,Skou$^c$[1ex] $^a$Dept.\,of Chemistry, Technical University of Denmark, DK-2800 Kgs.\,Lyngby, Denmark.[0.5ex] $^b$Materials Research Dept., Ris\o\ National Laboratory, DK-4000 Roskilde, Denmark.[0.5ex] $^c$ Dept.\,of Chemistry, University of Southern Denmark, DK-5230 Odense M, Denmark. \end{center}%\ [\topsep] From experiments on electrodes on YSZ surfaces it is well known that a short current or overvoltage pulse can decrease the interfacial impedance leading to an temporary increase in performance [\ref{McEvoy}, \ref{Adler}]. This sensitivity to the pre-history is probably one of the explanations for the discrepancies between results reported for single electrode studies. The mechanism behind the activation is still an unresolved problem. In the case of point electrodes, i.e. electrodes with a very small electrode -- electrolyte contact area, more or less regular fluctuating patterns are observed. Probably this behavior is part of the mechanism behind the general current activation mechanism, but in the case of large porous electrodes it is only observed after a major perturbation because of the damping by the large interface. Fig.\,1 shows a slow potential sweep on a Pt point electrode on a YSZ surface. For the part of the anodic and the cathodic branches where the electrode approaches equilibrium, quadratic expressions are used as smooth approximations for the current -- overvoltage relation. Subtracting the calculated curve from the experimental, the expanded plots of the fluctuations in fig.\,1b and 1c are obtained. Fig.\,1b shows a characteristic pattern where the numerical value of the current slowly increases for a certain time, after which it suddenly drops to a lower value. This behavior is repeated with irregular time intervals in the order of $10^3$\,s. In the anodic range the fluctuations appear to be more random and as seen from fig.\,1c the low frequency fluctuations are an order of magnitude lower than those seen in the cathodic range. A somewhat similar behavior has been observed for LSM cathodes on YSZ. %Fig.\,2 shows a similar behavior from LSM--YSZ interface, although in this case the the oscillations are somewhat damped by the changes in stoichiometry??. As shown in fig.\,3 spontaneous current spikes have been observed on polarized Ni anodes. In this case a sudden activation of the electrode is followed by a decay of the current. This pattern is contrary to that of the cathode where a slow activation was followed by an abrupt deactivation. At present, it can not be ruled out entirely that the observations above are at least partially induced by external effects like discrete steps in D/A converters, duty cycles of thermo regulators, etc. But even so, the dramatic spikes seen at the Ni anode emphasizes the care that must be taken in order to obtain reproducible results from point electrode studies. However, it is noted that Pt cathodes and Ni anodes show reverse patterns with respect to spontaneous activation/deactivation whereas the normal behavior for both electrodes is to be activated when exposed to a current pulse. This, combined with the fact that in the case of Pt electrodes the behavior is also seen at constant polarization points to the fluctuations as an inherent property of the interface. \newpage \noindent {REFERENCES}\begin{enumerate} \item A.J.\,McEvoy, \textit{Solid State Ionics} \textbf{135}, 331, (2000)\label{McEvoy} \item S.\,McIntosh, S.B.\,Adler, J.M.\,Vohs, and R.J.\,Gorte, \textit{Electrochemical and Solid State Letters}, \textbf{7}, A111, (2004)\label{Adler}
Original language English 2004 Published - 2004 Meeting of the electrochemical society - Honolulu, Hawaii, U.S.ADuration: 1 Jan 2004 → …Conference number: 206th
### Conference
Conference Meeting of the electrochemical society 206th Honolulu, Hawaii, U.S.A 01/01/2004 → …
## Fingerprint
Dive into the research topics of 'Fluctuations at Electrode-YSZ Interfaces'. Together they form a unique fingerprint.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7924050092697144, "perplexity": 2497.1668118026305}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00148.warc.gz"}
|
http://math.stackexchange.com/questions/418464/projections-of-multivariate-normal-distribution/459443
|
# Projections of multivariate normal distribution
Given a random vector X with the multivariate normal distribution F(X), we know that, for two vectors a and b, the projections $A=\sum_j a_j X_j$ and $B=\sum_i b_i X_i$ are univariate normal.
I'm interested in the joint distribution of A and B. Is their joint distribution normal? Is the dependence between A and B described only by their correlation? (do they have only linear dependence?) Thank you for any insight. References are highly appreciated as well.
-
I found the answer for the above question and I thought is nice to share it. So the answer is yes: A and B are joint normal and so the relation between them is determined by the correlation. This is due to the properties of the characteristic function of a multivariate normal. – KAT Aug 4 '13 at 11:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9381906390190125, "perplexity": 187.4540328319709}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500808153.1/warc/CC-MAIN-20140820021328-00361-ip-10-180-136-8.ec2.internal.warc.gz"}
|
http://www.leadinglesson.com/component-view-of-vector-addition
|
Study guide and 2 practice problems on:
## Component view of vector addition
The sum of two vectors is obtained by summing each of components: $$\langle x_1, x_2\rangle + \langle y_1, y_2\rangle = \langle x_1+y_1, x_2+y_2\rangle.$$
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7779513001441956, "perplexity": 1329.9293486133479}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247506094.64/warc/CC-MAIN-20190221172909-20190221194909-00002.warc.gz"}
|
https://crypto.stackexchange.com/questions/37671/more-suitable-substitution-box
|
# More suitable Substitution Box
Can anyone suggest me any fast searching method or algorithm to find best S-Box among a big number of S-boxes? For example; if I have 100 different S-boxes, I want to pick that one which is more secure than the others.
There is no such specific algorithm to find the best S-box but you can construct a Diference Distribution Table(DDT) like this and check out the values in it .A DDT is a matrix containing input differences as rows and corresponding output differences in column.
A better S-box has- 1- Lower values in its DDT 2- Less occurence of higher or highest value
• this Differential approximation probability, right?can you elaborate it a little?that how is the first value 64?thanks dear – faiz Jul 12 '16 at 5:32
• Yes these are probability, let me explain why first value is 64 The first row show the probability of output differences(which can be between 0000-1111 for input difference 000000. So as you can see column 1 (i.e. 0000) has 64 because if the input differencial will be 000000 then output differential will always be 0000 i.e probability 64/64= 1. – Abhinav singh Jul 12 '16 at 5:34
• i am alittle weak in mathematics and probability, sorry for asking dumb questions but i am confused about "64" for output, because output should be 4 bits, maximum different values should be 16, for input 2^6=64 , but how 64 for input?and how other all values are zeros?thanks in advance brother – faiz Jul 12 '16 at 7:37
• output differences are in 4 bit , 64 in the first row shows the frequency of 0000 as output difference when input difference is 000000. – Abhinav singh Jul 12 '16 at 7:50
• can we chat here?i mean individual discussion?i don't want to bother other reviewer ...i don't know how to chat personally in this website? – faiz Jul 12 '16 at 8:35
AES S-Box is not just a non linear substitution table, it is the result of a precalculation based on two operations. This is, imho, the greatest value of Rijndael, everything rests with maths.
The first operation is a non linear transformation $g: x\rightarrow y = x^{-1} \in \frac{\mathbb{F}_{2}[z]}{m(z)}$ with $m(z) = z^8+z^4+z^3+z+1$ who is irreducible that tells us all the elements are invertible. The second operation is an affine map in $(\mathbb{F}_2)^8$.
Do you like to build a different S-Box but with the same good properties? Lets use a different $m(z)$ (with the same degree) but this will not provide to you any extra feature. Do you like to build table from scratch? Any linearity introduced by mistake will be a weakness. The maths behind the Rijndael helps to avoid mistakes.
• no i didn't made in this way, i am using chaotic maps to generate s-box by my own algo, i have already 100 different s-boxes , now i want to select best of them through any fast searching method. – faiz Jul 12 '16 at 8:07
• ok, then I think you are working with a variant of the Rijndael. The subBytes operation provides shannon's "confusion", perhaps a way to evaluate your SBoxes candidates is to measure this confusion. How many bits are changed when the input changes by one bit. – srgblnch Jul 12 '16 at 8:26
• sometimes 3 sometimes 6, 4, varies – faiz Jul 12 '16 at 8:33
• So, I think it must change as much as possible as a minimum. I mean, if there is an SBox that has a case that doesn't change any one, discard it. Then I would select the one that provides more confusion. – srgblnch Jul 12 '16 at 8:36
• all of my sbox have this property. i need any recent research clue about this... – faiz Jul 12 '16 at 8:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5823465585708618, "perplexity": 1066.6758145352214}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027330750.45/warc/CC-MAIN-20190825151521-20190825173521-00044.warc.gz"}
|
http://headinside.blogspot.com/2008/07/?m=0
|
0
## iPhone and iPod Touch Webapps
Published on Thursday, July 31, 2008 in , , , , , , , , , ,
After developing the iPhone version of Werner Miller's Age Square, I became interested in what other iPhone programs were out there that would interest Grey Matters readers.
Before I go into other programs, I should mention that I've updated the Age Square iPhone webapp. First, if you tried it when I originally posted it, you may have noticed the graphics were somewhat slow. In the newer version, the graphics load up much faster. Another change is that you will get different magic squares each time. Yes, they will contain the same numbers and still give the same total, but there are now different arrangements that show up.
Finally, I've added a feature to the program that allows you to show off the patterns. Once the final age magic square is up, each time you tap anywhere in the top 2 rows, a new pattern of 4 squares that give the magic total will be highlighted. You can go through up to 24 different patterns before it clears the board. If you don't want to go through all 24, you can tap anywhere in the bottom 2 rows to immediately jump to the blank squares to restart.
Many of you are familiar with my iPhone Mental Gym, but for a long time, the Knight's Tour on there was a minor rewrite of my original Knight's Tour program. Having some iPhone programming under my belt, I figured this was my next project. The result is the completely new iPhone Knight's Tour. Right away, you'll notice it has been fully rewritten to better bring in line with iPhone standards. The board and the controls have been made larger and easier to use. There are a couple of brand new features, as well, including the ability to undo your moves (all the way back to the beginning!), the ability to end the game at any time, and even a feature that detects when you're trapped. If you do become trapped, you're offered the choice of undoing your last move or ending the game.
Other developers have been just as busy, and I've found a several webapps you can use to help strain your brain and entertain. Don't have an iPhone or iPod Touch? Don't worry, these will all work in your browser, as well.
Since I was just discussing the Knight's Tour, how about another chess challenge? Playing chess blindfolded has always been a challenge, but now there's the Blind Chess Trainer. This is a series of quizzes that will progressively help you understand how to think of the chess pieces, the board arrangement, and the movements, without ever seeing the board.
If you would like help memorizing any of these, don't forget iFlipr, a custom flashcard program for the iPhone that I recently mentioned. If you need to practice any memory technique, or memorize anything else, this is a great way to do so on the go.
Speaking of things previously mentioned on this blog, there's also the book Geek Logik, by Garth Sundem, who specializes in creating equations to help solve every day dilemmas, as mentioned in my review. For pure fun, there is now a Geek Logik webapp. It features several questions from the book, and allows to enter the needed variables to answer them, with full explanations for what each variable means. When you're done, you click the Calculate button, and (after a brief ad) you're given the answer.
Finishing up with another webapp for pure fun, try iPhone Magix. There are numerous iPhone webapps used for magic effects, but few of them are deceptive. However, iPhone Magix is one of the best iPhone magic routines out there, and it is truly deceptive. You can even repeat this routine, and it will become even more puzzling (as the method isn't always exactly the same).
Since we've started with a magic webapp, and finished with a magic webapp, I'd say we've come full circle. Let me know if you find any more ingenious iPhone programs (preferably ones that also run on browsers) that you think other Grey Matters readers would enjoy in the comments.
0
## Werner Miller's Age Square
Published on Sunday, July 27, 2008 in , , , , ,
Perennial Grey Matters favorite Werner Miller has a new mathematical routine available, but this one is making its debut here on Grey Matters!
I received an e-mail from Werner Miller a few days ago, and he mentioned that he had been inspired by my post on How To Guess Someone's Age (which you should read before using this program). After reading it, he's created a mathemagical program determining someone's age. Not to be confused with his Age Cube, this one is called the Age Square.
Werner Miller's original program is made to run offline in Windows. However, if you aren't running Windows on your system, don't worry, as I've created an online version that will even work on your iPhone or iPod Touch (or any other portable device capable of internet access and javascript execution) as a free WebApp!
This program will only for for people whose ages range from 30 to 85 (inclusive). This is because magic squares that total lower than 30 would have to employ either negative numbers or duplicate numbers. The reason 85 is the upper limit will be explained later in this post. Mr. Miller mentions that there may be some bugs in it, so if you find any, please leave a note in the comments here, and I will forward them to him, so that they may be corrected in possible future versions.
Before using the program, make sure you read the comprehensive guide to accurately guessing someone's age, and practice what you learn there at AgeGuess, The Age Project, Match>Age, and/or Guess My Age, as mentioned in my How To Guess Someone's Age post.
If you're going to use the offline Windows version, download the Age Square program from here (Windows only), decompress the file (.zip format), and run the program. Age Square will always be available in the downloads section of the rightmost column on Grey Matters.
The Age Square WebApp is located in the Mental Gym. As soon as you click that link, it will load just like a webpage, and will be ready to go.
From here, the best way to explain the use of the program is by example. Let's say you're performing this effect for a Mr. Davidson, whose age is 51, but which you don't know yet.
The first thing you have to do is look at the person and, using what you've learned from your practice, get a rough idea of the person's age. Taking a look at Mr. Davidson, the clues you gather lead you to believe that he's probably in his late 40s or early 50s.
When you start up the age program, you will see a blank 4 by 4 grid on the screen, as seen in Fig. 1, below. You start by secretly communicating the age range of Mr. Davidson to the program. How do you do this? Think of the blank grid as being divided up into 4 quadrants, each contain a 2 by 2 block of individual cells. You'll click on a particular quadrant, depending on the age range you've determined:
• For ages 30 to 45, click on any of the 4 cells in the upper left corner.
• For ages 40 to 55, click on any of the 4 cells in the upper right corner.
• For ages 50 to 65, click on any of the 4 cells in the lower left corner.
• For ages 60 to 75, click on any of the 4 cells in the lower right corner.
• For ages 70 to 85, click anywhere beneath the grid (yet still inside the window). Note: If you're using the WebApp version, you'll need to touch (or click) just under the window, in between where the YES and NO buttons will appear.
This is easy to remember, as the age ranges are all 15-year spans, and each quadrant is 10 years more than the previous one. If you think of the quadrant numbers as reading the same way you'd read a book (proceeding from left to right, and then top to bottom), you should have little trouble remembering which quadrant is which. This arrangement is why 85 is the upper limit.
In our example, since we've determined Mr. Davidson to be in his late 40s to early 50s, we'd click anywhere in the upper right corner (for the 40-55 range), as in Fig. 1a, below. A 4 by 4 magic square will then appear on the screen, using all the numbers in the age range you specified. Fig. 2, below, shows the magic square using the numbers 40-55 that you requested for Mr. Davidson.
From your spectator's point of view, all they should think is that you clicked the screen to bring up a square of numbers. Explain that the computer has created a magic square, and that it adds up the same number in many different directions. I don't recommend explaining and adding up the patterns in detail, since the totals will range from 150 to 310, varying with the age range you designate. Just a basic description of magic squares should be sufficient at this point.
Click anywhere in the window one more time, and 8 of the squares will be highlighted in blue, and the words YES and NO will appear beneath the grid, as in Fig. 3, below. Ask your spectator to click on YES if they see their age highlighted in blue, or NO if it isn't. Looking at Fig. 3, Mr. Davidson would click NO, since 51 isn't highlighted.
After YES or NO is clicked, another arrangement of squares are highlighted, and you ask the person to do the same again. When Mr. Davidson sees the arrangement in Fig. 4, he would again click NO, since 51 still isn't highlighted.
This process is repeated two more times. Mr. Davidson, seeing the arrangement in Fig. 5, would click YES, since 51 is highlighted, and would click YES again when seeing the arrangement in Fig. 6, since 51 is highlighted in both cases.
At this point, the computer has been able to determine the person's exact age. In our example, the computer would now know Mr. Davidson is 51. If you don't understand how, go back to my How To Guess Someone's Age post, and read the section on the age cards and Werner Miller's Age Cube, and the included links will explain it in more detail.
After the 4th YES/NO click, the original magic square disappears, and a new magic square takes its place. In this new magic square, all the rows, columns, diagonals, and other patterns (see the patterns used in my 40 30s 4 15 video, although not all of them will always work) will all add up to the spectator's age! In Fig. 7, since the computer has determined Mr. Davidson's age to be 51, the new square totals 51 in numerous ways.
I would like to publicly thank Werner Miller for letting Grey Matters debut this amazing routine, and for all the work he has put into it. If you would like to learn what other amazing routines he has up his sleeve, and show your appreciation to him, check out his book Ear-Marked, with 177 pages full of more ingenious mathemagical routines!
Answers to US Presidential Candidate Puzzles:
$J&space;=&space;4,&space;MCCAIN&space;=&space;155927,&space;SIDNEY&space;=&space;623708$
$BARACK&space;=&space;291956,&space;H&space;=&space;4,&space;OBAMA&space;=&space;72989$
0
## US Presidential Candidate Puzzles
Published on Thursday, July 24, 2008 in , ,
Due to yesterday's surprise Mental Gym outage, I didn't have as much time as I'd like to prepare today's column. Instead, I'd like to post two original puzzles I've been saving.
These puzzles are both cryptarithms. Cryptarithms, for those who aren't familiar with them, are puzzles in which you're given a mathematical expression where all the digits, 0 through 9, are replaced by letters, with a given digit always being replaced by the same letter, and a given letter representing only one particular digit. Probably the most famous puzzle of this type is Henry Dudeney's classic from the July 1924 issue of Strand Magazine:
$SEND&space;+&space;MORE&space;=&space;MONEY$
The unique answer to this problem is:
$9567&space;+&space;1085&space;=&space;10652$
Note that the D has been replaced by a 7, the Es have been replaced by 5s, the Ms by 1s, the N by 6, the Os by 0s (hey, that's handy!), the R by 8, the S by 9, and the Y by 2.
As you can see, you have to look at how each letter interacts with the others in the mathematical expression to work out which digit is represented by which letter.
The following two cryptarithms I am going to give you, both of which are original with me, concern the US presidential candidates from both of the major parties in the 2008 election.
In both cases, these rules are in effect:
• Any given letter always represents only one corresponding digit (0-9).
• Any given digit (0-9) is always represented by only one corresponding letter.
• The leftmost digit of any number is never 0 (For example, in SEND + MORE = MONEY, neither the S nor the Ms are allowed to represent 0).
• When all the letters are replaced by their corresponding digits, the resulting mathematical equation must be correct.
• All numbers involved are whole numbers.
• Each puzzle has only one solution.
$J&space;*&space;MCCAIN&space;=&space;SIDNEY$
The second puzzle concerns the Democrat presidential candidate, Barack Obama:
$\frac{BARACK}{H}=OBAMA$
I'm not going to give the answers now. If no one solves these by the time I post on Sunday, July 27th, I'll include the answers at the end of my next entry.
0
## Mental Gym Moved and Site Features Updated
Published on Wednesday, July 23, 2008 in
The Mental Gym is back up! I have relocated it to http://mentalgym.freehostia.com/. The 3-Letter Body Part Quiz proved so popular, it exceeded my ISP's bandwidth limits!
• The Blog Summary feed, the Grey Matters Videos feed, the Original Products Store feed, and the Timed Quizzes Feed all have new locations, which you can now access from these links and the site feed listings at the top of the rightmost column on this blog.
• The widgets I've developed have all been updated, as well. You can find the updated versions of the widgets (where required) over in the rightmost column under Downloads. Over on Apple's widget site, you can already find the updated versions of the Grey Matters Feed widget, the How Many Xs Can You Name in Y Minutes widget, and the Date Quiz widget.
• All the Google Homepage Feed Gadgets have been updated to reflect the new feed locations, as well. If you've been using any of the Google Hompage Feed Gadgets, please update to the new versions: Grey Matters, Grey Matters Videos, Original Products, or Timed Quizzes. The Recommended Products feed gadget is still the same.
• I was unable to update the How Many Xs Can You Name in Y Minutes blog widget with the new feed, so it's down for now. If I can get it fixed, I'll post the new blog widget.
There are also other minor changes I'm making. Over the next week or so, you may see small things like broken links and missing graphics, but rest assured that I'll be working on fixing these details! Thank you for your understanding, and I apologize for the inconvenience.
0
## Mental Gym Temporarily Down
Published on Wednesday, July 23, 2008 in
The Grey Matters Mental Gym is temporarily down. It was removed due to exceeding bandwidth limits of my ISP. I believe this was due to the popularity of the 3-letter body part quiz.
This also means that some of my graphics, RSS files, and other files won't be available.
I will work on finding a new location for the site as soon as I can. I apologize for the inconvenience.
4
## How To Guess Someone's Age
Published on Sunday, July 20, 2008 in , , ,
(Update - May 3, 2012: This post has been updated, revised and clarified as a series of posts. The new age-guessing posts can be found at these links: Part 1, Part 2, Part 3, Part 4)
For as long as carnivals have been around, and probably much longer than that, there have been people who have tried to guess another person's age. Age is often not talked about openly, so how would you even begin to go about it?
The first thing that comes to mind for me, naturally, is the mathematical approach. First, there's what I call the algebraic approach to finding someone's age. You have them perform a series of calculations that include their age, and the answer will, in one way or another, secretly tell you how old they are. The most common tricks of this type you'll probably run across is the choose your age and a secret number version, or one of these two approaches. The precise answer that results is great. However, anyone who is familiar with algebra can work out why these work. Even those links themselves point out that the best use for those kinds of tricks is for generating interest in a math lesson.
To add at least some mystery, you could turn to what has become known in magic as the Age Cards. As a matter of fact, you can try a Javascript version of the Age Cards here, and a Java version with a link to the explanation here. Personally, the best way I've come across to present this ancient effect is Werner Miller's Age Cube (as long as you're sure the person isn't 32 or older), as the arrangement of numbers are given a different purpose than just "random numbers". This helps hide the method more effectively.
These mathematical methods are nice, but what about judging their age the classic way, by their appearance and lifestyle clues? Over at the Vat19.com blog, they've posted a very comprehensive guide to accurately guessing someone's age. Of course, having the information is one thing. How do you practice using that information?
Strangely enough, age guessing has caught on as an internet meme. There are many sites with a seemingly endless supply of pictures of real people whose age you can guess. Among the more popular of these sites are AgeGuess, The Age Project, Match>Age, and Guess My Age. The best thing about these sites is that you'll find the more practice you put in with these sites, the more accurate you can get.
However, even if you were to practice every moment you were awake, there's still no way to get a perfect guess every time. Even those carnival workers who guess your age say they'll guess it within a margin of error, such as 2-3 years. The purely mathematical approach offers precision, but can be easy to figure out, thus diminishing the impact of a correct guess. Perhaps there's a happy medium?
Using both approaches together can actually be very effective. Adding mathematics to the appearance/lifestyle approach adds precision, while adding the appearance/lifestyle approach to the math helps take the heat off of the math. One simple way to combine them is to hand a calculator to someone, and ask them to enter any 4-, 5-, 6-, or 7-digit number into the calculator, without telling you what that number is (For example, let's say they enter 32,419). Next, have them multiply that number by 9 (in our example, that results in the number 291,771), and then add their age to that total (Let's say they're 33, so they add 291,771 to 33 to get 291,804). Once they tell you the final total, you can give their correct age!
How is this possible? What you do is mentally add up the digits in the final total (In our 291,804 example, you would mentally add 2+9+1+8+0+4, to get 24). Use this number as a starting point (24, in our example), ask yourself whether this person appears to be that age, and if not, whether they appear older or younger than this number. If the person seems younger than the number you get, subtract 9, and ask yourself if the age seems right again. If the person seems older than the number you get, add 9, and yourself if this seems right again. In our example that results in 24, you would probably look at the guy, and determine that he's older than 24, so you would add 9 to get 33. You look at him, and that seems right, especially as adding another 9 would result in 42, and he appears far too young to be 42. With this result in mind, you state that he's 33! As long as you've practiced with the age guessing sites I mentioned earlier to the point where you can get the correct age within 2 or 3 years, this should pose little problem.
If you're wondering why this works, it's all due to the digital root of the numbers involved. When you multiply any number times 9, you're insuring that you now have some number whose digital root is 9. When the age is added, the result will be a number with the same digital root as the person's age (thanks to the 9 principle). Since any two numbers whose difference is a multiple of 9 will have the same digital root, you can keep adding or subtracting 9 to the result you get until the age seems right. Anyone trying to work this out solely with algebra will realize that all you had to work with was 9x + a (x being the random number, a being the age), and it won't make much sense to them.
Probably the most deceptive version of mathematically determining someone's age involves only days and dates, so it seems like no math could be involved. For this version, you ask someone whether they've already had their birthday, and when it will be (or was) this year. Handing them a perpetual calendar, you ask them to look up the day of the week on which they were born. You ask them not to state the year, but rather just state the day of the week on which they were born. After this, you study their lifestyle and appearance clues, and announce exactly how old they are (or, alternatively, the year they were born)!
As an example, let's say someone tells you their birthday this year was on June 14th. After looking in the perpetual calendar, they tell you that they were born on a Monday. You study the person, and announce (correctly) that they're 43! Are you curious as to how this is possible?
Before you even think of performing this version, you need to be very comfortable performing the classic version of the Day of the Week for any Date feat. You'll also need to have the year keys memorized, as described in this article, in section 3.2.5: Memorizing the Key Numbers. Once you can do that, you'll fully prepared.
Always ask for their birthday in the current year, so that they don't slip and accidentally give you the year they were born. When you're given the month and day, add the month key and day. If you're given June 14th, as in our example, you would add 4 (the key number for June) to 14 (the day) to get 18. Next, when give the day of the week, convert that day of the week to it's key number, and immediately add 42. In our example, the day is Monday, you convert that to 1, and add 42 for a total of 43.
Next, we're going to subtract the month and day total from the adjusted weekday total, and cast out 7's to get a number from 0 to 6. Keeping with our example, we'd perform 43 (adjusted weekday number) minus 18 (the month and day total) to get 25. Casting out 7's (or figuring modulus 7, as math majors would call it) from 25, we get 4 (21 is the largest multiple of 7 under 25, so we do 25-21 to get 4). This single-digit number (4, in our example) will be used as a year key.
Math break: Why did we add 42? As you have learned (or will learn) from the classic date feat, adding or subtracting 7's doesn't substantially change the number (no more than adding or subtracting 7 days from now will change the day of the week), so we need a multiple of 7 for the day adjustment. The largest month key we'll be using is 6, and the largest day we can get is 31, so the largest month and day number we can get is 37 (6+31). From what multiple of 7 can we subtract numbers up to 37, and still get a positive answer? 42 is the answer (Any larger multiple of 7 could also be used, but 42 keeps it minimal).
Once you've figured the year key, you need to look at the person themselves (actually, you've probably been doing this all along). In what age range would you place them? To move our example along, let's say you think the person might be in their late 30s or early 40s. We need to find a year in the age range you've determined whose key number is the same as the year key number you've determined. Turning back to our example, is there a year roughly 40 years ago whose key number is 4?
First, try 40 years ago itself. That's 1968, whose key number is 1. OK, it's not 1968. How about later? 1969 is 2, 1970 is 3, and 1971 is 4! Does 37 work? Look at the person and ask yourself if 37 seems too young. Trying going back from 40 years ago, too. 1967 has a key of 6, 1966 is 5, and 1965 is 4! Hmm, they could also be 43. You can begin to see now why you both need to memorize the year keys, and why you need so much practice determining someone's age. In our example, you would have to ask yourself whether 43 or 37 seems more likely.
With this version, there are a few things you'd have to remember. First, if they're born in January or February, you'll want to add 1 to your key number when checking leap years. If someone who looks to be around 40 tells you they were born on January 12th (0+12=12) on a Friday (5+42=47), you'll get a key of 0 (47-12=35, 35 mod 7=0). Since 1968's key is 1, and your key is 0, you might think that 1968 won't work. However, if you temporarily add 1 to your key number when checking 1968, you'll realize that 1968 is a possibility (By the way, January 12th, 1968 did fall on a Friday). A similar principle applies when working with other centuries, except you'll need to add 0, 2, 4, or 6 to your key number (and not temporarily, as with leap years).
If you're trying to determine a woman's age, and you find two close possibilities (like our 37 and 43 examples above), always, Always, ALWAYS, ALWAYS give the younger of the two possibilities. She'll take it as a compliment, and will never correct you.
It should go without saying, that when performing any of these versions, you should warn someone that their age could possibly be revealed. If they don't approve, move on to something else!
2
## Review: Mind Blasters
Published on Thursday, July 17, 2008 in , , , , ,
Peter Duffie has been publishing a series of magic books gathering great material from English and Scottish magicians. This series includes England Up Close, Scotland Up Close, and Miraculous Minds (Scottish Mentalism). The newest entry in this series is Mind Blasters, which features mentalism routines from British magicians.
This book contains an amazing number of contributors, so instead of giving a full review for each and every effect, I'm going to focus on the routines that will interest regular Grey Matters readers.
The whole reason the book caught my attention in the first place was due to two things, the name Harold Cataquet, and the words Knight's Tour next to his name. In this version, you start the Knight's Tour from a chosen square, and have the spectator number the squares as you go (The first square is marked 1, the next is marked 2, and so on). Not only are you able to complete the Knight's Tour, but show that your path has resulted in a semimagic square! (Note: Unfortunately, a fully magic square resulting from a Knight's Tour is mathematically impossible.)
The work and analysis that went into this is just incredible. Not only is there a process for doing the Knight's Tour as described above, but there is a new mnemonic approach developed to aid this method. The one downside I see to this approach is that, in some cases, chess players will immediately know that something isn't quite right, so you risk losing their interest. How much of a concern this becomes is really up to your persona and your audiences. This write-up is definitely a new step forward in the Knight's Tour, and should be part of any research you do towards performing it. I should mention that I'm not just giving this a good review just because a link to my Knight's Tour in included in this routine (but that didn't hurt, either).
Shiv Duggal's Frequency is another routine of potential interest to my readers. This is a three-phase pseudo-memory routine with playing cards. After the deck is shuffled by a spectator, they look at a random card in the deck. After that, the performer memorizing the order of the deck, and is able to give the position of the selected card after it's named. In the 2nd phase, the performer memorizes half of the deck, has a card selected from it, placed in the other half, and using memory is able to find it. In the final phase, three cards are removed from the deck, and the performers looks at each of the remaining 49 cards, then recalls which cards weren't seen. Frequency isn't for budding magicians or mentalists, but if you can manage the skills required, this is a powerful routine for those who want to look like a memory expert using a legitimately shuffled deck.
1812, by Stephen Jones is a great way to predict the multi-digit outcome of an addition problem created at random by your audience members. The basic principle itself isn't new (it can be found in Secrets of Mental Math, Predict Perfect, and elsewhere), but I like Stephen Jones' handling of it. He also provides one of the clearest explanations I've seen of the principle, so you can customize the routine to your particular needs. Quick Tip: You can use the 1812 handling to significantly reduce the work required (From remembering 198 links, down to 6!) to perform my Psuedo-Phone Book Memorization feat.
Wayne Dobson's Fluke and Stephen Tucker's ACAARN are almost directly opposite tricks. They both take a step back from the pure Any Card At Any Number routine, where the spectator freely selects both the card and number, but still manage to become impressive effects on their own. In ACAARN, the performer writes down a location before a card is fully named by the spectators. You take the cards out of the case, count to the position, and the named card is at that position. In the case of Fluke, the magician brings out one deck containing a prediction, and another for the spectator to use. The spectator names any position from 1-52, and the magician/mentalist shows his prediction. When the spectator counts to their selected position, the predicted card is found at that location. This latter version does employ a gimmick that could be exposed, but it won't be hard for any regular reader of my blog to figure out how to eliminate the gimmick. Between the two, my personal preference is for Fluke, but the methods for either one are worth investigating, and both can inspire some ingenious variations.
There are a number of other routines which feature methods that you'll find intriguing if you enjoy this blog. Stephen Tucker's other routine, 58 to 1, is a divination of the name of an imaginary place chosen by the spectator. The technical demands are minimal, and there's nothing written down. However, from a presentational standpoint, the method can be obvious if not performed correctly. If you're seriously interested in doing this routine for a paying audience, I would recommend learning important presentational details about this type of method from Doug Dyment's Sign Language first.
If you liked the works of Leo Boudreau, check out Remote Viewing Magic. Les Johnson takes Boudreau's work in a new direction by using it to divine a chosen scene. Even as clean as 58 to 1 is, this is even cleaner and more direct.
In Roger Curzon's Devil Rides Out, a spectator chooses from several random numbers from a grid, above which is the picture of a devil. After the spectator adds up their chosen numbers, the devil disappears, simultaneously bringing a prediction into view. Even though some may consider the two ideas here to be old and uninteresting, I like the fact that they're used here to create a piece of mentalism that features a rare visual climax.
The final routine I'll mention here is Two-Person Book Test by Mike Hopley. If you perform mentalism with a partner, here's a routine that will be unfathomable to your audiences. The “sender” takes any borrowed book, and rapidly highlights several words from the first line of different pages using a pencil, all while saying absolutely nothing. The spectator takes the book back, and freely chooses any page with highlighted words. After the line is read to he “medium,” he or she is able to divine which of the words are highlighted. This can be repeated several times without the sender saying anything, or even needing to be in the same room! For a climax, the medium asks the person to concentrate on the last digit of a selected page number, and the medium is able to divine this, as well.
The method of communication will be very familiar to the readers of this blog, but is well hidden by the routining here. Even members of your audience who are familiar with the principle at use here will not recognize its use. While many routines like this must be constantly studied with a partner, this one is simple enough that, once you're assured that each of you has the basics down, you can decide to do on the fly. If you can convince your audience that you're not working together, the trick will come across as even more powerful.
As you can see from the list of routines, I haven't even mentioned a full quarter of what is in this book. Whether you're into mentalism, or even if you just want to research incredible new ways to use some classic ideas, I think you'll find that this book is a great value, especially considering the low price you're paying for the knowledge of so many prominent mentalists. Every year there is one book that stands out head and shoulders above the rest, and I believe Peter Duffie's Mind Blasters will be that book for 2008.
0
## Great Moments in Memory and Mental Math
Published on Sunday, July 13, 2008 in , , , , ,
Every so often, I run across great stories of geniuses who show exceptional mental prowess, often while still young. I'm going to share some of my favorites.
The first is probably the most famous, featuring a young Carl Friedrich Gauss. It may or may not be true, but I'll pass it on anyway. Gauss' teacher needed to occupy the students with busy work (some versions of the story say it was to talk to other faculty members, others say it was to take a nap), so he assigned the students to add up all the numbers from 1 to 100. Young Gauss was able to produce the correct answer, 5050, in just a few seconds, and without writing anything down!
How was that possible? Gauss took a look at the problem first, and was able to find a pattern that made the problem far simpler. His aha! moment came when he realized 1+100=101, as did 2+99, 3+98, and so on, all the way up to 50+51! Thus the problem became a matter of multiplying 50 by 101, which is much easier and quicker to do mentally. The bloggers at Better Explained can not only help you better understand how problems like this are solved, but also help you realize that one problem doesn't mean only one approach.
This next story is far more recent, and concerns Arthur Benjamin, whom many of you probably know from his TED video. In his book Secrets of Mental Math, Dr. Benjamin tells of a time when he was only 13, his teacher demonstrated how to work out a problem, and concluded with an answer of 1082. Apparently unhappy with what he saw as an unfinished problem, he blurted out that 1082 was simply 11,664!
The teacher was amazed that a 13-year-old could square a number like 108 in his head. The method he used, detailed in the above book, was roughly the same as this squaring method described on MathPath. When he explained his approach, and the teacher commented that she'd never run across this method, thoughts of being famous for this new discovery ran through his mind! Unfortunately, when he ran across the very same method in Martin Gardner's Mathematical Carnival, he realized this was not to be. I suggest reading the full story in Secrets of Mental Math, not only for the method, but also to learn how he discovered it on his own.
Dr. Solomon Golomb, who is a respected mathematician, engineer, and puzzle expert, had his own great moment in his college freshman biology class. The teacher was explaining that human DNA has 24 chromosomes (as was believed at the time), so the number of possible cells was 224. The instructor jokingly added that everyone in the class knew what number that was. Golomb immediately responded that it was 16,777,216. When the instructor didn't believe the number was right, and looked through his noted to find the correct number, he was stunned to find that Golomb was exactly right! Not surprisingly, Golomb instantly the nickname, “Einstein” from his fellow students.
But how did he know the answer? As it happened, Golomb had memorized the answers to all the exponential expressions from 11 up to 1010 as a personal challenge. While he hadn't didn't know the answer to 224 itself, he did realize that it was the same as 88, which he did know!
Learning the answers for exponential expressions up to 1010 is akin to learning the multiplication tables, but since the results go up into the billions, it seems much more impressive. You can learn to do this yourself in the brand new Exponential Expressions section of Grey Matters' Mental Gym.
Dr. Golomb's ability to stun his college instructor using memorized information reminds me of one of my own college experiences. In my case, I stunned my art history teacher with my ability to perfectly recall the names of 60 paintings, their respective artists, and the exact year each one was painted. Was I instantly regarded as an art genius? No, I was accused of cheating (click that link for the full story).
I'll wrap this post up with a story of memory skill that happened back in Ancient Greece in the 5th or 6th century BC, yet still resonates to this day. Simonides of Ceos, a respected Greek poet, was attending a well-attended banquet dinner. Like all orators of the time, Simonides possessed a great memory as a tool of his trade. After giving his speech toasting the guest of honor, Simonides went outside for a break. While he was there, the roof of the structure collapsed, killing everyone who was inside. As the excavation happened, city officials called for Simonides' help to identify the bodies.
Incredibly, Simonides was able to identify every body from the banquet dinner! He later realized that he was able to do this because he knew the people by where they were sitting. It was this experience that inspired him to create a formal memory system based on locations. This system is still used today, and is known as the Journey System. It involves visualizing something that represents the first point of your presentation in the first location of some familiar place (say, the bedroom where you wake up). To recall your second point, you mentally travel to the second point in your journey (say, the hall outside your bedroom), and see a different image there. In this way, it's possible to remember hundreds of different points without any notes.
The memory system itself isn't all that survives to this day. As orators throughout the ages used this same technique, it wasn't uncommon for them to refer to the journey through their mental structures as the speech was given. This tradition of references to mental journey stayed around long enough that it eventually came into the English language as, “In the first place . . . ,” “In the second place . . . ,” and so on.
1
## iFlipr
Published on Thursday, July 10, 2008 in , ,
I've been wondering when flashcards would meet the iPhone. I now wonder no longer.
The two have met at long last, thanks to iFlipr. It's a free flashcard site that, while specifically designed for the iPhone, will work on a regular computer, and even most older phones with internet access. Like many of the better flashcard programs and sites I've discussed, it employs the Leitner System, so that you're quizzed more frequently on items with which you have more trouble.
Rather than a long description of the features, check out their introductory video:
I like that it can be used on my desktop, but for portability I may just have to pick up an iPhone after all. Although, I'm probably not going to even get near an Apple Store tomorrow.
0
## Character: Definition
Published on Sunday, July 06, 2008 in ,
Once you've decided to pursue a performing character, you're faced with the even bigger question of where to begin.
The traditional advice is to “be yourself.” As far as preventing performers from copying each other, that's great advice, but beyond that, the advice is far too simplified.
The best place to start is by examining where you are presently. What types of routines do you like to perform? Look at those routines from your audience's point of view, and ask yourself what those routines suggest about the person performing them. As a group, do they suggest their performer is funny? Intelligent? Psychic? Suave? Creative? Quick-Thinking? As you do this, you'll probably find that some conflicting messages are being sent by your routines. If you have to be the funny man at one point, then the deep-thinking man of mystery shortly afterwards, you'll have to make a decision about which direction you really want to take.
It's very important to remember that your character is not you. It can range anywhere from a minor extension of you (such as Ricky Jay's scholarly persona), or it may be a complete caricature (such as Rudy Coby's otherworldly scientist persona). In Richard Tenace's article, The Base Character, he brings up some excellent basic questions that you should know about your character. The more detail you know about your character, the better.
As a matter of fact, questions are a great way to develop your character. You'll note from my Questions For Better Magic, especially the questions on character, that I'm a big supporter of asking better questions to get better answers. John B. Pyka, who usually charges much more to consult on theatrical character development, has generously shared some excellent character development questions at no charge! I find the killer/victim/witness question especially interesting, as that one single question will do more to bring focus to your act than any other I've seen.
Screenwriting resources, such as iFV's Character Questionnaire , can also provide some very thorough food for thought. I've previously mentioned Dramatica and Story Fanatic as great resources, too. While Dramatica does focus on larger, more fully developed stories, I've found their 12 essential questions a very useful tool. Story Fanatic's Thinking of Your Audience First post is very helpful in figuring out what effect the various decisions can have on your audience.
One fun way to develop your character is to put him or her through those internet personality tests you see so frequently. Regardless of their true psychological value, the test results can often prove valuable as inspiration. If such a quiz describes your character as having a trait which you think would make them less effective, you're free to discard it! Two of my favorite quizzes for this purpose are PersonalDNA, because of the rich descriptions in the results, and the Jung/Meyers-Briggs Personality tests (also known as the Meyer-Briggs Typology Index, or MBTI), due to the large amount of online resources that can help take a better look at your results. Once you know your MBTI type, you can do more research at sites like TypeLogic, Socionics, and Dave Nevins can provide plenty of detail to inspire you. One interesting source of inspiration is to look at other fictional characters with the same MBTI type as yours.
Keep in mind that creating a character is not a one-time event. Rudy Coby once noted that the secret to an effective performing character, once you began the process, was summed up in two ideas, developing your character, and getting as much time in front of an audience as possible to constantly determine the effectiveness of the character in order to refine it. This is one of those journeys where the journey itself is the treasure you seek.
0
## Character: Purpose
Published on Thursday, July 03, 2008 in ,
I talk quite a bit about performing memory feats, lightning calculations and magic, but I don't get to talk much about what performing really means. The most essential thing to performing is a character. But why it is so essential?
The short, dry answer can be found in section 2A of my questions developed from Strong Magic. Developing a character puts the focus on you, as opposed to the tricks, creates certain expectations, and makes it easier for the audience to care about you as a performer.
A more vivid explanation, which is worth hunting down, is Jon Armstrong's essay, Superhero Theory, as published in the December 2004 issue of Genii Magazine. In that essay, Jon uses superheroes as a good example of how to develop a magic character, as they both have extraordinary powers and need to be memorable in the public eye. Jon's main points in this work are:
• Superheroes are defined by their powers, to the extent that they're often named after them (e.g., Spiderman, the Flash).
• Audiences are familiar with what a particular superhero is capable of, so the heroes have certain expectations (without being made predictable), and they're made more memorable.
• Superheroes are limited by their powers (e.g., Batman doesn't have X-ray vision, Spiderman can't talk to sea creatures), creating focus, as well as opportunities for challenge.
• Speaking of limitations, many superheroes also have a weakness. How they deal with this weakness can be as engaging as how they use their superpowers.
In every successful superhero comic book, graphic novel, and movie, you'll find that these basic principles are employed repeatedly throughout. Do your audiences ever develop expectations, and get such a clear idea of who you are? Perhaps it's worth asking yourself if your act is up the standard of your favorite superhero.
If you perform close-up magic, you might think that this extra work is only needed for stage performers. As Richard Tenace will tell you, a close-up kind of actor is needed even more than on stage! Close-up workers have smaller props to hide behind, so a more defined character is even more essential.
Contrasted with what can happen due to a lack of character, you'll find that the work required to develop a character will reward you many times over in the response you get from your audiences and your clients.
If you're sold on developing your performing character, the next question is how to go about it. That will be the topic of my next post.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3450661301612854, "perplexity": 1026.3173280994906}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813622.87/warc/CC-MAIN-20180221123439-20180221143439-00151.warc.gz"}
|
https://math.answers.com/geometry/What_is_the_area_of_a_rectangle_that_is_4cm_long_and_2cm_wide
|
0
# What is the area of a rectangle that is 4cm long and 2cm wide?
Wiki User
2013-11-07 12:13:13
It is: 4*2 = 8 square cm
Wiki User
2013-11-07 12:13:13
Study guides
20 cards
## Properties that describe the appearance of matter are known as what properties
➡️
See all cards
3.84
38 Reviews
Earn +20 pts
Q: What is the area of a rectangle that is 4cm long and 2cm wide?
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9372237920761108, "perplexity": 3815.3283471548434}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662534693.28/warc/CC-MAIN-20220520223029-20220521013029-00023.warc.gz"}
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.