https://www.physicsforums.com/threads/orbital-motion.883553/
# I Orbital Motion

1. Aug 29, 2016 — Phys_Boi: So I'm really interested in orbital mechanics. I'm only 16, so my knowledge of physics is restricted to an intermediate level. If there is a planet with large mass and a planet with small mass, they are attracted to each other. So imagine a system where the large mass is fixed and the small object has a constant velocity. How can I model the path the object takes around the planet? (I'm a big programmer and would like to create a script for this.) I know this has to do with the balance between Fg and the velocity. Thank you for any help.

2. Aug 29, 2016 — Phys_Boi: I would also like to point out that I don't want only circular orbits. I would like the program to be able to compute all easily computable orbits.

3. Aug 29, 2016 — Staff: Mentor: Have fun! https://en.wikipedia.org/wiki/Orbital_mechanics

4. Aug 30, 2016 — Phys_Boi: Thank you!

5. Aug 30, 2016 — nasu: If the velocity is constant, there is no path around the planet: constant velocity means the trajectory is a straight line, so there is nothing to program. It is not enough to be good at programming to solve physics problems; it helps a lot to spend some time understanding the physics and the meaning of the terms used in physics.
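What the thread asks for can be sketched in a few lines of numerical integration. The following is a minimal illustration (not from the thread itself): it integrates Newtonian gravity around a fixed central mass with a leapfrog (kick-drift-kick) scheme, in arbitrary units with the gravitational parameter GM assumed to be 1. Any initial speed between circular speed and escape speed produces an ellipse rather than a circle.

```python
import math

# Gravitational parameter mu = G*M of the fixed central mass.
# Units are arbitrary; mu = 1.0 is an assumption for illustration.
MU = 1.0

def step_leapfrog(pos, vel, dt):
    """Advance one kick-drift-kick leapfrog step under central gravity."""
    def accel(p):
        r = math.hypot(p[0], p[1])
        return (-MU * p[0] / r**3, -MU * p[1] / r**3)
    ax, ay = accel(pos)
    vx = vel[0] + 0.5 * dt * ax          # half kick
    vy = vel[1] + 0.5 * dt * ay
    x = pos[0] + dt * vx                 # drift
    y = pos[1] + dt * vy
    ax, ay = accel((x, y))
    vx += 0.5 * dt * ax                  # half kick
    vy += 0.5 * dt * ay
    return (x, y), (vx, vy)

def simulate(pos, vel, dt, n_steps):
    """Return the list of positions along the trajectory."""
    path = [pos]
    for _ in range(n_steps):
        pos, vel = step_leapfrog(pos, vel, dt)
        path.append(pos)
    return path

# A circular orbit at r = 1 has speed sqrt(MU/r) = 1; speeds between
# that and escape speed sqrt(2) give ellipses.
path = simulate(pos=(1.0, 0.0), vel=(0.0, 1.0), dt=0.01, n_steps=700)
```

With `vel=(0.0, 1.2)` the same code traces an ellipse; at or above escape speed the object never loops around, which is the non-circular behaviour the second post asks about. Leapfrog is chosen over plain Euler because it keeps orbits from spiralling in or out over many steps.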
https://zbmath.org/?q=an:0863.60052
# zbMATH — the first resource for mathematics

Large deviations for solutions of stochastic equations. (English. Russian original) Zbl 0863.60052

Theory Probab. Appl. 40, No. 4, 660-678 (1995); translation from Teor. Veroyatn. Primen. 40, No. 4, 764-785 (1995).

Let $$D([0,T]; R^d)$$ be a Skorokhod space, $$\mu^\varepsilon(A)=P\{\xi^\varepsilon(\cdot)\in A\}$$, $$A\in {\mathcal B}(D([0,T]; R^d))$$, $$\varepsilon > 0$$, be a family of probability measures, corresponding to the $$d$$-dimensional locally infinitely divisible processes $$\xi^\varepsilon(t)$$, $$t\geq 0$$, $$\varepsilon > 0$$, defined on some filtered probability space. A general principle of large deviations is proved for the family $$\{\mu^\varepsilon$$, $$\varepsilon>0\}$$ in terms of the local characteristics of $$\xi^\varepsilon$$, $$\varepsilon>0$$. Some special cases are discussed in detail.

##### MSC:

60H10 Stochastic ordinary differential equations (aspects of stochastic analysis)

60G48 Generalizations of martingales

60F10 Large deviations
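For reference, a large deviation principle for the family $$\{\mu^\varepsilon\}$$ has, in the standard textbook formulation, the following form; this is a sketch of the general definition, not a transcription of the paper's theorem, which is stated in terms of the local characteristics of $$\xi^\varepsilon$$:

```latex
% Standard form of a large deviation principle with rate function I;
% the identification of I from the local characteristics is the
% content of the paper and is not reproduced here.
\[
\limsup_{\varepsilon \to 0} \varepsilon \log \mu^\varepsilon(F)
  \le -\inf_{x \in F} I(x)
  \quad \text{for every closed } F \subseteq D([0,T];\mathbb{R}^d),
\]
\[
\liminf_{\varepsilon \to 0} \varepsilon \log \mu^\varepsilon(G)
  \ge -\inf_{x \in G} I(x)
  \quad \text{for every open } G \subseteq D([0,T];\mathbb{R}^d).
\]
```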
https://www.science.gov/topicpages/b/beam+ct+kv.html
#### Sample records for beam ct kv

1. Evaluation of on-board kV cone beam CT (CBCT)-based dose calculation

Yang, Yong; Schreibmann, Eduard; Li, Tianfang; Wang, Chuang; Xing, Lei

2007-02-01

On-board CBCT images are used to generate patient geometric models to assist patient setup. The image data can also, potentially, be used for dose reconstruction in combination with the fluence maps from the treatment plan. Here we evaluate the achievable accuracy in using a kV CBCT for dose calculation. Relative electron density as a function of HU was obtained for both planning CT (pCT) and CBCT using a Catphan-600 calibration phantom. The CBCT calibration stability was monitored weekly for 8 consecutive weeks. A clinical treatment planning system was employed for pCT- and CBCT-based dose calculations and subsequent comparisons. Phantom and patient studies were carried out. In the former study, both Catphan-600 and pelvic phantoms were employed to evaluate the dosimetric performance of the full-fan and half-fan scanning modes. To evaluate the dosimetric influence of motion artefacts commonly seen in CBCT images, the Catphan-600 phantom was scanned with and without cyclic motion using the pCT and CBCT scanners. The doses computed based on the four sets of CT images (pCT and CBCT with/without motion) were compared quantitatively. The patient studies included a lung case and three prostate cases. The lung case was employed to further assess the adverse effect of intra-scan organ motion. Unlike the phantom study, the pCT of a patient is generally acquired at the time of simulation, and the anatomy may differ from that of the CBCT acquired at the time of treatment delivery because of organ deformation. To tackle this problem, we introduced a set of modified CBCT images (mCBCT) for each patient, which possess the geometric information of the CBCT but the electron density distribution mapped from the pCT with the help of BSpline deformable image registration software.
In the patient study, the dose computed with the mCBCT was used as a surrogate of the 'ground truth'. We found that the CBCT electron density calibration curve differs moderately from that of pCT. No

2. Evaluation of on-board kV cone beam CT (CBCT)-based dose calculation.

PubMed

Yang, Yong; Schreibmann, Eduard; Li, Tianfang; Wang, Chuang; Xing, Lei

2007-02-07

On-board CBCT images are used to generate patient geometric models to assist patient setup. The image data can also, potentially, be used for dose reconstruction in combination with the fluence maps from the treatment plan. Here we evaluate the achievable accuracy in using a kV CBCT for dose calculation. Relative electron density as a function of HU was obtained for both planning CT (pCT) and CBCT using a Catphan-600 calibration phantom. The CBCT calibration stability was monitored weekly for 8 consecutive weeks. A clinical treatment planning system was employed for pCT- and CBCT-based dose calculations and subsequent comparisons. Phantom and patient studies were carried out. In the former study, both Catphan-600 and pelvic phantoms were employed to evaluate the dosimetric performance of the full-fan and half-fan scanning modes. To evaluate the dosimetric influence of motion artefacts commonly seen in CBCT images, the Catphan-600 phantom was scanned with and without cyclic motion using the pCT and CBCT scanners. The doses computed based on the four sets of CT images (pCT and CBCT with/without motion) were compared quantitatively. The patient studies included a lung case and three prostate cases. The lung case was employed to further assess the adverse effect of intra-scan organ motion. Unlike the phantom study, the pCT of a patient is generally acquired at the time of simulation, and the anatomy may differ from that of the CBCT acquired at the time of treatment delivery because of organ deformation.
To tackle this problem, we introduced a set of modified CBCT images (mCBCT) for each patient, which possess the geometric information of the CBCT but the electron density distribution mapped from the pCT with the help of BSpline deformable image registration software. In the patient study, the dose computed with the mCBCT was used as a surrogate of the 'ground truth'. We found that the CBCT electron density calibration curve differs moderately from that of pCT. No

3. SU-E-J-214: Comparative Assessment On IGRT On Partial Bladder Cancer Treatment Between CT-On-Rails (CTOR) and KV Cone Beam CT (CBCT)

SciTech Connect

Lin, T; Ma, C

2014-06-01

4. Evaluation of the effects of sagging shifts on isocenter accuracy and image quality of cone-beam CT from kV on-board imagers.

PubMed

2009-07-17

5. [Image guided radiotherapy with the Cone Beam CT kV (Elekta): experience of the Léon Bérard centre].

PubMed

Pommier, P; Gassa, F; Lafay, F; Claude, L

2009-09-01

Image-guided radiotherapy with the Cone Beam CT kV (CBCT-kV) developed by Elekta has been implemented at the Centre Léon Bérard since November 2006. The treatment procedure is presented and detailed for prostate cancer IGRT and non-small cell lung cancer (NSCLC) stereotactic radiotherapy (SRT). CBCT-kV is routinely used for SRT, selected paediatric cancers, all prostate carcinomas, primary brain tumours, and head and neck cancers that do not require nodal irradiation. Thirty-five to 40 patients are treated within a daily 11-hour period. The general procedure for 3D image acquisition and analysis is described. The CBCT-kV made it possible to identify about 10% of prostate cancer patients for whom positioning with bone-based 2D images alone would have led to an unacceptable dose distribution for at least one session. SRT is now used routinely for inoperable NSCLC.
The ease of implementing CBCT-kV imaging and its expected medical benefit should lead to a rapid diffusion of this technology, which is also undergoing prospective and multicentric medico-economic evaluation.

6. Dose Calculation on KV Cone Beam CT Images: An Investigation of the Hu-Density Conversion Stability and Dose Accuracy Using the Site-Specific Calibration

SciTech Connect

Rong Yi

2010-10-01

Precise calibration of Hounsfield units (HU) to electron density (HU-density) is essential to dose calculation. On-board kV cone beam computed tomography (CBCT) imaging is used predominantly for patient positioning, but will potentially be used for dose calculation. The impacts of varying 3 imaging parameters (mAs, source-imager distance [SID], and cone angle) and phantom size on the HU number accuracy and HU-density calibrations for CBCT imaging were studied. We proposed a site-specific calibration method to achieve higher accuracy in CBCT image-based dose calculation. Three configurations of the Computerized Imaging Reference Systems (CIRS) water-equivalent electron density phantom were used to simulate sites including head, lungs, and lower body (abdomen/pelvis). The planning computed tomography (CT) scan was used as the baseline for comparisons. CBCT scans of these phantom configurations were performed using a Varian Trilogy™ system in a precalibrated mode with fixed tube voltage (125 kVp) but varied mAs, SID, and cone angle. An HU-density curve was generated and evaluated for each set of scan parameters. Three HU-density tables generated using different phantom configurations with the same imaging parameter settings were selected for dose calculation on CBCT images for an accuracy comparison. Changing mAs or SID had a small impact on HU numbers. For adipose tissue, the HU discrepancy from the baseline was 20 HU in a small phantom, but 5 times larger in a large phantom. Yet, reducing the cone angle significantly decreases the HU discrepancy.
The HU-density table was also affected accordingly. A dose comparison between CT and CBCT image-based plans showed that using the site-specific HU-density tables to calibrate CBCT images of different sites improves the dose accuracy to approximately 2%. Our phantom study showed that CBCT imaging can be a feasible option for dose computation in an adaptive radiotherapy approach if the site

7. Dose calculation on kV cone beam CT images: an investigation of the Hu-density conversion stability and dose accuracy using the site-specific calibration.

PubMed

Rong, Yi; Smilowitz, Jennifer; Tewatia, Dinesh; Tomé, Wolfgang A; Paliwal, Bhudatt

2010-01-01

Precise calibration of Hounsfield units (HU) to electron density (HU-density) is essential to dose calculation. On-board kV cone beam computed tomography (CBCT) imaging is used predominantly for patient positioning, but will potentially be used for dose calculation. The impacts of varying 3 imaging parameters (mAs, source-imager distance [SID], and cone angle) and phantom size on the HU number accuracy and HU-density calibrations for CBCT imaging were studied. We proposed a site-specific calibration method to achieve higher accuracy in CBCT image-based dose calculation. Three configurations of the Computerized Imaging Reference Systems (CIRS) water-equivalent electron density phantom were used to simulate sites including head, lungs, and lower body (abdomen/pelvis). The planning computed tomography (CT) scan was used as the baseline for comparisons. CBCT scans of these phantom configurations were performed using a Varian Trilogy system in a precalibrated mode with fixed tube voltage (125 kVp) but varied mAs, SID, and cone angle. An HU-density curve was generated and evaluated for each set of scan parameters. Three HU-density tables generated using different phantom configurations with the same imaging parameter settings were selected for dose calculation on CBCT images for an accuracy comparison.
Changing mAs or SID had a small impact on HU numbers. For adipose tissue, the HU discrepancy from the baseline was 20 HU in a small phantom, but 5 times larger in a large phantom. Yet, reducing the cone angle significantly decreases the HU discrepancy. The HU-density table was also affected accordingly. A dose comparison between CT and CBCT image-based plans showed that using the site-specific HU-density tables to calibrate CBCT images of different sites improves the dose accuracy to approximately 2%. Our phantom study showed that CBCT imaging can be a feasible option for dose computation in an adaptive radiotherapy approach if the site-specific calibration

8. Experimental validation of a Monte Carlo-based kV x-ray projection model for the Varian linac-mounted cone-beam CT imaging system

Lazos, Dimitrios; Pokhrel, Damodar; Su, Zhong; Lu, Jun; Williamson, Jeffrey F.

2008-03-01

Fast and accurate modeling of cone-beam CT (CBCT) x-ray projection data can improve CBCT image quality either by linearizing projection data for each patient prior to image reconstruction (thereby mitigating detector blur/lag, spectral hardening, and scatter artifacts) or indirectly by supporting rigorous comparative simulation studies of competing image reconstruction and processing algorithms. In this study, we compare Monte Carlo-computed x-ray projections with projections experimentally acquired from our Varian Trilogy CBCT imaging system for phantoms of known design. Our recently developed Monte Carlo photon-transport code, PTRAN, was used to compute primary and scatter projections for a cylindrical phantom of known diameter (NA model 76-410), with and without bow-tie filter and antiscatter grid, for both full- and half-fan geometries. These simulations were based upon measured 120 kVp spectra, beam profiles, and the flat-panel detector (4030CB) point-spread function. Compound Poisson-process noise was simulated based upon measured beam output.
Computed projections were compared to flat- and dark-field corrected 4030CB images, where scatter profiles were estimated by subtracting narrow-axial-width from full-axial-width 4030CB profiles. In agreement with the literature, the difference between simulated and measured projection data is of the order of 6-8%. The measurement of the scatter profiles is affected by the long tails of the detector PSF. Higher accuracy can be achieved mainly by improving the beam modeling and correcting the nonlinearities induced by the detector PSF.

9. Monitoring Dosimetric Impact of Weight Loss With Kilovoltage (KV) Cone Beam CT (CBCT) During Parotid-Sparing IMRT and Concurrent Chemotherapy

SciTech Connect

Ho, Kean Fatt; Marchant, Tom; Moore, Chris; Webster, Gareth; Rowbottom, Carl; Penington, Hazel; Lee, Lip; Yap, Beng; Sykes, Andrew; Slevin, Nick

2012-03-01

Purpose: Parotid-sparing head-and-neck intensity-modulated radiotherapy (IMRT) can reduce long-term xerostomia. However, patients frequently experience weight loss and tumor shrinkage during treatment. We evaluate the use of kilovoltage (kV) cone beam computed tomography (CBCT) for dose monitoring and examine whether the dosimetric impact of such changes on the parotid and critical neural structures warrants replanning during treatment. Methods and materials: Ten patients with locally advanced oropharyngeal cancer were treated with contralateral parotid-sparing IMRT concurrently with platinum-based chemotherapy. Mean doses of 65 Gy and 54 Gy were delivered to clinical target volume (CTV)1 and CTV2, respectively, in 30 daily fractions. CBCT was prospectively acquired weekly. Each CBCT was coregistered with the planned isocenter. The spinal cord, brainstem, parotids, larynx, and oral cavity were outlined on each CBCT. Dose distributions were recalculated on the CBCT after correcting the gray scale to provide accurate Hounsfield calibration, using the original IMRT plan configuration.
Results: Planned contralateral parotid mean doses were not significantly different from those delivered during treatment (p > 0.1). Ipsilateral and contralateral parotids showed a mean reduction in volume of 29.7% and 28.4%, respectively. There was no significant difference between planned and delivered maximum dose to the brainstem (p = 0.6) or spinal cord (p = 0.2), or mean dose to the larynx (p = 0.5) and oral cavity (p = 0.8). End-of-treatment mean weight loss was 7.5 kg (8.8% of baseline weight). Despite a ≥10% weight loss in 5 patients, there was no significant dosimetric change affecting the contralateral parotid and neural structures. Conclusions: Although patient weight loss and parotid volume shrinkage were observed, overall there was no significant excess dose to the organs at risk. No replanning was felt necessary for this patient cohort, but a larger patient sample will be investigated

10. SU-E-I-07: Response Characteristics and Signal Conversion Modeling of KV Flat-Panel Detector in Cone Beam CT System

SciTech Connect

Wang, Yu; Cao, Ruifen; Pei, Xi; Wang, Hui; Hu, Liqin

2015-06-15

Purpose: The flat-panel detector response characteristics are investigated to optimize the scanning parameters, considering image quality and radiation dose. A signal conversion model is also established to predict tumor shape and physical thickness changes. Methods: With the ELEKTA XVI system, planar images of a 10 cm water phantom were obtained under different image acquisition conditions, including tube voltage, electric current, exposure time, and number of frames. The averaged responses of a square area in the center were analyzed using Origin 8.0. The response characteristics for each scanning parameter were depicted by different fitting types. The transmission measured for 10 cm of water was compared to a Monte Carlo simulation. Using the quadratic calibration method, a series of images of variable-thickness water phantoms were acquired to derive the signal conversion model.
A 20 cm wedge water phantom with 2 cm step thickness was used to verify the model. Finally, the stability and reproducibility of the model were explored over a four-week period. Results: The gray values at the image center all decreased with increasing values of the image acquisition parameters. The fitting types adopted were linear, quadratic polynomial, Gaussian, and logarithmic, with R-squared values of 0.992, 0.995, 0.997, and 0.996, respectively. For the 10 cm water phantom, the measured transmission showed better uniformity than the Monte Carlo simulation. The wedge phantom experiment showed that the prediction error for radiological thickness changes was in the range of (-4 mm, 5 mm). The signal conversion model remained consistent over a period of four weeks. Conclusion: The flat-panel response decreases with increasing scanning parameters. The preferred scanning parameter combination was 100 kV, 10 mA, 10 ms, 15 frames. It is suggested that the signal conversion model could effectively be used for tumor shape change and radiological thickness prediction. Supported by

11. [New methods in the treatment of localized prostate cancer: use of dynamic arc therapy and kV cone-beam CT positioning].

PubMed

Szappanos, Szabolcs; Farkas, Róbert; Lőcsei, Zoltán; László, Zoltán; Kalincsák, Judit; Bellyei, Szabolcs; Sebestyén, Zsolt; Csapó, László; Sebestyén, Klára; Halász, Judit; Musch, Zoltán; Beöthe, Tamás; Farkas, László; Mangel, László

2014-08-10

Introduction: Prostate cancer is a malignancy of older age and of the developed world. For localized prostate cancer, definitive radiotherapy plays a major role alongside surgical management.
Aim: Using the Novalis TX accelerator installed at the authors' institute, so-called intensity-modulated radiotherapy in its dynamic arc form, together with image-guided radiotherapy using integrated kilovoltage cone-beam computed tomography providing three-dimensional soft-tissue image verification, was introduced; the authors report their first experience with these methods. Method: Between December 2011 and February 2013, following dose escalation, 102 treatments delivered with dynamic arc irradiation were performed; three-dimensional conformal treatment plans were then also prepared for 10 selected low-risk and 10 high-risk patients (mean age 72.5 years). With identical target-volume coverage, the dose burden to the organs at risk was compared. Results: The dynamic arc treatments achieved significantly lower doses to the organs at risk, which is also supported by the favourable early side-effect profile. Conclusions: The dynamic arc form of intensity-modulated radiotherapy has become a safely applied standard treatment modality at the authors' institute. Further investigation of late side effects and local control is needed. Orv. Hetil., 2014, 155(32), 1265–1272.

12. An algorithm to extract three-dimensional motion by marker tracking in the kV projections from an on-board imager: four-dimensional cone-beam CT and tumor tracking implications.

PubMed

2011-02-01

The purpose of this work is to extract three-dimensional (3D) motion trajectories of internal implanted and external skin-attached markers from kV cone-beam projections and to reduce image artifacts from patient motion in cone-beam computed tomography (CBCT) from an on-board imager. Cone-beam radiographic projections were acquired for a mobile phantom and for liver patients with internal implanted and external skin-attached markers.
An algorithm was developed to automatically find the positions of the markers in the projections. It uses normalized cross-correlation between a template image of a metal seed marker and the projections to find the marker position. From these positions and the time-tagged angular views, the 3D motion trajectory of each marker was obtained over a time interval of nearly one minute, which is the time required for scanning. This marker trajectory was used to remap the pixels of the projections to eliminate motion. The motion-corrected projections were then used to reconstruct the CBCT. An algorithm was developed to extract 3D motion trajectories of internal and external markers from cone-beam projections using a kV monoscopic on-board imager. This algorithm was tested and validated using a mobile phantom and patients with liver masses that had radio-markers implanted in the tumor and attached to the skin. The extracted motion trajectories were used to investigate motion correlation between internal and external markers in liver patients. Image artifacts from respiratory motion were reduced in CBCT reconstructed from cone-beam projections that were preprocessed to remove motion shifts obtained from marker tracking. With this method, motion-related image artifacts such as blurring and spatial distortion were reduced, and contrast and position resolution were improved significantly in CBCT reconstructed from motion-corrected projections. Furthermore, correlated internal and external marker 3D motion tracks obtained from the kV projections might be useful for 4DCBCT

13. [Radiation output evaluation of kilovoltage cone beam CT unit].

PubMed

Wang, Yunlai; Liao, Xiongfei; Ge, Ruigang

2011-09-01

To evaluate the radiation output and stability of a linac-integrated kV cone beam CT unit. Air kermas in radiographic mode were measured with a 0.6 cc ion chamber and Unidos electrometer for the Synergy-integrated XVI kV cone beam CT unit. Air kermas vs. image frames were measured in fluoroscopic mode.
Output stability and depth doses were measured. The air kerma increased quadratically with increasing tube voltage, while increasing linearly with tube current, exposure time, and number of frames. The radiation output stability and its change with the gantry angle were within ±1%. The percentage depth dose increased with higher tube voltage. The radiation output of the XVI is stable. The radiation outputs change considerably with the preset parameters. Parameters should be optimally chosen to reduce the patient dose.

14. Cardiac cone-beam CT

SciTech Connect

Manzke, Robert. E-mail: [email protected]

2005-10-15

This doctoral thesis addresses imaging of the heart with retrospectively gated helical cone-beam computed tomography (CT). A thorough review of the CT reconstruction literature is presented, in combination with a historic overview of cardiac CT imaging and a brief introduction to other cardiac imaging modalities. The thesis includes a comprehensive chapter on the theory of CT reconstruction, familiarizing the reader with the problem of cone-beam reconstruction. The anatomic and dynamic properties of the heart are outlined, and techniques to derive the gating information are reviewed. With the extended cardiac reconstruction (ECR) framework, a new approach is presented for heart-rate-adaptive gated helical cardiac cone-beam CT reconstruction. Reconstruction assessment criteria such as the temporal resolution, the homogeneity in terms of the cardiac phase, and the smoothness at cycle-to-cycle transitions are developed. Several reconstruction optimization approaches are described: an approach for the heart-rate-adaptive optimization of the temporal resolution is presented; streak artifacts at cycle-to-cycle transitions can be minimized by using an improved cardiac weighting scheme; and the optimal quiescent cardiac phase for the reconstruction can be determined automatically with the motion map technique.
Results for all optimization procedures applied to ECR are presented and discussed based on patient and phantom data. The ECR algorithm is analyzed for the larger detector arrays of future cone-beam systems through an extensive simulation study based on a four-dimensional cardiac CT phantom. The results of the scientific work are summarized and an outlook proposing future directions is given. The thesis is available for public download at www.cardiac-ct.net.

15. SU-E-I-23: A General KV Constrained Optimization of CNR for CT Abdominal Imaging

SciTech Connect

Weir, V; Zhang, J

2015-06-15

Purpose: While tube current modulation has been well accepted for CT dose reduction, kV adjustment in clinical settings is still at an early stage, mainly because of the limited kV options of most current CT scanners. kV adjustment can potentially reduce radiation dose and optimize image quality. This study aims to optimize CT abdominal imaging acquisition under the assumption of a continuously adjustable kV, with the goal of providing the best contrast-to-noise ratio (CNR). Methods: For a given dose (CTDIvol) level, the CNRs at different kV and pitch settings were measured with an ACR GAMMEX phantom. The phantom was scanned on a Siemens Sensation 64 scanner and a GE VCT 64 scanner. A constrained mathematical optimization was used to find the kV that led to the highest CNR for the anatomy and pitch setting. Parametric equations were obtained from polynomial fits of plots of kV vs. CNR. A suitable constraint region for the optimization was chosen. Subsequent optimization yielded a peak CNR at a particular kV for different collimations and pitch settings. Results: The constrained mathematical optimization approach yields kV values of 114.83 and 113.46, with CNRs of 1.27 and 1.11 at pitches of 1.2 and 1.4, respectively, for the Siemens Sensation 64 scanner with a collimation of 32 x 0.625 mm.
An optimized kV of 134.25 with a CNR of 1.51 is obtained for the GE VCT 64-slice scanner with a collimation of 32 x 0.625 mm and a pitch of 0.969. At a pitch of 0.516 and 32 x 0.625 mm, an optimized kV of 133.75 and a CNR of 1.14 were found for the GE VCT 64-slice scanner. Conclusion: CNR in CT image acquisition can be further optimized with a continuous kV option instead of the current discrete or fixed kV settings. A continuous kV option is key for individualized CT protocols.

16. Feasibility of Dose Reduction Using Novel Denoising Techniques for Low kV (80 kV) CT Enterography: Optimization and Validation

PubMed Central

Guimarães, Luís S; Fletcher, Joel G; Yu, Lifeng; Huprich, James E; Fidler, Jeff L.; Manduca, Armando; Ramirez-Giraldo, Juan Carlos; Holmes, David R.; McCollough, Cynthia H

2010-01-01

Rationale and Objectives: To optimize and validate projection space denoising (PSDN) strategies for application to 80 kV computed tomography (CT) data to achieve 50% dose reduction. Materials and Methods: This retrospective HIPAA-compliant study had IRB approval. We utilized 80 kV image data (mean CTDIvol 7.9 mGy) obtained from dual-source dual-energy CTE exams in 42 patients. For each exam, nine 80 kV image datasets were reconstructed using PSDN (3 levels of intensity) ± image-based denoising and compared to commercial reconstruction kernels. For optimization, qualitative analysis selected optimal denoising strategies, with quantitative analysis measuring image contrast, noise, and sharpness (FWHM bowel wall thickness, maximum CT number gradient). For validation, two radiologists examined image quality, comparing low-dose 80 kV optimally denoised images to full-dose mixed-kV images. Results: PSDN algorithms generated the best 80 kV image quality (41/42 patients), while the commercial kernels produced the worst (39/42, p < 0.001). Overall, 80 kV PSDN approaches resulted in higher contrast (mean 332 HU vs. 290 HU) and slightly less noise (mean 20 HU vs.
26 HU), but slightly decreased image sharpness (relative bowel wall thickness, 1.069 vs. 1.000) compared to full-dose mixed-kV images. The mean image quality score for full-dose CTE images was 4.9, compared to 4.5 for optimally denoised half-dose 80 kV CTE images and 3.1 for non-denoised 80 kV CTE images (p < 0.001). Conclusion: Optimized denoising strategies improve the quality of 80 kV CT enterography images such that CT data obtained at 50% of routine dose levels approaches the image quality of full-dose exams. PMID:20832023

17. The adaptation of megavoltage cone beam CT for use in standard radiotherapy treatment planning

Thomas, T. Hannah Mary; Devakumar, D.; Purnima, S.; Ravindran, B. Paul

2009-04-01

Potential areas where megavoltage computed tomography (MVCT) could be used are second- and third-phase treatment planning in 3D conformal radiotherapy and IMRT, adaptive radiation therapy, single-fraction palliative treatment, and the treatment of patients with metal prostheses. A feasibility study was done on using MV cone beam CT (CBCT) images generated by proprietary 3D reconstruction software based on the FDK algorithm for megavoltage treatment planning. The reconstructed images were converted to a DICOM file set. The pixel values of megavoltage cone beam computed tomography (MV CBCT) were rescaled to those of kV CT for use with a treatment planning system. A calibration phantom was designed and developed for verification of geometric accuracy and CT number calibration. The distance measured between two marker points on the CBCT image and the physical dimension on the phantom were in good agreement. Point dose verification for a 10 cm × 10 cm beam at a gantry angle of 0° and SAD of 100 cm was performed for a 6 MV beam for both kV and MV CBCT images. The point doses were found to vary within ±6.1% of the dose calculated from the kV CT image. The isodose curves for 6 MV for both kV CT and MV CBCT images were within 2% and 3 mm distance-to-agreement.
A plan with three beams was performed on MV CBCT, simulating a treatment plan for cancer of the pituitary. The distribution obtained was compared with that obtained using the kV CT. This study has shown that treatment planning with MV cone beam CT images is feasible. 19. Commissioning kilovoltage cone-beam CT beams in a radiation therapy treatment planning system. PubMed Alaei, Parham; Spezi, Emiliano 2012-11-08 The feasibility of accounting for the dose from kilovoltage cone-beam CT in treatment planning has been discussed previously for a single cone-beam CT (CBCT) beam from one manufacturer. Modeling the beams and computing the dose from the full set of beams produced by a kilovoltage cone-beam CT system requires extensive beam data collection and verification, and is the purpose of this work. The beams generated by the Elekta X-ray volume imaging (XVI) kilovoltage CBCT (kV CBCT) system for various cassettes and filters have been modeled in the Philips Pinnacle treatment planning system (TPS) and used to compute dose to stack and anthropomorphic phantoms. The results were then compared to measurements made using thermoluminescent dosimeters (TLDs) and Monte Carlo (MC) simulations. The agreement between modeled and measured depth-dose and cross profiles is within 2% at depths beyond 1 cm for depth-dose curves, and for regions within the beam (excluding penumbra) for cross profiles. The agreements between TPS-calculated doses, TLD measurements, and Monte Carlo simulations are generally within 5% in the stack phantom and 10% in the anthropomorphic phantom, with larger variations observed for some of the measurement/calculation points. Dose computation using modeled beams is reasonably accurate, except for regions that include bony anatomy. Inclusion of this dose in treatment plans can lead to more accurate dose prediction, especially when the doses to organs at risk are of importance. 20. Engineering of beam direct conversion for a 120-kV, 1-MW ion beam NASA Technical Reports Server (NTRS) Barr, W. L.; Doggett, J. N.; Hamilton, G. W.; Kinney, J. D.; Moir, R. W.
1977-01-01 Practical systems for beam direct conversion are required to recover the energy from ion beams at high efficiency and at very high beam power densities in the environment of a high-power neutral-injection system. Such an experiment is now in progress using a 120-kV beam with a maximum total current of 20 A. After neutralization, the H(+) component to be recovered will have a power of approximately 1 MW. A system testing these concepts has been designed and tested at 15 kV, 2 kW in preparation for the full-power tests. The engineering problems involved in the full-power tests affect electron suppression, gas pumping, voltage holding, diagnostics, and measurement conditions. Planning for future experiments at higher power includes the use of cryopumping and electron suppression by a magnetic field rather than by an electrostatic field. Beam direct conversion for large fusion experiments and reactors will save millions of dollars in the cost of power supplies and electricity and will dispose of the charged beam under conditions that may not be possible by other techniques. 1. Minimizing image noise in on-board CT reconstruction using both kilovoltage and megavoltage beam projections. PubMed Zhang, Junan; Yin, Fang-Fang 2007-09-01 We studied a recently proposed aggregated CT reconstruction technique which combines the complementary advantages of kilovoltage (kV) and megavoltage (MV) x-ray imaging. Various phantoms were imaged to study the effects of beam orientations and geometry of the imaging object on image quality of reconstructed CT. It was shown that the quality of aggregated CT was correlated with both kV and MV beam orientations and the degree of this correlation depended upon the geometry of the imaging object. The results indicated that the optimal orientations were those when kV beams pass through the thinner portion and MV beams pass through the thicker portion of the imaging object. 
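The dependence of traversed thickness on beam orientation, which drives the optimal kV/MV arrangement described in the aggregated-CT abstract above, can be illustrated with a simple elliptical patient model; the 20 cm by 15 cm semi-axes are hypothetical:

```python
import math

# Sketch: chord length of a central ray through an elliptical cross-section
# with semi-axes a (lateral) and b (anterior-posterior). Illustrates why a
# beam along the thin axis traverses less material than a lateral beam.
def central_path_length(a, b, angle_deg):
    t = math.radians(angle_deg)
    # Distance from the centre to the ellipse boundary along the ray direction.
    r = 1.0 / math.sqrt((math.cos(t) / a) ** 2 + (math.sin(t) / b) ** 2)
    return 2.0 * r

lateral = central_path_length(a=20.0, b=15.0, angle_deg=0)   # along a: 40 cm
ap = central_path_length(a=20.0, b=15.0, angle_deg=90)       # along b: 30 cm
```

On such a model, routing the low-penetration kV beams through the thinner (AP) direction and the MV beams through the thicker (lateral) direction matches the optimal orientations reported above.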
A special preprocessing procedure was also developed to perform contrast conversions between kV and MV information prior to image reconstruction. The performance of two reconstruction methods, one filtered backprojection method and one iterative method, was compared. The effects of projection number, beam truncation, and contrast conversion on the CT image quality were investigated. 2. Feasibility study on effect and stability of adaptive radiotherapy on kilovoltage cone beam CT. PubMed Yadav, Poonam; Ramasubramanian, Velayudham; Paliwal, Bhudatt R 2011-09-01 We have analyzed the stability of the CT-to-density curve of the kilovoltage cone-beam computed tomography (kV CBCT) imaging modality over a period of six months. We also investigated the viability of using an image value to density table (IVDT) generated at a different time for adaptive radiotherapy treatment planning. The consequences of target volume change and the efficacy of kV CBCT for adaptive planning are investigated. Materials and Methods: A standard electron density phantom was used to establish the CT to electron density calibration curve. The CT-to-density curve for the CBCT images was observed over a period of six months. The kV CBCT scans used for adaptive planning were acquired with an on-board imager system mounted on a "Trilogy" linear accelerator. kV CBCT images were acquired for daily setup registration. The effect of variations in the CT-to-density curve was studied on two clinical cases: prostate and lung. Soft tissue contouring is superior in kV CBCT scans in comparison to megavoltage CT (MVCT) scans. The CT-to-density curve for the CBCT images was found to be steady over six months. Due to the difficulty of attaining reproducibility in daily setup for prostate treatment, there is a day-to-day difference in dose to the rectum and bladder. There is no need to generate a new CT-to-density curve for adaptive planning on the kV CBCT images.
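A CT-number-to-density table like the IVDT discussed above is typically applied as a piecewise-linear lookup; a minimal sketch with illustrative calibration points, not the commissioning data:

```python
# Sketch: piecewise-linear image-value-to-density table (IVDT) lookup, as
# used when converting CT/CBCT numbers to densities for dose calculation.
# Calibration points below are illustrative placeholders.
IVDT = [(-1000.0, 0.0), (0.0, 1.0), (1500.0, 1.85)]  # (HU, g/cm^3)

def hu_to_density(hu):
    pts = IVDT
    if hu <= pts[0][0]:
        return pts[0][1]
    for (h0, d0), (h1, d1) in zip(pts, pts[1:]):
        if hu <= h1:
            return d0 + (d1 - d0) * (hu - h0) / (h1 - h0)
    return pts[-1][1]
```

The stability result above implies this table, once commissioned for a CBCT system, can be reused rather than regenerated for each adaptive plan.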
Also, it is viable to perform the adaptive planning to check the dose to target and organ at risk (OAR) without performing a new kV CT scan, which will reduce the dose to the patient. 3. Accuracy in automatic image registration between MV cone beam computed tomography and planning kV computed tomography in image guided radiotherapy. PubMed Kanakavelu, Nithya; Samuel, E James Jebaseelan 2016-01-01 To verify the accuracy of automatic image registration (IR) between the planning kilo voltage computed tomography (kV CT) and megavoltage cone beam computed tomography (MV CBCT) datasets using phantom and patient images. The automatic IR between MV CBCT and planning kV CT is a fast solution for performing online image guided radiotherapy (IGRT). The IR accuracy has to be verified periodically as it directly affects patient setup accuracy. The automatic IR accuracy was evaluated using image quality phantom acquired with different kV CT slice thickness, different MV CBCT acquisition MUs and reconstruction slice size and thickness. The IR accuracy was also evaluated on patient images on different anatomical sites such as brain, head & neck, thorax and pelvis. The uncertainty in the automatic registration was assessed by introducing known offset to kV CT dataset and compared with the registration results. The result with the phantom images was within 2 mm in all three translational directions. The accuracy in automatic IR using patient images was within 2 mm in most of the cases. 3 mm planning kV CT slice thickness was sufficient to perform automatic IR successfully within 2 mm accuracy. The MV CBCT reconstruction parameters such as slice thickness and slice size had no effect on the registration accuracy. This study shows that the automatic IR is accurate within 2 mm and provides confidence in performing them between planning kV CT and MV CBCT image datasets for online image guided radiotherapy. 4. 
Enhancement of image quality with a fast iterative scatter and beam hardening correction method for kV CBCT. PubMed Reitz, Irmtraud; Hesse, Bernd-Michael; Nill, Simeon; Tücking, Thomas; Oelfke, Uwe 2009-01-01 The problem of the enormous amount of scattered radiation in kV CBCT (kilovoltage cone beam computed tomography) is addressed. Scatter causes undesirable streak and cupping artifacts and results in a quantitative inaccuracy of reconstructed CT numbers, so that an accurate dose calculation might be impossible. Image contrast is also significantly reduced. Therefore we checked whether an appropriate implementation of the fast iterative scatter correction algorithm we have developed for MV (megavoltage) CBCT reduces the scatter contribution in a kV CBCT as well. This scatter correction method is based on a superposition of pre-calculated Monte Carlo generated pencil beam scatter kernels. The algorithm requires only a system calibration by measuring homogeneous slab phantoms with known water-equivalent thicknesses. In this study we compare scatter corrected CBCT images of several phantoms to the fan beam CT images acquired with a reduced cone angle (a slice thickness of 14 mm in the isocenter) at the same system. Additional measurements at a different CBCT system were made (different energy spectrum and phantom-to-detector distance) and a first order approach to a fast beam hardening correction is introduced. The observed image quality of the scatter corrected CBCT images is comparable concerning resolution, noise and contrast-to-noise ratio to the images acquired in fan beam geometry. Compared to the CBCT without any corrections the contrast of the contrast-and-resolution phantom with scatter correction and additional beam hardening correction is improved by a factor of about 1.5.
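The iterative scatter correction described above, where scatter is estimated by superposing kernels on the current primary estimate and subtracted from the measurement, can be sketched in 1D with a toy kernel standing in for the Monte Carlo pencil-beam kernels:

```python
# 1D sketch of iterative kernel-superposition scatter correction:
# measured = primary + kernel * primary, solved by fixed-point iteration
# primary_{n+1} = measured - kernel * primary_n. The toy triangular kernel
# below replaces the Monte Carlo pencil-beam kernels of the real method.
def convolve(signal, kernel):
    k = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        s = 0.0
        for j, w in enumerate(kernel):
            idx = i + j - k
            if 0 <= idx < len(signal):
                s += w * signal[idx]
        out.append(s)
    return out

def correct_scatter(measured, kernel, iterations=20):
    primary = list(measured)
    for _ in range(iterations):
        primary = [m - s for m, s in zip(measured, convolve(primary, kernel))]
    return primary
```

The iteration converges because the scatter operator's gain (the kernel sum) is well below one, mirroring the few-iteration convergence reported for the real algorithm.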
The reconstructed attenuation coefficients and the CT numbers of the scatter corrected CBCT images are close to the values of the images acquired in fan beam geometry for the most pronounced tissue types. Only for extremely dense tissue types such as cortical bone do we see a difference in CT numbers of 5.2%, which can be improved to 4.4% with the additional beam hardening correction. 5. Imaging doses from the Elekta Synergy X-ray cone beam CT system. PubMed Amer, A; Marchant, T; Sykes, J; Czajka, J; Moore, C 2007-06-01 The Elekta Synergy is a radiotherapy treatment machine with an integrated kilovoltage (kV) X-ray imaging system capable of producing cone beam CT (CBCT) images of the patient in the treatment position. The aim of this study is to assess the additional imaging dose. Cone beam CT dose index (CBDI) is introduced and measured inside standard CTDI phantoms for several sites (head: 100 kV, 38 mAs, lung: 120 kV, 152 mAs and pelvis: 130 kV, 456 mAs). The measured weighted doses were compared with thermoluminescent dosimeter (TLD) measurements at various locations in a Rando phantom and at patients' surfaces. The measured CBDIs in-air at the isocentre were 9.2 mGy 100 mAs(-1), 7.3 mGy 100 mAs(-1) and 5.3 mGy 100 mAs(-1) for 130 kV, 120 kV and 100 kV, respectively. The body phantom weighted CBDIs were 5.5 mGy 100 mAs(-1) and 3.8 mGy 100 mAs(-1) for 130 kV and 120 kV. The head phantom weighted CBDI was 4.3 mGy 100 mAs(-1) for 100 kV. The weighted doses for the Christie Hospital CBCT imaging techniques were 1.6 mGy, 6 mGy and 22 mGy for the head, lung and pelvis. The measured CBDIs were used to estimate the total effective dose for the Synergy system using the ImPACT CT Patient Dosimetry Calculator. Measured CBCT doses using the Christie Hospital protocols are low for head and lung scans whether compared with electronic portal imaging (EPI), commonly used for treatment verification, or single and multiple slice CT.
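The weighted dose index and per-technique dose arithmetic behind the CBDI figures above follow the standard CTDIw-style weighting of one third centre plus two thirds periphery; a minimal sketch with illustrative numbers:

```python
# Sketch: weighted cone-beam dose index (CTDIw-style weighting) and the
# dose for a technique, given an index normalised per 100 mAs as in the
# abstract above. Input values are illustrative.
def weighted_index(centre, periphery):
    """1/3 centre + 2/3 periphery, in mGy per 100 mAs."""
    return centre / 3.0 + 2.0 * periphery / 3.0

def technique_dose(index_per_100mas, mas):
    """Dose in mGy for a technique delivering the given mAs."""
    return index_per_100mas * mas / 100.0
```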
For the pelvis, doses are similar to EPI but higher than CT. Repeated use of CBCT for treatment verification is likely and hence the total patient dose needs to be carefully considered. It is important to consider further development of low dose CBCT techniques to keep additional doses as low as reasonably practicable. 6. How do kV and mAs affect CT lesion detection performance? Huda, W.; Ogden, K. M.; Shah, K.; Jadoo, C.; Scalzetti, E. M.; Lavallee, R. L.; Roskopf, M. L. 2007-03-01 The purpose of this study was to investigate how output (mAs) and x-ray tube voltage (kV) affect lesion detection in CT imaging. An adult Rando phantom was scanned on a GE LightSpeed CT scanner at x-ray tube voltages from 80 to 140 kV, and outputs from 90 to 360 mAs. Axial images of the abdomen were reconstructed and viewed on a high quality monitor at a soft tissue display setting. We measured detection of 2.5 to 12.5 mm sized lesions using a 2 Alternate Forced Choice (2-AFC) experimental paradigm that determined the lesion contrast (I) corresponding to a 92% accuracy (I_92%) of lesion detection. Plots of log(I_92%) versus log(lesion size) were all approximately linear. The slope of the contrast detail curve was ~ -1.0 at 90 mAs, close to the value predicted by the Rose model, but monotonically decreased with increasing mAs to a value of ~ -0.7 at 360 mAs. Increasing the x-ray tube output by a factor of four improved lesion detection by a factor of 1.9 for the smallest lesion (2.5 mm), close to the value predicted by the Rose model, but only by a factor of 1.2 for the largest lesion (12.5 mm). Increasing the kV monotonically decreased the contrast detail slopes from -1.02 at 80 kV to -0.71 at 140 kV. Increasing the x-ray tube voltage from 80 to 140 kV improved lesion detection by a factor of 2.8 for the smallest lesion (2.5 mm), but only by a factor of 1.7 for the largest lesion (12.5 mm).
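The Rose-model behaviour referenced above, threshold contrast inversely proportional to lesion diameter and to the square root of dose, can be written down directly; note that a four-fold mAs increase then predicts exactly a factor-of-two detectability gain, close to the factor of 1.9 observed for the smallest lesion:

```python
import math

# Sketch of the Rose model relation used to interpret contrast-detail
# results: threshold contrast ~ k / (lesion diameter * sqrt(dose)).
# The constant k is an arbitrary scale factor here.
def rose_threshold_contrast(k, diameter_mm, mas):
    return k / (diameter_mm * math.sqrt(mas))
```

On a log-log contrast-detail plot this relation has slope -1 versus lesion size, matching the measured slope at 90 mAs.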
We conclude that: (i) quantum mottle is an important factor for low contrast lesion detection in images of anthropomorphic phantoms; (ii) x-ray tube voltage has a much greater influence on lesion detection performance than x-ray tube output; (iii) the Rose model only predicts CT lesion detection performance at low x-ray tube outputs (90 mAs) and for small lesions (2.5 mm). 7. Algorithm for X-ray scatter, beam-hardening, and beam profile correction in diagnostic (kilovoltage) and treatment (megavoltage) cone beam CT. PubMed Maltz, Jonathan S; Gangadharan, Bijumon; Bose, Supratik; Hristov, Dimitre H; Faddegon, Bruce A; Paidi, Ajay; Bani-Hashemi, Ali R 2008-12-01 Quantitative reconstruction of cone beam X-ray computed tomography (CT) datasets requires accurate modeling of scatter, beam-hardening, beam profile, and detector response. Typically, commercial imaging systems use fast empirical corrections that are designed to reduce visible artifacts due to incomplete modeling of the image formation process. In contrast, Monte Carlo (MC) methods are much more accurate but are relatively slow. Scatter kernel superposition (SKS) methods offer a balance between accuracy and computational practicality. We show how a single SKS algorithm can be employed to correct both kilovoltage (kV) energy (diagnostic) and megavoltage (MV) energy (treatment) X-ray images. Using MC models of kV and MV imaging systems, we map intensities recorded on an amorphous silicon flat panel detector to water-equivalent thicknesses (WETs). Scattergrams are derived from acquired projection images using scatter kernels indexed by the local WET values and are then iteratively refined using a scatter magnitude bounding scheme that allows the algorithm to accommodate the very high scatter-to-primary ratios encountered in kV imaging. The algorithm recovers radiological thicknesses to within 9% of the true value at both kV and megavolt energies. 
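The first step of the scatter-kernel-superposition correction described above is mapping detector intensities to water-equivalent thicknesses (WETs); a minimal Beer-Lambert inversion sketch, where the effective attenuation coefficient is an assumed placeholder rather than a value from the paper:

```python
import math

# Sketch: map a detector intensity to water-equivalent thickness (WET)
# via Beer-Lambert: I = I0 * exp(-mu * WET). mu is an assumed effective
# attenuation coefficient for the beam, in 1/cm (placeholder value).
def intensity_to_wet(i, i0, mu=0.02):
    return math.log(i0 / i) / mu
```

The real method then indexes pre-computed scatter kernels by the local WET values rather than using them directly.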
Nonuniformity in CT reconstructions of homogeneous phantoms is reduced by an average of 76% over a wide range of beam energies and phantom geometries. 8. Sci—Thur AM: YIS - 09: Validation of a General Empirically-Based Beam Model for kV X-ray Sources SciTech Connect Poirier, Y.; Sommerville, M.; Johnstone, C.D.; Gräfe, J.; Nygren, I.; Jacso, F.; Khan, R.; Villareal-Barajas, J.E.; Tambasco, M. 2014-08-15 Purpose: To present an empirically-based beam model for computing dose deposited by kilovoltage (kV) x-rays and validate it for radiographic, CT, CBCT, superficial, and orthovoltage kV sources. Method and Materials: We modeled a wide variety of imaging (radiographic, CT, CBCT) and therapeutic (superficial, orthovoltage) kV x-ray sources. The model characterizes spatial variations of the fluence and spectrum independently. The spectrum is derived by matching measured values of the half value layer (HVL) and nominal peak potential (kVp) to computationally-derived spectra while the fluence is derived from in-air relative dose measurements. This model relies only on empirical values and requires no knowledge of proprietary source specifications or other theoretical aspects of the kV x-ray source. To validate the model, we compared measured doses to values computed using our previously validated in-house kV dose computation software, kVDoseCalc. The dose was measured in homogeneous and anthropomorphic phantoms using ionization chambers and LiF thermoluminescent detectors (TLDs), respectively. Results: The maximum difference between measured and computed dose measurements was within 2.6%, 3.6%, 2.0%, 4.8%, and 4.0% for the modeled radiographic, CT, CBCT, superficial, and the orthovoltage sources, respectively. In the anthropomorphic phantom, the computed CBCT dose generally agreed with TLD measurements, with an average difference and standard deviation ranging from 2.4 ± 6.0% to 5.7 ± 10.3% depending on the imaging technique. 
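The spectrum-matching step above relies on the half-value-layer relation for an effective attenuation coefficient; in the monoenergetic-equivalent approximation this is simply:

```python
import math

# Sketch: half-value layer (HVL) relation for an effective attenuation
# coefficient mu, HVL = ln(2) / mu, used when matching a measured HVL
# (and kVp) to a computed spectrum.
def hvl_from_mu(mu):
    return math.log(2.0) / mu

def mu_from_hvl(hvl):
    return math.log(2.0) / hvl
```

A real polychromatic beam needs the full spectrum, but this relation anchors the search over candidate spectra to the measured HVL.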
Most (42/62) measured TLD doses were within 10% of computed values. Conclusions: The proposed model can be used to accurately characterize a wide variety of kV x-ray sources using only empirical values. 9. SU-E-I-06: A Dose Calculation Algorithm for KV Diagnostic Imaging Beams by Empirical Modeling SciTech Connect 2015-06-15 Purpose: To develop an accurate three-dimensional (3D) empirical dose calculation model for kV diagnostic beams for different radiographic and CT imaging techniques. Methods: Dose was modeled using photon attenuation measured using depth dose (DD), scatter radiation of the source and medium, and off-axis ratio (OAR) profiles. Measurements were performed using a single diode in water and a diode-array detector (MapCHECK2) with kV on-board imagers (OBI) integrated with Varian TrueBeam and Trilogy linacs. The dose parameters were measured for three energies: 80, 100, and 125 kVp with and without bowtie filters using field sizes 1×1–40×40 cm2 and depths 0–20 cm in a water tank. Results: The measured DD decreased with depth in water because of photon attenuation, while it increased with field size due to increased scatter radiation from the medium. DD curves varied with energy and filters, where they increased with higher energies and beam hardening from half-fan and full-fan bowtie filters. Scatter radiation factors increased with field sizes and higher energies. The OAR was within 3% for beam profiles within the flat dose regions. The heel effect of this kV OBI system was within 6% of the central axis value at different depths. The presence of bowtie filters attenuated the measured dose off-axis by as much as 80% at the edges of large beams. The model dose predictions were verified with measured doses using a single point diode and ionization chamber or two-dimensional diode-array detectors inserted in solid water phantoms.
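The factorised empirical dose model described above (a reference dose scaled by measured depth-dose, off-axis ratio, and field-size scatter factors) can be sketched with hypothetical lookup tables and linear interpolation:

```python
# Sketch: empirical dose factorisation, dose = D_ref * DD(depth) *
# Sp(field size) * OAR(off-axis position). The tables below are
# hypothetical placeholders, not the measured commissioning data.
def lookup(table, x):
    xs = sorted(table)
    if x <= xs[0]:
        return table[xs[0]]
    for x0, x1 in zip(xs, xs[1:]):
        if x <= x1:
            f = (x - x0) / (x1 - x0)
            return table[x0] * (1 - f) + table[x1] * f
    return table[xs[-1]]

DD = {0.0: 1.0, 10.0: 0.35}    # depth (cm) -> relative depth dose
SP = {5.0: 0.95, 20.0: 1.10}   # field side (cm) -> scatter factor

def dose(d_ref, depth, field, oar=1.0):
    return d_ref * lookup(DD, depth) * lookup(SP, field) * oar
```

Each factor would in practice be tabulated per energy and bowtie filter, as the measurements above indicate.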
Conclusion: This empirical model enables fast and accurate 3D dose calculation in water within 5% in regions with near charge-particle equilibrium conditions outside the buildup region and penumbra. It accurately accounts for the scatter radiation contribution in water, which is superior to the air-kerma or CTDI dose measurements usually used in dose calculation for diagnostic imaging beams. Considering heterogeneity corrections in this model will enable patient-specific dose 10. Empirical beam hardening correction (EBHC) for CT. PubMed Kyriakou, Yiannis; Meyer, Esther; Prell, Daniel; Kachelriess, Marc 2010-10-01 Due to x-ray beam polychromaticity and scattered radiation, attenuation measurements tend to be underestimated. Cupping and beam hardening artifacts become apparent in the reconstructed CT images. If only one material such as water, for example, is present, these artifacts can be reduced by precorrecting the rawdata. Higher order beam hardening artifacts, as they result when a mixture of materials such as water and bone, or water and bone and iodine is present, require an iterative beam hardening correction where the image is segmented into different materials and those are forward projected to obtain new rawdata. Typically, the forward projection must correctly model the beam polychromaticity and account for all physical effects, including the energy dependence of the assumed materials in the patient, the detector response, and others. We propose a new algorithm that does not require any knowledge about spectra or attenuation coefficients and that does not need to be calibrated. The proposed method corrects beam hardening in single energy CT data. The only a priori knowledge entering EBHC is the segmentation of the object into different materials. Materials other than water are segmented from the original image, e.g., by using simple thresholding. Then, a (monochromatic) forward projection of these other materials is performed.
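The key EBHC idea of combining the original reconstruction with correction terms using weights chosen to maximise flatness can be illustrated in 1D by minimising the variance of a toy cupped profile; the profiles below are made-up data, not from the paper:

```python
# 1D sketch of the EBHC weighting step: find the combination weight for a
# correction term that maximises flatness (minimises variance) of the
# corrected profile. Toy cupped profile and correction basis.
def variance(v):
    m = sum(v) / len(v)
    return sum((x - m) ** 2 for x in v) / len(v)

def best_weight(original, correction, candidates):
    def corrected(c):
        return [o + c * k for o, k in zip(original, correction)]
    return min(candidates, key=lambda c: variance(corrected(c)))
```

In the real algorithm the correction volumes come from reconstructing monomial combinations of the measured and material-specific rawdata, but the weight selection is the same flatness criterion.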
The measured rawdata and the forward projected material-specific rawdata are monomially combined (e.g., multiplied or squared) and reconstructed to yield a set of correction volumes. These are then linearly combined and added to the original volume. The combination weights are determined to maximize the flatness of the new and corrected volume. EBHC is evaluated using data acquired with a modern cone-beam dual-source spiral CT scanner (Somatom Definition Flash, Siemens Healthcare, Forchheim, Germany), with a modern dual-source micro-CT scanner (Tomo-Scope Synergy Twin, CT Imaging GmbH, Erlangen, Germany), and with a modern C-arm CT scanner 13. Spectroscopic determination of the composition of a 50 kV hydrogen diagnostic neutral beam SciTech Connect Feng, X.; Nornberg, M. D.; Den Hartog, D. J.; Oliva, S. P.; Craig, D. 2016-11-15 A grating spectrometer with an electron multiplying charge-coupled device camera is used to diagnose a 50 kV, 5 A, 20 ms hydrogen diagnostic neutral beam. The ion source density is determined from Stark broadened Hβ emission and the spectrum of Doppler-shifted Hα emission is used to quantify the fraction of ions at full, half, and one-third beam energy under a variety of operating conditions including fueling gas pressure and arc discharge current. Beam current is optimized at low-density conditions in the ion source while the energy fractions are found to be steady over most operating conditions. 15. Collimated electron beam accelerated at 12 kV from a Penning discharge SciTech Connect Toader, D.; Oane, M.; Ticoş, C. M. 2015-01-15 A pulsed electron beam accelerated at 12 kV with a duration of 40 μs per pulse is obtained from a Penning discharge with a hollow anode and two cathodes. The electrons are extracted through a hole in one of the cathodes and focused by a pair of coils. The electron beam has a diameter of a few mm in the cross section, while the beam current reaches peak values of 400 mA, depending on the magnetic field inside the focussing coils. This relatively inexpensive and compact device is suitable for the irradiation of small material samples placed in high vacuum. 16. 130 kV High-Resolution Electron Beam Lithography System for Sub-10-nm Nanofabrication Okino, Teruaki; Kuba, Yukio; Shibata, Masahiro; Ohyi, Hideyuki 2013-06-01 An electron beam lithography (EBL) system, CABL-UH, with a 130 kV high acceleration voltage has been developed that succeeded in minimizing beam size by minimizing Coulomb blur. This system has a short single-stage electron beam (EB) gun with an alignment function of two extractor centers to minimize Coulomb blur. This gun has also succeeded in thoroughly avoiding microdischarges.
By adopting this EB gun and many other techniques, high resolution and long-term high stability have been achieved and an extremely fine pattern (4 nm line) has been delineated. 17. A technique for on-board CT reconstruction using both kilovoltage and megavoltage beam projections for 3D treatment verification SciTech Connect Yin Fangfang; Guan Huaiqun; Lu Wenkai 2005-09-15 The technologies with kilovoltage (kV) and megavoltage (MV) imaging in the treatment room are now available for image-guided radiation therapy to improve patient setup and target localization accuracy. However, development of strategies to efficiently and effectively implement these technologies for patient treatment remains challenging. This study proposed an aggregated technique for on-board CT reconstruction using combination of kV and MV beam projections to improve the data acquisition efficiency and image quality. These projections were acquired in the treatment room at the patient treatment position with a new kV imaging device installed on the accelerator gantry, orthogonal to the existing MV portal imaging device. The projection images for a head phantom and a contrast phantom were acquired using both the On-Board Imager{sup TM} kV imaging device and the MV portal imager mounted orthogonally on the gantry of a Varian Clinac{sup TM} 21EX linear accelerator. MV projections were converted into kV information prior to the aggregated CT reconstruction. The multilevel scheme algebraic-reconstruction technique was used to reconstruct CT images involving either full, truncated, or a combination of both full and truncated projections. An adaptive reconstruction method was also applied, based on the limited numbers of kV projections and truncated MV projections, to enhance the anatomical information around the treatment volume and to minimize the radiation dose. 
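The algebraic reconstruction used in the kV/MV aggregation above (the paper applies a multilevel-scheme ART) is built on Kaczmarz-style row updates; a minimal sketch on a two-unknown system:

```python
# Sketch: Kaczmarz-style updates as used in algebraic reconstruction
# techniques (ART). Each row of the system matrix corresponds to one ray
# measurement; x is the vector of pixel values being reconstructed.
def kaczmarz(rows, b, x, sweeps=50):
    for _ in range(sweeps):
        for a, bi in zip(rows, b):
            dot = sum(ai * xi for ai, xi in zip(a, x))
            norm = sum(ai * ai for ai in a)
            x = [xi + (bi - dot) * ai / norm for xi, ai in zip(x, a)]
    return x
```

Full ART implementations add relaxation factors, projection ordering, and (as in the paper) multilevel schemes, but the per-ray update is the same.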
The effects of the total number of projections, the combination of kV and MV projections, and the beam truncation of MV projections on the details of reconstructed kV/MV CT images were also investigated. 18. A technique for on-board CT reconstruction using both kilovoltage and megavoltage beam projections for 3D treatment verification. PubMed Yin, Fang-Fang; Guan, Huaiqun; Lu, Wenkai 2005-09-01 The technologies with kilovoltage (kV) and megavoltage (MV) imaging in the treatment room are now available for image-guided radiation therapy to improve patient setup and target localization accuracy. However, development of strategies to efficiently and effectively implement these technologies for patient treatment remains challenging. This study proposed an aggregated technique for on-board CT reconstruction using combination of kV and MV beam projections to improve the data acquisition efficiency and image quality. These projections were acquired in the treatment room at the patient treatment position with a new kV imaging device installed on the accelerator gantry, orthogonal to the existing MV portal imaging device. The projection images for a head phantom and a contrast phantom were acquired using both the On-Board Imager kV imaging device and the MV portal imager mounted orthogonally on the gantry of a Varian Clinac 21EX linear accelerator. MV projections were converted into kV information prior to the aggregated CT reconstruction. The multilevel scheme algebraic-reconstruction technique was used to reconstruct CT images involving either full, truncated, or a combination of both full and truncated projections. An adaptive reconstruction method was also applied, based on the limited numbers of kV projections and truncated MV projections, to enhance the anatomical information around the treatment volume and to minimize the radiation dose. 
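As context for the algebraic-reconstruction step described in the abstract above, a minimal sketch of an unweighted, single-level Kaczmarz/ART iteration is given below; the paper's multilevel scheme and its kV/MV handling are not reproduced, and all names here are illustrative:

```python
import numpy as np

def art_reconstruct(A, b, n_iter=200, relax=1.0):
    """Simplified algebraic reconstruction technique (Kaczmarz sweeps).

    A : (n_rays, n_pixels) system matrix; row i traces ray i.
    b : (n_rays,) measured projection values.
    Each sweep projects the current image estimate onto the
    hyperplane of every ray equation in turn.
    """
    x = np.zeros(A.shape[1])
    row_norms = (A * A).sum(axis=1)
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            if row_norms[i] == 0.0:
                continue  # skip rays that miss the image
            residual = b[i] - A[i] @ x
            x += relax * (residual / row_norms[i]) * A[i]
    return x
```

For a consistent system the sweeps drive the projection residual A·x − b toward zero; truncated projections simply correspond to rows that cover only part of the image.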
The effects of the total number of projections, the combination of kV and MV projections, and the beam truncation of MV projections on the details of reconstructed kV/MV CT images were also investigated. 19. SU-E-I-29: CARE kV: Does It Influence Radiation Dose in Non-Contrast Examination of CT Abdomen/Pelvis? SciTech Connect Zhang, J; Ganesh, H; Weir, V 2015-06-15 Purpose: CARE kV is a tool that automatically recommends an optimal kV setting for an individual patient for a specific CT examination. The use of CARE kV depends on the topogram and the user-selected contrast behavior. CARE kV is expected to reduce radiation dose while improving image quality. However, this may work only for certain groups of patients and/or certain CT examinations. This study investigates the effects of CARE kV on radiation dose of non-contrast examination of CT abdomen/pelvis. Methods: Radiation dose (CTDIvol and DLP) from patients who underwent abdomen/pelvis non-contrast examination with and without CARE kV were retrospectively reviewed. All patients were scanned in the same scanner (Siemens Somatom AS64). To mitigate any possible influences due to technologists’ unfamiliarity with the CARE kV, the data with CARE kV were retrieved 1.5 years after the start of CARE kV usage. A t-test was used to test for significant differences in radiation dose. Results: Volume CTDIs and DLPs from 18 patients before and 24 patients after the use of CARE kV were obtained over a period of one month. There was a slight increase in both average CTDIvol and average DLP with CARE kV compared to those without CARE kV (25.52 mGy vs. 22.65 mGy for CTDIvol; 1265.81 mGy-cm vs. 1199.19 mGy-cm). Statistically there was no significant difference. Without CARE kV, 140 kV was used in 9 of 18 patients, while with CARE kV, 140 kV was used in 15 of 24 patients. 80 kV was not used in either group. Conclusion: The use of CARE kV may save time for protocol optimization and minimize variability among technologists.
Radiation dose reduction was not observed in non-contrast examinations of CT abdomen/pelvis. This was partially because our CT protocols were tailored according to patient size before CARE kV was introduced, and partially because of large patient sizes. 20. Development of resist process for 5-KV multi-beam technology Icard, B.; Rio, D.; Veltman, P.; Kampherbeek, B.; Constancias, C.; Pain, L. 2009-03-01 Maskless e-beam activities have raised a lot of interest in recent years among semiconductor companies, which are strongly concerned by the constant cost increase of mask-based lithography (1). At the beginning of 2008, the European Commission started an integrated program called "MAGIC", Maskless lithography for IC manufacturing, which pushes the development and the insertion of the European multi-beam technology (2) in the semiconductor industry. This project also supports developing the infrastructure for the use of this technology, including resist processes, data processing and proximity corrections. Within MAGIC, MAPPER develops its low energy (5 keV) massively parallel concept (3). Compared to a standard single E-Beam machine working classically at 50 kV, this low accelerating voltage requires the use of thin resist films to deal with the lower penetration depth of the electrons. This paper presents the resist development status, including Chemically Amplified Resist and non-CAR platforms. Comparisons of the performances of these resist platforms in terms of resolution, sensitivity, roughness and stability are detailed, including their potential integration into a CMOS technological flow. Finally, a first review of the state of the art of resist performance for patterning at 5 kV will be performed. Based on the level of achievements presented in this paper, a discussion is also opened on the resist developments needed to fulfill industry targets in 2011. 1.
[Dual energy CT angiography of the carotid arteries: quality, bone subtraction, and radiation dosage using tube voltage 80/140 kV versus 100/140 kV]. PubMed Santos Armentia, E; Tardáguila de la Fuente, G; Castellón Plaza, D; Delgado Sánchez-Gracián, C; Prada González, R; Fernández Fernández, L; Tardáguila Montero, F 2014-01-01 2. HECTOR: A 240kV micro-CT setup optimized for research Masschaele, Bert; Dierick, Manuel; Van Loo, Denis; Boone, Matthieu N.; Brabant, Loes; Pauwels, Elin; Cnudde, Veerle; Van Hoorebeke, Luc 2013-10-01 X-ray micro-CT has become a very powerful and common tool for non-destructive three-dimensional (3D) visualization and analysis of objects. Many systems are commercially available, but they are typically limited in terms of operational freedom both from a mechanical point of view as well as for acquisition routines. HECTOR is the latest system developed by the Ghent University Centre for X-ray Tomography (http://www.ugct.ugent.be) in collaboration with X-Ray Engineering (XRE bvba, Ghent, Belgium). It consists of a mechanical setup with nine motorized axes and a modular acquisition software package and combines a microfocus directional target X-ray source up to 240 kV with a large flat-panel detector. Provisions are made to install a line-detector for a maximal operational range. The system can accommodate samples up to 80 kg, 1 m long and 80 cm in diameter while it is also suited for high resolution (down to 4 μm) tomography. The bi-directional detector tiling is suited for large samples while the variable source-detector distance optimizes the signal to noise ratio (SNR) for every type of sample, even with peripheral equipment such as compression stages or climate chambers. The large vertical travel of 1 m can be used for helical scanning and a vertical detector rotation axis allows laminography experiments. 
The setup is installed in a large concrete bunker to allow accommodation of peripheral equipment such as pumps, chillers, etc., which can be integrated in the modular acquisition software to obtain a maximal correlation between the environmental control and the CT data taken. The acquisition software not only allows good coupling with the peripheral equipment but its scripting feature is also particularly interesting for testing new and exotic acquisition routines. 3. Assessment of image quality and dose calculation accuracy on kV CBCT, MV CBCT, and MV CT images for urgent palliative radiotherapy treatments. PubMed Held, Mareike; Cremers, Florian; Sneed, Penny K; Braunstein, Steve; Fogh, Shannon E; Nakamura, Jean; Barani, Igor; Perez-Andujar, Angelica; Pouliot, Jean; Morin, Olivier 2016-03-08 A clinical workflow was developed for urgent palliative radiotherapy treatments that integrates patient simulation, planning, quality assurance, and treatment in one 30-minute session. This has been successfully tested and implemented clinically on a linac with MV CBCT capabilities. To make this approach available to all clinics equipped with common imaging systems, dose calculation accuracy based on treatment sites was assessed for other imaging units. We evaluated the feasibility of palliative treatment planning using on-board imaging with respect to image quality and technical challenges. The purpose was to test multiple systems using their commercial setup, disregarding any additional in-house development. kV CT, kV CBCT, MV CBCT, and MV CT images of water and anthropomorphic phantoms were acquired on five different imaging units (Philips MX8000 CT Scanner, and Varian TrueBeam, Elekta VersaHD, Siemens Artiste, and Accuray Tomotherapy linacs). Image quality (noise, contrast, uniformity, spatial resolution) was evaluated and compared across all machines.
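One of the image-quality metrics listed above, the contrast-to-noise ratio, admits a simple numerical sketch. The abstract does not specify which CNR convention the authors used, so the definition below (difference of ROI means over background noise) is only one common, illustrative choice:

```python
import numpy as np

def contrast_to_noise(roi, background):
    """CNR as |mean(ROI) - mean(background)| / std(background),
    one common convention for CT image-quality phantom analysis."""
    roi = np.asarray(roi, dtype=float)
    background = np.asarray(background, dtype=float)
    return abs(roi.mean() - background.mean()) / background.std()
```

Noise is then the standard deviation of a uniform region, and uniformity can be assessed from the spread of mean values across several such regions.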
Using individual image value to density calibrations, dose calculation accuracies for simple treatment plans were assessed for the same phantom images. Finally, image artifacts on clinical patient images were evaluated and compared among the machines. Image contrast to visualize bony anatomy was sufficient on all machines. Despite a high noise level and low contrast, MV CT images provided the most accurate treatment plans relative to kV CT-based planning. Spatial resolution was poorest for MV CBCT, but did not limit the visualization of small anatomical structures. A comparison of treatment plans showed that monitor units calculated based on a prescription point were within 5% difference relative to kV CT-based plans for all machines and all studied treatment sites (brain, neck, and pelvis). Local dose differences > 5% were found near the phantom edges. The gamma index for 3%/3 mm criteria was ≥ 95% in most 4. Assessment of image quality and dose calculation accuracy on kV CBCT, MV CBCT, and MV CT images for urgent palliative radiotherapy treatments. PubMed Held, Mareike; Cremers, Florian; Sneed, Penny K; Braunstein, Steve; Fogh, Shannon E; Nakamura, Jean; Barani, Igor; Perez-Andujar, Angelica; Pouliot, Jean; Morin, Olivier 2016-03-01 A clinical workflow was developed for urgent palliative radiotherapy treatments that integrates patient simulation, planning, quality assurance, and treatment in one 30-minute session. This has been successfully tested and implemented clinically on a linac with MV CBCT capabilities. To make this approach available to all clinics equipped with common imaging systems, dose calculation accuracy based on treatment sites was assessed for other imaging units. We evaluated the feasibility of palliative treatment planning using on-board imaging with respect to image quality and technical challenges. The purpose was to test multiple systems using their commercial setup, disregarding any additional in-house development. 
kV CT, kV CBCT, MV CBCT, and MV CT images of water and anthropomorphic phantoms were acquired on five different imaging units (Philips MX8000 CT Scanner, and Varian TrueBeam, Elekta VersaHD, Siemens Artiste, and Accuray Tomotherapy linacs). Image quality (noise, contrast, uniformity, spatial resolution) was evaluated and compared across all machines. Using individual image value to density calibrations, dose calculation accuracies for simple treatment plans were assessed for the same phantom images. Finally, image artifacts on clinical patient images were evaluated and compared among the machines. Image contrast to visualize bony anatomy was sufficient on all machines. Despite a high noise level and low contrast, MV CT images provided the most accurate treatment plans relative to kV CT-based planning. Spatial resolution was poorest for MV CBCT, but did not limit the visualization of small anatomical structures. A comparison of treatment plans showed that monitor units calculated based on a prescription point were within 5% difference relative to kV CT-based plans for all machines and all studied treatment sites (brain, neck, and pelvis). Local dose differences >5% were found near the phantom edges. The gamma index for 3%/3 mm criteria was ≥95% in most cases 5. Dynamic bowtie for fan-beam CT. PubMed Liu, Fenglin; Wang, Ge; Cong, Wenxiang; Hsieh, Scott S; Pelc, Norbert J 2013-01-01 A bowtie is a filter used to shape an x-ray beam and equalize its flux reaching different detector channels. For development of spectral CT with energy discriminating photon-counting (EDPC) detectors, here we propose and evaluate a dynamic bowtie for performance optimization based on a patient model or a scout scan. With a mechanical rotation of a dynamic bowtie and an adaptive adjustment of an x-ray source flux, an x-ray beam intensity profile can be modulated. 
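The attenuation-equalizing idea behind the bowtie described above can be illustrated with a parallel-beam simplification of the elliptical-section model; the paper derives the contour for fan-beam geometry, and the attenuation coefficients below are illustrative assumptions:

```python
import numpy as np

MU_WATER = 0.02   # /mm, illustrative water attenuation coefficient (assumption)
MU_FILTER = 0.06  # /mm, illustrative bowtie material coefficient (assumption)

def bowtie_thickness(t, a=150.0, b=100.0):
    """Filter thickness (mm) at lateral offset t (mm), chosen so that
    mu_w * chord(t) + mu_f * T(t) is constant across parallel rays
    through an elliptical water section with semi-axes a and b."""
    chord = 2.0 * b * np.sqrt(np.clip(1.0 - (t / a) ** 2, 0.0, None))
    return MU_WATER * (2.0 * b - chord) / MU_FILTER
```

The thickness is zero on the central ray (longest water path) and grows toward the periphery, which is exactly the bowtie shape; a dynamic bowtie additionally varies this profile with gantry angle.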
First, a mathematical model for dynamic bowtie filtering is established for an elliptical section in fan-beam geometry, and the contour of the optimal bowtie is derived. Then, numerical simulation is performed to compare the performance of the dynamic bowtie in the cases of an ideal phantom and a realistic cross-section relative to the counterparts without any bowtie and with a fixed bowtie respectively. Our dynamic bowtie can equalize the expected numbers of photons in the case of an ideal phantom. In practical cases, our dynamic bowtie can effectively reduce the dynamic range of detected signals inside the field of view. Although our design is optimized for an elliptical phantom, the resultant dynamic bowtie can be applied to a real fan-beam scan if the underlying cross-section can be approximated as an ellipse. Furthermore, our design methodology can be applied to specify an optimized dynamic bowtie for any cross-section of a patient, preferably using rapid prototyping technology. 6. Patient radiation doses for electron beam CT SciTech Connect Castellano, Isabel A.; Dance, David R.; Skinner, Claire L.; Evans, Phil M. 2005-08-15 A Monte Carlo based computer model has been developed for electron beam computed tomography (EBCT) to calculate organ and effective doses in a humanoid hermaphrodite phantom. The program has been validated by comparison with experimental measurements of the CT dose index in standard head and body CT dose phantoms; agreement to better than 8% has been found. The robustness of the model has been established by varying the input parameters. The amount of energy deposited at the 12:00 position of the standard body CT dose phantom is most susceptible to rotation angle, whereas that in the central region is strongly influenced by the beam quality. The program has been used to investigate the changes in organ absorbed doses arising from partial and full rotation about supine and prone subjects. 
Superficial organs experience the largest changes in absorbed dose with a change in subject orientation and for partial rotation. Effective doses for typical clinical scan protocols have been calculated and compared with values obtained using existing dosimetry techniques based on full rotation. Calculations which make use of Monte Carlo conversion factors for the scanner that best matches the EBCT dosimetric characteristics consistently overestimate the effective dose in supine subjects by typically 20%, and underestimate the effective dose in prone subjects by typically 13%. These factors can therefore be used to correct values obtained in this way. Empirical dosimetric techniques based on the dose-length product yield errors as great as 77%. This is due to the sensitivity of the dose-length product to individual scan lengths. The magnitude of these errors is reduced if empirical dosimetric techniques based on the average absorbed dose in the irradiated volume (CTDIvol) are used. Therefore conversion factors specific to EBCT have been calculated to convert the CTDIvol to an effective dose. 7. Beam property measurement of a 300-kV ion source test stand for a 1-MV electrostatic accelerator Park, Sae-Hoon; Kim, Dae-Il; Kim, Yu-Seok 2016-09-01 The KOMAC (Korea Multi-purpose Accelerator Complex) has been developing a 300-kV ion source test stand for a 1-MV electrostatic accelerator for industrial purposes. An RF ion source was operated at 200 MHz with its matching circuit. The beam profile and emittance were measured behind an accelerating column to confirm the beam property from the RF ion source. The beam profile was measured at the end of the accelerating tube and at the beam dump by using a beam profile monitor (BPM) and a wire scanner. An Allison-type emittance scanner was installed behind the BPM to measure the beam density in phase space. The measurement results for the beam profile and emittance are presented in this paper. 8.
CT thermometry for cone-beam CT guided ablation DeStefano, Zachary; Abi-Jaoudeh, Nadine; Li, Ming; Wood, Bradford J.; Summers, Ronald M.; Yao, Jianhua 2016-03-01 Monitoring temperature during a cone-beam CT (CBCT) guided ablation procedure is important for prevention of over-treatment and under-treatment. In order to accomplish ideal temperature monitoring, a thermometry map must be generated. Previously, this was attempted using CBCT scans of a pig shoulder undergoing ablation [1]. We are extending this work by using CBCT scans of real patients and incorporating more processing steps. We register the scans before comparing them due to the movement and deformation of organs. We then automatically locate the needle tip and the ablation zone. Because of image noise and artifacts, we employ a robust change metric. This change metric takes windows around each pixel and uses an equation inspired by Time Delay Analysis to calculate the error between windows with the assumption that there is an ideal spatial offset. Once the change map is generated, we correlate change data with measured temperature data at the key points in the region. This allows us to transform our change map into a thermal map. This thermal map is then able to provide an estimate as to the size and temperature of the ablation zone. We evaluated our procedure on a data set of 12 patients who had a total of 24 ablation procedures performed. We were able to generate reasonable thermal maps with varying degrees of accuracy. The average error ranged from 2.7 to 16.2 degrees Celsius. In addition to providing estimates of the size of the ablation zone for surgical guidance, 3D visualizations of the ablation zone and needle are also produced. 9. Scattering intensities for a white beam (120 kV) presenting a semi-empirical model to preview scattered beams Gonçalves, O. D.; Boldt, S.; Kasch, K. U.
2016-09-01 This work aims at measuring the scattering cross sections for white beams and at verifying a semi-empirical model predicting scattered energy spectra of an X-ray beam produced by an industrial X-ray tube (Pantack Sievert, 120 kV, tungsten target) incident on a water sample. Both theoretical and semi-empirical results presented are based on the form factor approach, with results corresponding well to the performed measurements. The elastic (Rayleigh) scattering cross sections are based on Thomson scattering with a form factor correction as published by Morin (1982). The inelastic (Compton) contribution is based on the Klein-Nishina equation (Klein and Nishina, 1929) multiplied by the incoherent scattering factors calculated by Hubbell et al. (1975). Two major results are presented: first, the energy-integrated experimental cross sections correspond to the theoretical cross sections obtained at the mean energy of the measured scattered spectra at a given angle. Secondly, the measured scattered spectra at a given angle correspond to those obtained utilizing the semi-empirical model as proposed here. A good correspondence of experimental results and model predictions can be shown. The latter, therefore, proves to be a useful method to calculate the scattering contributions in a number of applications, for example cone-beam tomography. 10. Personalized Assessment of kV Cone Beam Computed Tomography Doses in Image-guided Radiotherapy of Pediatric Cancer Patients SciTech Connect Zhang Yibao; Yan Yulong; Nath, Ravinder; Bao Shanglian; Deng Jun 2012-08-01 Purpose: To develop a quantitative method for the estimation of kV cone beam computed tomography (kVCBCT) doses in pediatric patients undergoing image-guided radiotherapy. Methods and Materials: Forty-two children were retrospectively analyzed in subgroups of different scanned regions: one group in the head-and-neck and the other group in the pelvis.
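The Klein-Nishina cross section used for the inelastic contribution in the white-beam scattering abstract above can be written out directly. This is the standard unpolarized formula with CODATA constants, not the authors' own code:

```python
import math

R_E = 2.8179403262e-15  # classical electron radius, m
ME_C2 = 0.51099895      # electron rest energy, MeV

def klein_nishina(theta, energy_mev):
    """Unpolarized Klein-Nishina differential cross section
    dsigma/dOmega in m^2/sr for scattering angle theta (rad)."""
    alpha = energy_mev / ME_C2
    ratio = 1.0 / (1.0 + alpha * (1.0 - math.cos(theta)))  # E'/E
    return 0.5 * R_E**2 * ratio**2 * (ratio + 1.0 / ratio - math.sin(theta)**2)
```

At theta = 0 the scattered-to-incident energy ratio is 1 and the expression reduces to the Thomson forward value r_e²; the semi-empirical model multiplies this by the incoherent scattering factor.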
Critical structures in planning CT images were delineated on an Eclipse treatment planning system before being converted into CT phantoms for Monte Carlo simulations. A benchmarked EGS4 Monte Carlo code was used to calculate three-dimensional dose distributions of kVCBCT scans with full-fan high-quality head or half-fan pelvis protocols predefined by the manufacturer. Based on planning CT images and structures exported in DICOM RT format, occipital-frontal circumferences (OFC) were calculated for head-and-neck patients using DICOMan software. Similarly, hip circumferences (HIP) were acquired for the pelvic group. Correlations between mean organ doses and age, weight, OFC, and HIP values were analyzed with the SigmaPlot software suite, where regression performances were analyzed with relative dose differences (RDD) and coefficients of determination (R²). Results: kVCBCT-contributed mean doses to all critical structures decreased monotonically with studied parameters, with a steeper decrease in the pelvis than in the head. Empirical functions have been developed for a dose estimation of the major organs at risk in the head and pelvis, respectively. If evaluated with physical parameters other than age, a mean RDD of up to 7.9% was observed for all the structures in our population of 42 patients. Conclusions: kVCBCT doses are highly correlated with patient size. According to this study, weight can be used as a primary index for dose assessment in both head and pelvis scans, while OFC and HIP may serve as secondary indices for dose estimation in corresponding regions. With the proposed empirical functions, it is possible 11. Evaluating the image quality of cone beam CT acquired during rotational delivery PubMed Central Maria Das, K J; Maria Midunvaleja, K; Gowtham Raj, D; Agarwal, Arpita; Velmurugan, J; Kumar, Shaleen 2015-01-01 Objective: The aim of this work was to evaluate the quality of kilovoltage (kV) cone beam CT (CBCT) images acquired during arc delivery.
Methods: Arc plans were delivered on a Catphan® 600 phantom (The Phantom Laboratory Inc., Salem, NY), and kV CBCT images were acquired during the treatment. The megavoltage (MV) scatter effect on kV CBCT image quality was evaluated using parameters such as Hounsfield unit (HU) accuracy, spatial resolution, contrast-to-noise ratio (CNR) and spatial non-uniformity (SNU). These CBCT images were compared with reference scans acquired with the same acquisition parameters without MV “beam on”. This evaluation was carried out for different photon beams (6 and 15 MV), arc types (half vs full arc), static field sizes (10 × 10 and 25 × 25 cm²) and source-to-imager distances (SID) (150 and 170 cm). Results and Conclusion: HU accuracy, CNR and SNU were considerably affected by MV scatter, and this effect increased with increasing field size and decreasing photon energy, whereas the spatial resolution was almost unchanged. The MV scatter effect was observed to be greater for full-rotation arc delivery than for half-arc delivery. In addition, increasing the SID resulted in decreased MV scatter effect and improved the image quality. Advances in knowledge: Nowadays, volumetric modulated arc therapy (VMAT) is increasingly used in clinics, and this arc therapy enables us to acquire CBCT imaging simultaneously. But the main issue of concurrent imaging is the “MV scatter” effect on CBCT imaging. This study aims to experimentally quantify the effect of MV scatter on CBCT image quality. PMID:26226396 12. 6 GHz Microwave Power-Beaming Demonstration with 6-kV Rectenna and Ion-Breeze Thruster Cummings, T.; Janssen, J.; Karnesky, J.; Laks, D.; Santillo, M.; Strause, B.; Myrabo, L. N.; Alden, A.; Bouliane, P.; Zhang, M. 2004-03-01 On 14 April 2003 at the Communications Research Center (CRC) in Ottawa, Ontario, a 5.85-GHz transmitter beamed 3-kW of microwave power to a remote rectifying antenna (i.e., rectenna) that delivered 6-kV to a special 'Ion-Breeze' Engine (IBE).
Three of CRC's 26.5-cm by 31-cm rectennas were connected in series to provide the ~6-kV output. RPI's low-voltage IBE thrusters performed well in a "world's first" power-beaming demonstration with rectennas and endoatmospheric ion-propulsion engines. The successful tests were a low-tech, proof-of-concept demonstration for the future full-sized MicroWave Lightcraft (MWLC) and its 'air-breathing loiter' propulsion mode. Additional IBE experiments investigated the feasibility of producing flight control forces on the MWLC. The objective was to torque the charged hull for 'pitch' or 'roll' maneuvers. The torquing demonstration was entirely successful. 13. [Patient positioning using in-room kV CT for image-guided radiotherapy (IGRT) of prostate cancer]. PubMed Kliton, Jorgo; Agoston, Péter; Major, Tibor; Polgár, Csaba 2012-09-01 The purpose of the study was to evaluate the accuracy of patient setup verified by a kV CT-on-rails system and to compare automatic and manual image registration of planning and verification kVCTs. Between January 2001 and March 2011, in ten patients with prostate cancer the clinical target volumes (CTVs) for prostate (CTV-PROS), and prostate plus caudal 1 cm of seminal vesicles (CTV-PVS) with or without pelvic lymph node region were contoured on the treatment planning CT, according to the risk category of the patient. Planning target volumes (PTVs) were created with a 1 cm margin extended around the CTVs in each direction. The isocentre was marked on the skin with three radiopaque markers. After setup, the treatment couch with the patient was turned by 180 degrees and images were acquired of the region of the isocentre with a kV helical CT-on-rails system (treatment CT). Image registration software was used to co-register planning and treatment CT images. Automatic CT image co-registration was followed by manual co-registration taking into account the CTV-PROS contour and soft tissue information.
Deviations of the isocentres in lateral (LAT), longitudinal (LONG) and vertical (VERT) directions were recorded after each image co-registration. Corresponding data were compared using the t-test. Systematic (S) and random (s) setup errors were calculated. Adequate PTV to CTV margins were calculated by van Herk's formula (2.5×S + 0.7×s). Overall 252 deviations were analysed on forty-two CT series of 10 patients. The mean setup errors with automatic and manual image co-registrations were 0.19 cm and 0.07 cm (p=0.001) in LAT, 0.05 cm and 0.03 cm (p=0.07) in LONG and 0.16 cm and 0.22 cm (p=0.16) in VERT directions, respectively. The systematic setup errors for automatic and manual image registrations were 0.22 cm and 0.26 cm in LAT, 0.17 cm and 0.18 cm in LONG, 0.25 cm and 0.26 cm in VERT directions, respectively. The random setup errors for 14. Third-generation Dual-source CT for Head and Neck CT Angiography with 70 kV Tube Voltage and 20-25 ml Contrast Medium in Patients With Body Weight Lower than 75 kg. PubMed Chen, Yu; Zhu, Yuanli; Xue, Huadan; Wang, Yun; Li, Yumei; Zhang, Zhuhua; Jin, Zhengyu 2017-02-20 Objective To investigate the image quality of head and neck CT angiography (CTA) using the third-generation dual-source CT combined with 70 kV tube voltage and 20-25 ml contrast medium (CM), and to evaluate the effects of venous artifacts arising from the CM on the ipsilateral side of injection. Methods A total of 40 consecutive patients with suspected vascular diseases and body weight lower than 75 kg prospectively underwent head and neck CTA examination using the third-generation dual-source CT. CTA was performed with a third-generation dual-source CT system. Patients were randomly divided into a 70 kV group (n=20) and a 100 kV group (n=20). The 70 kV group used 20-25 ml CM and an advanced modeled iterative reconstruction technique, and the 100 kV group used 40 ml CM and filtered back projection.
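The van Herk margin recipe (2.5×S + 0.7×s) quoted in the prostate-setup abstract above is a one-line calculation. The systematic error below is taken from that abstract (lateral, manual registration), while the random-error value is hypothetical for illustration, since the abstract is truncated before reporting it:

```python
def van_herk_margin(systematic, random_err):
    """CTV-to-PTV margin by the van Herk recipe: 2.5*S + 0.7*s.
    Inputs and result share the same unit (here cm)."""
    return 2.5 * systematic + 0.7 * random_err

# S = 0.26 cm (lateral systematic error, manual registration, from the abstract)
# s = 0.30 cm (hypothetical random error, for illustration only)
```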
Venous artifacts and CM residues were evaluated by a 3-point scale (1 = excellent, 3 = poor), respectively. Results The effective dose of the 70 kV group was 58% lower than that of the 100 kV group (t = -18.14, P < 0.001). In the 70 kV group, 16 patients (80.0%) presented with venous artifacts and six of them (37.5%, 6/16) affected the adjacent arteries. In the 100 kV group, 19 patients (95.0%) presented with venous artifacts, and seven of them (36.8%, 7/19) affected the adjacent arteries (Z = -0.878, P = 0.380). In the 70 kV group, 13 patients (65.0%) presented with obvious CM residues and two of them (15.3%, 2/13) extended into the superior vena cava (SVC). In the 100 kV group, 19 patients (95.0%) presented with obvious CM residues, and thirteen of them (68.4%, 13/19) extended into the SVC (Z = -3.654, P < 0.001). Conclusion Compared with 100 kV, third-generation dual-source CT for head and neck CTA combined with 70 kV and 20-25 ml CM can remarkably decrease the radiation dose, along with reduced CM residues and comparable venous artifacts. 15. Cone Beam CT vs. Fan Beam CT: A Comparison of Image Quality and Dose Delivered Between Two Differing CT Imaging Modalities. PubMed Lechuga, Lawrence; Weidlich, Georg A 2016-09-12 A comparison of image quality and dose delivered between two differing computed tomography (CT) imaging modalities, fan beam and cone beam, was performed. A literature review of quantitative analyses for various image quality aspects such as uniformity, signal-to-noise ratio, artifact presence, spatial resolution, modulation transfer function (MTF), and low contrast resolution was generated. With these aspects quantified, cone beam computed tomography (CBCT) shows a superior spatial resolution to that of fan beam, while fan beam shows a greater ability to produce clear and anatomically correct images with better soft tissue differentiation.
The results indicate that fan beam CT produces images superior to those of on-board imaging (OBI) cone-beam CT systems, while delivering a considerably lower dose to the patient. 16. [Accurate 3D free-form registration between fan-beam CT and cone-beam CT]. PubMed Liang, Yueqiang; Xu, Hongbing; Li, Baosheng; Li, Hongsheng; Yang, Fujun 2012-06-01 Because of X-ray scatter, the CT numbers in cone-beam CT do not correspond exactly to electron densities. This, therefore, results in registration error when the intensity-based registration algorithm is used to register planning fan-beam CT and cone-beam CT. In order to reduce the registration error, we have developed an accurate gradient-based registration algorithm. The gradient-based deformable registration problem is described as the minimization of an energy functional. Through the calculus of variations and the Gauss-Seidel finite difference method, we derived the iterative formula of the deformable registration. The algorithm was implemented on a GPU through the OpenCL framework, with which the registration time was greatly reduced. Our experimental results showed that the proposed gradient-based registration algorithm could register clinical cone-beam CT and fan-beam CT images more accurately than the intensity-based algorithm. The GPU-accelerated algorithm meets the real-time requirement in online adaptive radiotherapy. 17. SU-E-J-27: Effects of Metal Artifacts of KV and MV CT Images on Structure Delineation and Tissue Electron/Mass Density Calculation. PubMed He, T; Tanyi, J; Crilly, R; Laub, W 2012-06-01 To quantitatively evaluate the effects of image artifacts of hip prostheses on the accuracy of structure delineation and tissue density calculation on kV and MV CT images. Five hip prostheses made of stainless steel, titanium and cobalt chrome alloys were positioned inside a water tank and scanned respectively on a Philips CT and a Tomotherapy Hi-Art unit.
Prostheses were positioned to mimic single and bilateral implantations. Rods of tissue materials of lung, water and bone were placed at locations next and distal to metal implants near femoral head, neck and stem of prostheses. kV and MV CT scans were repeated for each placement. On CT images, cross-sectional outlines of metal implants and tissue rods were delineated. Densities of rod materials were determined and compared to the true values. Metal artifacts were severe on kV CTs and minimal on MV CTs. Cross-sectional outlines of metal implants and tissue rods on kV CTs were severely distorted by artifacts while those on MV CTs remained clearly identifiable. For kV CTs, deviations of measured tissue density from true value were up to 51.3%, 30.6% and 40.9% respectively for lung, bone and solid water. The magnitude of deviation was generally larger at locations closer to metal implants and greater with bilateral implants than single implant. For MV CTs, deviations of measured density from true value were less than 6% for all three tissue materials either with single or bilateral implants. Magnitude of deviation appeared to be uniform and independent of locations relative to metal implants. High Z metal artifacts on kV CTs can have severe impact on the accuracy of structure delineation and tissue density calculation, while on MV CTs, the impact is substantially less and insignificant. MV CTs should be considered for treatment planning on patients with high Z metal implants. © 2012 American Association of Physicists in Medicine. 18. Dose reduction in dynamic CT stress myocardial perfusion imaging: comparison of 80-kV/370-mAs and 100-kV/300-mAs protocols. 
PubMed Fujita, Makiko; Kitagawa, Kakuya; Ito, Tatsuro; Shiraishi, Yasuyuki; Kurobe, Yusuke; Nagata, Motonori; Ishida, Masaki; Sakuma, Hajime 2014-03-01 To determine the effect of reduced 80-kV tube voltage with increased 370-mAs tube current on radiation dose, image quality and estimated myocardial blood flow (MBF) of dynamic CT stress myocardial perfusion imaging (CTP) in patients with a normal body mass index (BMI) compared with a 100-kV and 300-mAs protocol. Thirty patients with a normal BMI (<25 kg/m²) with known or suspected coronary artery disease underwent adenosine-stress dual-source dynamic CTP. Patients were randomised to 80-kV/370-mAs (n = 15) or 100-kV/300-mAs (n = 15) imaging. Maximal enhancement and noise of the left ventricular (LV) cavity, contrast-to-noise ratio (CNR) and MBF of the two groups were compared. Imaging with 80-kV/370-mAs instead of 100-kV/300-mAs was associated with 40% lower radiation dose (mean dose-length product, 359 ± 66 vs 628 ± 112 mGy·cm; P < 0.001) with no significant difference in CNR (34.5 ± 13.4 vs 33.5 ± 10.4; P = 0.81) or MBF in non-ischaemic myocardium (0.95 ± 0.20 vs 0.99 ± 0.25 ml/min/g; P = 0.66). Studies obtained using 80-kV/370-mAs were associated with 30.9% higher maximal enhancement (804 ± 204 vs 614 ± 115 HU; P < 0.005), and 31.2% greater noise (22.7 ± 3.5 vs 17.4 ± 2.6; P < 0.001). Dynamic CTP using 80-kV/370-mAs instead of 100-kV/300-mAs allowed 40% dose reduction without compromising image quality or MBF. Tube voltage of 80-kV should be considered for individuals with a normal BMI. • CT stress perfusion imaging (CTP) is increasingly used to assess myocardial function. • Dynamic CTP is feasible at 80-kV in patients with normal BMI. • An 80-kV/370-mAs protocol allows 40% dose reduction compared with 100-kV/300-mAs. • Contrast-to-noise ratio and myocardial blood flow of the two protocols were comparable. 19. Cone Beam CT vs.
Fan Beam CT: A Comparison of Image Quality and Dose Delivered Between Two Differing CT Imaging Modalities PubMed Central Weidlich, Georg A. 2016-01-01 A comparison of image quality and dose delivered between two differing computed tomography (CT) imaging modalities—fan beam and cone beam—was performed. A literature review of quantitative analyses for various image quality aspects such as uniformity, signal-to-noise ratio, artifact presence, spatial resolution, modulation transfer function (MTF), and low contrast resolution was generated. With these aspects quantified, cone beam computed tomography (CBCT) shows superior spatial resolution to fan beam, while fan beam shows a greater ability to produce clear and anatomically correct images with better soft tissue differentiation. The results indicate that fan beam CT produces superior images to those of on-board imaging (OBI) cone beam CT systems, while delivering a considerably lower dose to the patient. PMID:27752404 20. Effects of High Volume MOSFET Usage on Dosimetry in Pediatric CT, Pediatric Lens of the Eye Dose Reduction Using Siemens Care kV, & Designing Quality Assurance of a Cesium Calibration Source Smith, Aaron Kenneth Project 1: Effects of High Volume MOSFET Usage on Dosimetry in Pediatric CT: Purpose: The objective of this study was to determine if using large numbers of metal-oxide-semiconductor field-effect transistors (MOSFETs) affects the results of dosimetry studies done with pediatric phantoms due to the attenuation properties of the MOSFETs. The two primary focuses of the study were first to experimentally determine the degree to which high numbers of MOSFET detectors attenuate an X-ray beam of Computed Tomography (CT) quality and second, to experimentally verify the effect that the large number of MOSFETs has on dose in a pediatric phantom undergoing a routine CT examination.
Materials and Methods: A Precision X-Ray X-Rad 320 set to 120 kVp with an effective half value layer of 7.30 mm aluminum was used in concert with a tissue equivalent block phantom and several used MOSFET cables to determine the attenuation properties of the MOSFET cables by measuring the dose (via a 0.18 cc ion chamber) given to a point in the center of the phantom in a 0.5 min exposure with a variety of MOSFET arrangements. After the attenuating properties of the cables were known, a GE Discovery 750 CT scanner was employed using a routine chest CT protocol in concert with a 10-year-old Atom Dosimetry Phantom and MOSFET dosimeters in 5 different locations in and on the phantom (upper left lung (ULL), upper right lung (URL), lower left lung (LLL), lower right lung (LRL), and the center of the chest to represent skin dose). Twenty-eight used MOSFET cables were arranged and taped on the chest of the phantom to cover 30% of the circumference of the phantom (19.2 cm). Scans using tube current modulation and not using tube current modulation were taken at 30, 20, 10, and 0% circumference coverage and with 28 MOSFETs bundled and laid to the side of the phantom. The dose to the various MOSFET locations in and on the chest was calculated and the image quality was assessed in several of these situations by 1. Current role of hybrid CT/angiography system compared with C-arm cone beam CT for interventional oncology PubMed Central Arai, Y; Inaba, Y; Inoue, M; Nishiofuku, H; Anai, H; Hori, S; Sakaguchi, H; Kichikawa, K 2014-01-01 Hybrid CT/angiography (angiography) system and C-arm cone beam CT provide cross-sectional imaging as an adjunct to angiography. Current interventional oncological procedures can be conducted precisely using these two technologies. In this article, several cases using a hybrid CT/angiography system are shown first, and then the advantages and disadvantages of the hybrid CT/angiography and C-arm cone beam CT are discussed with literature reviews.
PMID:24968749 2. Current role of hybrid CT/angiography system compared with C-arm cone beam CT for interventional oncology. PubMed Tanaka, T; Arai, Y; Inaba, Y; Inoue, M; Nishiofuku, H; Anai, H; Hori, S; Sakaguchi, H; Kichikawa, K 2014-09-01 Hybrid CT/angiography (angiography) system and C-arm cone beam CT provide cross-sectional imaging as an adjunct to angiography. Current interventional oncological procedures can be conducted precisely using these two technologies. In this article, several cases using a hybrid CT/angiography system are shown first, and then the advantages and disadvantages of the hybrid CT/angiography and C-arm cone beam CT are discussed with literature reviews. 3. [Dosimetric calibration of CT pencil chamber in cobalt beams]. PubMed Li, Yi; Wang, Junliang; Wang, Yunlai 2014-01-01 To explore the dose-length product calibration method for a pencil ionization chamber in cobalt beams. The PTW TM30009 ionization chamber was placed into the central hole of a T40017 head phantom and irradiated for 60 s in a 20 cm × 20 cm cobalt beam. The charge was collected with a UNIDOS electrometer. Absorbed doses were measured with a TM30013 0.6 mL Farmer-type chamber under the same condition. The CT chamber calibration factor was expressed in dose-length product. Dose linearity and spatial response were also investigated. The calibration factor in dose-length product was derived from measured data. Dose linearity and spatial response were good in cobalt beams. It is feasible to calibrate the CT chamber in cobalt beams for patient dose evaluation in MVCT. 4. Beam hardening correction for sparse-view CT reconstruction Liu, Wenlei; Rong, Junyan; Gao, Peng; Liao, Qimei; Lu, HongBing 2015-03-01 Beam hardening, which is caused by spectrum polychromatism of the X-ray beam, may result in various artifacts in the reconstructed image and degrade image quality. The artifacts would be further aggravated for the sparse-view reconstruction due to insufficient sampling data.
Considering the advantages of the total-variation (TV) minimization in CT reconstruction with sparse-view data, in this paper, we propose a beam hardening correction method for sparse-view CT reconstruction based on Brabant's modeling. In this correction model for beam hardening, the attenuation coefficient of each voxel at the effective energy is modeled and estimated linearly, and can be applied in an iterative framework, such as simultaneous algebraic reconstruction technique (SART). By integrating the correction model into the forward projector of the algebraic reconstruction technique (ART), the TV minimization can recover images when only a limited number of projections are available. The proposed method does not need prior information about the beam spectrum. Preliminary validation using Monte Carlo simulations indicates that the proposed method can provide better reconstructed images from sparse-view projection data, with effective suppression of artifacts caused by beam hardening. With appropriate modeling of other degrading effects such as photon scattering, the proposed framework may provide a new way for low-dose CT imaging. 5. SU-E-J-14: A Comparison of a 2.5MV Imaging Beam to KV and 6MV Imaging Beams SciTech Connect Nitsch, P; Robertson, D; Balter, P 2015-06-15 Purpose: To compare image quality metrics and dose of TrueBeam V2.0’s 2.5MV imaging beam and kV and 6MV images. Methods: To evaluate the MV image quality, the Standard Imaging QC-3 and Varian Las Vegas (LV) phantoms were imaged using the ‘quality’ and ‘low dose’ modes and then processed using RIT113 V6.3. The LEEDS phantom was used to evaluate the kV image quality. The signal to noise ratio (SNR) was also evaluated in patient images using Matlab. In addition, dose per image was evaluated at a depth of 5cm using solid water for a 28.6 cm × 28.6 cm field size, which is representative of the largest jaw settings at an SID of 150cm. 
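The polynomial beam-hardening correction described in entry 4 linearises polychromatic log-projections toward their monoenergetic equivalents. A minimal sketch of that general idea (using an assumed two-bin water spectrum and illustrative attenuation coefficients, not the paper's model):

```python
import numpy as np

def simulate_poly_projection(thickness_cm, weights, mu_water):
    """Polychromatic log-projection -ln(I/I0) through water of given thicknesses."""
    I = np.sum(weights * np.exp(-np.outer(thickness_cm, mu_water)), axis=1)
    return -np.log(I)

# Toy two-bin "spectrum"; mu values in 1/cm are assumed, illustrative only.
weights = np.array([0.5, 0.5])   # normalised fluence per energy bin
mu = np.array([0.22, 0.18])      # water attenuation at the two bin energies
mu_eff = 0.18                    # target coefficient at the chosen effective energy

# Calibration: fit a cubic mapping polychromatic -> ideal monoenergetic projections.
t = np.linspace(0.0, 40.0, 50)   # water thicknesses in cm
p_poly = simulate_poly_projection(t, weights, mu)
p_mono = mu_eff * t
coef = np.polyfit(p_poly, p_mono, 3)

def correct(p):
    """Apply the fitted linearisation polynomial to raw log-projections."""
    return np.polyval(coef, p)
```

In an iterative framework such as SART, a correction of this kind is applied inside the forward/backprojection loop; here only the projection-domain linearisation step is shown.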
Results: The 2.5MV images had a lower dose than the 6 MV images and a contrast-to-noise ratio (CNR) about 1.4 times higher, when evaluated using the QC-3. When energy was held constant but dose varied, the different modes, ‘low dose’ and ‘quality’, showed less than an 8% difference in CNR. The ‘quality’ modes demonstrated better spatial resolution than the ‘low dose’; however, even with the ‘low dose’ all line pairs were distinct except for the 0.75 lp/mm on the 2.5MV. The LV phantom was used to measure low contrast detectability and showed similar results to the QC-3. Several patient images all confirmed that SNR was highest in kV images followed by 2.5MV and then 6MV. Qualitatively, for anatomical areas with large variability in thickness, such as lateral head and neck, 2.5MV images show more anatomy, such as shoulder position, than kV images. Conclusions: The kV images clearly provide the best image metrics per unit dose. The 2.5MV beam showed excellent contrast at a lower dose than 6MV and may be superior to kV for difficult-to-image areas that include large changes in anatomical thickness. P Balter: Varian, Sun Nuclear, Philips, CPRIT. 6. Development and validation of a measurement-based source model for kilovoltage cone-beam CT Monte Carlo dosimetry simulations PubMed Central McMillan, Kyle; McNitt-Gray, Michael; Ruan, Dan 2013-01-01 Purpose: The purpose of this study is to adapt an equivalent source model originally developed for conventional CT Monte Carlo dose quantification to the radiation oncology context and validate its application for evaluating concomitant dose incurred by a kilovoltage (kV) cone-beam CT (CBCT) system integrated into a linear accelerator.
Methods: In order to properly characterize beams from the integrated kV CBCT system, the authors have adapted a previously developed equivalent source model consisting of an equivalent spectrum module that takes into account intrinsic filtration and an equivalent filter module characterizing the added bowtie filtration. An equivalent spectrum was generated for an 80, 100, and 125 kVp beam with beam energy characterized by half-value layer measurements. An equivalent filter description was generated from bowtie profile measurements for both the full- and half-bowtie. Equivalent source models for each combination of equivalent spectrum and filter were incorporated into the Monte Carlo software package MCNPX. Monte Carlo simulations were then validated against in-phantom measurements for both the radiographic and CBCT mode of operation of the kV CBCT system. Radiographic and CBCT imaging dose was measured for a variety of protocols at various locations within a body (32 cm in diameter) and head (16 cm in diameter) CTDI phantom. The in-phantom radiographic and CBCT dose was simulated at all measurement locations and converted to absolute dose using normalization factors calculated from air scan measurements and corresponding simulations. The simulated results were compared with the physical measurements and their discrepancies were assessed quantitatively. Results: Strong agreement was observed between in-phantom simulations and measurements. For the radiographic protocols, simulations uniformly underestimated measurements by 0.54%–5.14% (mean difference = −3.07%, SD = 1.60%). For the CBCT protocols, simulations uniformly 7. Calibration free beam hardening correction for cardiac CT perfusion imaging Levi, Jacob; Fahmi, Rachid; Eck, Brendan L.; Fares, Anas; Wu, Hao; Vembar, Mani; Dhanantwari, Amar; Bezerra, Hiram G.; Wilson, David L. 
2016-03-01 Myocardial perfusion imaging using CT (MPI-CT) and coronary CTA have the potential to make CT an ideal noninvasive gate-keeper for invasive coronary angiography. However, beam hardening artifacts (BHA) prevent accurate blood flow calculation in MPI-CT. BH Correction (BHC) methods require either energy-sensitive CT, not widely available, or typically a calibration-based method. We developed a calibration-free, automatic BHC (ABHC) method suitable for MPI-CT. The algorithm works with any BHC method and iteratively determines model parameters using a proposed BHA-specific cost function. In this work, we use the polynomial BHC extended to three materials. The image is segmented into soft tissue, bone, and iodine images, based on mean HU and temporal enhancement. Forward projections of bone and iodine images are obtained, and in each iteration polynomial correction is applied. Corrections are then back projected and combined to obtain the current iteration's BHC image. This process is iterated until cost is minimized. We evaluate the algorithm on simulated and physical phantom images and on preclinical MPI-CT data. The scans were obtained on a prototype spectral detector CT (SDCT) scanner (Philips Healthcare). Mono-energetic reconstructed images were used as the reference. In the simulated phantom, BH streak artifacts were reduced from 12 ± 2 HU to 1 ± 1 HU and cupping was reduced by 81%. Similarly, in the physical phantom, BH streak artifacts were reduced from 48 ± 6 HU to 1 ± 5 HU and cupping was reduced by 86%. In preclinical MPI-CT images, BHA was reduced from 28 ± 6 HU to less than 4 ± 4 HU at peak enhancement. Results suggest that the algorithm can be used to reduce BHA in conventional CT and improve MPI-CT accuracy. 8. Dedicated Cone-Beam CT System for Extremity Imaging PubMed Central Al Muhit, Abdullah; Zbijewski, Wojciech; Thawait, Gaurav K.; Stayman, J. Webster; Packard, Nathan; Senn, Robert; Yang, Dong; Foos, David H.; Yorkston, John; Siewerdsen, Jeffrey H.
2014-01-01 Purpose To provide initial assessment of image quality and dose for a cone-beam computed tomographic (CT) scanner dedicated to extremity imaging. Materials and Methods A prototype cone-beam CT scanner has been developed for imaging the extremities, including the weight-bearing lower extremities. Initial technical assessment included evaluation of radiation dose measured as a function of kilovolt peak and tube output (in milliampere seconds), contrast resolution assessed in terms of the signal difference–to-noise ratio (SDNR), spatial resolution semiquantitatively assessed by using a line-pair module from a phantom, and qualitative evaluation of cadaver images for potential diagnostic value and image artifacts by an expert CT observer (musculoskeletal radiologist). Results The dose for a nominal scan protocol (80 kVp, 108 mAs) was 9 mGy (absolute dose measured at the center of a CT dose index phantom). SDNR was maximized with the 80-kVp scan technique, and contrast resolution was sufficient for visualization of muscle, fat, ligaments and/or tendons, cartilage joint space, and bone. Spatial resolution in the axial plane exceeded 15 line pairs per centimeter. Streaks associated with x-ray scatter (in thicker regions of the patient—eg, the knee), beam hardening (about cortical bone—eg, the femoral shaft), and cone-beam artifacts (at joint space surfaces oriented along the scanning plane—eg, the interphalangeal joints) presented a slight impediment to visualization. Cadaver images (elbow, hand, knee, and foot) demonstrated excellent visibility of bone detail and good soft-tissue visibility suitable to a broad spectrum of musculoskeletal indications. Conclusion A dedicated extremity cone-beam CT scanner capable of imaging upper and lower extremities (including weight-bearing examinations) provides sufficient image quality and favorable dose characteristics to warrant further evaluation for clinical use. 
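The signal-difference-to-noise ratio (SDNR) used in the extremity cone-beam CT assessment above can be computed from two regions of interest. A sketch under one common definition (definitions vary between studies; this choice is an assumption, not necessarily the paper's):

```python
import numpy as np

def sdnr(roi_signal, roi_background):
    """Signal-difference-to-noise ratio between two regions of interest:
    absolute mean difference divided by the background noise (std dev)."""
    return abs(np.mean(roi_signal) - np.mean(roi_background)) / np.std(roi_background)

# Synthetic ROIs: soft tissue at 50 HU and fat at -100 HU, both with 5 HU noise.
rng = np.random.default_rng(0)
soft_tissue = rng.normal(50.0, 5.0, size=10_000)
fat = rng.normal(-100.0, 5.0, size=10_000)
val = sdnr(soft_tissue, fat)  # expected near (50 - (-100)) / 5 = 30
```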
© RSNA, 2013 Online supplemental material is available for 9. Reduction of beam hardening artifacts in cone-beam CT imaging via SMART-RECON algorithm Li, Yinsheng; Garrett, John; Chen, Guang-Hong 2016-03-01 When an automatic exposure control is introduced in C-arm cone beam CT data acquisition, the spectral inconsistencies between acquired projection data are exacerbated. As a result, conventional water/bone correction schemes are not as effective as in conventional diagnostic x-ray CT acquisitions with a fixed tube potential. In this paper, a new method was proposed to reconstruct several images with different degrees of spectral consistency and thus different levels of beam hardening artifacts. The new method relies neither on prior knowledge of the x-ray beam spectrum nor on prior compositional information of the imaging object. Numerical simulations were used to validate the algorithm. 10. TU-EF-204-08: Dose Efficiency of Added Beam-Shaping Filter with Varied Attenuation Levels in Lung-Cancer Screening CT SciTech Connect Ma, C; Yu, L; Vrieze, T; Leng, S; Fletcher, J; McCollough, C 2015-06-15 11. Cardiac cone-beam CT volume reconstruction using ART SciTech Connect Nielsen, T.; Manzke, R.; Proksa, R.; Grass, M. 2005-04-01 Modern computed tomography systems allow volume imaging of the heart. Up to now, approximately two-dimensional (2D) and 3D algorithms based on filtered backprojection are used for the reconstruction. These algorithms become more sensitive to artifacts when the cone angle of the x-ray beam increases as it is the current trend of computed tomography (CT) technology. In this paper, we investigate the potential of iterative reconstruction based on the algebraic reconstruction technique (ART) for helical cardiac cone-beam CT. Iterative reconstruction has the advantages that it takes the cone angle into account exactly and that it can be combined with retrospective cardiac gating fairly easily. 
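The algebraic reconstruction technique underlying the cardiac study in entry 11 can be sketched in its generic (Kaczmarz) form. The toy system below is illustrative only, not the modified cardiac algorithm of the paper:

```python
import numpy as np

def art(A, b, iters=200, relax=1.0):
    """Algebraic reconstruction technique (Kaczmarz sweeps).

    Repeatedly projects the current estimate onto the hyperplane of each
    ray equation a_i . x = b_i. A is the (n_rays x n_voxels) system
    matrix and b the measured projections.
    """
    x = np.zeros(A.shape[1])
    row_norm2 = np.sum(A * A, axis=1)
    for _ in range(iters):
        for i in range(A.shape[0]):
            if row_norm2[i] > 0:
                x += relax * (b[i] - A[i] @ x) / row_norm2[i] * A[i]
    return x

# Tiny 2x2 "image" probed by 5 rays: two row sums, two column sums, one diagonal.
A = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 0, 1]], dtype=float)
truth = np.array([1.0, 2.0, 3.0, 4.0])
x = art(A, A @ truth)
```

For a consistent, determined system like this one the sweeps converge to the true image; real CT systems are huge and sparse, but the per-ray update is the same.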
We introduce a modified ART algorithm for cardiac CT reconstruction. We apply it to clinical cardiac data from a 16-slice CT scanner and compare the images to those obtained with a current analytical reconstruction method. In a second part, we investigate the potential of iterative reconstruction for a large area detector with 256 slices. For the clinical cases, iterative reconstruction produces excellent images of diagnostic quality. For the large area detector, iterative reconstruction produces images superior to analytical reconstruction in terms of cone-beam artifacts. 12. Quality control and patient dosimetry in dental cone beam CT. PubMed Vassileva, J; Stoyanov, D 2010-01-01 This paper presents the initial experience in performing quality control and patient dose measurements in a cone beam computed tomography (CT) scanner (ILUMA Ultra, IMTEC Imaging, USA) for oral and maxillofacial radiology. The X-ray tube and the generator were tested first, including the kVp accuracy and precision, and the half-value layer (HVL). The following tests specific for panoramic dental systems were also performed: tube output, beam size and beam alignment to the detector. The tests specific for CT included measurements of noise and CT numbers in water and in air, as well as the homogeneity of CT numbers. The most appropriate dose quantity was found to be the air kerma-area product (KAP) measured with a KAP-metre installed at the tube exit. KAP values were found to vary from 110 to 185 µGy·m² for available adult protocols and to be 54 µGy·m² for the paediatric protocol. The effective dose calculated with the software PCXMC (STUK, Finland) was 0.05 mSv for children and 0.09-0.16 mSv for adults. 13. Automated planning of breast radiotherapy using cone beam CT imaging SciTech Connect Amit, Guy; Purdie, Thomas G.
2015-02-15 Purpose: Develop and clinically validate a methodology for using cone beam computed tomography (CBCT) imaging in an automated treatment planning framework for breast IMRT. Methods: A technique for intensity correction of CBCT images was developed and evaluated. The technique is based on histogram matching of CBCT image sets, using information from “similar” planning CT image sets from a database of paired CBCT and CT image sets (n = 38). Automated treatment plans were generated for a testing subset (n = 15) on the planning CT and the corrected CBCT. The plans generated on the corrected CBCT were compared to the CT-based plans in terms of beam parameters, dosimetric indices, and dose distributions. Results: The corrected CBCT images showed considerable similarity to their corresponding planning CTs (average mutual information 1.0±0.1, average sum of absolute differences 185 ± 38). The automated CBCT-based plans were clinically acceptable, as well as equivalent to the CT-based plans with average gantry angle difference of 0.99°±1.1°, target volume overlap index (Dice) of 0.89±0.04 although with slightly higher maximum target doses (4482±90 vs 4560±84, P < 0.05). Gamma index analysis (3%, 3 mm) showed that the CBCT-based plans had the same dose distribution as plans calculated with the same beams on the registered planning CTs (average gamma index 0.12±0.04, gamma <1 in 99.4%±0.3%). Conclusions: The proposed method demonstrates the potential for a clinically feasible and efficient online adaptive breast IMRT planning method based on CBCT imaging, integrating automation. 14. Practical patient dosimetry for partial rotation cone beam CT PubMed Central Podnieks, E C; Negus, I S 2012-01-01 Objectives This work investigates the validity of estimating effective dose for cone beam CT (CBCT) exposures from the weighted CT dose index (CTDIW) and irradiated length. 
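The histogram-matching intensity correction used for the CBCT-based planning in entry 13 can be illustrated with classic CDF matching. A generic sketch with synthetic stand-in images, not the paper's database-driven implementation:

```python
import numpy as np

def histogram_match(source, reference):
    """Map source intensities so their histogram matches the reference.

    Classic CDF matching: each source value is mapped to the reference
    value at the same cumulative probability.
    """
    s_vals, s_counts = np.unique(source.ravel(), return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    matched_vals = np.interp(s_cdf, r_cdf, r_vals)   # reference quantiles
    return np.interp(source.ravel(), s_vals, matched_vals).reshape(source.shape)

# Stand-in "CBCT" and "planning CT" intensity distributions.
rng = np.random.default_rng(1)
cbct = rng.normal(0.0, 1.0, size=(64, 64))
ct = rng.normal(100.0, 20.0, size=(64, 64))
corrected = histogram_match(cbct, ct)
```

After matching, the corrected image's intensity distribution closely tracks the reference, which is the property the CBCT correction relies on.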
Methods Measurements were made within cylindrical poly(methyl methacrylate) (PMMA) phantoms measuring 14 cm and 28 cm in length and 32 cm in diameter for the 200° DynaCT acquisition on the Siemens Artis zee fluoroscopy unit (Siemens Medical Solutions, Erlangen, Germany). An interpolated average dose was calculated to account for the partial rotation. Organ and effective doses were estimated by modelling projections in the Monte Carlo software programme PCXMC (STUK, Helsinki, Finland). Results The CTDIW was found to closely approximate the interpolated average dose if the positions of the measured doses reflected the X-ray beam rotation. The average dose was found to increase by 8% when the phantom length was increased from 14 to 28 cm. Using the interpolated average dose and the irradiated length for effective dose calculations gave similar values to PCXMC when a double-length (28-cm) CT dose index phantom was irradiated. Simplifying the estimation of effective dose with PCXMC by modelling just 4 projections around the abdomen gave effective doses that were only 7% different to those given when 41 projections were modelled. Calculated doses to key organs within the beam varied by as much as 27%. Conclusion Estimating effective dose from the CTDIW and the irradiated length is sufficiently accurate for CBCT if the chamber positions are considered carefully. A conversion factor can be used only if a single CT dose index phantom is available. The estimation of organ doses requires a large number of modelled projections in PCXMC. PMID:21304011 15. Development of a 3D CT scanner using cone beam Endo, Masahiro; Kamagata, Nozomu; Sato, Kazumasa; Hattori, Yuichi; Kobayashi, Shigeo; Mizuno, Shinichi; Jimbo, Masao; Kusakabe, Masahiro 1995-05-01 In order to acquire 3D data of high contrast objects such as bone, lung and vessels enhanced by contrast media for use in 3D image processing, we have developed a 3D CT-scanner using cone beam x ray. 
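Entry 14 estimates effective dose from the weighted CT dose index and the irradiated length. The standard formulas (CTDIw as one third centre plus two thirds periphery; E ≈ k × DLP) can be sketched as follows, with illustrative numbers and an assumed chest-like conversion coefficient:

```python
def ctdi_w(d_center_mgy, d_periphery_mgy):
    """Weighted CT dose index: 1/3 centre + 2/3 periphery (standard definition)."""
    return d_center_mgy / 3.0 + 2.0 * d_periphery_mgy / 3.0

def effective_dose_msv(ctdi_mgy, scan_length_cm, k_msv_per_mgy_cm):
    """E ~ k * DLP; k is an anatomy-specific coefficient from the dose-survey
    literature (the value below is a typical chest-like placeholder)."""
    dlp = ctdi_mgy * scan_length_cm  # dose-length product, mGy.cm
    return k_msv_per_mgy_cm * dlp

# Illustrative inputs only (not values from the abstract):
w = ctdi_w(10.0, 13.0)                  # -> 12.0 mGy
e = effective_dose_msv(w, 15.0, 0.014)  # 15 cm scan, chest-like k -> 2.52 mSv
```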
The 3D CT-scanner consists of a gantry and a patient couch. The gantry consists of an x-ray tube designed for cone beam CT and a large area two-dimensional detector mounted on a single frame and rotated around an object in 12 seconds. The large area detector consists of a fluorescent plate and a charge coupled device video camera. The size of the detection area was 600 mm × 450 mm, capable of covering the total chest. While the x-ray tube was rotated around an object, pulsed x rays were emitted 30 times a second and 360 projected images were collected in a 12 second scan. A 256 × 256 × 256 matrix image (1.25 mm × 1.25 mm × 1.25 mm voxel) was reconstructed by a high-speed reconstruction engine. Reconstruction time was approximately 6 minutes. Cylindrical water phantoms, anesthetized rabbits with or without contrast media, and a Japanese macaque were scanned with the 3D CT-scanner. The results seem promising because they show high spatial resolution in three directions, though several points remained to be improved. Possible improvements are discussed. 16. Upright cone beam CT imaging using the onboard imager SciTech Connect Fave, Xenia; Martin, Rachael; Yang, Jinzhong; Balter, Peter; Court, Laurence; Carvalho, Luis; Pan, Tinsu 2014-06-15 Purpose: Many patients could benefit from being treated in an upright position. The objectives of this study were to determine whether cone beam computed tomography (CBCT) could be used to acquire upright images for treatment planning and to demonstrate whether reconstruction of upright images maintained accurate geometry and Hounsfield units (HUs). Methods: A TrueBeam linac was programmed in developer mode to take upright CBCT images. The gantry head was positioned at 0°, and the couch was rotated to 270°. The x-ray source and detector arms were extended to their lateral positions. The x-ray source and gantry remained stationary as fluoroscopic projections were taken and the couch was rotated from 270° to 90°.
The x-ray tube current was normalized to deposit the same dose (measured using a calibrated Farmer ion chamber) as that received during a clinical helical CT scan to the center of a cylindrical, polyethylene phantom. To extend the field of view, two couch rotation scans were taken with the detector offset 15 cm superiorly and then 15 cm inferiorly. The images from these two scans were stitched together before reconstruction. Upright reconstructions were compared to reconstructions from simulation CT scans of the same phantoms. Two methods were investigated for correcting the HUs, including direct calibration and mapping the values from a simulation CT. Results: Overall geometry, spatial linearity, and high contrast resolution were maintained in upright reconstructions. Some artifacts were created and HU accuracy was compromised; however, these limitations could be removed by mapping the HUs from a simulation CT to the upright reconstruction for treatment planning. Conclusions: The feasibility of using the TrueBeam linac to take upright CBCT images was demonstrated. This technique is straightforward to implement and could be of enormous benefit to patients with thoracic tumors or those who find a supine position difficult to endure. 17. Dose calculation using megavoltage cone-beam CT SciTech Connect Morin, Olivier. E-mail: [email protected]; Chen, Josephine; Aubin, Michele; Gillis, Amy; Aubry, Jean-Francois; Bose, Supratik; Chen, Hong; Descovich, Martina; Xia, Ping; Pouliot, Jean 2007-03-15 Purpose: To demonstrate the feasibility of performing dose calculation on megavoltage cone-beam CT (MVCBCT) of head-and-neck patients in order to track the dosimetric errors produced by anatomic changes. Methods and Materials: A simple geometric model was developed using a head-size water cylinder to correct an observed cupping artifact occurring with MVCBCT. The uniformity-corrected MVCBCT was calibrated for physical density.
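Calibrating a CT image for physical density, as done for the MVCBCT above, typically uses a piecewise-linear HU-to-density curve measured with a density phantom. A generic sketch with placeholder calibration points (illustrative values, not the paper's curve):

```python
import numpy as np

# Illustrative calibration points (HU, g/cm^3); placeholders, not measured data.
hu_points = np.array([-1000.0, -700.0, 0.0, 60.0, 1000.0, 3000.0])
rho_points = np.array([0.001, 0.30, 1.00, 1.06, 1.60, 2.80])

def hu_to_density(hu):
    """Piecewise-linear HU -> physical density lookup, clipped at the curve ends."""
    return np.interp(hu, hu_points, rho_points)
```

A treatment planning system applies a table like this voxel-by-voxel before dose calculation; the MVCBCT uniformity correction matters precisely because errors in HU propagate through this lookup into density and dose.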
Beam arrangements and weights from the initial treatment plans defined using the conventional CT were applied to the MVCBCT image, and the dose distribution was recalculated. The dosimetric inaccuracies caused by the cupping artifact were evaluated on the water phantom images. An ideal test patient with no observable anatomic changes and a patient imaged with both CT and MVCBCT before and after considerable weight loss were used to clinically validate MVCBCT for dose calculation and to determine the dosimetric impact of large anatomic changes. Results: The nonuniformity of a head-size water phantom (≈30%) causes a dosimetric error of less than 5%. The uniformity correction method developed greatly reduces the cupping artifact, resulting in dosimetric inaccuracies of less than 1%. For the clinical cases, the agreement between the dose distributions calculated using MVCBCT and CT was better than 3% and 3 mm where all tissue was encompassed within the MVCBCT. Dose-volume histograms from the dose calculations on CT and MVCBCT were in excellent agreement. Conclusion: MVCBCT can be used to estimate the dosimetric impact of changing anatomy on several structures in the head-and-neck region. 18. Fast 3D multiple fan-beam CT systems Kohlbrenner, Adrian; Haemmerle, Stefan; Laib, Andres; Koller, Bruno; Ruegsegger, Peter 1999-09-01 Two fast, CCD-based three-dimensional CT scanners for in vivo applications have been developed. One is designed for small laboratory animals and has a voxel size of 20 micrometer, while the other, having a voxel size of 80 micrometer, is used for human examinations. Both instruments make use of a novel multiple fan-beam technique: radiation from a line-focus X-ray tube is divided into a stack of fan-beams by a 28 micrometer pitch foil collimator.
The resulting wedge-shaped X-ray field is the key to the instrument's high scanning speed and allows the sample to be positioned close to the X-ray source, which makes it possible to build compact CT systems. In contrast to cone-beam scanners, the multiple fan-beam scanner relies on standard fan-beam algorithms, thereby eliminating inaccuracies in the reconstruction process. The projections from one single rotation are acquired within 2 min and are subsequently reconstructed into a 1024 × 1024 × 255 voxel array. Hence a single rotation about the sample delivers a 3D image containing a quarter of a billion voxels. Such volumetric images are 6.6 mm in height and can be stacked on top of each other. An area CCD sensor bonded to a fiber-optic light guide acts as a detector. Since no image intensifier, conventional optics or tapers are used throughout the system, the image is virtually distortion free. The scanner's high scanning speed and high resolution at moderately low radiation dose are the basis for reliable time serial measurements and analyses. 19. Beam hardening and partial beam hardening of the bowtie filter: Effects on dosimetric applications in CT Lopez-Rendon, X.; Zhang, G.; Bosmans, H.; Oyen, R.; Zanca, F. 2014-03-01 Purpose: To estimate the consequences on dosimetric applications when a CT bowtie filter is modeled by means of full beam hardening versus partial beam hardening. Method: A model of source and filtration for a CT scanner as developed by Turner et al. [1] was implemented. Specific exposures were measured with the stationary CT X-ray tube in order to assess the equivalent thickness of Al of the bowtie filter as a function of the fan angle. Using these thicknesses, the primary beam attenuation factors were calculated from the energy dependent photon mass attenuation coefficients and used to include beam hardening in the spectrum.
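The difference between full, energy-dependent beam hardening and a single per-fan-angle weight can be illustrated with a small numerical sketch. All values below (spectrum shape, aluminum attenuation curve, thickness) are toy numbers for illustration only, not data from the paper:

```python
import numpy as np

# Toy spectrum and aluminum attenuation curve (illustrative values only; a real
# implementation would use tabulated energy-dependent mass attenuation data).
E = np.linspace(20, 120, 101)                 # photon energy [keV]
spectrum = np.exp(-((E - 60.0) / 25.0) ** 2)  # unnormalized spectral weights
mu_al = 2.0 * (30.0 / E) ** 3 + 0.02          # toy mu(E) for Al [1/cm]
t = 1.5                                       # equivalent Al thickness at one fan angle [cm]

# Full beam hardening: each energy bin is attenuated by exp(-mu(E) * t),
# which shifts the spectrum toward higher energies.
full = spectrum * np.exp(-mu_al * t)

# Partial approach: one global, energy-independent weight per fan angle,
# so the total fluence matches but the spectral shape is unchanged.
w = full.sum() / spectrum.sum()
partial = spectrum * w

def mean_energy(s):
    return (E * s).sum() / s.sum()
```

With these toy numbers the fully hardened spectrum has a higher mean energy than the original, while the globally weighted spectrum keeps the original mean energy; that difference in spectral shape is what drives the dose discrepancies quantified in the abstract.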
This was compared to a potentially less computationally intensive approach, which accounts only partially for beam hardening, by giving the photon spectrum a global (energy-independent), fan-angle-specific weighting factor. Percentage differences between the two methods were quantified by calculating the dose in air after passing several water equivalent thicknesses representative of patients with different BMIs. Specifically, the maximum water equivalent thickness of the lateral and anterior-posterior dimension and of the corresponding (half) effective diameter were assessed. Results: The largest percentage differences were found for the thickest part of the bowtie filter and they increased with patient size. For a normal-size patient they ranged from 5.5% at half effective diameter to 16.1% for the lateral dimension; for the most obese patient they ranged from 7.7% to 19.3%, respectively. For a complete simulation of one rotation of the x-ray tube, the proposed method was 12% faster than the complete simulation of the bowtie filter. Conclusion: The need for simulating the beam hardening of the bowtie filter in Monte Carlo platforms for CT dosimetry will depend on the required accuracy. 20. Investigation of gated cone-beam CT to reduce respiratory motion blurring PubMed Central Kincaid, Russell E.; Yorke, Ellen D.; Goodman, Karyn A.; Rimner, Andreas; Wu, Abraham J.; Mageras, Gig S. 2013-01-01 Purpose: Methods of reducing respiratory motion blurring in cone-beam CT (CBCT) have been limited to lung where soft tissue contrast is large. Respiration-correlated cone-beam CT uses slow continuous gantry rotation but image quality is limited by uneven projection spacing. This study investigates the efficacy of a novel gated CBCT technique. Methods: In gated CBCT, the linac is programmed such that gantry rotation and kV image acquisition occur within a gate around end expiration and are triggered by an external respiratory monitor.
Standard CBCT and gated CBCT scans are performed in 22 patients (11 thoracic, 11 abdominal) and a respiration-correlated CT (RCCT) scan, acquired on a standard CT scanner, from the same day serves as a criterion standard. Image quality is compared by calculating contrast-to-noise ratios (CNR) for tumors in lung, gastroesophageal junction (GEJ) tissue, and pancreas tissue, relative to surrounding background tissue. Congruence between the object in the CBCT images and that in the RCCT is measured by calculating the optimized normalized cross-correlation (NCC) following CBCT-to-RCCT rigid registrations. Results: Gated CBCT results in reduced motion artifacts relative to standard CBCT, with better visualization of tumors in lung, and of abdominal organs including GEJ, pancreas, and organs at risk. CNR of lung tumors is larger in gated CBCT in 6 of 11 cases relative to standard CBCT. A paired two-tailed t-test of lung patient mean CNR shows no statistical significance (p = 0.133). In 4 of 5 cases where CNR is not increased, lung tumor motion observed in RCCT is small (range 1.3–5.2 mm). CNR is increased and becomes statistically significant for 6 out of 7 lung patients with > 5 mm tumor motion (p = 0.044). CNR is larger in gated CBCT in 5 of 7 GEJ cases and 3 of 4 pancreas cases (p = 0.082 and 0.192). Gated CBCT yields improvement with lower NCC relative to standard CBCT in 10 of 11, 7 of 7, and 3 of 4 patients for lung, GEJ, and pancreas 1. Segmentation-free empirical beam hardening correction for CT. PubMed Schüller, Sören; Sawall, Stefan; Stannigel, Kai; Hülsbusch, Markus; Ulrici, Johannes; Hell, Erich; Kachelrieß, Marc 2015-02-01 The polychromatic nature of the x-ray beams and their effects on the reconstructed image are often disregarded during standard image reconstruction. This leads to cupping and beam hardening artifacts inside the reconstructed volume. To correct for a general cupping, methods like water precorrection exist. 
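The water precorrection mentioned above can be sketched as follows: a polynomial is calibrated that maps the measured polychromatic log attenuation back to a value proportional to the water path length, removing the nonlinearity that produces cupping. The spectrum and attenuation numbers here are toy values, not from the paper:

```python
import numpy as np

# Toy polychromatic model: spectrum S(E) and water attenuation mu_w(E).
E = np.linspace(20, 120, 101)
S = np.exp(-((E - 60.0) / 25.0) ** 2)
mu_w = (30.0 / E) ** 3 + 0.15            # toy water mu(E) [1/cm]

def poly_log_att(L):
    """Measured -ln(I/I0) for water thickness L [cm] under the polychromatic beam."""
    I = (S * np.exp(-np.outer(np.atleast_1d(L), mu_w))).sum(axis=1)
    return -np.log(I / S.sum())

# Calibration: fit a polynomial mapping measured q onto mu_ref * L over known
# water thicknesses, so that precorrected data are linear in path length.
L_cal = np.linspace(0.0, 30.0, 31)
q_cal = poly_log_att(L_cal)
mu_ref = mu_w[int(np.argmin(np.abs(E - 60.0)))]   # reference mu at 60 keV
coeffs = np.polyfit(q_cal, mu_ref * L_cal, 3)

q_corr = np.polyval(coeffs, q_cal)       # precorrected line integrals
```

In a real scanner the calibration would be measured on water phantoms of known thickness and applied to every detector reading before reconstruction; as the abstract notes, this only corrects the dominant tissue class.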
They correct the hardening of the spectrum during the penetration of the measured object only for the major tissue class. In contrast, more complex artifacts like streaks between dense objects need other techniques of correction. Using only the information of a single energy scan, there are two types of correction. The first one is a physical approach. Thereby, artifacts can be reproduced and corrected within the original reconstruction by using assumptions in a polychromatic forward projector. These assumptions could include the spectrum used, the detector response, the physical attenuation and scatter properties of the intersected materials. A second method is an empirical approach, which does not rely on much prior knowledge. This so-called empirical beam hardening correction (EBHC) and the previously mentioned physical-based technique both rely on a segmentation of the present tissues inside the patient. The difficulty is that beam hardening itself, scatter, and other effects that diminish image quality also disturb the correct tissue classification and thereby reduce the accuracy of the two known classes of correction techniques. The herein proposed method works similarly to the empirical beam hardening correction but does not require a tissue segmentation and therefore shows improvements on image data that are highly degraded by noise and artifacts. Furthermore, the new algorithm is designed in a way that no additional calibration or parameter fitting is needed. To overcome the segmentation of tissues, the authors propose a histogram deformation of their primary reconstructed CT image. This step is essential for the proposed 2.
Segmentation-free empirical beam hardening correction for CT SciTech Connect Schüller, Sören; Sawall, Stefan; Stannigel, Kai; Hülsbusch, Markus; Ulrici, Johannes; Hell, Erich; Kachelrieß, Marc 2015-02-15 Purpose: The polychromatic nature of the x-ray beams and their effects on the reconstructed image are often disregarded during standard image reconstruction. This leads to cupping and beam hardening artifacts inside the reconstructed volume. To correct for a general cupping, methods like water precorrection exist. They correct the hardening of the spectrum during the penetration of the measured object only for the major tissue class. In contrast, more complex artifacts like streaks between dense objects need other techniques of correction. Using only the information of a single energy scan, there are two types of correction. The first one is a physical approach. Thereby, artifacts can be reproduced and corrected within the original reconstruction by using assumptions in a polychromatic forward projector. These assumptions could include the spectrum used, the detector response, the physical attenuation and scatter properties of the intersected materials. A second method is an empirical approach, which does not rely on much prior knowledge. This so-called empirical beam hardening correction (EBHC) and the previously mentioned physical-based technique both rely on a segmentation of the present tissues inside the patient. The difficulty is that beam hardening itself, scatter, and other effects that diminish image quality also disturb the correct tissue classification and thereby reduce the accuracy of the two known classes of correction techniques. The herein proposed method works similarly to the empirical beam hardening correction but does not require a tissue segmentation and therefore shows improvements on image data that are highly degraded by noise and artifacts.
Furthermore, the new algorithm is designed in a way that no additional calibration or parameter fitting is needed. Methods: To overcome the segmentation of tissues, the authors propose a histogram deformation of their primary reconstructed CT image. This step is essential for the 3. Evaluation of dose from kV cone-beam computed tomography during radiotherapy: a comparison of methodologies Buckley, J.; Wilkinson, D.; Malaroda, A.; Metcalfe, P. 2017-01-01 Three alternative methodologies to the Computed-Tomography Dose Index for the evaluation of Cone-Beam Computed Tomography dose are compared: the Cone-Beam Dose Index, the IAEA Human Health Report No. 5 recommended methodology, and the AAPM Task Group 111 recommended methodology. The protocols were evaluated for Pelvis and Thorax scan modes on Varian® On-Board Imager and Truebeam kV XI imaging systems. The weighted planar average dose was highest for the AAPM methodology across all scans, with the CBDI being the second highest overall. Decreases of 17.96% and 1.14% from the TG-111 protocol to the IAEA and CBDI protocols were observed for the Pelvis mode, and of 18.15% and 13.10% for the Thorax mode, on the XI system. For the OBI system, the variation was 16.46% and 7.14% for the Pelvis mode, and 15.93% to the CBDI protocol in the Thorax mode, respectively. 4. Use of MV and kV imager correlation for maintaining continuous real-time 3D internal marker tracking during beam interruptions. PubMed Wiersma, R D; Riaz, N; Dieterich, Sonja; Suh, Yelin; Xing, L 2009-01-07 The integration of onboard kV imaging together with an MV electronic portal imaging device (EPID) on linear accelerators (LINAC) can provide an easy-to-implement real-time 3D organ position monitoring solution for treatment delivery. Currently, real-time MV-kV tracking has only been demonstrated by simultaneous imaging by both MV and kV imaging devices.
However, modalities such as step-and-shoot IMRT (SS-IMRT), which inherently contain MV beam interruptions, can lead to loss of target information necessary for 3D localization. Additionally, continuous kV imaging throughout the treatment delivery can lead to high levels of imaging dose to the patient. This work demonstrates for the first time how full 3D target tracking can be maintained even in the presence of such beam interruption, or MV/kV beam interleave, by use of a relatively simple correlation model together with MV-kV tracking. A moving correlation model was constructed using both present and prior positions of the marker in the available MV or kV image to compute the position of the marker on the interrupted imager. A commercially available radiotherapy system, equipped with both MV and kV imaging devices, was used to deliver typical SS-IMRT lung treatment plans to a 4D phantom containing internally embedded metallic markers. To simulate actual lung tumor motion, previously recorded 4D lung patient motion data were used. Lung tumor motion data of five separate patients were input into the 4D phantom, and typical SS-IMRT lung plans were delivered to simulate actual clinical deliveries. Application of the correlation model to SS-IMRT lung treatment deliveries was found to be an effective solution for maintaining continuous 3D tracking during 'step' beam interruptions. For deliveries involving five or more gantry angles with 50 or more fields per plan, the positional errors were found to have ≤1 mm root mean squared error (RMSE) in all three spatial directions. In addition to increasing the robustness 5. Use of MV and kV imager correlation for maintaining continuous real-time 3D internal marker tracking during beam interruptions Wiersma, R. D.; Riaz, N.; Dieterich, Sonja; Suh, Yelin; Xing, L.
2009-01-01 The integration of onboard kV imaging together with an MV electronic portal imaging device (EPID) on linear accelerators (LINAC) can provide an easy-to-implement real-time 3D organ position monitoring solution for treatment delivery. Currently, real-time MV-kV tracking has only been demonstrated by simultaneous imaging by both MV and kV imaging devices. However, modalities such as step-and-shoot IMRT (SS-IMRT), which inherently contain MV beam interruptions, can lead to loss of target information necessary for 3D localization. Additionally, continuous kV imaging throughout the treatment delivery can lead to high levels of imaging dose to the patient. This work demonstrates for the first time how full 3D target tracking can be maintained even in the presence of such beam interruption, or MV/kV beam interleave, by use of a relatively simple correlation model together with MV-kV tracking. A moving correlation model was constructed using both present and prior positions of the marker in the available MV or kV image to compute the position of the marker on the interrupted imager. A commercially available radiotherapy system, equipped with both MV and kV imaging devices, was used to deliver typical SS-IMRT lung treatment plans to a 4D phantom containing internally embedded metallic markers. To simulate actual lung tumor motion, previously recorded 4D lung patient motion data were used. Lung tumor motion data of five separate patients were input into the 4D phantom, and typical SS-IMRT lung plans were delivered to simulate actual clinical deliveries. Application of the correlation model to SS-IMRT lung treatment deliveries was found to be an effective solution for maintaining continuous 3D tracking during 'step' beam interruptions. For deliveries involving five or more gantry angles with 50 or more fields per plan, the positional errors were found to have ≤1 mm root mean squared error (RMSE) in all three spatial directions.
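The correlation model described above can be sketched in a simplified form. The geometry here is deliberately idealized (two fixed orthogonal projections of a single marker, sinusoidal motion, made-up amplitudes and noise levels), whereas the actual system uses gantry-angle-dependent projection geometry; the sketch only illustrates predicting the blocked imager's marker position from the available one:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0.0, 30.0, 0.1)                  # 10 Hz imaging over 30 s
direction = np.array([0.3, 0.8, 0.5])          # fixed 3D motion direction (made up)
p = 8.0 * np.sin(2 * np.pi * t[:, None] / 4.0) * direction  # 3D trajectory [mm]

# Idealized projections: kV imager sees (x, z), MV imager sees (y, z), plus noise.
kv = p[:, [0, 2]] + rng.normal(0.0, 0.1, (t.size, 2))
mv = p[:, [1, 2]] + rng.normal(0.0, 0.1, (t.size, 2))

beam_on = t < 20.0                             # MV beam interrupted after 20 s

# Correlation model: least-squares fit mv ~ kv @ A + b on simultaneous samples.
X = np.hstack([kv[beam_on], np.ones((beam_on.sum(), 1))])
coef, *_ = np.linalg.lstsq(X, mv[beam_on], rcond=None)

# During the interruption, infer the MV marker position from kV images alone.
X_off = np.hstack([kv[~beam_on], np.ones(((~beam_on).sum(), 1))])
pred = X_off @ coef
rmse = np.sqrt(np.mean((pred - mv[~beam_on]) ** 2))
```

A windowed ("moving") fit over only the most recent simultaneous samples would let such a model track slow changes in the breathing pattern, in the spirit of the moving model the abstract describes.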
In addition to increasing the robustness of 6. Deformable planning CT to cone-beam CT image registration in head-and-neck cancer SciTech Connect Hou Jidong; Guerrero, Mariana; Chen, Wenjuan; D'Souza, Warren D. 2011-04-15 Purpose: The purpose of this work was to implement and validate a deformable CT to cone-beam computed tomography (CBCT) image registration method in head-and-neck cancer to eventually facilitate automatic target delineation on CBCT. Methods: Twelve head-and-neck cancer patients underwent a planning CT and weekly CBCT during the 5-7 week treatment period. The 12 planning CT images (moving images) of these patients were registered to their weekly CBCT images (fixed images) via the symmetric force Demons algorithm and using a multiresolution scheme. Histogram matching was used to compensate for the intensity difference between the two types of images. Using nine known anatomic points as registration targets, the accuracy of the registration was evaluated using the target registration error (TRE). In addition, region-of-interest (ROI) contours drawn on the planning CT were morphed to the CBCT images and the volume overlap index (VOI) between registered contours and manually delineated contours was evaluated. Results: The mean TRE value of the nine target points was less than 3.0 mm, the slice thickness of the planning CT. Of the 369 target points evaluated for registration accuracy, the average TRE value was 2.6±0.6 mm. The mean TRE for bony tissue targets was 2.4±0.2 mm, while the mean TRE for soft tissue targets was 2.8±0.2 mm. The average VOI between the registered and manually delineated ROI contours was 76.2±4.6%, which is consistent with that reported in previous studies. Conclusions: The authors have implemented and validated a deformable image registration method to register planning CT images to weekly CBCT images in head-and-neck cancer cases.
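The two evaluation measures used above can be stated compactly. The definitions below are common ones (Euclidean landmark distance for TRE; intersection-over-union for the volume overlap index), and the paper's exact VOI formula may differ; the coordinates and masks are hypothetical:

```python
import numpy as np

def tre(registered_pts, reference_pts):
    """Target registration error: Euclidean distance per landmark pair [mm]."""
    return np.linalg.norm(registered_pts - reference_pts, axis=1)

def volume_overlap(a, b):
    """Volume overlap of two boolean masks as intersection over union."""
    return (a & b).sum() / (a | b).sum()

# Hypothetical landmark coordinates [mm]: both points miss by 3 mm.
ref = np.array([[0.0, 0.0, 0.0], [10.0, 5.0, 2.0]])
reg = ref + np.array([[1.0, 2.0, 2.0], [0.0, 0.0, 3.0]])
errors = tre(reg, ref)

# Two 3x3 squares offset by one voxel: 4 shared voxels, 14 in the union.
a = np.zeros((4, 4), dtype=bool); a[:3, :3] = True
b = np.zeros((4, 4), dtype=bool); b[1:, 1:] = True
voi = volume_overlap(a, b)
```

Reporting the mean and standard deviation of `errors` over all landmarks gives figures in the same form as the 2.6±0.6 mm quoted in the abstract.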
The accuracy of the TRE values suggests that the method can serve as a promising tool for automatic target delineation on CBCT. 7. Image quality of flat-panel cone beam CT Rose, Georg; Wiegert, Jens; Schaefer, Dirk; Fiedler, Klaus; Conrads, Norbert; Timmer, Jan; Rasche, Volker; Noordhoek, Niels; Klotz, Erhard; Koppe, Reiner 2003-06-01 We present results on 3D image quality in terms of spatial resolution (MTF) and low contrast detectability, obtained on a flat dynamic X-ray detector (FD) based cone-beam CT (CB-CT) setup. Experiments have been performed on a high precision bench-top system with rotating object table, fixed X-ray tube and 176 × 176 mm² active detector area (Trixell Pixium 4800). Several objects, including CT performance-, MTF- and pelvis phantoms, have been scanned under various conditions, including a high dose setup in order to explore the 3D performance limits. Under these optimal conditions, the system is capable of resolving less than 1% (~10 HU) contrast in a water background. Within a pelvis phantom, even inserts of muscle and fat equivalent are clearly distinguishable. This also holds for fast acquisitions of up to 40 fps. Focusing on the spatial resolution, we obtain an almost isotropic three-dimensional resolution of up to 30 lp/cm at 10% modulation. 8. Deformable Image Registration of CT and Truncated Cone-beam CT for Adaptive Radiation Therapy* PubMed Central Zhen, Xin; Yan, Hao; Zhou, Linghong; Jia, Xun; Jiang, Steve B. 2013-01-01 Truncation of a cone-beam computed tomography (CBCT) image, mainly caused by the limited field of view (FOV) of CBCT imaging, poses challenges to the problem of deformable image registration (DIR) between CT and CBCT images in adaptive radiation therapy (ART). The missing information outside the CBCT FOV usually causes incorrect deformations when a conventional DIR algorithm is utilized, which may introduce significant errors in subsequent operations such as dose calculation.
In this paper, based on the observation that the missing information in the CBCT image domain does exist in the projection image domain, we propose to solve this problem by developing a hybrid deformation/reconstruction algorithm. As opposed to deforming the CT image to match the truncated CBCT image, the CT image is deformed such that its projections match all the corresponding projection images for the CBCT image. An iterative forward-backward projection algorithm is developed. Six head-and-neck cancer patient cases are used to evaluate our algorithm, five with simulated truncation and one with real truncation. It is found that our method can accurately register the CT image to the truncated CBCT image and is robust against image truncation when the portion of the truncated image is less than 40% of the total image. PMID:24169817 9. Deformable image registration of CT and truncated cone-beam CT for adaptive radiation therapy Zhen, Xin; Yan, Hao; Zhou, Linghong; Jia, Xun; Jiang, Steve B. 2013-11-01 Truncation of a cone-beam computed tomography (CBCT) image, mainly caused by the limited field of view (FOV) of CBCT imaging, poses challenges to the problem of deformable image registration (DIR) between computed tomography (CT) and CBCT images in adaptive radiation therapy (ART). The missing information outside the CBCT FOV usually causes incorrect deformations when a conventional DIR algorithm is utilized, which may introduce significant errors in subsequent operations such as dose calculation. In this paper, based on the observation that the missing information in the CBCT image domain does exist in the projection image domain, we propose to solve this problem by developing a hybrid deformation/reconstruction algorithm. As opposed to deforming the CT image to match the truncated CBCT image, the CT image is deformed such that its projections match all the corresponding projection images for the CBCT image. 
An iterative forward-backward projection algorithm is developed. Six head-and-neck cancer patient cases are used to evaluate our algorithm, five with simulated truncation and one with real truncation. It is found that our method can accurately register the CT image to the truncated CBCT image and is robust against image truncation when the portion of the truncated image is less than 40% of the total image. Part of this work was presented at the 54th AAPM Annual Meeting (Charlotte, NC, USA, 29 July-2 August 2012). 10. CT to Cone-beam CT Deformable Registration With Simultaneous Intensity Correction PubMed Central Zhen, Xin; Gu, Xuejun; Yan, Hao; Zhou, Linghong; Jia, Xun; Jiang, Steve B. 2012-01-01 Computed tomography (CT) to cone-beam computed tomography (CBCT) deformable image registration (DIR) is a crucial step in adaptive radiation therapy. Current intensity-based registration algorithms, such as demons, may fail in the context of CT-CBCT DIR because of inconsistent intensities between the two modalities. In this paper, we propose a variant of demons, called Deformation with Intensity Simultaneously Corrected (DISC), to deal with CT-CBCT DIR. DISC distinguishes itself from the original demons algorithm by performing an adaptive intensity correction step on the CBCT image at every iteration step of the demons registration. Specifically, the intensity correction of a voxel in CBCT is achieved by matching the first and the second moments of the voxel intensities inside a patch around the voxel with those on the CT image. It is expected that such a strategy can remove artifacts in the CBCT image and ensure intensity consistency between the two modalities. DISC is implemented on computer graphics processing units (GPUs) in the compute unified device architecture (CUDA) programming environment. The performance of DISC is evaluated on a simulated patient case and data from six clinical head-and-neck cancer patients.
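The per-voxel moment matching at the core of DISC can be sketched as follows. This is a brute-force 2D illustration (the actual method runs on GPUs over 3D volumes inside the demons loop), and the patch radius and epsilon are arbitrary choices:

```python
import numpy as np

def match_patch_moments(cbct, ct, radius=2, eps=1e-6):
    """Correct each CBCT voxel by matching the mean and standard deviation
    (first and second moments) of the patch around it to the CT patch."""
    out = np.empty(cbct.shape, dtype=float)
    rows, cols = cbct.shape
    for i in range(rows):
        for j in range(cols):
            sl = (slice(max(i - radius, 0), i + radius + 1),
                  slice(max(j - radius, 0), j + radius + 1))
            pc, pr = cbct[sl], ct[sl]
            out[i, j] = (cbct[i, j] - pc.mean()) / (pc.std() + eps) * pr.std() + pr.mean()
    return out

# Toy check with made-up intensities: a locally affine distortion is removed.
ct = np.random.default_rng(1).normal(0.0, 50.0, (12, 12))
corrected = match_patch_moments(2.0 * ct + 100.0, ct)
```

If the CBCT differs from the CT only by a locally affine intensity distortion, as in the toy check, the correction recovers the CT values almost exactly; real artifacts are only approximately affine over a patch, which is one reason the correction is re-estimated at every registration iteration.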
It is found that DISC is robust against the CBCT artifacts and intensity inconsistency and significantly improves the registration accuracy when compared with the original demons. PMID:23032638 11. Demons deformable registration of CT and cone-beam CT using an iterative intensity matching approach SciTech Connect Nithiananthan, Sajendra; Schafer, Sebastian; Uneri, Ali; and others 2011-04-15 Purpose: A method of intensity-based deformable registration of CT and cone-beam CT (CBCT) images is described, in which intensity correction occurs simultaneously within the iterative registration process. The method preserves the speed and simplicity of the popular Demons algorithm while providing robustness and accuracy in the presence of large mismatch between CT and CBCT voxel values ("intensity"). Methods: A variant of the Demons algorithm was developed in which an estimate of the relationship between CT and CBCT intensity values for specific materials in the image is computed at each iteration based on the set of currently overlapping voxels. This tissue-specific intensity correction is then used to estimate the registration output for that iteration and the process is repeated. The robustness of the method was tested in CBCT images of a cadaveric head exhibiting a broad range of simulated intensity variations associated with x-ray scatter, object truncation, and/or errors in the reconstruction algorithm. The accuracy of CT-CBCT registration was also measured in six real cases, exhibiting deformations ranging from simple to complex during surgery or radiotherapy guided by a CBCT-capable C-arm or linear accelerator, respectively. Results: The iterative intensity matching approach was robust against all levels of intensity variation examined, including spatially varying errors in voxel value of a factor of 2 or more, as can be encountered in cases of high x-ray scatter.
Registration accuracy without intensity matching degraded severely with increasing magnitude of intensity error and introduced image distortion. A single histogram match performed prior to registration alleviated some of these effects but was also prone to image distortion and was quantifiably less robust and accurate than the iterative approach. Within the six case registration accuracy study, iterative intensity matching Demons reduced mean TRE to (2.5±2.8) mm compared to (3.5±3.0) mm 12. Demons deformable registration of CT and cone-beam CT using an iterative intensity matching approach PubMed Central Nithiananthan, Sajendra; Schafer, Sebastian; Uneri, Ali; Mirota, Daniel J.; Stayman, J. Webster; Zbijewski, Wojciech; Brock, Kristy K.; Daly, Michael J.; Chan, Harley; Irish, Jonathan C.; Siewerdsen, Jeffrey H. 2011-01-01 Purpose: A method of intensity-based deformable registration of CT and cone-beam CT (CBCT) images is described, in which intensity correction occurs simultaneously within the iterative registration process. The method preserves the speed and simplicity of the popular Demons algorithm while providing robustness and accuracy in the presence of large mismatch between CT and CBCT voxel values ("intensity"). Methods: A variant of the Demons algorithm was developed in which an estimate of the relationship between CT and CBCT intensity values for specific materials in the image is computed at each iteration based on the set of currently overlapping voxels. This tissue-specific intensity correction is then used to estimate the registration output for that iteration and the process is repeated. The robustness of the method was tested in CBCT images of a cadaveric head exhibiting a broad range of simulated intensity variations associated with x-ray scatter, object truncation, and/or errors in the reconstruction algorithm.
The accuracy of CT-CBCT registration was also measured in six real cases, exhibiting deformations ranging from simple to complex during surgery or radiotherapy guided by a CBCT-capable C-arm or linear accelerator, respectively. Results: The iterative intensity matching approach was robust against all levels of intensity variation examined, including spatially varying errors in voxel value of a factor of 2 or more, as can be encountered in cases of high x-ray scatter. Registration accuracy without intensity matching degraded severely with increasing magnitude of intensity error and introduced image distortion. A single histogram match performed prior to registration alleviated some of these effects but was also prone to image distortion and was quantifiably less robust and accurate than the iterative approach. Within the six case registration accuracy study, iterative intensity matching Demons reduced mean TRE to (2.5±2.8) mm compared to (3.5±3.0) mm 13. Demons deformable registration of CT and cone-beam CT using an iterative intensity matching approach. PubMed Nithiananthan, Sajendra; Schafer, Sebastian; Uneri, Ali; Mirota, Daniel J; Stayman, J Webster; Zbijewski, Wojciech; Brock, Kristy K; Daly, Michael J; Chan, Harley; Irish, Jonathan C; Siewerdsen, Jeffrey H 2011-04-01 A method of intensity-based deformable registration of CT and cone-beam CT (CBCT) images is described, in which intensity correction occurs simultaneously within the iterative registration process. The method preserves the speed and simplicity of the popular Demons algorithm while providing robustness and accuracy in the presence of large mismatch between CT and CBCT voxel values ("intensity"). A variant of the Demons algorithm was developed in which an estimate of the relationship between CT and CBCT intensity values for specific materials in the image is computed at each iteration based on the set of currently overlapping voxels. 
This tissue-specific intensity correction is then used to estimate the registration output for that iteration and the process is repeated. The robustness of the method was tested in CBCT images of a cadaveric head exhibiting a broad range of simulated intensity variations associated with x-ray scatter, object truncation, and/or errors in the reconstruction algorithm. The accuracy of CT-CBCT registration was also measured in six real cases, exhibiting deformations ranging from simple to complex during surgery or radiotherapy guided by a CBCT-capable C-arm or linear accelerator, respectively. The iterative intensity matching approach was robust against all levels of intensity variation examined, including spatially varying errors in voxel value of a factor of 2 or more, as can be encountered in cases of high x-ray scatter. Registration accuracy without intensity matching degraded severely with increasing magnitude of intensity error and introduced image distortion. A single histogram match performed prior to registration alleviated some of these effects but was also prone to image distortion and was quantifiably less robust and accurate than the iterative approach. Within the six case registration accuracy study, iterative intensity matching Demons reduced mean TRE to (2.5 ± 2.8) mm compared to (3.5 ± 3.0) mm with rigid registration. A 14. Appropriate patient selection at abdominal dual-energy CT using 80 kV: relationship between patient size, image noise, and image quality. PubMed Guimarães, Luís S; Fletcher, Joel G; Harmsen, William S; Yu, Lifeng; Siddiki, Hassan; Melton, Zachary; Huprich, James E; Hough, David; Hartman, Robert; McCollough, Cynthia H 2010-12-01 To determine the computed tomographic (CT) detector configuration, patient size, and image noise limitations that will result in acceptable image quality of 80-kV images obtained at abdominal dual-energy CT.
The Institutional Review Board approved this HIPAA-compliant retrospective study from archival material from patients consenting to the use of medical records for research purposes. A retrospective review of contrast material-enhanced abdominal dual-energy CT scans in 116 consecutive patients was performed. Three gastrointestinal radiologists noted detector configuration and graded image quality and artifacts at specified levels (midliver, midpancreas, midkidneys, and terminal ileum) by using two five-point scales. In addition, an organ-specific enhancement-to-noise ratio and background noise were measured in each patient. Patient size was measured by using the longest linear dimension at the level of interest, weight, lean body weight, body mass index, and body surface area. Detector configuration, patient sizes, and image noise levels that resulted in unacceptable image quality and artifact rankings (score of 4 or higher) were determined by using multivariate logistic regression. A 14 × 1.2-mm detector configuration resulted in fewer images with unacceptable quality than did the 64 × 0.6-mm configuration at all anatomic levels (P = .004, .01, and .02 for liver, pancreas, and kidneys, respectively). Image acceptability for the kidneys and ileum was significantly greater than that for the liver for all readers and detector configurations (P < .001). For the 14 × 1.2-mm detector configuration, patient longest linear dimensions yielding acceptable image quality across readers ranged from 34.9 to 35.8 cm at the four anatomic levels. An 80-kV abdominal CT can be performed with appropriate diagnostic quality in a substantial percentage of the population, but it is not recommended beyond the described patient size for each anatomic level. The 14 × 1.2-mm detector
The x-ray sources are mounted at the vertices of a regular tetrahedron. On the circumsphere of the tetrahedron, four detection panels are mounted opposite to each vertex. To avoid x-ray interference, the largest half angle of each x-ray cone beam is 27°22', while the radius of the largest ball fully covered by all the cone beams is 0.460, when the radius of the circumsphere is 1. Several scanning schemes are proposed which consist of two rotations about orthogonal axes, such that each quarter turn provides sufficient data for theoretically exact and stable reconstruction. This design can be used in biomedical or industrial settings, such as when a sequence of reconstructions of an object is desired. Similar scanning schemes based on other regular or irregular polyhedra and various rotation speeds are also discussed. 16. Deformable registration of CT and cone-beam CT with local intensity matching. PubMed Park, Seyoun; Plishker, William; Quon, Harry; Wong, John; Shekhar, Raj; Lee, Junghoon 2017-02-07 Cone-beam CT (CBCT) is a widely used intra-operative imaging modality in image-guided radiotherapy and surgery. A short scan followed by a filtered-backprojection is typically used for CBCT reconstruction. While data on the mid-plane (plane of source-detector rotation) is complete, off-mid-planes undergo different information deficiency and the computed reconstructions are approximate. This causes different reconstruction artifacts at off-mid-planes depending on slice locations, and therefore impedes accurate registration between CT and CBCT. In this paper, we propose a method to accurately register CT and CBCT by iteratively matching local CT and CBCT intensities. We correct CBCT intensities by matching local intensity histograms slice by slice in conjunction with intensity-based deformable registration. The correction-registration steps are repeated in an alternating way until the result image converges. 
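The slice-by-slice correction-registration loop described above can be sketched as follows. This is only a minimal illustration under assumptions, not the authors' implementation: quantile-based histogram matching stands in for the local intensity matching, and the `register` callable is a placeholder for whatever deformable registration (B-spline, demons, optical flow) is plugged in.

```python
import numpy as np

def match_histogram(slice_cbct, slice_ct):
    """Map CBCT slice intensities onto the CT slice's intensity
    distribution via quantile (empirical CDF) matching."""
    src = np.asarray(slice_cbct, float).ravel()
    # rank of each CBCT voxel within its own slice (empirical CDF value)
    order = np.argsort(src)
    ranks = np.empty(src.size, dtype=float)
    ranks[order] = np.linspace(0.0, 1.0, src.size)
    # invert the CT slice's CDF at those quantiles
    ct_sorted = np.sort(np.asarray(slice_ct, float).ravel())
    matched = np.interp(ranks, np.linspace(0.0, 1.0, ct_sorted.size), ct_sorted)
    return matched.reshape(np.shape(slice_cbct))

def iterative_intensity_match(cbct, ct, register, n_iter=5):
    """Alternate slice-wise intensity correction of the CBCT volume with a
    deformable registration step (fixed iteration count stands in for a
    convergence test). `register(moving, target)` is user-supplied."""
    moving = np.array(ct, float, copy=True)
    for _ in range(n_iter):
        corrected = np.stack([match_histogram(s, m)
                              for s, m in zip(cbct, moving)])
        moving = register(moving, corrected)
    return moving, corrected
```

With a no-op `register`, the loop reduces to repeated slice-wise histogram correction, which is a convenient way to test the matching step in isolation.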
We integrate the intensity matching into three different deformable registration methods, B-spline, demons, and optical flow, which are widely used for CT-CBCT registration. All three registration methods were implemented on a graphics processing unit for efficient parallel computation. We tested the proposed methods on twenty-five head and neck cancer cases and compared the performance with state-of-the-art registration methods. Normalized cross correlation (NCC), structural similarity index (SSIM), and target registration error (TRE) were computed to evaluate the registration performance. Our method produced overall NCC of 0.96, SSIM of 0.94, and TRE of 2.26 ± 2.27 mm, outperforming existing methods by 9%, 12%, and 27%, respectively. Experimental results also show that our method performs consistently, is more accurate than existing algorithms, and is also computationally efficient. 18. Multiscale registration of planning CT and daily cone beam CT images for adaptive radiation therapy SciTech Connect Paquin, Dana; Levy, Doron; Xing, Lei 2009-01-15 Adaptive radiation therapy (ART) is the incorporation of daily images in the radiotherapy treatment process so that the treatment plan can be evaluated and modified to maximize the amount of radiation dose to the tumor while minimizing the amount of radiation delivered to healthy tissue. Registration of planning images with daily images is thus an important component of ART. In this article, the authors report their research on multiscale registration of planning computed tomography (CT) images with daily cone beam CT (CBCT) images. The multiscale algorithm is based on the hierarchical multiscale image decomposition of E. Tadmor, S. Nezzar, and L. Vese [Multiscale Model. Simul. 2(4), pp. 554-579 (2004)].
Registration is achieved by decomposing the images to be registered into a series of scales using the (BV, L²) decomposition and initially registering the coarsest scales of the image using a landmark-based registration algorithm. The resulting transformation is then used as a starting point to deformably register the next coarse scales with one another. This procedure is iterated at each stage using the transformation computed by the previous scale registration as the starting point for the current registration. The authors present the results of studies of rectum, head-neck, and prostate CT-CBCT registration, and validate their registration method quantitatively using synthetic results in which the exact transformations are known, and qualitatively using clinical deformations in which the exact results are not known. 19. Cone beam CT: a current overview of devices PubMed Central Nemtoi, A; Czink, C; Haba, D; Gahleitner, A 2013-01-01 The purpose of this study was to review and compare the properties of all the available cone beam CT (CBCT) devices offered on the market, while focusing especially on Europe. In this study, we included all the different commonly used CBCT devices currently available on the European market. Information about the properties of each device was obtained from the manufacturers’ official available data, which was later confirmed by their representatives in cases where it was necessary. The main features of a total of 47 CBCT devices that are currently marketed by 20 companies were presented, compared and discussed in this study. All these CBCT devices differ in specific properties according to the companies that produce them. The summarized technical data from a large number of CBCT devices currently on the market offer a wide range of imaging possibilities in the oral and maxillofacial region. PMID:23818529 20. Intracranial physiological calcifications evaluated with cone beam CT.
PubMed Sedghizadeh, P P; Nguyen, M; Enciso, R 2012-12-01 The purpose of this study was to evaluate cone beam CT (CBCT) scans for the presence of physiological and pathological intracranial calcifications. CBCT scans from male and female patients that met our ascertainment criteria were evaluated retrospectively (n=500) for the presence of either physiological or pathological intracranial calcifications. Out of the 500 patients evaluated, 176 had evidence of intracranial physiological calcification (35.2% prevalence), and none had evidence of pathological calcification. There was a 3:2 male-to-female ratio and no ethnic predilection; the ages of affected patients ranged from 13 years to 82 years with a mean age of 52 years. The majority of calcifications appeared in the pineal/habenular region (80%), with some also appearing in the choroid plexus region bilaterally (12%), and a smaller subset appearing in the petroclinoid ligament region bilaterally (8%). Intracranial physiological calcifications can be a common finding on CBCT scans, whereas pathological intracranial calcifications are rare. 2. Computer aided breast density evaluation in cone beam breast CT Zhang, Xiaohua; Ning, Ruola 2011-03-01 Cone Beam Breast CT is a three-dimensional breast imaging modality with high contrast resolution and no tissue overlap. With these advantages, it is possible to measure volumetric breast density accurately and quantitatively with CBBCT 3D images. Three major breast components need to be segmented: skin, fat and glandular tissue. In this research, a modified morphological processing is applied to the CBBCT images to detect and remove the skin of the breast. After the skin is removed, a 2-step fuzzy clustering scheme is applied to the CBBCT image volume to adaptively cluster the image voxels into fat and glandular tissue areas based on the intensity of each voxel. Finally, the CBBCT breast volume images are divided into three categories: skin, fat and glands. Clinical data is used and the quantitative CBBCT breast density evaluation results are compared with the mammogram-based BI-RADS breast density categories. 3. Dual resolution cone beam breast CT: A feasibility study PubMed Central Chen, Lingyun; Shen, Youtao; Lai, Chao-Jen; Han, Tao; Zhong, Yuncheng; Ge, Shuaiping; Liu, Xinming; Wang, Tianpeng; Yang, Wei T.; Whitman, Gary J.; Shaw, Chris C. 2009-01-01 Purpose: In this study, the authors investigated the feasibility of a dual resolution volume-of-interest (VOI) cone beam breast CT technique and compared two implementation approaches in terms of dose saving and scatter reduction. Methods: With this technique, a lead VOI mask with an opening is inserted between the x-ray source and the breast to deliver x-ray exposure to the VOI while blocking x rays outside the VOI. A CCD detector is used to collect the high resolution projection data of the VOI.
Low resolution cone beam CT (CBCT) images of the entire breast, acquired with a flat panel (FP) detector, were used to calculate the projection data outside the VOI with the ray-tracing reprojection method. The Feldkamp–Davis–Kress filtered backprojection algorithm was used to reconstruct the dual resolution 3D images. Breast phantoms with 180 μm and smaller microcalcifications (MCs) were imaged with both FP and FP-CCD dual resolution CBCT systems, respectively. Two approaches of implementing the dual resolution technique, breast-centered approach and VOI-centered approach, were investigated and evaluated for dose saving and scatter reduction with Monte Carlo simulation using a GEANT4 package. Results: The results showed that the breast-centered approach saved more breast absorbed dose than did the VOI-centered approach with similar scatter reduction. The MCs in the fatty breast phantom, which were invisible with the FP CBCT scan, became visible with the FP-CCD dual resolution CBCT scan. Conclusions: These results indicate potential improvement of the image quality inside the VOI with reduced breast dose both inside and outside the VOI. PMID:19810473 5. Iodine contrast cone beam CT imaging of breast cancer Partain, Larry; Prionas, Stavros; Seppi, Edward; Virshup, Gary; Roos, Gerhard; Sutherland, Robert; Boone, John 2007-03-01 An iodine contrast agent, in conjunction with an X-ray cone beam CT imaging system, was used to clearly image three biopsy-verified cancer lesions in two patients. The lesions were approximately in the 10 mm to 6 mm diameter range. Additional regions were also enhanced with approximate dimensions down to 1 mm or less in diameter. A flat panel detector, with 194 μm pixels in 2 x 2 binning mode, was used to obtain 500 projection images at 30 fps with an 80 kVp X-ray system operating at 112 mAs, for an 8-9 mGy dose, equivalent to two-view mammography for these women.
The patients were positioned prone, while the gantry rotated in the horizontal plane around the uncompressed, pendant breasts. This gantry rotated 360 degrees during the patient's 16.6 sec breath hold. A volume of 100 cc of 320 mg/ml iodine contrast was power injected at 4 cc/sec, via catheter into the arm vein of the patient. The resulting 512 x 512 x 300 cone beam CT data set of Feldkamp-reconstructed ~(0.3 mm)³ voxels was analyzed. An interval of voxel contrast values, characteristic of the regions with iodine contrast enhancement, was used with surface rendering to clearly identify up to a total of 13 highlighted volumes. This included the three largest lesions, which were previously biopsied and confirmed to be malignant. The other ten highlighted regions, of smaller diameters, are likely areas of increased contrast trapping unrelated to cancer angiogenesis. However, the technique itself is capable of resolving lesions that small. 6. Effective dose from cone beam CT examinations in dentistry. PubMed Roberts, J A; Drage, N A; Davies, J; Thomas, D W 2009-01-01 Cone beam CT (CBCT) is becoming an increasingly utilized imaging modality for dental examinations in the UK. Previous studies have presented little information on patient dose for the range of fields of view (FOVs) that can be utilized. The purpose of the study was therefore to calculate the effective dose delivered to the patient during a selection of CBCT examinations performed in dentistry. In particular, the i-CAT CBCT scanner was investigated for several imaging protocols commonly used in clinical practice. A Rando phantom containing thermoluminescent dosemeters was scanned. Using both the 1990 and recently approved 2007 International Commission on Radiological Protection recommended tissue weighting factors, effective doses were calculated.
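The effective dose calculation described above is a weighted sum of organ equivalent doses, E = Σ_T w_T H_T. A minimal sketch using the ICRP 103 (2007) tissue weighting factors follows; the organ doses in the example are made-up illustrative numbers, not values from the study.

```python
# ICRP 103 (2007) tissue weighting factors; by definition they sum to 1.0.
W_2007 = {
    "red_bone_marrow": 0.12, "colon": 0.12, "lung": 0.12, "stomach": 0.12,
    "breast": 0.12, "remainder": 0.12, "gonads": 0.08,
    "bladder": 0.04, "oesophagus": 0.04, "liver": 0.04, "thyroid": 0.04,
    "bone_surface": 0.01, "brain": 0.01, "salivary_glands": 0.01, "skin": 0.01,
}

def effective_dose(organ_dose_usv, weights):
    """E = sum over tissues of w_T * H_T; organs without a measured
    equivalent dose contribute zero."""
    return sum(w * organ_dose_usv.get(t, 0.0) for t, w in weights.items())

# Illustrative (hypothetical) organ equivalent doses in microsievert:
h = {"thyroid": 150.0, "salivary_glands": 900.0, "brain": 400.0,
     "red_bone_marrow": 20.0, "skin": 60.0}
E = effective_dose(h, W_2007)
```

The large shift between E(1990) and E(2007) reported in the abstract comes mainly from ICRP 103 adding salivary glands and raising the brain and remainder weights, which matter for dental fields of view.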
The doses (E(1990), E(2007)) were: full FOV head (92.8 microSv, 206.2 microSv); 13 cm scan of the jaws (39.5 microSv, 133.9 microSv); 6 cm high-resolution mandible (47.2 microSv, 188.5 microSv); 6 cm high-resolution maxilla (18.5 microSv, 93.3 microSv); 6 cm standard mandible (23.9 microSv, 96.2 microSv); and 6 cm standard maxilla (9.7 microSv, 58.9 microSv). The doses from CBCT are low compared with conventional CT but significantly higher than conventional dental radiography techniques. 7. Cone beam CT tumor vasculature dynamic study (Murine model) Yang, Dong; Ning, Ruola; Conover, David; Betancourt, Ricardo; Liu, Shaohua 2008-03-01 Tumor angiogenesis is the process by which new blood vessels are formed from the existing vessels in a tumor to promote tumor growth. Tumor angiogenesis has important implications in the diagnosis and treatment of various solid tumors. Flat panel detector based cone beam CT opens up a new way for detection of tumors, and functional CBCT has the potential to provide more information on tumor angiogenesis than traditional functional CT, owing to greater overall coverage during the same scanning period and an isotropic reconstruction that yields a more accurate 3D volume intensity measurement. A functional study was conducted by using CBCT to determine the degree of the enhancement within the tumor after injecting the contrast agent intravenously. For typical doses of contrast material, the amount of enhancement is proportional to the concentration of this material within the region of interest. A series of images obtained at one location over time allows generation of time-attenuation data from which a number of semi-quantitative parameters, such as enhancement rate, can be determined. An in vivo study of mice with and without mammary tumors was conducted on our prototype CBCT system, and a half-scan scheme was used to determine the time-intensity curve within the VOI of the mouse.
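The semi-quantitative parameters mentioned above can be read directly off a sampled time-attenuation curve. A small sketch, assuming uniform or non-uniform sampling; the function name and the particular parameter set are illustrative, not taken from the paper:

```python
import numpy as np

def curve_params(t_s, hu):
    """Semi-quantitative descriptors of a time-attenuation curve:
    peak enhancement over baseline (HU), time to peak (s), and the
    maximum uptake slope, i.e. the enhancement rate (HU/s)."""
    t = np.asarray(t_s, float)
    a = np.asarray(hu, float)
    baseline = a[0]
    return {
        "peak_enhancement_hu": a.max() - baseline,
        "time_to_peak_s": t[a.argmax()],
        "enhancement_rate_hu_per_s": np.max(np.diff(a) / np.diff(t)),
    }
```

For example, a curve sampled every 2 s that rises from 50 HU to a 120 HU peak has a peak enhancement of 70 HU, and the steepest inter-sample rise gives the enhancement rate.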
The CBCT has an x-ray tube, a gantry with slip ring technology, and a 40×30 cm Varian Paxscan 4030CB real time FPD. 8. Cone beam CT for dental and maxillofacial imaging: dose matters. PubMed Pauwels, Ruben 2015-07-01 The widespread use of cone-beam CT (CBCT) in dentistry has led to increasing concern regarding justification and optimisation of CBCT exposures. When used as a substitute for multidetector CT (MDCT), CBCT can lead to significant dose reduction; however, low-dose protocols of current-generation MDCTs show that there is an overlap between CBCT and MDCT doses. More importantly, although the 3D information provided by CBCT can often lead to improved diagnosis and treatment compared with 2D radiographs, a routine or excessive use of CBCT would lead to a substantial increase of the collective patient dose. The potential use of CBCT for paediatric patients (e.g. developmental disorders, trauma and orthodontic treatment planning) further increases concern regarding its proper application. This paper provides an overview of justification and optimisation issues in dental and maxillofacial CBCT. The radiation dose in CBCT will be briefly reviewed. The European Commission's Evidence Based Guidelines prepared by the SEDENTEXCT Project Consortium will be summarised, and (in)appropriate use of CBCT will be illustrated for various dental applications. 9. A ray-tracing backprojection algorithm for cone beam CT Lu, Jun; Pan, Tinsu 2007-03-01 We have developed a ray-tracing backprojection (RTB) to back-project all the detector pixels into the image domain of cone beam CT (CBCT). The underlying mathematical framework is the FDK reconstruction. In this method, every ray recorded by the flat panel detector is traced back into the image space. In each voxel of the imaging domain, all the rays contributing to the formation of the CT image are summed together, weighted by each ray's intersection length with the voxel.
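A 2D toy version of this ray-driven accumulation, in which each pixel receives the ray value weighted by the ray's intersection length, might look like the following. This is a Siddon-style traversal on a unit-spaced grid, meant only to illustrate the weighting idea, not the authors' FDK-weighted 3D implementation:

```python
import numpy as np

def backproject_ray(image, p0, p1, value):
    """Accumulate `value` along the ray p0 -> p1 into `image` (y, x order),
    weighting each pixel by the ray's intersection length with it.
    Pixels are unit squares; (x, y) = (0, 0) is the grid corner."""
    ny, nx = image.shape
    p0 = np.asarray(p0, float)
    p1 = np.asarray(p1, float)
    d = p1 - p0
    # parameter values in (0, 1) where the ray crosses x- or y-grid lines
    ts = [0.0, 1.0]
    for axis, n in ((0, nx), (1, ny)):
        if d[axis] != 0.0:
            for k in range(n + 1):
                t = (k - p0[axis]) / d[axis]
                if 0.0 < t < 1.0:
                    ts.append(t)
    ts = np.unique(ts)
    seg_len = np.linalg.norm(d)
    # each parameter interval lies inside exactly one pixel
    for ta, tb in zip(ts[:-1], ts[1:]):
        mid = p0 + 0.5 * (ta + tb) * d
        ix = int(np.floor(mid[0]))
        iy = int(np.floor(mid[1]))
        if 0 <= ix < nx and 0 <= iy < ny:
            image[iy, ix] += value * (tb - ta) * seg_len
```

A horizontal ray across a 4 × 4 grid deposits exactly one unit of path length in each of the four pixels it crosses, which is the property the voxel-driven interpolation schemes only approximate.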
The RTB is similar to a reverse process of x-ray transmission imaging, as opposed to the conventional voxel-driven backprojection (VDB). In the RTB, we avoided interpolation and pixel binning approximations, achieved better spatial resolution and eliminated some image artifacts. We have successfully applied the RTB in phantom studies on the Varian On Board Imager CBCT. The images of the Catphan CTP404 module show more accurate representation of the oblique ramps in the measurement of slice thickness, and more accurate determination of slice thickness with the RTB than with VDB. The RTB also shows higher spatial resolution than the VDB in the studies of a high contrast resolution phantom. 10. Deformable registration of CT and cone-beam CT by local CBCT intensity correction Park, Seyoun; Plishker, William; Shekhar, Raj; Quon, Harry; Wong, John; Lee, Junghoon 2015-03-01 In this paper, we propose a method to accurately register CT to cone-beam CT (CBCT) by iteratively correcting local CBCT intensity. CBCT is a widely used intra-operative imaging modality in image-guided radiotherapy and surgery. A short scan followed by a filtered-backprojection is typically used for CBCT reconstruction. While data on the mid-plane (plane of source-detector rotation) is complete, off-mid-planes undergo different information deficiency and the computed reconstructions are approximate. This causes different reconstruction artifacts at off-mid-planes depending on slice locations, and therefore impedes accurate registration between CT and CBCT. To address this issue, we correct CBCT intensities by matching local intensity histograms slice by slice in conjunction with intensity-based deformable registration. This correction-registration step is repeated until the result image converges. 
We tested the proposed method on eight head-and-neck cancer cases and compared its performance with state-of-the-art registration methods, B-spline, demons, and optical flow, which are widely used for CT-CBCT registration. Normalized mutual-information (NMI), normalized cross-correlation (NCC), and structural similarity (SSIM) were computed as similarity measures for the performance evaluation. Our method produced overall NMI of 0.59, NCC of 0.96, and SSIM of 0.93, outperforming existing methods by 3.6%, 2.4%, and 2.8% in terms of NMI, NCC, and SSIM scores, respectively. Experimental results show that our method is more consistent and robust than existing algorithms, and also computationally efficient with faster convergence. 11. Carotid CT-angiography: low versus standard volume contrast media and low kV protocol for 128-slice MDCT. PubMed Kayan, Mustafa; Köroğlu, Mert; Yeşildağ, Ahmet; Ceylan, Ergun; Aktaş, Aykut Recep; Yasar, Selçuk; Aynali, Giray; Parlak, Cem; Munduz, Mehmet; Gürses, Cemil 2012-09-01 Availability and utilization of computed tomography angiography has been increasing recently. We aimed to assess the effectiveness of a low amount of contrast media and a low kV value in order to reduce possible side effects of contrast media and to provide optimization of the kV value in the evaluation of the carotid artery with multi-detector computed tomography angiography. Forty-one patients were randomized into two groups. Contrast media was administered at a dose of 1 ml/kg in group A patients and of 0.5 ml/kg in group B patients. A kV value of 120 was chosen in group A and 100 in group B. Bolus tracking technique was used. Attenuation values of certain arterial segments were measured, and values over 200 HU were considered significant. North American Symptomatic Carotid Endarterectomy Trial criteria were utilized in the evaluation of stenosis. Image quality in arterial segments of all cases was found to be sufficient for diagnosis.
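The NASCET criterion mentioned above grades stenosis from the ratio of the narrowest luminal diameter to the normal distal internal carotid artery diameter. A small sketch; the percentage formula is the standard NASCET definition, while the grading bands are the commonly used reporting categories and should be treated as an assumption here:

```python
def nascet_stenosis_pct(d_stenosis_mm, d_distal_mm):
    """NASCET percent stenosis: (1 - Ds/Dd) * 100, where Ds is the
    narrowest luminal diameter at the stenosis and Dd is the normal
    distal internal carotid artery diameter."""
    return (1.0 - d_stenosis_mm / d_distal_mm) * 100.0

def grade(pct):
    """Common NASCET-style reporting bands (assumed for illustration):
    <50 mild, 50-69 moderate, 70-99 severe, 100 occlusion."""
    if pct >= 100.0:
        return "occlusion"
    if pct >= 70.0:
        return "severe"
    if pct >= 50.0:
        return "moderate"
    return "mild"
```

For instance, a 2 mm residual lumen with a 5 mm distal reference diameter corresponds to 60% stenosis, i.e. moderate by these bands.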
Arterial attenuation values were found to be higher in group B than group A. When compared separately in all arterial segments, there was no statistically significant difference between the groups. For stenosis, 615 arterial segments were evaluated. Moderate stenosis in eight segments and severe stenosis in three segments were identified in group A. Occlusion in three segments, severe stenosis in three segments, and moderate stenosis in 25 segments were detected in group B. Better image quality can be obtained, and the amount of contrast media can be reduced, using the low kV technique in carotid artery multi-detector computed tomography angiography examination. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved. 12. TH-C-18A-10: The Influence of Tube Current On X-Ray Focal Spot Size for 70 KV CT Imaging SciTech Connect Duan, X; Grimes, J; Yu, L; Leng, S; McCollough, C 2014-06-15 Purpose: Focal spot blooming is an increase in the focal spot size at increased tube current and/or decreased tube potential. In this work, we evaluated the influence of tube current on the focal spot size at low kV for two CT systems, one of which used a tube designed to reduce blooming effects. Methods: A slit camera (10 micron slit) was used to measure focal spot size on two CT scanners from the same manufacturer (Siemens Somatom Force and Definition Flash) at 70 kV and low, medium and maximum tube currents, according to the capabilities of each system (Force: 100, 800 and 1300 mA; Flash: 100, 200 and 500 mA). Exposures were made with a stationary tube in service mode using a raised stand without table movement or flying focal spot technique. Focal spot size, nominally 0.8 and 1.2 mm, respectively, was measured parallel and perpendicular to the cathode-anode axis by calculating the full-width-at-half-maximum of the slit profile recorded using computed radiographic plates.
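Extracting a full width at half maximum from a measured slit profile, as described above, is commonly done with linear interpolation at the half-maximum crossings. A minimal sketch, assuming a single-peaked profile whose baseline can be removed by subtracting the minimum:

```python
import numpy as np

def fwhm(profile, pixel_pitch=1.0):
    """Full width at half maximum of a 1-D slit profile, with linear
    interpolation at the two half-maximum crossings."""
    p = np.asarray(profile, float)
    p = p - p.min()                 # crude baseline removal
    half = p.max() / 2.0
    above = np.where(p >= half)[0]  # contiguous for a single-peaked profile
    i0, i1 = above[0], above[-1]
    # interpolate the left and right half-maximum crossings
    left = i0 - (p[i0] - half) / (p[i0] - p[i0 - 1]) if i0 > 0 else float(i0)
    right = i1 + (p[i1] - half) / (p[i1] - p[i1 + 1]) if i1 < p.size - 1 else float(i1)
    return (right - left) * pixel_pitch
```

As a sanity check, a sampled Gaussian of standard deviation sigma should give FWHM close to 2*sqrt(2*ln 2)*sigma, about 2.355 sigma.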
Results: Focal spot sizes perpendicular to the anode-cathode axis increased at the maximum mA by 5.7% on the Force and 39.1% on the Flash relative to those at the minimum mA, even though the mA was increased 13-fold on the Force and only 5-fold on the Flash. Focal spot size increased parallel to the anode-cathode axis by 70.4% on the Force and 40.9% on the Flash. Conclusion: For CT protocols using low kV, high mA is typically required. These protocols are relevant in children and smaller adults, and for dual-energy scanning. Technical measures to limit focal spot blooming are important in these settings to avoid reduced spatial resolution. The x-ray tube on a recently-introduced scanner appears to greatly reduce blooming effects, even at very high mA values. CHM has research support from Siemens Healthcare. 13. Computed tomography dose assessment for a 160 mm wide, 320 detector row, cone beam CT scanner. PubMed Geleijns, J; Salvadó Artells, M; de Bruin, P W; Matter, R; Muramatsu, Y; McNitt-Gray, M F 2009-05-21 Computed tomography (CT) dosimetry should be adapted to the rapid developments in CT technology. Recently a 160 mm wide, 320 detector row, cone beam CT scanner that challenges the existing Computed Tomography Dose Index (CTDI) dosimetry paradigm was introduced. The purpose of this study was to assess dosimetric characteristics of this cone beam scanner, to study the appropriateness of existing CT dose metrics and to suggest a pragmatic approach for CT dosimetry for cone beam scanners. Dose measurements with a small Farmer-type ionization chamber and with 100 mm and 300 mm long pencil ionization chambers were performed free in air to characterize the cone beam. According to the most common dose metric in CT, namely CTDI, measurements were also performed in 150 mm and 350 mm long CT head and CT body dose phantoms with 100 mm and 300 mm long pencil ionization chambers, respectively.
To explore effects that cannot be measured with ionization chambers, Monte Carlo (MC) simulations of the dose distribution in 150 mm, 350 mm and 700 mm long CT head and CT body phantoms were performed. To overcome inconsistencies in the definition of CTDI100 for the 160 mm wide cone beam CT scanner, doses were also expressed as the average absorbed dose within the pencil chamber (D100). Measurements free in air revealed excellent correspondence between CTDI300air and D100air, while CTDI100air substantially underestimates CTDI300air. Results of measurements in CT dose phantoms and corresponding MC simulations at centre and peripheral positions were weighted and revealed good agreement between CTDI300w, D100w and CTDI600w, while CTDI100w substantially underestimates CTDI300w. D100w provides a pragmatic metric for characterizing the dose of the 160 mm wide cone beam CT scanner. This quantity can be measured with the widely available 100 mm pencil ionization chamber within 150 mm long CT dose phantoms. CTDI300w measured in 350 mm long CT dose phantoms serves as an appropriate standard of 14. Auto calibration of a cone-beam-CT SciTech Connect Gross, Daniel; Heil, Ulrich; Schulze, Ralf; Schoemer, Elmar; Schwanecke, Ulrich 2012-10-15 Purpose: This paper introduces a novel autocalibration method for cone-beam-CTs (CBCT) or flat-panel CTs, assuming a perfect rotation. The method is based on ellipse-fitting. Autocalibration refers to accurate recovery of the geometric alignment of a CBCT device from projection images alone, without any manual measurements. Methods: The authors use test objects containing small arbitrarily positioned radio-opaque markers. No information regarding the relative positions of the markers is used. In practice, the authors use three to eight metal ball bearings (diameter of 1 mm), e.g., positioned roughly in a vertical line such that their projection image curves on the detector preferably form large ellipses over the circular orbit. 
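The ellipse fitting at the heart of this autocalibration can be illustrated with a plain least-squares conic fit, taking the smallest right singular vector of the design matrix. This is a generic sketch of ellipse fitting, not the authors' constrained formulation; the center recovery follows from setting the conic's gradient to zero:

```python
import numpy as np

def fit_conic(x, y):
    """Least-squares conic A x^2 + B xy + C y^2 + D x + E y + F = 0
    through scattered points: the coefficient vector is the right
    singular vector for the smallest singular value."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    M = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, vt = np.linalg.svd(M)
    return vt[-1]  # (A, B, C, D, E, F), defined up to scale

def ellipse_center(coeffs):
    """Center from grad = 0: [2A x + B y + D, B x + 2C y + E] = 0."""
    A, B, C, D, E, _ = coeffs
    return np.linalg.solve([[2.0 * A, B], [B, 2.0 * C]], [-D, -E])
```

With noise-free points on an axis-aligned ellipse the fit is exact and the recovered center matches the true one; in the calibration setting the marker trajectories on the detector play the role of the sampled points.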
From this ellipse-to-curve mapping and also from its inversion the authors derive an explicit formula. Nonlinear optimization based on this mapping enables them to determine the six relevant parameters of the system up to the device rotation angle, which is sufficient to define the geometry of a CBCT-machine assuming a perfect rotational movement. These parameters also include out-of-plane rotations. The authors evaluate their method by simulation based on data used in two similar approaches [L. Smekal, M. Kachelriess, S. E, and K. Wa, 'Geometric misalignment and calibration in cone-beam tomography,' Med. Phys. 31(12), 3242-3266 (2004); K. Yang, A. L. C. Kwan, D. F. Miller, and J. M. Boone, 'A geometric calibration method for cone beam CT systems,' Med. Phys. 33(6), 1695-1706 (2006)]. This allows a direct comparison of accuracy. Furthermore, the authors present real-world 3D reconstructions of a dry human spine segment and an electronic device. The reconstructions were computed from projections taken with a commercial dental CBCT device having two different focus-to-detector distances that were both calibrated with their method. The authors compare their reconstruction with a reconstruction computed by the manufacturer of the CBCT device to 15. Auto calibration of a cone-beam-CT. PubMed Gross, Daniel; Heil, Ulrich; Schulze, Ralf; Schoemer, Elmar; Schwanecke, Ulrich 2012-10-01 This paper introduces a novel autocalibration method for cone-beam-CTs (CBCT) or flat-panel CTs, assuming a perfect rotation. The method is based on ellipse-fitting. Autocalibration refers to accurate recovery of the geometric alignment of a CBCT device from projection images alone, without any manual measurements. The authors use test objects containing small arbitrarily positioned radio-opaque markers. No information regarding the relative positions of the markers is used. 
In practice, the authors use three to eight metal ball bearings (diameter of 1 mm), e.g., positioned roughly in a vertical line such that their projection image curves on the detector preferably form large ellipses over the circular orbit. From this ellipse-to-curve mapping and also from its inversion the authors derive an explicit formula. Nonlinear optimization based on this mapping enables them to determine the six relevant parameters of the system up to the device rotation angle, which is sufficient to define the geometry of a CBCT-machine assuming a perfect rotational movement. These parameters also include out-of-plane rotations. The authors evaluate their method by simulation based on data used in two similar approaches [L. Smekal, M. Kachelriess, S. E, and K. Wa, "Geometric misalignment and calibration in cone-beam tomography," Med. Phys. 31(12), 3242-3266 (2004); K. Yang, A. L. C. Kwan, D. F. Miller, and J. M. Boone, "A geometric calibration method for cone beam CT systems," Med. Phys. 33(6), 1695-1706 (2006)]. This allows a direct comparison of accuracy. Furthermore, the authors present real-world 3D reconstructions of a dry human spine segment and an electronic device. The reconstructions were computed from projections taken with a commercial dental CBCT device having two different focus-to-detector distances that were both calibrated with their method. The authors compare their reconstruction with a reconstruction computed by the manufacturer of the CBCT device to demonstrate the 16. Algorithm for x-ray beam hardening and scatter correction in low-dose cone-beam CT: phantom studies Liu, Wenlei; Rong, Junyan; Gao, Peng; Liao, Qimei; Lu, HongBing 2016-03-01 X-ray scatter poses a significant limitation to image quality in cone-beam CT (CBCT), as well as beam hardening, resulting in image artifacts, contrast reduction, and lack of CT number accuracy. Meanwhile the x-ray radiation dose is also non-ignorable. 
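The autocalibration approach above starts from the ellipses traced by the projected markers over the circular orbit. A generic least-squares conic fit, with the ellipse centre recovered from the conic coefficients, can be sketched as follows (this is not the authors' explicit formula, and the marker trajectory is synthetic):

```python
import numpy as np

def fit_ellipse(x, y):
    # Least-squares fit of a general conic a x^2 + b xy + c y^2 + d x + e y + f = 0:
    # the coefficient vector is the right singular vector of the design matrix
    # with the smallest singular value.
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    return np.linalg.svd(D)[2][-1]

def ellipse_center(coeffs):
    # The centre solves the gradient equations 2a x + b y + d = 0, b x + 2c y + e = 0.
    a, b, c, d, e, _ = coeffs
    return np.linalg.solve([[2 * a, b], [b, 2 * c]], [-d, -e])

# Synthetic projected marker trajectory: a rotated ellipse centred at (2, -1).
t = np.linspace(0.0, 2.0 * np.pi, 50, endpoint=False)
u, v = 3.0 * np.cos(t), 1.0 * np.sin(t)
x = 2.0 + u * np.cos(0.3) - v * np.sin(0.3)
y = -1.0 + u * np.sin(0.3) + v * np.cos(0.3)
cx, cy = ellipse_center(fit_ellipse(x, y))
```

The fitted conic is determined only up to scale, which is why the singular-vector formulation is convenient; the centre formula is invariant to that scale.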
Numerous scatter and beam-hardening correction methods have been developed independently, but they are rarely combined with low-dose CT reconstruction. In this paper, we combine scatter suppression with beam hardening correction for sparse-view CT reconstruction to improve CT image quality and reduce CT radiation. Firstly, scatter was measured, estimated, and removed using measurement-based methods, assuming that signal in the lead blocker shadow is only attributable to x-ray scatter. Secondly, beam hardening was modeled by estimating an equivalent attenuation coefficient at the effective energy, which was integrated into the forward projector of the algebraic reconstruction technique (ART). Finally, compressed sensing (CS) iterative reconstruction is carried out for sparse-view CT reconstruction to reduce the radiation dose. Preliminary Monte Carlo simulated experiments indicate that, with only about 25% of the conventional dose, our method reduces the magnitude of the cupping artifact by a factor of 6.1, increases the contrast by a factor of 1.4, and increases the CNR by a factor of 15. The proposed method could provide good reconstructed images from a few projection views, with effective suppression of artifacts caused by scatter and beam hardening, as well as a reduced radiation dose. With this framework and modeling, it may provide a new way toward low-dose CT imaging. 17. Cone Beam CT Versus Multislice CT: Radiologic Diagnostic Agreement in the Postoperative Assessment of Cochlear Implantation. PubMed Razafindranaly, Victor; Truy, Eric; Pialat, Jean-Baptiste; Martinon, Amanda; Bourhis, Magali; Boublay, Nawele; Faure, Frédéric; Ltaïef-Boudrigua, Aïcha 2016-10-01 To evaluate the diagnostic concordance between multislice computed tomography (MSCT) and cone beam computed tomography (CBCT) in the early postoperative assessment of patients after cochlear implantation.
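The "equivalent attenuation coefficient at the effective energy" model used in the low-dose pipeline above is commonly realized as a water-equivalent linearization: measured polychromatic projections are mapped onto the linear scale they would have at a single effective energy. A toy sketch, with an assumed three-line spectrum and approximate water attenuation values (not a real scanner model):

```python
import numpy as np

# Assumed three spectral lines (roughly 40/60/80 keV) and water attenuation.
weights = np.array([0.3, 0.5, 0.2])            # relative fluence (invented)
mu_water = np.array([0.27, 0.21, 0.18])        # 1/cm, approximate

thickness = np.linspace(0.0, 30.0, 61)         # cm of water
I = (weights * np.exp(-np.outer(thickness, mu_water))).sum(axis=1)
p_poly = -np.log(I)                            # measured (polychromatic) projections
mu_eff = (weights * mu_water).sum()            # attenuation at the effective energy
p_mono = mu_eff * thickness                    # ideal linear projections

coef = np.polyfit(p_poly, p_mono, 3)           # water-equivalent correction polynomial

def correct(p):
    # Map a measured projection value onto the linear (monochromatic) scale.
    return np.polyval(coef, p)
```

Corrected projections of this kind can then feed a forward model that assumes a single attenuation coefficient per voxel, as in the ART projector described above.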
Prospective, randomized, single-center, interventional, pilot study on the diagnostic performance of a medical device. Tertiary referral center. Patients aged over 18 years requiring a computed tomographic (CT) scan after cochlear implant surgery. Nine patients (one of them bilaterally) were implanted with electrode arrays from three different manufacturers. High-resolution MSCT and CBCT were then performed, and two experienced radiologists blinded to the imaging modality evaluated the randomized images, twice. The primary outcome was concordance between MSCT and CBCT for assessing the scalar position (scala tympani or scala vestibuli) of the electrodes. Secondary outcome measures were also studied: length of the intracochlear electrode array, percentage of implanted cochlea, number of intracochlear electrodes, and radiation doses. There was good agreement between the two CT scanners in determining the scalar position and estimating the number of implanted electrodes and the percentage of implanted cochlea. CBCT had a lower radiation exposure. CBCT appears to be a useful tool for postoperative assessment of cochlear-implanted adult patients and is comparable to the conventional scanner in determining the scalar position, with lower radiation exposure. 18. Conversion coefficients for the estimation of effective dose in cone-beam CT PubMed Central Kim, Dong-Soo; Rashsuren, Oyuntugs 2014-01-01 Purpose To determine the conversion coefficients (CCs) from the dose-area product (DAP) value to effective dose in cone-beam CT. Materials and Methods A CBCT scanner with four fields of view (FOV) was used. Using two exposure settings (adult standard and low dose), DAP values were measured with a DAP meter in C mode (200 mm×179 mm), P mode (154 mm×154 mm), I mode (102 mm×102 mm), and D mode (51 mm×51 mm). The effective doses were also investigated in each mode using an adult male head and neck phantom and thermoluminescent chips.
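A DAP-to-effective-dose conversion coefficient of the kind determined in the study above is the slope of a linear regression of effective dose on DAP. A minimal through-origin variant (the numbers are illustrative, not the study's data):

```python
import numpy as np

# Hypothetical (DAP, effective dose) pairs -- invented for illustration.
dap = np.array([200.0, 400.0, 600.0, 800.0])   # mGy·cm2
eff = np.array([13.5, 27.2, 40.1, 54.3])       # µSv

# Through-origin least squares: CC = sum(x*y) / sum(x*x), in µSv per mGy·cm2.
cc = float((dap * eff).sum() / (dap * dap).sum())
```

Once the CC is known, the effective dose of a new examination is estimated as CC times its displayed DAP value.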
Linear regression analysis of the DAP and effective dose values was used to calculate the CCs for each CBCT examination. Results For the C mode, the P mode at the maxilla, and the P mode at the mandible, the CCs were 0.049 µSv/mGy·cm2, 0.067 µSv/mGy·cm2, and 0.064 µSv/mGy·cm2, respectively. For the I mode, the CCs at the maxilla and mandible were 0.076 µSv/mGy·cm2 and 0.095 µSv/mGy·cm2, respectively. For the D mode at the maxillary incisors, molars, and mandibular molars, the CCs were 0.038 µSv/mGy·cm2, 0.041 µSv/mGy·cm2, and 0.146 µSv/mGy·cm2, respectively. Conclusion The CCs in one CBCT device with fixed 80 kV ranged from 0.038 µSv/mGy·cm2 to 0.146 µSv/mGy·cm2 according to the imaging mode and irradiated region and were highest for the D mode at the mandibular molar. PMID:24701455 19. Application of 70 kV Third-generation High-pitch Dual-source Coronary CT Angiography in Patients with Different Body Mass Index. PubMed Yi, Yan; Cao, Jian; Lin, Lu; Kong, Lingyan; Jiang, Shu; Li, Xiao; Liu, Peijun; Wang, Ming; Wang, Man; Wang, Yun; Jin, Zhengyu; Wang, Yining 2017-02-20 Objective To investigate the optimized range of body mass index (BMI) selection for patients undergoing 70 kV high-pitch dual-source coronary CT angiography (CCTA) on the third-generation dual-source CT (DSCT). Methods Patients undergoing prospective high-pitch ultra-low contrast media (CM) CCTA on the third-generation DSCT using automatic tube voltage selection at 70 kV were included and divided into three groups: group A, with BMI≤24 kg/m2; group B, with 24 kg/m2<BMI<28 kg/m2; and group C, with BMI≥28 kg/m2. Subjective image quality in group A (Z=2.91, P=0.004) and group B (Z=2.27, P=0.021) was significantly better than that in group C. Conclusion The ultra-low tube voltage (70 kV) combined with an ultra-low CM CCTA protocol on third-generation high-pitch DSCT may be better for patients with BMI<28 kg/m2 than for those with BMI≥28 kg/m2 in China. 20.
Dependence Of The Computerized Tomography (CT) Number - Electron Density Relationship On Patient Size And X-Ray Beam Filtration For Fan Beam CT Scanners Masterson, M. E.; Thomason, C. L.; McGary, R.; Hunt, M. A.; Simpson, L. D.; Miller, D. W.; Laughlin, J. S. 1981-07-01 The applicability of quantitative information contained in CT scans to diagnostic radiology and to radiation therapy treatment planning and the heterogeneity problem has been recognized by members of the radiological community and by manufacturers. Determination of the relationship between electron density and CT number is important for these applications. As CT technology has evolved, CT number generation has changed. CT number variation was limited in the early water bag systems. However, later generation "air" scanners may exhibit variations in CT numbers across a reconstructed image which are related to positioning within the scan circle and to scan field size. Results of experimental investigations using tissue-equivalent phantoms of different cross-sectional shapes and areas on the Technicare Delta 2020 are presented. Investigations also cover the effect of "shaped" and "flat" x-ray beam filters. A variation in CT number is demonstrated on this fan beam geometry scanner for phantoms of different sizes and for different scan circle diameters. An explanation of these effects is given. Differences of as much as 20% in determination of tissue electron density relative to water under different experimental conditions are obtained and reported. A family of curves (electron density vs. CT number) is presented for different patient cross-sectional areas and different scanner settings. 1. Intracranial physiological calcifications evaluated with cone beam CT PubMed Central Sedghizadeh, P P; Nguyen, M; Enciso, R 2012-01-01 Objectives The purpose of this study was to evaluate cone beam CT (CBCT) scans for the presence of physiological and pathological intracranial calcifications.
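Calibration curves like the electron density vs. CT number family above are usually applied in treatment planning as piecewise-linear lookups, with separate soft-tissue and bone branches. A sketch with assumed (hypothetical) calibration points:

```python
import numpy as np

# Hypothetical two-segment calibration (soft-tissue / bone branches), the usual
# bilinear shape behind electron density vs. CT number curves.
hu = np.array([-1000.0, 0.0, 100.0, 1500.0])   # CT number (HU)
red = np.array([0.0, 1.0, 1.05, 1.85])         # electron density relative to water

def ct_to_red(ct):
    # Piecewise-linear lookup; values outside the table are clamped by np.interp.
    return float(np.interp(ct, hu, red))
```

As the abstract emphasizes, such a curve is only valid for the phantom size, filtration, and scan settings under which it was measured.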
Methods CBCT scans from male and female patients that met our ascertainment criteria were evaluated retrospectively (n = 500) for the presence of either physiological or pathological intracranial calcifications. Results Out of the 500 patients evaluated, 176 had evidence of intracranial physiological calcification (35.2% prevalence), and none had evidence of pathological calcification. There was a 3:2 male-to-female ratio and no ethnic predilection; the ages of affected patients ranged from 13 years to 82 years with a mean age of 52 years. The majority of calcifications appeared in the pineal/habenular region (80%), with some also appearing in the choroid plexus region bilaterally (12%), and a smaller subset appearing in the petroclinoid ligament region bilaterally (8%). Conclusions Intracranial physiological calcifications can be a common finding on CBCT scans, whereas pathological intracranial calcifications are rare. PMID:22842632 2. Effective doses from cone beam CT investigation of the jaws PubMed Central Davies, J; Johnson, B; Drage, NA 2012-01-01 Objectives The purpose of the study was to calculate the effective dose delivered to the patient undergoing cone beam (CB) CT of the jaws and maxillofacial complex using the i-CAT Next Generation CBCT scanner (Imaging Sciences International, Hatfield, PA). Methods A RANDO® phantom (The Phantom Laboratory, Salem, NY) containing thermoluminescent dosemeters was scanned 10 times for each of the 6 imaging protocols. Effective doses for each protocol were calculated using the 1990 and approved 2007 International Commission on Radiological Protection (ICRP) recommended tissue weighting factors (E1990, E2007).
Results The effective dose for E1990 and E2007, respectively, were: full field of view (FOV) of the head, 47 μSv and 78 μSv; 13 cm scan of the jaws, 44 μSv and 77 μSv; 6 cm standard mandible, 35 μSv and 58 μSv; 6 cm high resolution mandible, 69 μSv and 113 μSv; 6 cm standard maxilla, 18 μSv and 32 μSv; and 6 cm high resolution maxilla, 35 μSv and 60 μSv. Conclusions Using the new generation of CBCT scanner, the effective dose is lower than the original generation machine for a similar FOV using the ICRP 2007 tissue weighting factors. PMID:22184626 3. Effective dose span of ten different cone beam CT devices. PubMed Rottke, D; Patzelt, S; Poxleitner, P; Schulze, D 2013-01-01 Evaluation and reduction of dose are important issues. Since cone beam CT (CBCT) has been established now not just in dentistry, the number of acquired examinations continues to rise. Unfortunately, it is very difficult to compare the doses of available devices on the market owing to different exposition parameters, volumes and geometries. The aim of this study was to evaluate the spans of effective doses (EDs) of ten different CBCT devices. 48 thermoluminescent dosemeters were placed in 24 sites in a RANDO(®) head phantom. Protocols with lowest exposition parameters and protocols with highest exposition parameters were performed for each of the ten devices. The ED was calculated from the measured energy doses according to the International Commission on Radiological Protection 2007 recommendations for each protocol and device, and the statistical values were evaluated afterwards. The calculation of the ED resulted in values between 17.2 µSv and 396 µSv for the ten devices. The mean values for protocols with lowest and highest exposition parameters were 31.6 µSv and 209 µSv, respectively. It was not the aim of this study to evaluate the image quality depending on different exposition parameters but to define the spans of EDs in which different CBCT devices work. 
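Effective doses like those reported above are ICRP-style weighted sums over measured organ equivalent doses, E = Σ_T w_T H_T. A minimal sketch using a few of the ICRP Publication 103 tissue weighting factors (only a subset is listed here; the organ doses are invented):

```python
# A few ICRP Publication 103 tissue weighting factors (w_T); the full set
# spans many more tissues and sums to 1.
W_T = {"red bone marrow": 0.12, "remainder": 0.12, "thyroid": 0.04,
       "brain": 0.01, "salivary glands": 0.01, "skin": 0.01}

def effective_dose(organ_doses):
    # E = sum over tissues of w_T * H_T (equivalent doses, e.g. in µSv).
    return sum(W_T[t] * h for t, h in organ_doses.items())

E = effective_dose({"thyroid": 1000.0, "brain": 500.0})   # µSv
```

The shift from ICRP 1990 to 2007 factors changes results mainly because the 2007 set adds and reweights tissues (e.g. salivary glands and brain), which is why E2007 exceeds E1990 for these head-and-neck protocols.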
There is a wide span of ED for different CBCT devices depending on the selected exposition parameters, required spatial resolution and many other factors. 4. Effective dose span of ten different cone beam CT devices PubMed Central Rottke, D; Patzelt, S; Poxleitner, P; Schulze, D 2013-01-01 Objectives: Evaluation and reduction of dose are important issues. Since cone beam CT (CBCT) has been established now not just in dentistry, the number of acquired examinations continues to rise. Unfortunately, it is very difficult to compare the doses of available devices on the market owing to different exposition parameters, volumes and geometries. The aim of this study was to evaluate the spans of effective doses (EDs) of ten different CBCT devices. Methods: 48 thermoluminescent dosemeters were placed in 24 sites in a RANDO® head phantom. Protocols with lowest exposition parameters and protocols with highest exposition parameters were performed for each of the ten devices. The ED was calculated from the measured energy doses according to the International Commission on Radiological Protection 2007 recommendations for each protocol and device, and the statistical values were evaluated afterwards. Results: The calculation of the ED resulted in values between 17.2 µSv and 396 µSv for the ten devices. The mean values for protocols with lowest and highest exposition parameters were 31.6 µSv and 209 µSv, respectively. Conclusions: It was not the aim of this study to evaluate the image quality depending on different exposition parameters but to define the spans of EDs in which different CBCT devices work. There is a wide span of ED for different CBCT devices depending on the selected exposition parameters, required spatial resolution and many other factors. PMID:23584925 5. 
Streak artifact reduction in cardiac cone beam CT Shechter, Gilad; Naveh, Galit; Lessick, Jonathan; Altman, Ami 2005-04-01 Cone beam reconstructed cardiac CT images suffer from characteristic streak artifacts that affect the quality of coronary artery imaging. These artifacts arise from inhomogeneous distribution of noise. While in non-tagged reconstruction inhomogeneity of noise distribution is mainly due to anisotropy of the attenuation of the scanned object (e.g. shoulders), in cardiac imaging it is largely influenced by the non-uniform distribution of the acquired data used for reconstructing the heart at a given phase. We use a cardiac adaptive filter to reduce these streaks. In contrast to previous methods of adaptive filtering that locally smooth data points on the basis of their attenuation values, our filter is applied as a function of the noise distribution of the data as it is used in the phase selective reconstruction. We have reconstructed trans-axial images without adaptive filtering, with a regular adaptive filter and with the cardiac adaptive filter. With the cardiac adaptive filter significant reduction of streaks is achieved, and thus image quality is improved. The coronary vessel is much more pronounced in the cardiac adaptive filtered images, in slab MIP the main coronary artery branches are more visible, and non-calcified plaque is better differentiated from vessel wall. This improvement is accomplished without altering significantly the border definition of calcified plaques. 6. Evaluation of patient dose using a virtual CT scanner: Applications to 4DCT simulation and Kilovoltage cone-beam imaging DeMarco, J. J.; McNitt-Gray, M. F.; Cagnon, C. H.; Angel, E.; Agazaryan, N.; Zankl, M. 2008-02-01 This work evaluates the effects of patient size on radiation dose from simulation imaging studies such as four-dimensional computed tomography (4DCT) and kilovoltage cone-beam computed tomography (kV-CBCT).
4DCT studies are scans that include temporal information, frequently incorporating highly over-sampled imaging series necessary for retrospective sorting as a function of respiratory phase. This type of imaging study can result in a significant dose increase to the patient due to the slower table speed as compared with a conventional axial or helical scan protocol. Kilovoltage cone-beam imaging is a relatively new imaging technique that requires an on-board kilovoltage x-ray tube and a flat-panel detector. Instead of porting individual reference fields, the kV tube and flat-panel detector are rotated about the patient producing a cone-beam CT data set (kV-CBCT). To perform these investigations, we used Monte Carlo simulation methods with detailed models of adult patients and virtual source models of multidetector computed tomography (MDCT) scanners. The GSF family of three-dimensional, voxelized patient models was implemented as input files for the Monte Carlo code MCNPX. The adult patient models represent a range of patient sizes and have all radiosensitive organs previously identified and segmented. Simulated 4DCT scans of each voxelized patient model were performed using a multi-detector CT source model that includes scanner specific spectra, bow-tie filtration, and helical source path. Standard MCNPX tally functions were applied to each model to estimate absolute organ dose based upon an air-kerma normalization measurement for nominal scanner operating parameters. 7. Presentation of floating mass transducer and Vibroplasty couplers on CT and cone beam CT.
PubMed Mlynski, Robert; Nguyen, Thi Dao; Plontke, Stefan K; Kösling, Sabrina 2014-04-01 Various titanium coupling elements, Vibroplasty Couplers, maintaining the attachment of the Floating Mass Transducer (FMT) of the active middle ear implant Vibrant Soundbridge (VSB) to the round window, the stapes suprastructure or the stapes footplate are in use to optimally transfer energy from the FMT to the inner ear fluids. In certain cases it is of interest to radiologically verify the correct position of the FMT coupler assembly. The imaging appearance of FMT connected to these couplers, however, is not well known. The aim of this study was to present the radiological appearance of correctly positioned Vibroplasty Couplers together with the FMT using two different imaging techniques. Vibroplasty Couplers were attached to the FMT of a Vibrant Soundbridge and implanted in formalin fixed human temporal bones. Five FMT coupler assemblies were implanted in different positions: conventionally to the incus, a Bell-Coupler, a CliP-Coupler, a Round Window-Coupler and an Oval Window-Coupler. High spatial resolution imaging with Multi-Detector CT (MDCT) and Cone Beam CT (CBCT) was performed in each specimen. Images were blind evaluated by two radiologists on a visual basis. Middle ear details, identification of FMT and coupler, position of FMT coupler assembly and artefacts were assessed. CBCT showed a better spatial resolution and a higher visual image quality than MDCT, but there was no significant advantage over MDCT in delineating the structures or the temporal bone of the FMT Coupler assemblies. The FMT with its coupler element could be clearly identified in the two imaging techniques. The correct positioning of the FMT and all types of couplers could be demonstrated. Both methods, MDCT and CBCT, are appropriate methods for postoperative localization of FMT in combination with Vibroplasty Couplers and for verifying their correct position. 
If CBCT is available, this method is recommended due to the better spatial resolution and less metal artifacts. 8. Dedicated Cone-Beam Breast CT: Feasibility Study with Surgical Mastectomy Specimens PubMed Central Yang, Wei Tse; Carkaci, Selin; Chen, Lingyun; Lai, Chao-Jen; Sahin, Aysegul; Whitman, Gary J.; Shaw, Chris C. 2010-01-01 OBJECTIVE The purpose of this study was to investigate the feasibility of diagnostic breast imaging using a flat-panel detector-based cone-beam CT system. CONCLUSION Imaging of 12 mastectomy specimens was performed at 50–80 kVp with a voxel size of 145 or 290 μm. Our study shows that cone-beam breast CT images have exceptional tissue contrast and can potentially reduce examination time with comparable radiation dose. PMID:18029864 9. Measurement of cone beam CT coincidence with megavoltage isocentre and image sharpness using the QUASAR™ Penta-Guide phantom Sykes, J. R.; Lindsay, R.; Dean, C. J.; Brettle, D. S.; Magee, D. R.; Thwaites, D. I. 2008-10-01 For image-guided radiotherapy (IGRT) systems based on cone beam CT (CBCT) integrated into a linear accelerator, the reproducible alignment of imager to x-ray source is critical to the registration of both the x-ray-volumetric image with the megavoltage (MV) beam isocentre and image sharpness. An enhanced method of determining the CBCT to MV isocentre alignment using the QUASAR™ Penta-Guide phantom was developed which improved both precision and accuracy. This was benchmarked against our existing method which used software and a ball-bearing (BB) phantom provided by Elekta. Additionally, a method of measuring an image sharpness metric (MTF50) from the edge response function of a spherical air cavity within the Penta-Guide phantom was developed and its sensitivity was tested by simulating misalignments of the kV imager. 
Reproducibility testing of the enhanced Penta-Guide method demonstrated a systematic error of <0.2 mm when compared to the BB method with near equivalent random error (s = 0.15 mm). The mean MTF50 for five measurements was 0.278 ± 0.004 lp mm-1 with no applied misalignment. Simulated misalignments exhibited a clear peak in the MTF50 enabling misalignments greater than 0.4 mm to be detected. The Penta-Guide phantom can be used to precisely measure CBCT MV coincidence and image sharpness on CBCT-IGRT systems. 10. Measurement of cone beam CT coincidence with megavoltage isocentre and image sharpness using the QUASAR Penta-Guide phantom. PubMed Sykes, J R; Lindsay, R; Dean, C J; Brettle, D S; Magee, D R; Thwaites, D I 2008-10-07 For image-guided radiotherapy (IGRT) systems based on cone beam CT (CBCT) integrated into a linear accelerator, the reproducible alignment of imager to x-ray source is critical to the registration of both the x-ray-volumetric image with the megavoltage (MV) beam isocentre and image sharpness. An enhanced method of determining the CBCT to MV isocentre alignment using the QUASAR Penta-Guide phantom was developed which improved both precision and accuracy. This was benchmarked against our existing method which used software and a ball-bearing (BB) phantom provided by Elekta. Additionally, a method of measuring an image sharpness metric (MTF(50)) from the edge response function of a spherical air cavity within the Penta-Guide phantom was developed and its sensitivity was tested by simulating misalignments of the kV imager. Reproducibility testing of the enhanced Penta-Guide method demonstrated a systematic error of <0.2 mm when compared to the BB method with near equivalent random error (s=0.15 mm). The mean MTF(50) for five measurements was 0.278+/-0.004 lp mm(-1) with no applied misalignment. Simulated misalignments exhibited a clear peak in the MTF(50) enabling misalignments greater than 0.4 mm to be detected. 
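An MTF50 of the kind measured above can be derived from an edge response function by differentiating to a line spread function and taking the Fourier transform. A sketch with a synthetic Gaussian-blurred edge (the analytic answer is known, so the estimate can be checked):

```python
import numpy as np
from math import erf

def mtf50(esf, dx):
    # ESF -> LSF by differentiation, then MTF as the normalized |FFT| of the LSF.
    lsf = np.gradient(esf, dx)
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]
    freqs = np.fft.rfftfreq(len(esf), dx)
    # Linear interpolation to the first crossing of MTF = 0.5.
    i = int(np.argmax(mtf < 0.5))
    return freqs[i - 1] + (0.5 - mtf[i - 1]) * (freqs[i] - freqs[i - 1]) / (mtf[i] - mtf[i - 1])

# Synthetic edge with Gaussian blur (sigma = 0.5 mm, 0.05 mm sampling):
# the analytic MTF is exp(-2 pi^2 sigma^2 f^2), so MTF50 is about 0.375 lp/mm.
dx = 0.05
x = np.arange(-10.0, 10.0, dx)
esf = np.array([0.5 * (1.0 + erf(v / (0.5 * 2 ** 0.5))) for v in x])
f50 = mtf50(esf, dx)
```

Real edge data would first be binned and denoised; this sketch assumes a clean, well-sampled edge like the air-cavity boundary in the phantom.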
The Penta-Guide phantom can be used to precisely measure CBCT-MV coincidence and image sharpness on CBCT-IGRT systems. 11. Image-Guided Radiotherapy (IGRT) for Prostate Cancer Comparing kV Imaging of Fiducial Markers With Cone Beam Computed Tomography (CBCT) SciTech Connect Barney, Brandon M.; Lee, R. Jeffrey; Handrahan, Diana; Welsh, Keith T.; Cook, J. Taylor; Sause, William T. 2011-05-01 Purpose: To present our single-institution experience with image-guided radiotherapy comparing fiducial markers and cone-beam computed tomography (CBCT) for daily localization of prostate cancer. Methods and Materials: From April 2007 to October 2008, 36 patients with prostate cancer received intensity-modulated radiotherapy with daily localization by use of implanted fiducials. Orthogonal kilovoltage (kV) portal imaging preceded all 1244 treatments. Cone-beam computed tomography images were also obtained before 286 treatments (23%). Shifts in the anterior-posterior (AP), superior-inferior (SI), and left-right (LR) dimensions were made from kV fiducial imaging. Cone-beam computed tomography shifts based on soft tissues were recorded. Shifts were compared by use of Bland-Altman limits of agreement. Mean and standard deviation of absolute differences were also compared. A difference of 5 mm or less was acceptable. Subsets including start date, body mass index, and prostate size were analyzed. Results: Of 286 treatments, 81 (28%) resulted in a greater than 5.0-mm difference in one or more dimensions. Mean differences in the AP, SI, and LR dimensions were 3.4 ± 2.6 mm, 3.1 ± 2.7 mm, and 1.3 ± 1.6 mm, respectively. Most deviations occurred in the posterior (fiducials, 78%; CBCT, 59%), superior (79%, 61%), and left (57%, 63%) directions. Bland-Altman 95% confidence intervals were -4.0 to 9.3 mm for AP, -9.0 to 5.3 mm for SI, and -4.1 to 3.9 mm for LR. The percentages of shift agreements within ±5 mm were 72.4% for AP, 72.7% for SI, and 97.2% for LR.
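Bland-Altman limits of agreement, as used in the shift comparison above, are simply the mean paired difference (bias) plus and minus 1.96 standard deviations of the differences. A minimal sketch with invented paired shifts:

```python
import numpy as np

def bland_altman(a, b):
    # Bias (mean paired difference) and 95% limits of agreement: bias ± 1.96 SD.
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired localization shifts (mm) from two methods.
bias, lo, hi = bland_altman([1.0, 2.0, 3.0, 4.0], [0.5, 2.5, 2.0, 4.5])
```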
Correlation between imaging techniques was not altered by time, body mass index, or prostate size. Conclusions: Cone-beam computed tomography and kV fiducial imaging are similar; however, more than one-fourth of CBCT and kV shifts differed enough to affect target coverage. This was even more pronounced with smaller margins (3 mm). Fiducial imaging requires less daily physician input, is less time-consuming, and is 12. Scatter correction for cone-beam CT in radiation therapy PubMed Central Zhu, Lei; Xie, Yaoqin; Wang, Jing; Xing, Lei 2009-01-01 Cone-beam CT (CBCT) is being increasingly used in modern radiation therapy for patient setup and adaptive replanning. However, due to the large volume of x-ray illumination, scatter becomes a rather serious problem and is considered one of the fundamental limitations of CBCT image quality. Many scatter correction algorithms have been proposed in the literature, while a standard practical solution still remains elusive. In radiation therapy, the same patient is scanned repetitively during a course of treatment; a natural question to ask is whether one can obtain the scatter distribution on the first day of treatment and then use the data for scatter correction in the subsequent scans on different days. To realize this scatter removal scheme, two technical pieces must be in place: (i) A strategy to obtain the scatter distribution in on-board CBCT imaging and (ii) a method to spatially match a prior scatter distribution with the on-treatment CBCT projection data for scatter subtraction. In this work, simple solutions to the two problems are provided. A partially blocked CBCT is used to extract the scatter distribution. The x-ray beam blocker has a strip pattern, such that partial volume can still be accurately reconstructed and the whole-field scatter distribution can be estimated from the detected signals in the shadow regions using interpolation/extrapolation.
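The shadow-interpolation step described above can be sketched in one dimension: detector pixels behind the blocker strips are read as (approximately) pure scatter, and those samples are interpolated across the row before subtraction. The strip layout and signal shapes below are invented for illustration:

```python
import numpy as np

def estimate_scatter(signal, shadow_idx):
    # Pixels behind the blocker strips see (approximately) scatter only;
    # interpolate those samples across the whole detector row.
    x = np.arange(len(signal))
    return np.interp(x, shadow_idx, signal[shadow_idx])

# Toy 1D detector row: smooth scatter plus a primary bump; strip shadows
# every 16 pixels plus the last pixel (assumed layout).
x = np.arange(128)
scatter = 20.0 + 10.0 * np.sin(x / 40.0)
primary = 100.0 * np.exp(-(((x - 64.0) / 30.0) ** 2))
shadow_idx = np.append(np.arange(0, 128, 16), 127)
measured = primary + scatter
measured[shadow_idx] = scatter[shadow_idx]      # primary is blocked in the strips
corrected = measured - estimate_scatter(measured, shadow_idx)
```

In the actual method the primary signal lost in the blocked strips is recovered from complementary projections; this sketch simply leaves those pixels aside.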
In the subsequent scans, the patient transformation is determined using a rigid registration of the conventional CBCT and the prior partial CBCT. From the derived patient transformation, the measured scatter is then modified to adapt the new on-treatment patient geometry for scatter correction. The proposed method is evaluated using physical experiments on a clinical CBCT system. On the Catphan©600 phantom, the errors in Hounsfield unit (HU) in the selected regions of interest are reduced from about 350 to below 50 HU; on an anthropomorphic phantom, the error is reduced from 15.7% to 5.4%. The proposed 13. Scatter correction for cone-beam CT in radiation therapy SciTech Connect Zhu Lei; Xie Yaoqin; Wang Jing; Xing Lei 2009-06-15 Cone-beam CT (CBCT) is being increasingly used in modern radiation therapy for patient setup and adaptive replanning. However, due to the large volume of x-ray illumination, scatter becomes a rather serious problem and is considered as one of the fundamental limitations of CBCT image quality. Many scatter correction algorithms have been proposed in literature, while a standard practical solution still remains elusive. In radiation therapy, the same patient is scanned repetitively during a course of treatment, a natural question to ask is whether one can obtain the scatter distribution on the first day of treatment and then use the data for scatter correction in the subsequent scans on different days. To realize this scatter removal scheme, two technical pieces must be in place: (i) A strategy to obtain the scatter distribution in on-board CBCT imaging and (ii) a method to spatially match a prior scatter distribution with the on-treatment CBCT projection data for scatter subtraction. In this work, simple solutions to the two problems are provided. A partially blocked CBCT is used to extract the scatter distribution. 
The x-ray beam blocker has a strip pattern, such that partial volume can still be accurately reconstructed and the whole-field scatter distribution can be estimated from the detected signals in the shadow regions using interpolation/extrapolation. In the subsequent scans, the patient transformation is determined using a rigid registration of the conventional CBCT and the prior partial CBCT. From the derived patient transformation, the measured scatter is then modified to adapt the new on-treatment patient geometry for scatter correction. The proposed method is evaluated using physical experiments on a clinical CBCT system. On the Catphan(c)600 phantom, the errors in Hounsfield unit (HU) in the selected regions of interest are reduced from about 350 to below 50 HU; on an anthropomorphic phantom, the error is reduced from 15.7% to 5.4%. The proposed method 14. Accurate patient dosimetry of kilovoltage cone-beam CT in radiation therapy SciTech Connect Ding, George X.; Duggan, Dennis M.; Coffey, Charles W. 2008-03-15 The increased utilization of x-ray imaging in image-guided radiotherapy has dramatically improved the radiation treatment and the lives of cancer patients. Daily imaging procedures, such as cone-beam computed tomography (CBCT), for patient setup may significantly increase the dose to the patient's normal tissues. This study investigates the dosimetry from a kilovoltage (kV) CBCT for real patient geometries. Monte Carlo simulations were used to study the kV beams from a Varian on-board imager integrated into the Trilogy accelerator. The Monte Carlo calculated results were benchmarked against measurements and good agreement was obtained. The authors developed a novel method to calibrate Monte Carlo simulated beams with measurements using an ionization chamber in which the air-kerma calibration factors are obtained from an Accredited Dosimetry Calibration Laboratory. 
The authors introduced a new Monte Carlo calibration factor, f_MCcal, which is determined from the calibration procedure. The accuracy of the new method was validated by experiment. Once a Monte Carlo simulated beam has been calibrated, the simulated beam can be used to accurately predict absolute dose distributions in the irradiated media. Using this method the authors calculated dose distributions in patient anatomies from a typical CBCT acquisition for different treatment sites, such as head and neck, lung, and pelvis. Their results show that, for a typical head-and-neck CBCT, doses to soft tissues such as the eye, spinal cord, and brain can be up to 8, 6, and 5 cGy, respectively. The dose to bone, due to the photoelectric effect, can be as much as 25 cGy, about three times the dose to soft tissue. The study provides detailed information on the additional doses to the normal tissues of a patient from a typical kV CBCT acquisition. The methodology of Monte Carlo beam calibration developed and introduced in this study allows the user to calculate both relative and absolute

15. 4D cone beam CT via spatiotemporal tensor framelet
SciTech Connect
Gao, Hao; Li, Ruijiang; Xing, Lei; Lin, Yuting
2012-11-15
Purpose: On-board 4D cone beam CT (4DCBCT) offers respiratory phase-resolved volumetric imaging and improves the accuracy of target localization in image-guided radiation therapy. However, the clinical utility of this technique has been greatly impeded by its degraded image quality, prolonged imaging time, and increased imaging dose. The purpose of this letter is to develop a novel iterative 4DCBCT reconstruction method for improved image quality, increased imaging speed, and reduced imaging dose.
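The calibration factor f_MCcal in (14) amounts to a measured-to-simulated dose ratio that rescales per-history Monte Carlo output into absolute dose. A minimal sketch; the function names and the simple ratio form are assumptions made here, and the paper's chamber-based procedure is more involved.

```python
def mc_calibration_factor(measured_dose_gy, simulated_dose_per_history):
    """Ratio converting MC dose-per-history into absolute dose (Gy).
    Illustrative only; the paper uses an ionization-chamber procedure
    with ADCL-traceable air-kerma calibration factors."""
    return measured_dose_gy / simulated_dose_per_history

def absolute_dose(simulated_dose_per_history, f_mccal):
    """Absolute dose predicted by a calibrated simulated beam."""
    return f_mccal * simulated_dose_per_history
```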
Methods: The essence of this work is to introduce the spatiotemporal tensor framelet (STF), a high-dimensional tensor generalization of the 1D framelet for 4DCBCT, to effectively take into account the highly correlated and redundant features of the patient anatomy during respiration, in a multilevel fashion with a multibasis sparsifying transform. The STF-based algorithm is implemented on a GPU platform for improved computational efficiency. To evaluate the method, 4DCBCT full-fan scans were acquired within 30 s, with a gantry rotation of 200°; STF is also compared with a state-of-the-art reconstruction method via spatiotemporal total variation regularization. Results: Both the simulation and experimental results demonstrate that STF-based reconstruction achieved superior image quality. The reconstruction of 20 respiratory phases took less than 10 min on an NVIDIA Tesla C2070 GPU card. The STF codes are available at https://sites.google.com/site/spatiotemporaltensorframelet . Conclusions: By effectively utilizing the spatiotemporal coherence of the patient anatomy among different respiratory phases in a multilevel fashion with a multibasis sparsifying transform, the proposed STF method potentially enables fast and low-dose 4DCBCT with improved image quality.

16. Characterization and correction of beam-hardening artifacts during dynamic volume CT assessment of myocardial perfusion.
PubMed
Kitagawa, Kakuya; George, Richard T; Arbab-Zadeh, Armin; Lima, João A C; Lardo, Albert C
2010-07-01
To fully characterize beam-hardening effects caused by iodinated contrast medium in the left ventricular (LV) cavity and aorta in the assessment of myocardial perfusion at computed tomography (CT), and to validate a beam-hardening artifact correction algorithm that considers fluid-filled vessels and chambers important sources of beam hardening. The Johns Hopkins University animal care and use committee approved all procedures.
An anatomically correct LV and myocardial phantom was designed to characterize beam-hardening artifacts. Following validation in the phantom, the beam-hardening correction (BHC) algorithm was applied to 256-detector-row dynamic volume CT images in a canine ischemia model (n = 5) during adenosine stress, and the effect of beam hardening was determined by comparing regional dynamic volume CT perfusion metrics (myocardial upslope normalized by maximum LV blood pool attenuation) with microsphere-derived myocardial blood flow (MBF). A paired Student t test was used to compare continuous variables from the same subject under different conditions, while linear regression analysis was performed to estimate the slope and statistical significance of the relationship between CT-derived perfusion metrics and microsphere-derived MBF. Beam-hardening artifacts were successfully reproduced in phantom studies and were eliminated with the BHC algorithm. The correlation coefficient between CT-derived perfusion metrics and microsphere-derived MBF improved from 0.60 to 0.74 (P > .05) following correction in the animal model. Beam-hardening artifacts confound dynamic volume CT assessment of myocardial perfusion, and application of the BHC algorithm helps improve the accuracy of myocardial perfusion assessment at dynamic volume CT.

17. Fast kilovoltage/megavoltage (kVMV) breathhold cone-beam CT for image-guided radiotherapy of lung cancer
Wertz, Hansjoerg; Stsepankou, Dzmitry; Blessing, Manuel; Rossi, Michael; Knox, Chris; Brown, Kevin; Gros, Uwe; Boda-Heggemann, Judit; Walter, Cornelia; Hesser, Juergen; Lohr, Frank; Wenz, Frederik
2010-08-01
Long image acquisition times of 60-120 s for cone-beam CT (CBCT) limit the number of patients with lung cancer who can undergo volume image guidance under breathhold. We developed a low-dose dual-energy kilovoltage-megavoltage cone-beam CT (kVMV-CBCT) based on a clinical treatment unit, reducing imaging time to <=15 s.
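The perfusion metric used in (16), myocardial upslope normalized by maximum LV blood-pool attenuation, can be computed from sampled time-attenuation curves roughly as follows; the function name and curve shapes are illustrative, not the study's code.

```python
import numpy as np

def normalized_upslope(t, hu_myo, hu_lv):
    """Myocardial upslope (max first difference per unit time)
    normalized by the maximum LV blood-pool attenuation."""
    upslope = np.max(np.diff(hu_myo) / np.diff(t))  # HU per second
    return upslope / np.max(hu_lv)

# Example: a linearly rising myocardial curve and an LV curve
# peaking at 500 HU (made-up numbers for illustration).
t = np.array([0.0, 1.0, 2.0, 3.0])
hu_myo = np.array([0.0, 10.0, 20.0, 30.0])
hu_lv = np.array([0.0, 500.0, 400.0, 300.0])
metric = normalized_upslope(t, hu_myo, hu_lv)
```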
Simultaneous kVMV imaging was achieved by dedicated synchronization hardware controlling the output of the linear accelerator (linac) based on detector panel readout signals, preventing imaging artifacts from interference between the linac's MV irradiation and panel readouts. Optimization was performed to minimize the imaging dose. Single MV projections, reconstructed MV-CBCT images, and images of simultaneous 90° kV- and 90° MV-CBCT (180° kVMV-CBCT) were acquired with different parameters. Image quality and imaging dose were evaluated and compared to kV imaging. Hardware-based kVMV synchronization resulted in artifact-free projections. A combined 180° kVMV-CBCT scan with a total MV dose of 5 monitor units was acquired in 15 s and with sufficient image quality. The resolution was 5-6 line pairs cm⁻¹ (Catphan phantom). The combined kVMV scan dose was equivalent to a kV-radiation scan dose of ~33 mGy. kVMV-CBCT based on a standard linac is promising and can provide ultra-fast online volume image guidance with low imaging dose and sufficient image quality for fast and accurate positioning of patients with lung cancer under breathhold.

18. Imaging task-based optimal kV and mA selection for CT radiation dose reduction: from filtered backprojection (FBP) to statistical model-based iterative reconstruction (MBIR)
Li, Ke; Gomez-Cardona, Daniel; Lubner, Meghan G.; Pickhardt, Perry J.; Chen, Guang-Hong
2015-03-01
Optimal selection of tube potential (kV) and tube current (mA) is essential to maximizing the diagnostic potential of a given CT technology while minimizing radiation dose. The use of a lower tube potential may improve image contrast, but may also require a significantly higher tube current to compensate for the rapid decrease of tube output at lower tube potentials. Therefore, the selection of kV and mA should take such constraints, as well as the specific diagnostic imaging task, into consideration.
For conventional quasi-linear CT systems employing the linear filtered backprojection (FBP) image reconstruction algorithm, the optimization of kV-mA combinations is relatively straightforward, as neither spatial resolution nor noise texture has significant dependence on kV and mA settings. In these cases, zero-frequency analysis such as the contrast-to-noise ratio (CNR) or the CNR normalized by dose (CNRD) can be used for optimal kV-mA selection. The recently introduced statistical model-based iterative reconstruction (MBIR) method, however, has introduced new challenges to optimal kV and mA selection, as both spatial resolution and noise texture become closely correlated with kV and mA. In this work, a task-based approach based on modern signal detection theory and the corresponding frequency-dependent analysis is proposed to perform the kV and mA optimization for both FBP and MBIR. By performing exhaustive measurements of a task-based detectability index through the technically accessible kV-mA parameter space, iso-detectability contours were generated and overlaid on top of iso-dose contours, from which the kV-mA pair that minimizes dose while still achieving the desired detectability level can be identified.

19. Single-slice reconstruction method for helical cone-beam differential phase-contrast CT.
PubMed
Fu, Jian; Chen, Liyuan
2014-01-01
X-ray phase-contrast computed tomography (PC-CT) can provide internal structure information of biomedical specimens with high-quality cross-section images and has become an invaluable analysis tool. Here a simple and fast reconstruction algorithm, called the DPC-CB-SSRB algorithm, is reported for helical cone-beam differential PC-CT (DPC-CT). It combines the existing CB-SSRB method of helical cone-beam absorption-contrast CT with the differential nature of DPC imaging. The reconstruction can be performed using the 2D fan-beam filtered backprojection algorithm with the imaginary Hilbert filter.
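For the quasi-linear FBP case described in (18), the zero-frequency metrics reduce to simple ROI statistics. A minimal sketch; the square-root-of-dose normalization for CNRD is one common convention, assumed here rather than taken from the paper.

```python
import numpy as np

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio: ROI mean difference over background noise."""
    contrast = np.mean(signal_roi) - np.mean(background_roi)
    return contrast / np.std(background_roi)

def cnrd(signal_roi, background_roi, dose_mgy):
    """Dose-normalized CNR. Noise variance scales roughly inversely with
    dose, so CNR/sqrt(dose) compares protocols at different dose levels."""
    return cnr(signal_roi, background_roi) / np.sqrt(dose_mgy)
```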
The quality of the results for large helical pitches is surprisingly good. In particular, this algorithm obtains image quality with helical cone-beam DPC-CT data at a normalized pitch of 10 comparable to that obtained with traditional inter-row interpolation reconstruction at a normalized pitch of 2. This method should advance future medical helical cone-beam DPC-CT imaging applications.

20. Dual-source multi-energy CT with triple or quadruple x-ray beams
Yu, Lifeng; Li, Zhoubo; Leng, Shuai; McCollough, Cynthia H.
2016-03-01
Energy-resolved photon-counting CT (PCCT) is promising for material decomposition with multiple contrast agents. However, corrections for non-idealities of PCCT detectors are required, and these remain active research areas. In addition, PCCT carries a very high cost due to the lack of mass production. In this work, we propose an alternative approach to performing multi-energy CT, achieved by acquiring triple or quadruple x-ray beam measurements on a dual-source CT scanner. This strategy is based on the "Twin Beam" design on a single-source scanner for dual-energy CT. Examples of beam filters and spectra for triple and quadruple x-ray beams are provided. Computer simulation studies were performed to evaluate the accuracy of material decomposition for multi-contrast mixtures using both tri-beam and quadruple-beam configurations. The proposed strategy can be readily implemented on a dual-source scanner, which may allow material decomposition of multiple contrast agents to be performed on clinical CT scanners with energy-integrating detectors.

1. Identification of dental root canals and their medial line from micro-CT and cone-beam CT records
PubMed Central
2012-01-01
Background: Shape of the dental root canal is highly patient specific.
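The imaginary Hilbert filter that replaces FBP's ramp filter for differential phase data in (19) can be applied in the frequency domain. A sketch assuming the common -i·sign(f) convention; the function name and discretization are choices made here.

```python
import numpy as np

def hilbert_filter(dpc_row):
    """Filter one differential phase-contrast projection row with the
    imaginary Hilbert kernel -i*sign(f) via the FFT."""
    freqs = np.fft.fftfreq(dpc_row.size)
    kernel = -1j * np.sign(freqs)
    return np.real(np.fft.ifft(np.fft.fft(dpc_row) * kernel))
```

With this sign convention, the Hilbert transform maps a cosine onto a sine, which gives a quick sanity check of the implementation.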
Automated identification of the medial line of dental root canals and the reproduction of their 3D shape can be beneficial for planning endodontic interventions, as severely curved root canals or multi-rooted teeth may pose treatment challenges. Accurate shape information of the root canals may also be used by manufacturers of endodontic instruments in order to make more efficient clinical tools. Method: Novel image processing procedures dedicated to the automated detection of the medial axis of the root canal from dental micro-CT and cone-beam CT records are developed. For micro-CT, the 3D model of the root canal is built up from several hundred parallel cross sections, using image enhancement, histogram-based fuzzy c-means clustering, center point detection in the segmented slice, three-dimensional inner surface reconstruction, and potential-field-driven curve skeleton extraction in three dimensions. Cone-beam CT records are processed with image enhancement filters and fuzzy-chain-based regional segmentation, followed by reconstruction of the root canal surface and detection of its skeleton via a mesh contraction algorithm. Results: The proposed medial line identification and root canal detection algorithms are validated on clinical data sets. 25 micro-CT and 36 cone-beam CT records are used in the validation procedure. The overall success rate of the automatic dental root canal identification was about 92% in both procedures. The algorithms proved to be accurate enough for endodontic therapy planning. Conclusions: Accurate medial line identification and shape detection algorithms for the dental root canal have been developed. Different procedures are defined for micro-CT and cone-beam CT records. The automated execution of the subsequent processing steps allows easy application of the algorithms in dental care. The output data of the image processing procedures is suitable for mathematical

2.
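The histogram-based fuzzy c-means step in (1) can be illustrated with a tiny 1D two-class implementation. Everything here, the function name, initialization at the data extremes, and parameter values, is a choice made for illustration, not the paper's implementation.

```python
import numpy as np

def fuzzy_cmeans_1d(x, m=2.0, iters=100):
    """Minimal two-class 1D fuzzy c-means: soft memberships u and
    weighted centroid updates, initialized at the data extremes."""
    centers = np.array([x.min(), x.max()], dtype=float)
    for _ in range(iters):
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12      # (2, n)
        # u[i, k] = 1 / sum_j (d[i, k] / d[j, k]) ** (2 / (m - 1))
        ratio = (d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1.0))
        u = 1.0 / ratio.sum(axis=1)                            # memberships
        centers = (u**m @ x) / (u**m).sum(axis=1)              # centroid update
    return centers, u

# Two well-separated intensity clusters (e.g., lumen vs. dentin bins).
x = np.array([0.0, 0.1, 0.2, 9.9, 10.0, 10.1])
centers, u = fuzzy_cmeans_1d(x)
```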
Beam hardening and smoothing correction effects on performance of micro-CT SkyScan 1173 for imaging low contrast density materials
Sriwayu, Wa Ode; Haryanto, Freddy; Khotimah, Siti Nurul; Latief, Fourier Dzar Eljabbar
2015-04-01
We designed and fabricated a phantom mimicking breast cancer composition, a region known for its low contrast density. Microcalcifications, fatty tissue, and a tumor mass were represented using Al2O3, C27H46O, and hard nylon, respectively. The phantom also has a section for evaluating the low-contrast criterion, the contrast-to-noise ratio (CNR). Uniformity was measured in a distilled-water medium located in the contrast-scale part of the phantom. The phantom was imaged using the high-energy micro-CT SkyScan 1173, and CT numbers were quantified to examine the SkyScan 1173's performance in imaging low-contrast-density materials. CT numbers were evaluated at a technique configuration of 30 kV, 0.160 mAs exposure, and a camera resolution of 560x560 pixels; the effect of the reconstruction process on image quality was evaluated by varying image processing parameters, namely beam-hardening corrections of 25%, 66%, and 100%, each with smoothing levels S10, S2, and S7. To obtain the highest image quality, the beam-hardening correction should be set to 66% and the smoothing level to its maximum value of 10.

3.
Reducing radiation dose to the female breast during CT coronary angiography: A simulation study comparing breast shielding, angular tube current modulation, reduced kV, and partial angle protocols using an unknown-location signal-detectability metric
SciTech Connect
Rupcich, Franco; Gilat Schmidt, Taly; Badal, Andreu; Popescu, Lucretiu M.; Kyprianou, Iacovos
2013-08-15
Purpose: The authors compared the performance of five protocols intended to reduce dose to the breast during computed tomography (CT) coronary angiography scans using a model-observer unknown-location signal-detectability metric. Methods: The authors simulated CT images of an anthropomorphic female thorax phantom for a 120 kV reference protocol and five "dose reduction" protocols intended to reduce dose to the breast: 120 kV partial angle (posteriorly centered), 120 kV tube-current modulated (TCM), 120 kV with shielded breasts, 80 kV, and 80 kV partial angle (posteriorly centered). Two image quality tasks were investigated: the detection and localization of 4-mm, 3.25 mg/ml and 1-mm, 6.0 mg/ml iodine contrast signals randomly located in the heart region. For each protocol, the authors plotted the signal detectability, as quantified by the area under the exponentially transformed free-response characteristic curve estimator (Â_FE), as well as noise and contrast-to-noise ratio (CNR), versus breast and lung dose. In addition, the authors quantified each protocol's dose performance as the percent difference in dose relative to the reference protocol achieved while maintaining equivalent Â_FE. Results: For the 4-mm signal-size task, the 80 kV full scan and 80 kV partial angle protocols decreased dose to the breast (80.5% and 85.3%, respectively) and lung (80.5% and 76.7%, respectively) with Â_FE = 0.96, but also resulted in an approximately three-fold increase in image noise.
The 120 kV partial protocol reduced dose to the breast (17.6%) at the expense of increased lung dose (25.3%). The TCM algorithm decreased dose to the breast (6.0%) and lung (10.4%). Breast shielding increased breast dose (67.8%) and lung dose (103.4%). The 80 kV and 80 kV partial protocols demonstrated greater dose reductions for the 4-mm task than for the 1-mm task, and the shielded protocol showed a larger increase in dose for the 4-mm task than for the 1-mm task

4. Region-of-interest image reconstruction with intensity weighting in circular cone-beam CT for image-guided radiation therapy.
PubMed
Cho, Seungryong; Pearson, Erik; Pelizzari, Charles A; Pan, Xiaochuan
2009-04-01
Imaging plays a vital role in radiation therapy, and with recent advances in technology considerable emphasis has been placed on cone-beam CT (CBCT). Attaching a kV x-ray source and a flat-panel detector directly to the linear accelerator gantry has enabled progress in target localization techniques, which can include daily CBCT setup scans for some treatments. However, with an increasing number of CT scans there is also increasing concern for patient exposure. An intensity-weighted region-of-interest (IWROI) technique, which has the potential to greatly reduce CBCT dose, has been developed in conjunction with the chord-based backprojection-filtration (BPF) reconstruction algorithm, and its feasibility for clinical use is demonstrated in this article. A nonuniform filter is placed in the x-ray beam to create regions of two different beam intensities. In this manner, regions outside the target area can be given a reduced dose but still be visualized with a lower contrast-to-noise ratio. Image artifacts due to transverse data truncation, which would have occurred with conventional reconstruction algorithms, are avoided, and image noise levels in the low- and high-intensity regions are well controlled by use of the chord-based BPF reconstruction algorithm.
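The dose-performance figures quoted for the protocols in (3), the percent difference in dose relative to the reference protocol at matched detectability, reduce to a one-line computation; the helper name and example numbers are illustrative.

```python
def dose_reduction_percent(dose_reference, dose_protocol):
    """Percent dose difference relative to the reference protocol,
    positive when the protocol delivers less dose than the reference."""
    return 100.0 * (dose_reference - dose_protocol) / dose_reference

# Example: a protocol delivering 19.5 units against a 100-unit
# reference corresponds to an 80.5% breast-dose reduction.
reduction = dose_reduction_percent(100.0, 19.5)
```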
The proposed IWROI technique can play an important role in image-guided radiation therapy.

5. EGS_cbct: Simulation of a fan beam CT and RMI phantom for measured HU verification.
PubMed
van Eeden, Dete; du Plessis, Freek
2016-10-01
A mathematical 3D model of an existing computed tomography (CT) scanner was created and used in the EGSnrc-based BEAMnrc and egs_cbct Monte Carlo codes. Simulated transmission dose profiles of an RMI-465 phantom were analysed to verify Hounsfield numbers against measured data obtained from the CT scanner. The modelled CT unit is based on the design of a Toshiba Aquilion 16 LB CT scanner. As a first step, BEAMnrc simulated the x-ray tube, filters, and secondary collimation to obtain phase-space data of the x-ray beam. A bowtie filter was included to create a more uniform beam intensity and to remove beam-hardening effects. In a second step, Interactive Data Language (IDL) code was used to build an EGSPHANT file containing the RMI phantom, which was used in egs_cbct simulations. After simulation, a series of profiles was sampled from the detector model and the Feldkamp-Davis-Kress (FDK) algorithm was used to reconstruct transversal images. The results were tested against measured data obtained from CT scans. The egs_cbct code can be used for the simulation of a fan-beam CT unit. The calculated bowtie filter ensured a uniform flux on the detectors. Good correlation between measured and simulated CT numbers was obtained. In principle, Monte Carlo codes such as egs_cbct can model a fan-beam CT unit. After reconstruction, the images contained Hounsfield values comparable to measured data.

6.
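The FDK reconstruction used after the egs_cbct simulation in (5) begins by ramp-filtering each projection row. A frequency-domain sketch of that filtering step, assuming a plain Ram-Lak kernel with no apodization window:

```python
import numpy as np

def ramp_filter(projection_row):
    """Frequency-domain ramp (Ram-Lak) filtering of one projection row,
    the filtering step that precedes backprojection in FDK/FBP."""
    freqs = np.fft.fftfreq(projection_row.size)
    filtered = np.fft.ifft(np.fft.fft(projection_row) * np.abs(freqs))
    return np.real(filtered)
```

Because the ramp kernel is zero at DC, a constant projection row filters to zero, which is a convenient sanity check.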
Evaluation of the OSC-TV iterative reconstruction algorithm for cone-beam optical CT
SciTech Connect
Matenine, Dmitri; Mascolo-Fortin, Julia; Goussard, Yves
2015-11-15
Purpose: The present work evaluates an iterative reconstruction approach, namely the ordered subsets convex (OSC) algorithm with regularization via total variation (TV) minimization, in the field of cone-beam optical computed tomography (optical CT). One of the uses of optical CT is gel-based 3D dosimetry for radiation therapy, where it is employed to map dose distributions in radiosensitive gels. Model-based iterative reconstruction may improve optical CT image quality and contribute to a wider use of optical CT in clinical gel dosimetry. Methods: This algorithm was evaluated using experimental data acquired by a cone-beam optical CT system, as well as complementary numerical simulations. A fast GPU implementation of OSC-TV was used to achieve reconstruction times comparable to those of conventional filtered backprojection. Images obtained via OSC-TV were compared with the corresponding filtered backprojections. Spatial resolution and uniformity phantoms were scanned and the respective reconstructions were subjected to evaluation of the modulation transfer function, image uniformity, and accuracy. The artifacts due to refraction and total signal loss from opaque objects were also studied. Results: The cone-beam optical CT data reconstructions showed that OSC-TV outperforms filtered backprojection in terms of image quality, thanks to a model-based simulation of the photon attenuation process. It was shown to significantly improve the image spatial resolution and reduce image noise. The accuracy of the estimation of linear attenuation coefficients remained similar to that obtained via filtered backprojection. Certain image artifacts due to opaque objects were reduced. Nevertheless, the common artifact due to the gel container walls could not be eliminated.
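The TV-minimization half of OSC-TV can be illustrated in 1D with smoothed-TV gradient descent on a denoising objective. This is a stand-in only: OSC-TV's actual update interleaves ordered-subsets convex iterations with TV steps, and all names and parameter values below are choices made here.

```python
import numpy as np

def tv_denoise_1d(y, lam=0.3, step=0.05, iters=500, eps=1e-2):
    """Gradient descent on 0.5*||x - y||^2 + lam * smoothed TV(x),
    where TV is smoothed as sum(sqrt(diff^2 + eps))."""
    x = y.astype(float).copy()
    for _ in range(iters):
        d = np.diff(x)
        g = d / np.sqrt(d * d + eps)              # derivative of sqrt(d^2+eps)
        # Chain rule through d_i = x_{i+1} - x_i gives this gradient layout.
        tv_grad = np.concatenate(([-g[0]], g[:-1] - g[1:], [g[-1]]))
        x -= step * ((x - y) + lam * tv_grad)
    return x

def objective(x, y, lam=0.3, eps=1e-2):
    """The composite objective being minimized (for sanity checks)."""
    d = np.diff(x)
    return 0.5 * np.sum((x - y) ** 2) + lam * np.sum(np.sqrt(d * d + eps))
```

The step size is kept below the inverse Lipschitz constant of the smoothed gradient, so each iteration decreases the objective.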
Conclusions: The use of iterative reconstruction improves cone-beam optical CT image quality in many ways. The comparisons between OSC-TV and filtered backprojection presented in this paper demonstrate that OSC-TV can

7. Evaluation of the OSC-TV iterative reconstruction algorithm for cone-beam optical CT.
PubMed
Matenine, Dmitri; Mascolo-Fortin, Julia; Goussard, Yves; Després, Philippe
2015-11-01
The present work evaluates an iterative reconstruction approach, namely the ordered subsets convex (OSC) algorithm with regularization via total variation (TV) minimization, in the field of cone-beam optical computed tomography (optical CT). One of the uses of optical CT is gel-based 3D dosimetry for radiation therapy, where it is employed to map dose distributions in radiosensitive gels. Model-based iterative reconstruction may improve optical CT image quality and contribute to a wider use of optical CT in clinical gel dosimetry. This algorithm was evaluated using experimental data acquired by a cone-beam optical CT system, as well as complementary numerical simulations. A fast GPU implementation of OSC-TV was used to achieve reconstruction times comparable to those of conventional filtered backprojection. Images obtained via OSC-TV were compared with the corresponding filtered backprojections. Spatial resolution and uniformity phantoms were scanned and the respective reconstructions were subjected to evaluation of the modulation transfer function, image uniformity, and accuracy. The artifacts due to refraction and total signal loss from opaque objects were also studied. The cone-beam optical CT data reconstructions showed that OSC-TV outperforms filtered backprojection in terms of image quality, thanks to a model-based simulation of the photon attenuation process. It was shown to significantly improve the image spatial resolution and reduce image noise.
The accuracy of the estimation of linear attenuation coefficients remained similar to that obtained via filtered backprojection. Certain image artifacts due to opaque objects were reduced. Nevertheless, the common artifact due to the gel container walls could not be eliminated. The use of iterative reconstruction improves cone-beam optical CT image quality in many ways. The comparisons between OSC-TV and filtered backprojection presented in this paper demonstrate that OSC-TV can potentially improve the rendering of

8. TU-EF-204-03: Task-Based kV and mAs Optimization for Radiation Dose Reduction in CT: From FBP to Statistical Model-Based Iterative Reconstruction (MBIR)
SciTech Connect
Gomez-Cardona, D; Li, K; Lubner, M G; Pickhardt, P J; Chen, G-H
2015-06-15
Purpose: The introduction of the highly nonlinear MBIR algorithm to clinical CT systems has made CNR an invalid metric for kV optimization. The purpose of this work was to develop a task-based framework to unify kV and mAs optimization for both FBP- and MBIR-based CT systems. Methods: The kV-mAs optimization was formulated as a constrained minimization problem: select kV and mAs to minimize dose under the constraint of maintaining the detection performance at the clinically prescribed level. To experimentally solve this optimization problem, exhaustive measurements of the detectability index (d') for a hepatic lesion detection task were performed at 15 different mA levels and 4 kV levels using an anthropomorphic phantom. The measured d' values were used to generate an iso-detectability map; similarly, dose levels recorded at different kV-mAs combinations were used to generate an iso-dose map. The iso-detectability map was overlaid on top of the iso-dose map so that, for a prescribed detectability level d', the optimal kV-mA can be determined from the crossing between the d' contour and the dose contour that corresponds to the minimum dose.
Results: Taking d' = 16 as an example, the kV-mA combinations on the measured iso-d' line of MBIR are 80 kV/150 mA (3.8), 100 kV/140 mA (6.6), 120 kV/150 mA (11.3), and 140 kV/160 mA (17.2), where the values in parentheses are measured dose values. As a result, the optimal kV was 80 and the optimal mA was 150. In comparison, the optimal kV and mA for FBP were 100 and 500, which corresponded to a dose level of 24 mGy. Results of in vivo animal experiments were consistent with the phantom results. Conclusion: A new method to optimize kV and mAs selection has been developed. This method is applicable to both linear and nonlinear CT systems such as those using MBIR. Additional dose savings can be achieved by combining MBIR with this method. This work was partially supported by NIH grant R01CA169331 and GE Healthcare. K. Li, D. Gomez-Cardona, M. G

9. Dual-Source Multi-Energy CT with Triple or Quadruple X-ray Beams.
PubMed
Yu, Lifeng; Leng, Shuai; McCollough, Cynthia H
2016-02-01
Energy-resolved photon-counting CT (PCCT) is promising for material decomposition with multiple contrast agents. However, corrections for non-idealities of PCCT detectors are required, and these remain active research areas. In addition, PCCT carries a very high cost due to the lack of mass production. In this work, we propose an alternative approach to performing multi-energy CT, achieved by acquiring triple or quadruple x-ray beam measurements on a dual-source CT scanner. This strategy is based on the "Twin Beam" design on a single-source scanner for dual-energy CT. Examples of beam filters and spectra for triple and quadruple x-ray beams are provided. Computer simulation studies were performed to evaluate the accuracy of material decomposition for multi-contrast mixtures using a tri-beam configuration.
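On a measured grid, overlaying iso-detectability on iso-dose contours and picking the minimum-dose crossing, as in (8), is a constrained grid search. A sketch with hypothetical grids and dose values; the function name is illustrative.

```python
import numpy as np

def optimal_technique(kv_grid, ma_grid, dprime, dose, d_target):
    """Minimum-dose (kV, mA) among settings whose measured detectability
    d' meets the prescribed level. dprime and dose are (n_kv, n_ma)."""
    feasible = dprime >= d_target
    if not feasible.any():
        return None
    cost = np.where(feasible, dose, np.inf)    # rule out infeasible settings
    i, j = np.unravel_index(np.argmin(cost), cost.shape)
    return kv_grid[i], ma_grid[j], dose[i, j]

# Hypothetical 2x2 measurement grid (made-up numbers).
kv_grid = [80, 100]
ma_grid = [100, 150]
dprime = np.array([[10.0, 16.0], [16.0, 18.0]])
dose = np.array([[3.0, 4.0], [6.0, 8.0]])
best = optimal_technique(kv_grid, ma_grid, dprime, dose, 16.0)
```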
The proposed strategy can be readily implemented on a dual-source scanner, which may allow material decomposition of multiple contrast agents to be performed on clinical CT scanners with energy-integrating detectors.

10. Metal Artifact Reduction for Polychromatic X-ray CT Based on a Beam-Hardening Corrector.
PubMed
Park, Hyoung Suk; Hwang, Dosik; Seo, Jin Keun
2016-02-01
This paper proposes a new method to correct beam-hardening artifacts caused by the presence of metal in polychromatic x-ray computed tomography (CT) without degrading the intact anatomical images. Metal artifacts due to beam hardening, a consequence of x-ray beam polychromaticity, are becoming an increasingly important issue affecting CT scanning as medical implants become more common in a generally aging population. The associated higher-order beam-hardening factors can be corrected via analysis of the mismatch between the measured sinogram data and the ideal forward projectors in CT reconstruction, taking into account the known geometry of high-attenuation objects. Without prior knowledge of the spectrum parameters or energy-dependent attenuation coefficients, the proposed correction allows the background CT image (i.e., the image before its corruption by metal artifacts) to be extracted from the uncorrected CT image. Computer simulations and phantom experiments demonstrate the effectiveness of the proposed method in alleviating beam-hardening artifacts.

11. WE-D-9A-02: Automated Landmark-Guided CT to Cone-Beam CT Deformable Image Registration
SciTech Connect
Kearney, V; Gu, X; Chen, S; Jiang, L; Liu, H; Chiu, T; Yordy, J; Nedzi, L; Mao, W
2014-06-15
Purpose: The anatomical changes that occur between the simulation CT and daily cone-beam CT (CBCT) are investigated using an automated landmark-guided deformable image registration (LDIR) algorithm with simultaneous intensity correction. LDIR was designed to be accurate in the presence of tissue intensity mismatch and heavy noise contamination.
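With three or four spectral measurements per ray, as in the tri- and quadruple-beam configurations of (9), the decomposition into basis materials becomes an overdetermined linear system. The sketch below assumes a linearized forward model; the sensitivity matrix is made up for illustration.

```python
import numpy as np

def decompose(measurements, sensitivity):
    """Least-squares basis-material decomposition: solve A @ x ≈ m for
    material amounts x, given per-beam sensitivities A (n_beams x n_mat)."""
    x, *_ = np.linalg.lstsq(sensitivity, measurements, rcond=None)
    return x

# Hypothetical three-beam, two-material sensitivity matrix.
A = np.array([[1.0, 2.0],
              [3.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, 1.0])     # made-up material amounts
m = A @ x_true                    # noiseless simulated measurements
x_hat = decompose(m, A)
```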
Method: An auto-landmark generation algorithm was used in conjunction with a local small volume (LSV) gradient-matching search engine to map corresponding landmarks between the CBCT and the planning CT. The LSV offsets were used to perform an initial deformation, generate landmarks, and correct local intensity mismatch. The landmarks act as stabilizing control points in the Demons objective function. The accuracy of the LDIR algorithm was evaluated on one synthetic case with ground truth and on data from ten head and neck cancer patients. The deformation vector field (DVF) accuracy was assessed using the synthetic case. The root mean square error of the 3D Canny edge (RMSECE), mutual information (MI), and the feature similarity index metric (FSIM) were used to assess the accuracy of LDIR on the patient data. The quality of the corresponding deformed contours was verified by an attending physician. Results: The resulting 90th-percentile DVF error for the synthetic case was within 5.63 mm for the original Demons algorithm, 2.84 mm for intensity correction alone, 2.45 mm using control points without intensity correction, and 1.48 mm for the LDIR algorithm. For the five patients, the mean RMSECE of the original CT, Demons-deformed CT, intensity-corrected Demons CT, control-point-stabilized deformed CT, and LDIR CT was 0.24, 0.26, 0.20, 0.20, and 0.16, respectively. Conclusion: LDIR is accurate in the presence of multimodal intensity mismatch and CBCT noise contamination. Since LDIR is GPU-based, it can be implemented with minimal additional strain on clinical resources. This project has been supported by a CPRIT individual investigator award, RP11032.

12. Assessment of non-invasive chronic fungal rhinosinusitis by cone beam CT: comparison with multidetector CT findings.
PubMed
Yamauchi, Tomohiko; Tani, Akiko; Yokoyama, Shuji; Ogawa, Hiroshi
2017-08-09
To investigate the accuracy of cone beam CT (CBCT) in diagnosing non-invasive chronic fungal rhinosinusitis.
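The Demons update that the landmarks in (11) stabilize is, per pixel, Thirion's classic force. A 2D sketch without the smoothing, intensity-correction, or control-point terms of LDIR; names and conventions are choices made here.

```python
import numpy as np

def demons_force(fixed, moving):
    """Per-pixel Demons displacement force (Thirion's classic form):
    u = (m - f) * grad(f) / (|grad(f)|^2 + (m - f)^2)."""
    diff = moving - fixed
    gy, gx = np.gradient(fixed)            # gradients along axis 0 and 1
    denom = gx * gx + gy * gy + diff * diff
    denom = np.where(denom == 0.0, np.inf, denom)  # no force without signal
    return diff * gx / denom, diff * gy / denom
```

In a full registration loop this force would be accumulated into a displacement field and smoothed (e.g., Gaussian-regularized) at each iteration.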
Preoperative CT evaluation of non-invasive chronic fungal rhinosinusitis was performed by CBCT (3D Accuitomo 170®) and traditional multidetector CT (MDCT) (Aquilion 32®) in 13 and 38 patients with non-invasive chronic fungal maxillary sinusitis, respectively, in different facilities. Detection of intrasinus calcification was compared between these two groups. Detection of intrasinus calcification in patients with non-invasive chronic fungal maxillary sinusitis was higher in the MDCT group (84.2%) than in the CBCT group (46.2%). CBCT is inferior to MDCT in the detection of intrasinus calcification in patients with non-invasive chronic fungal maxillary sinusitis. CBCT is frequently used in screening of paranasal lesions, but it is not sufficient on its own to evaluate non-invasive chronic fungal maxillary sinusitis. Retrospective study.

13. Physical dose distribution due to multi-sliced kV X-ray beam in labeled tissue-like media: an experimental approach.
PubMed
Ghasemi, M; Kakuee, O R; Fathollahi, V; Shahvar, A; Mohati, M; Ghafoori, M
2011-02-01
Radiotherapy remains a major modality of cancer therapy. Thanks to the high flux and high brilliance of synchrotron-generated x-rays, laboratory research with planar, microscopically thin x-ray beams promises exciting new opportunities for the treatment of cancer. High tolerance of normal tissues at doses up to several hundred Gy in a single dose fraction and preferential damage of tumors at very high doses have been uniquely observed in animal models exposed to microbeams. The fact that beams as thick as 0.68 mm can retain a part of these effects opens the possibility that the required beam can be produced by high-power x-ray tubes as well as by a dedicated synchrotron. Fortunately, the dose distribution due to kilovolt x-rays can be enhanced by the introduction of high-Z contrast agents into tissue-like media.
In this work, dose deposition in a phantom partially loaded with Au and I as contrast agents and irradiated by a multi-sliced kV X-ray beam was experimentally investigated in the peak and valley regions, both on the surface and at depth in the phantom. The results of experimental dosimetry using Gafchromic films were compared with corresponding Monte Carlo simulations. A relative reduction in the deposited dose was experimentally observed in the peak regions downstream of the area containing contrast agents, in comparison with the adjacent areas. 14. Cone beam CT findings of retromolar canals: Report of cases and literature review PubMed Central Han, Sang-Sun 2013-01-01 A retromolar canal is an anatomical variation in the mandible. As it includes the neurovascular bundle, local anesthetic insufficiency can occur, and an injury of the retromolar canal during dental surgery in the mandible may result in excessive bleeding, paresthesia, and traumatic neuroma. Using imaging analysis software, we evaluated the cone-beam computed tomography (CT) images of two Korean patients who presented with retromolar canals. Retromolar canals were detectable on the sagittal and cross-sectional images of cone-beam CT, but not on the panoramic radiographs of the patients. Therefore, the clinician should pay particular attention to the identification of retromolar canals by preoperative radiographic examination, and additional cone beam CT scanning would be recommended. PMID:24380072 15. Dynamic cone beam CT angiography of carotid and cerebral arteries using canine model SciTech Connect Cai Weixing; Zhao Binghui; Conover, David; Liu Jiangkun; Ning Ruola 2012-01-15 Purpose: This research is designed to develop and evaluate a flat-panel detector-based dynamic cone beam CT system for dynamic angiography imaging, which is able to provide both dynamic functional information and dynamic anatomic information from one multirevolution cone beam CT scan.
Methods: A dynamic cone beam CT scan acquired projections over four revolutions within a time window of 40 s after contrast agent injection through a femoral vein to cover the entire wash-in and wash-out phases. A dynamic cone beam CT reconstruction algorithm was utilized, and a novel recovery method was developed to correct the time-enhancement curve of contrast flow. From the same data set, both projection-based subtraction and reconstruction-based subtraction approaches were utilized and compared to remove the background tissues and visualize the 3D vascular structure to provide the dynamic anatomic information. Results: Through computer simulations, the new recovery algorithm for dynamic time-enhancement curves was optimized and showed excellent accuracy in recovering the actual contrast flow. Canine model experiments also indicated that the recovered time-enhancement curves from dynamic cone beam CT imaging agreed well with those of an IV digital subtraction angiography (DSA) study. The dynamic vascular structures reconstructed using projection-based subtraction and reconstruction-based subtraction were almost identical, as the differences between them were comparable to the background noise level. At the enhancement peak, all the major carotid and cerebral arteries and the Circle of Willis could be clearly observed. Conclusions: The proposed dynamic cone beam CT approach can accurately recover the actual contrast flow, and dynamic anatomic imaging can be obtained with high isotropic 3D resolution. This approach is promising for diagnosis and treatment planning of vascular diseases and strokes. 16. CT based treatment planning system of proton beam therapy for ocular melanoma Nakano, Takashi; Kanai, Tatsuaki; Furukawa, Shigeo; Shibayama, Kouichi; Sato, Sinichiro; Hiraoka, Takeshi; Morita, Shinroku; Tsujii, Hirohiko 2003-09-01 A computed tomography (CT) based treatment planning system of proton beam therapy was established specifically for ocular melanoma treatment.
A technique of collimated proton beams with a maximum energy of 70 MeV is applied to the treatment of ocular melanoma. The vertical proton beam line has a range modulator for spreading out the beams, a multi-leaf collimator, an aperture, a light-beam localizer, a field light, and an X-ray verification system. The treatment planning program includes: an eye model, selection of the best direction of gaze, design of the aperture shape, determination of the proton range and range modulation necessary to encompass the target volume, indication of the relative positions of the eyes and beam center, and creation of the beam aperture. Tumor contours are extracted from CT/MRI images of 1 mm slice thickness, assisted by information from fundus photography and ultrasonography. The CT image-based treatment planning system for ocular melanoma is useful for Japanese patients, who tend to have a thick choroid membrane, in terms of sparing dose to the skin and normal organs in the eye. The characteristics of the system and its merits/demerits are reported. 17. Simulation of mammograms and tomosynthesis imaging with cone beam breast CT images Han, Tao; Shaw, Chris C.; Chen, Lingyun; Lai, Chao-jen; Liu, Xinming; Wang, Tianpeng 2008-03-01 The use of mammography techniques for the screening and diagnosis of breast cancers has been limited by the overlap of cancer lesions with normal tissue structures. To overcome this problem, two methods have been developed and actively investigated recently: digital tomosynthesis mammography and cone beam breast CT. A comparison study of these three techniques is helpful for understanding their differences and might further guide the direction of breast imaging. This paper describes a technique that uses a general-purpose PC cluster as a parallel computer simulation model to simulate mammograms and tomosynthesis imaging from cone beam CT images of a mastectomy breast specimen.
The breast model used in simulating mammography and tomosynthesis was developed by re-scaling the CT numbers of cone beam CT images from 80 kVp to 20 keV. The compression of the breast was simulated by deformation of the breast model. Re-projection software with parallel computation was developed and used to compute projection images of this simulated compressed breast for a stationary detector and a linearly shifted x-ray source. The resulting images were then used to reconstruct tomosynthesis mammograms using shift-and-add algorithms. It was found that microcalcifications (MCs) in cone beam CT images were not visible in regular mammograms but faintly visible in tomosynthesis images. The scatter signal and noise properties need to be simulated and incorporated in the future. 18. Point spread function modeling and image restoration for cone-beam CT Zhang, Hua; Huang, Kui-Dong; Shi, Yi-Kai; Xu, Zhe 2015-03-01 X-ray cone-beam computed tomography (CT) has such notable features as high efficiency and precision, and is widely used in the fields of medical imaging and industrial non-destructive testing, but inherent imaging degradation reduces the quality of CT images. To address the problems of projection image degradation and restoration in cone-beam CT, a point spread function (PSF) modeling method is first proposed. The general PSF model of cone-beam CT is established, and based on it, the PSF under arbitrary scanning conditions can be calculated directly for projection image restoration without additional measurements, which greatly improves the practicality of cone-beam CT. Secondly, a projection image restoration algorithm based on pre-filtering and pre-segmentation is proposed, which can make the edge contours in projection images and slice images clearer after restoration while keeping the noise at a level equivalent to that of the original images. Finally, experiments verified the feasibility and effectiveness of the proposed methods.
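The core idea of PSF-based projection restoration can be sketched with a standard frequency-domain deconvolution. This is a minimal illustration, not the authors' pre-filtering/pre-segmentation algorithm: a synthetic "projection" is blurred with a known Gaussian PSF and then restored with a Wiener filter built from the same PSF model.

```python
import numpy as np

# Minimal PSF restoration sketch: blur with a known Gaussian PSF, then
# invert with a Wiener filter (regularized inverse of the PSF's transfer
# function). Sigma and the noise-to-signal ratio are illustrative.
def gaussian_psf(shape, sigma):
    y, x = np.indices(shape)
    cy, cx = shape[0] // 2, shape[1] // 2
    psf = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def blur(image, psf):
    # circular convolution via FFT; the centered PSF is shifted to the origin
    H = np.fft.fft2(np.fft.ifftshift(psf))
    return np.real(np.fft.ifft2(H * np.fft.fft2(image)))

def wiener_restore(blurred, psf, nsr=1e-3):
    """Wiener deconvolution; nsr is an assumed noise-to-signal power ratio."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(W * np.fft.fft2(blurred)))

# sharp test object: a bright square on a dark background
img = np.zeros((64, 64))
img[20:40, 20:40] = 1.0
psf = gaussian_psf(img.shape, sigma=2.0)
blurred = blur(img, psf)
restored = wiener_restore(blurred, psf)
```

With a matched PSF and small `nsr`, the restored image is substantially closer to the original than the blurred one; in practice `nsr` trades sharpness against noise amplification.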
Supported by National Science and Technology Major Project of the Ministry of Industry and Information Technology of China (2012ZX04007021), Young Scientists Fund of National Natural Science Foundation of China (51105315), Natural Science Basic Research Program of Shaanxi Province of China (2013JM7003) and Northwestern Polytechnical University Foundation for Fundamental Research (JC20120226, 3102014KYJD022) 19. Dual-Source Multi-Energy CT with Triple or Quadruple X-ray Beams PubMed Central Yu, Lifeng; Leng, Shuai; McCollough, Cynthia H. 2016-01-01 Energy-resolved photon-counting CT (PCCT) is promising for material decomposition with multi-contrast agents. However, corrections for non-idealities of PCCT detectors are required, which are still active research areas. In addition, PCCT is associated with very high cost due to lack of mass production. In this work, we proposed an alternative approach to performing multi-energy CT, achieved by acquiring triple or quadruple x-ray beam measurements on a dual-source CT scanner. This strategy was based on a “Twin Beam” design on a single-source scanner for dual-energy CT. Examples of beam filters and spectra for triple and quadruple x-ray beams were provided. Computer simulation studies were performed to evaluate the accuracy of material decomposition for multi-contrast mixtures using a tri-beam configuration. The proposed strategy can be readily implemented on a dual-source scanner, which may allow material decomposition of multi-contrast agents to be performed on clinical CT scanners with energy-integrating detectors. PMID:27330237 20. Region-of-interest reconstruction for a cone-beam dental CT with a circular trajectory Hu, Zhanli; Zou, Jing; Gui, Jianbao; Zheng, Hairong; Xia, Dan 2013-04-01 Dental CT is the most appropriate and accurate device for preoperative evaluation of dental implantation.
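At its simplest, the multi-energy material-decomposition step described above reduces to a small linear solve per ray: with three beam measurements and three basis materials, the basis-material line integrals follow from inverting a 3x3 system. The coefficient matrix below is purely illustrative (invented numbers, not measured spectra or real attenuation values).

```python
import numpy as np

# Hedged decomposition sketch: rows are beams, columns are basis materials
# (e.g. water and two contrast agents). All coefficients are illustrative.
A = np.array([
    [0.20, 4.0, 3.0],   # beam 1: effective attenuation per unit of each basis
    [0.18, 2.5, 3.5],   # beam 2
    [0.16, 1.5, 2.0],   # beam 3
])

def decompose(b):
    """Recover basis-material line integrals from per-beam attenuation data."""
    return np.linalg.solve(A, b)

# forward-project a known composition, then invert it
truth = np.array([10.0, 0.05, 0.02])   # e.g. cm of water, g/cm^2 of each agent
recovered = decompose(A @ truth)
```

A fourth beam (the quadruple configuration) would make the system overdetermined, in which case a least-squares solve (`np.linalg.lstsq`) replaces the direct inverse and improves noise robustness.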
It can demonstrate the quantity of bone in three dimensions (3D), the location of important adjacent anatomic structures, and the quality of available bone with minimal geometric distortion. Nevertheless, with the rapid increase of dental CT examinations, we are facing the problem of dose reduction without loss of image quality. In this work, the backprojection-filtration (BPF) and Feldkamp-Davis-Kress (FDK) algorithms were applied to reconstruct the full 3D image and the region-of-interest (ROI) image from complete and truncated circular cone-beam data, respectively, in computer simulations. In addition, the BPF algorithm was evaluated based on 3D ROI-image reconstruction from real data, which were acquired from our developed circular cone-beam prototype dental CT system. The results demonstrated that the ROI-image quality reconstructed from truncated data using the BPF algorithm was comparable to that reconstructed from complete data. The FDK algorithm, however, created artifacts while reconstructing the ROI image. Thus, for circular cone-beam dental CT, reducing the scanning angular range with the BPF algorithm for ROI-image reconstruction helps reduce the radiation dose and scanning time. Finally, an analytical method was developed for estimating the ROI projection area on the detector before CT scanning, which would help doctors roughly estimate the total radiation dose before the CT examination. 1. Dosimetric feasibility of cone-beam CT-based treatment planning compared to CT-based treatment planning SciTech Connect Yoo, Sua. E-mail: [email protected]; Yin, F.-F. 2006-12-01 Purpose: Cone-beam computed tomography (CBCT) images are currently used for positioning verification. However, it is not yet known whether CBCT could be used in dose calculation for replanning in adaptive radiation therapy. This study investigates the dosimetric feasibility of CBCT-based treatment planning.
Methods and Materials: Hounsfield unit (HU) values and profiles of Catphan, homogeneous/inhomogeneous phantoms, and various tissue regions of patients in CBCT images were compared to those in CT. The dosimetric consequence of the HU variation was investigated by comparing CBCT-based treatment plans to conventional CT-based plans for both phantoms and patients. Results: The maximum HU difference between CBCT and CT of the Catphan was 34 HU, in Teflon. The differences in other materials were less than 10 HU. The profiles for the homogeneous phantoms in CBCT displayed HU values reduced by up to 150 HU in the peripheral regions compared to those in CT. The scatter and artifacts in CBCT became severe around inhomogeneous tissues, with HU values reduced by up to 200 HU. The MU/cGy differences were less than 1% for most phantom cases. The isodose distributions between CBCT-based and CT-based plans agreed very well. However, the discrepancy was larger when CBCT was scanned without a bowtie filter than with a bowtie filter. Also, up to 3% dosimetric error was observed in the plans for the inhomogeneous phantom. In the patient studies, the discrepancies of isodose lines between CT-based and CBCT-based plans, both 3D and IMRT, were less than 2 mm. Again, a larger discrepancy occurred for the lung cancer patients. Conclusion: This study demonstrated the feasibility of CBCT-based treatment planning. CBCT-based treatment plans were dosimetrically comparable to CT-based treatment plans. Dosimetric data in the inhomogeneous tissue regions should be carefully validated. 2. Cone beam CT assisted re-treatment of class 3 invasive cervical resorption PubMed Central Krishnan, Unni; Moule, Alex J; Alawadhi, Abdulwahab 2015-01-01 Invasive cervical root resorption is an uncommon external root resorption which initiates at the cervical aspect of the tooth. This case report describes a case of cervical root resorption which was initially misdiagnosed and managed as cervical root caries.
It was later diagnosed with cone beam CT, and the lesion was microsurgically removed and restored with resin-modified glass ionomer cement. The importance of increasing awareness of this uncommon pathology and the role of cone beam CT in mapping the extent of the lesion is emphasised. PMID:25795743 3. Single-scan scatter correction for cone-beam CT using a stationary beam blocker: a preliminary study Niu, Tianye; Zhu, Lei 2011-03-01 The performance of cone-beam CT (CBCT) is greatly limited by scatter artifacts. The existing measurement-based methods have promising advantages as a standard scatter correction solution, except that they currently require multiple scans or moving the beam blocker during data acquisition to compensate for the missing primary data. These approaches are therefore impractical in clinical applications. In this work, we propose a new measurement-based scatter correction method that achieves accurate reconstruction with a single scan and a stationary beam blocker, two seemingly incompatible features which enable simple and effective scatter correction without an increase in scan time or patient dose. Based on CT reconstruction theory, we distribute the blocked areas over one projection where primary signals are considered to be redundant in a full scan. The CT image quality is not degraded even with the primary loss. Scatter is accurately estimated by interpolation, and scatter-corrected CT images are obtained using an FDK-based reconstruction. In a Monte Carlo simulation study, we first optimize the beam blocker geometry using projections of the Shepp-Logan phantom and then carry out a complete simulation of a CBCT scan on a water phantom. With a scatter-to-primary ratio around 1.0, our method reduces the CT number error from 293 to 2.9 Hounsfield units (HU) around the phantom center. The proposed approach is further evaluated on a CBCT tabletop system.
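The measurement-based correction just described can be sketched in one dimension: detector pixels in the blocker shadows record scatter only, those samples are interpolated across the row to estimate scatter everywhere, and the estimate is subtracted to recover the primary signal. All values below are synthetic, and the real method distributes blockers across projections rather than one row.

```python
import numpy as np

# 1-D sketch of blocker-based scatter estimation and subtraction.
def correct_scatter(row, blocked_idx):
    """Interpolate scatter-only samples (blocker shadows) across the row,
    then subtract the estimate from the measured signal."""
    u = np.arange(row.size)
    scatter_est = np.interp(u, blocked_idx, row[blocked_idx])
    return row - scatter_est

u = np.arange(256)
scatter = 50 + 10 * np.sin(u / 40.0)                    # smooth scatter field
primary = np.where((u > 80) & (u < 180), 100.0, 20.0)   # structured primary signal
blocked = np.arange(8, 256, 32)                         # blocker strip positions (illustrative)
row = primary + scatter
row[blocked] = scatter[blocked]   # behind blockers, only scatter reaches the detector
corrected = correct_scatter(row, blocked)
```

Because scatter varies slowly across the detector, sparse shadow samples plus linear interpolation recover it accurately between the blockers; primary in the shadowed pixels themselves is lost, which is why the full method places blockers where primary data are redundant.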
On the Catphan 600 phantom, the reconstruction error is reduced from 202 to 10 HU in the selected region of interest after the proposed correction. 4. Comparison of cone-beam CT-guided and CT fluoroscopy-guided transthoracic needle biopsy of lung nodules. PubMed Rotolo, Nicola; Floridi, Chiara; Imperatori, Andrea; Fontana, Federico; Ierardi, Anna Maria; Mangini, Monica; Arlant, Veronica; De Marchi, Giuseppe; Novario, Raffaele; Dominioni, Lorenzo; Fugazzola, Carlo; Carrafiello, Gianpaolo 2016-02-01 To compare the diagnostic performance of the cone-beam CT (CBCT)-guided and CT fluoroscopy (fluoro-CT)-guided techniques for transthoracic needle biopsy (TNB) of lung nodules. The hospital records of 319 consecutive patients undergoing 324 TNBs of lung nodules in a single radiology unit in 2009-2013 were retrospectively evaluated. The newly introduced CBCT technology was used to biopsy 123 nodules; 201 nodules were biopsied by the conventional fluoro-CT-guided technique. We assessed the performance of the two biopsy systems for diagnosis of malignancy and the radiation exposure. Nodules biopsied by the CBCT-guided and fluoro-CT-guided techniques had similar characteristics: size, 20 ± 6.5 mm (mean ± standard deviation) vs. 20 ± 6.8 mm (p = 0.845); depth from pleura, 15 ± 15 mm vs. 15 ± 16 mm (p = 0.595); malignant, 60% vs. 66% (p = 0.378). After a learning period, the newly introduced CBCT-guided biopsy system and the conventional fluoro-CT-guided system showed similar sensitivity (95% and 92%), specificity (100% and 100%), and accuracy for diagnosis of malignancy (96% and 94%), and delivered non-significantly different median effective doses [11.1 mSv (95% CI 8.9-16.0) vs. 14.5 mSv (95% CI 9.5-18.1); p = 0.330]. The CBCT-guided and fluoro-CT-guided systems for lung nodule biopsy are similar in terms of diagnostic performance and effective dose, and may be used interchangeably to optimize the available technological resources.
• CBCT-guided and fluoro-CT-guided lung nodule biopsy provided high and similar diagnostic accuracy. • Effective dose from CBCT-guided and fluoro-CT-guided lung nodule biopsy was similar. • To optimize resources, CBCT-guided lung nodule biopsy may be an alternative to fluoro-CT-guided biopsy. 5. A curve-filtered FDK (C-FDK) reconstruction algorithm for circular cone-beam CT. PubMed Li, Liang; Xing, Yuxiang; Chen, Zhiqiang; Zhang, Li; Kang, Kejun 2011-01-01 Circular cone-beam CT is one of the most popular configurations in both medical and industrial applications. The FDK algorithm is the most widely used reconstruction method for circular cone-beam CT. However, with increasing cone angle, the cone-beam artifacts associated with the FDK algorithm worsen because the circular trajectory does not satisfy the data sufficiency condition. Along with an experimental evaluation and verification, this paper proposes a curve-filtered FDK (C-FDK) algorithm. First, cone-parallel projections are rebinned from the native cone-beam geometry in two separate directions. C-FDK rebins and filters projections along curves different from those of T-FDK in the central virtual detector plane. Then, numerical experiments are done to validate the effectiveness of the proposed algorithm by comparison with both FDK and T-FDK reconstructions. Without any extra trajectories supplementing the circular orbit, C-FDK achieves a visible image-quality improvement. 6. Lifetime Measurements of High Polarization Strained-Superlattice Gallium Arsenide at Beam Current > 1 Milliamp using a New 100kV Load Lock Photogun SciTech Connect J. M. Grames; P. A. Adderley; J. Brittian; J. Clark; J. Hansknecht; D. Machie; M. Poelker; M. L. Stutzman; R. Suleiman; K. E. L. Surles-Law 2007-08-01 A new 100 kV GaAs DC Load Lock Photogun has been constructed at Jefferson Laboratory, with improvements for photocathode preparation and for operation in a high voltage, ultra-high vacuum environment.
Although difficult to gauge directly, we believe that the new gun design has better vacuum conditions than the previous gun design, as evidenced by longer photocathode lifetime, that is, the amount of charge extracted before the quantum efficiency of the photocathode drops to 1/e of its initial value via the ion back-bombardment mechanism. Photocathode lifetime measurements at DC beam intensities of up to 10 mA have been performed to benchmark operation of the new gun and for fundamental studies of the use of GaAs photocathodes at high average current. These measurements demonstrate a photocathode lifetime longer than one million coulombs per square centimeter at a beam intensity higher than 1 mA. The photogun has been reconfigured with a high polarization strained-superlattice photocathode (GaAs/GaAsP) and a mode-locked Ti:Sapphire laser operating near the band gap. Photocathode lifetimes at beam intensities greater than 1 mA are measured and presented for comparison. 7. Region-of-interest image reconstruction in circular cone-beam microCT SciTech Connect Cho, Seungryong; Bian, Junguo; Pelizzari, Charles A.; Chen, C.-T.; He, T.-C.; Pan Xiaochuan 2007-12-15 Cone-beam microcomputed tomography (microCT) is one of the most popular choices for small animal imaging, which is becoming an important tool for studying animal models with transplanted diseases. Region-of-interest (ROI) imaging techniques in CT, which can reconstruct an ROI image from the projection data set of the ROI, can be used not only to reduce imaging-radiation exposure to the subject and scatter to the detector but also to potentially increase the spatial resolution of the reconstructed images. Increasing spatial resolution in microCT images can facilitate improved accuracy in many assessment tasks. A method proposed previously for increasing CT image spatial resolution entails the exploitation of the geometric magnification in cone-beam CT.
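The magnification trade-off invoked above (and the truncation issue it creates) follows from simple geometry: magnification is the source-to-detector over source-to-object distance, the detector pixel referred back to the object plane shrinks by that factor, and so does the usable field of view. The distances below are illustrative, not the authors' system values.

```python
# Cone-beam geometric magnification sketch (illustrative distances).
def magnification(sdd_mm, sod_mm):
    """Source-to-detector distance divided by source-to-object distance."""
    return sdd_mm / sod_mm

def effective_pixel_mm(detector_pixel_mm, sdd_mm, sod_mm):
    """Detector pixel size projected back to the object plane."""
    return detector_pixel_mm / magnification(sdd_mm, sod_mm)

def object_fov_mm(detector_width_mm, sdd_mm, sod_mm):
    """Transaxial field of view at the object; a large magnification means
    a small FOV, hence possible projection truncation."""
    return detector_width_mm / magnification(sdd_mm, sod_mm)

M = magnification(600.0, 150.0)             # 4x magnification
px = effective_pixel_mm(0.2, 600.0, 150.0)  # 0.2 mm detector pixel at the object plane
fov = object_fov_mm(120.0, 600.0, 150.0)    # 120 mm detector width at the object plane
```

Doubling the magnification halves both the effective pixel size and the object-plane field of view, which is exactly why high-resolution ROI scans produce truncated data.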
Due to finite detector size, however, this method can lead to data truncation for a large geometric magnification. The Feldkamp-Davis-Kress (FDK) algorithm yields images with artifacts when truncated data are used, whereas the recently developed backprojection filtration (BPF) algorithm is capable of reconstructing ROI images without truncation artifacts from truncated cone-beam data. We apply the BPF algorithm to reconstructing ROI images from truncated data of three different objects acquired by our circular cone-beam microCT system. Reconstructed images obtained by use of the FDK and BPF algorithms from both truncated and nontruncated cone-beam data are compared. The results of the experimental studies demonstrate that, from certain truncated data, the BPF algorithm can reconstruct ROI images with quality comparable to that reconstructed from nontruncated data. In contrast, the FDK algorithm yields ROI images with truncation artifacts. Therefore, an implication of the studies is that, when truncated data are acquired with a configuration of large geometric magnification, the BPF algorithm can be used for effective enhancement of the spatial resolution of an ROI image. 8. Comparison of full-scan and half-scan for cone beam breast CT imaging Chen, Lingyun; Shaw, Chris C.; Lai, Chao-jen; Altunbas, Mustafa C.; Wang, Tianpeng; Tu, Shu-ju; Liu, Xinming 2006-03-01 The half-scan cone beam technique, requiring a scan over only 180° plus the fan angle covered by the detector, can achieve both a shorter scan time and higher exposure in each individual projection image. The purpose of this paper is to investigate whether the half-scan cone beam CT technique can provide acceptable images for clinical application. The half-scan cone beam reconstruction algorithm uses a modified Parker weighting function and reconstructs from slightly more than half of the projection views of a full scan, with promising results.
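A Parker-style short-scan weight of the kind mentioned above can be sketched as follows. This is one common textbook form, not necessarily the modified version used in the paper: weights of conjugate rays sum to one, so the redundant rays in a 180-degrees-plus-fan scan are not counted twice.

```python
import numpy as np

# Parker-style short-scan weighting sketch (textbook form, illustrative).
def parker_weight(beta, gamma, delta):
    """beta: projection angle in [0, pi + 2*delta];
    gamma: fan angle in [-delta, delta]; delta: half fan angle.
    Smoothly feathers the redundant views at the start and end of the scan."""
    if 0.0 <= beta < 2.0 * (delta - gamma):
        return np.sin(np.pi / 4.0 * beta / (delta - gamma)) ** 2
    if beta <= np.pi - 2.0 * gamma:
        return 1.0
    if beta <= np.pi + 2.0 * delta:
        return np.sin(np.pi / 4.0 * (np.pi + 2.0 * delta - beta) / (delta + gamma)) ** 2
    return 0.0

# a ray (beta, gamma) and its conjugate (beta + pi + 2*gamma, -gamma)
# together receive unit weight
delta, gamma, beta = 0.2, 0.1, 0.05
w_sum = (parker_weight(beta, gamma, delta)
         + parker_weight(beta + np.pi + 2.0 * gamma, -gamma, delta))
```

In a reconstruction loop, each filtered projection sample is simply multiplied by `parker_weight` before backprojection; views in the fully non-redundant middle of the scan keep weight 1.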
A benchtop system with a rotating phantom and stationary gantry was built to conduct experiments evaluating the half-scan cone beam breast CT technique. A post-mastectomy breast specimen, a stack of lunch meat slices embedded with various sizes of calcifications, and a polycarbonate phantom with glandular and adipose tissue-equivalent inserts were imaged and reconstructed for a comparison study. A subset of full-scan projection images of a mastectomy specimen was extracted and used as the half-scan projection data for reconstruction. The results show that the half-scan reconstruction algorithm for cone beam breast CT does not significantly degrade image quality when compared with images at the same or even half the radiation dose level. Our results are encouraging, emphasizing the potential advantages of the half-scan technique for cone beam breast imaging. 9. SU-E-I-15: Quantitative Evaluation of Dose Distributions From Axial, Helical and Cone-Beam CT Imaging by Measurement Using a Two-Dimensional Diode-Array Detector SciTech Connect 2015-06-15 Purpose: To quantitatively evaluate dose distributions from helical, axial and cone-beam CT clinical imaging techniques by measurement using a two-dimensional (2D) diode-array detector. Methods: 2D dose distributions from selected clinical protocols used for axial, helical and cone-beam CT imaging were measured using a diode-array detector (MapCheck2). The MapCheck2 is composed of solid-state diode detectors arranged in horizontal and vertical lines with a spacing of 10 mm. A GE LightSpeed CT simulator was used to acquire axial and helical CT images, and a kV on-board imager integrated with a Varian TrueBeam STx machine was used to acquire cone-beam CT (CBCT) images. Results: The dose distributions from axial, helical and cone-beam CT were non-uniform over the region of interest, with strong spatial and angular dependence.
In axial CT, a large dose gradient was measured, decreasing from the lateral sides to the middle of the phantom, due to the large superficial dose at the sides of the phantom combined with larger beam attenuation at the center. The dose decreased at the superior and inferior regions in comparison to the center of the phantom in axial CT. An asymmetry was found between the right-left and superior-inferior sides of the phantom, which is possibly due to angular dependence in the dose distributions. The dose level and distribution varied from one imaging technique to another. For the pelvis technique, axial CT deposited a mean dose of 3.67 cGy, helical CT deposited a mean dose of 1.59 cGy, and CBCT deposited a mean dose of 1.62 cGy. Conclusions: MapCheck2 provides a robust tool for directly measuring 2D dose distributions in CT imaging with high-spatial-resolution detectors, in comparison with an ionization chamber, which provides a single-point measurement or an average dose to the phantom. The dose distributions measured with MapCheck2 account for medium heterogeneity and can represent patient-specific dose. 10. SU-E-J-99: Reconstruction of Cone Beam CT Image Using Volumetric Modulated Arc Therapy Exit Beams SciTech Connect Jeong, K; Goddard, L; Savacool, M; Mynampati, D; Godoy Scripes, P; Tome', W; Kuo, H; Basavatia, A; Hong, L; Yaparpalvi, R; Kalnicki, S 2014-06-01 Purpose: To test the possibility of obtaining an image of the treated volume during volumetric modulated arc therapy (VMAT) with exit beams. Method: Using a Varian Clinac 21EX and an MVCT detector, the following three sets of detector projection data were obtained for cone beam CT reconstruction with and without a Catphan 504 phantom: 1) 72 projection images from a 20 × 16 cm² open beam with 3 MUs, 2) 72 projection images from a 20 × 16 cm² MLC-closed beam with 14 MUs, and 3) 137 projection images from a test RapidArc QA plan. All projection images were obtained in ‘integrated image’ mode.
We used the OSCaR code to reconstruct the cone beam CT images. No attempts were made to reduce scatter or artifacts. Results: With projection set 1) we obtained a good-quality MV CBCT image by optimizing the reconstruction parameters. Using projection set 2) we were not able to obtain a CBCT image of the phantom, which was determined to be due to the variation of interleaf leakage with gantry angle. From projection set 3), we were able to obtain a weak but meaningful signal in the image, especially in the target area where open beam signals were dominant. This finding suggests that one might be able to acquire CBCT images showing the rough body shape and some details inside the irradiated target area. Conclusion: Obtaining patient images using the VMAT exit beam is challenging but possible. We were able to determine sources of image degradation, such as gantry-angle-dependent interleaf leakage and beams with a large scatter component. We are actively working on improving image quality. 11. The effect of beam purity and scanner complexity on proton CT accuracy. PubMed Piersimoni, P; Ramos-Méndez, J; Geoghegan, T; Bashkirov, V A; Schulte, R W; Faddegon, B A 2017-01-01 To determine the dependence of the accuracy of relative stopping power (RSP) reconstruction with proton computerized tomography (pCT) scans on the purity of the proton beam and the technological complexity of the pCT scanner, using standard phantoms and a digital representation of a pediatric patient. The Monte Carlo method was applied to simulate the pCT scanner, using both a pure proton beam (uniform 200 MeV monoenergetic, parallel beam) and the Northwestern Medicine Chicago Proton Center (NMCPC) clinical beam in uniform scanning mode. The accuracy of the simulation was validated with measurements performed at NMCPC, including reconstructed RSP images obtained with a preclinical prototype pCT scanner.
The pCT scanner energy detector was then simulated in three configurations of increasing complexity: an ideal totally absorbing detector, a single-stage detector and a multi-stage detector. A set of 15 cm diameter water cylinders containing either water alone or inserts of different material, size, and position was simulated at 90 projection angles (4° steps) for the pure and clinical proton beams and the three pCT configurations. A pCT image of the head of a detailed digital pediatric phantom was also reconstructed from the simulated pCT scan with the prototype detector. The RSP error increased for all configurations for insert sizes under 7.5 mm in radius, with a sharp increase below 5 mm in radius, attributed to a limit in spatial resolution. The highest accuracy achievable using the current pCT calibration step phantom and reconstruction algorithm, calculated for the ideal case of a pure beam with a totally absorbing energy detector, was 1.3% error in RSP for inserts of 5 mm radius or more, and 0.7 mm in range for the 2.5 mm radius inserts, or better. When the highest complexity of the scanner geometry was introduced, some artifacts arose in the reconstructed images, particularly in the center of the phantom. Replacing the step phantom used for calibration with a 12. Percutaneous Bone Biopsies: Comparison between Flat-Panel Cone-Beam CT and CT-Scan Guidance SciTech Connect Tselikas, Lambros Joskin, Julien; Roquet, Florian; Farouil, Geoffroy; Dreuil, Serge; Hakimé, Antoine Teriitehau, Christophe; Auperin, Anne; Baere, Thierry de Deschamps, Frederic 2015-02-15 Purpose: This study was designed to compare the accuracy of targeting and the radiation dose of bone biopsies performed either under fluoroscopic guidance using a cone-beam CT with real-time 3D image fusion software (FP-CBCT guidance) or under conventional computed tomography guidance (CT guidance). Methods: Sixty-eight consecutive patients with a bone lesion were prospectively included.
The bone biopsies were scheduled under FP-CBCT guidance or under CT guidance according to operating room availability. Thirty-four patients underwent a bone biopsy under FP-CBCT guidance and 34 under CT guidance. We prospectively compared the two guidance modalities for their technical success, accuracy, puncture time, and pathological success rate. Patient and physician radiation doses were also compared. Results: All biopsies were technically successful with both guidance modalities. Accuracy was significantly better using FP-CBCT guidance (3 and 5 mm, respectively; p = 0.003). There was no significant difference in puncture time (32 and 31 min, respectively; p = 0.51) or in pathological results (88 and 88% pathological success, respectively; p = 1). Patient radiation doses were significantly lower with FP-CBCT (45 vs. 136 mSv, p < 0.0001). The percentage of operators who received a dose higher than 0.001 mSv (dosimeter detection dose threshold) was lower with FP-CBCT than with CT guidance (27 vs. 59%, p = 0.01). Conclusions: FP-CBCT guidance for bone biopsy is accurate and reduces patient and operator radiation doses compared with CT guidance. 13. Cone-Beam CT Localization of Internal Target Volumes for Stereotactic Body Radiotherapy of Lung Lesions SciTech Connect Wang Zhiheng Wu, Q. Jackie; Marks, Lawrence B.; Larrier, Nicole; Yin Fangfang 2007-12-01 Purpose: In this study, we investigate a technique of matching internal target volumes (ITVs) in four-dimensional (4D) simulation computed tomography (CT) to the composite target volume in free-breathing on-board cone-beam (CB) CT. The technique is illustrated using both phantom and patient cases. Methods and Materials: A dynamic phantom with a target ball simulating respiratory motion with various amplitudes and cycle times was used to verify localization accuracy. The dynamic phantom was scanned using simulation CT with a phase-based retrospective sorting technique.
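Phase-based retrospective sorting of this kind amounts to binning each projection by its respiratory phase. The sketch below assumes a perfectly periodic breathing trace for simplicity; real 4D-CT sorting derives the phase from a measured respiratory signal, and the timing values are illustrative.

```python
import numpy as np

# Phase-based retrospective sorting sketch: assign each projection to one
# of 10 respiratory phase bins, assuming strictly periodic breathing.
def phase_bin(t, period, n_bins=10):
    """Map an acquisition time to a phase bin index in [0, n_bins)."""
    phase = (t % period) / period          # 0 <= phase < 1
    return int(phase * n_bins)

def sort_projections(times, period, n_bins=10):
    """Group projection indices by respiratory phase bin."""
    bins = [[] for _ in range(n_bins)]
    for i, t in enumerate(times):
        bins[phase_bin(t, period, n_bins)].append(i)
    return bins

times = np.arange(0, 60, 0.25)             # one projection every 0.25 s for 60 s
bins = sort_projections(times, period=4.0) # assumed 4 s breathing period
```

Each of the 10 bins is then reconstructed separately, and the union of the target positions across bins yields the ITV described above.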
The ITV was then determined based on 10 sets of sorted images. The size and epicenter of the ITV identified from 4D simulation CT images and the composite target volume identified from on-board CBCT images were compared to assess localization accuracy. Similarly, for two clinical cases of patients with lung cancer, ITVs defined from 4D simulation CT images and CBCT images were compared.
Results: For the phantom, localization accuracy between the ITV in 4D simulation CT and the composite target volume in CBCT was within 1 mm, and the ITV was within 8.7%. For the patient cases, ITVs on simulation CT and CBCT were within 8.0%.
Conclusion: This study shows that CBCT is a useful tool to localize the ITV for targets affected by respiratory motion. Verification of the ITV from 4D simulation CT using on-board free-breathing CBCT is feasible for the target localization of lung tumors.

14. Assessment of Effective Dose from Cone Beam CT Imaging in SPECT/CT Examination in Comparison with Other Modalities
PubMed
Tonkopi, Elena; Ross, Andrew A
2016-12-01

15. The application of cone-beam CT in the aging of bone calluses: a new perspective?
PubMed
Cappella, A; Amadasi, A; Gaudio, D; Gibelli, D; Borgonovo, S; Di Giancamillo, M; Cattaneo, C
2013-11-01

In the forensic and anthropological fields, the assessment of the age of a bone callus can be crucial for a correct analysis of injuries in the skeleton. To our knowledge, the studies that have focused on this topic are mainly clinical and still leave much to be desired for forensic purposes, particularly in the search for better methods for aging calluses in view of criminalistic applications. This study aims at evaluating the aid cone-beam CT can give in the investigation of the inner structure of fractures and calluses, thus acquiring a better knowledge of the process of bone remodeling.
A total of 13 fractures (three without callus formation and ten with visible callus) of known age from cadavers were subjected to radiological investigations with digital radiography (DR) and cone-beam CT, with the major aim of investigating the differences between DR and tomographic images when studying the inner and outer structures of bone healing. Results showed that with cone-beam CT the structure of the callus is clearly visible with higher specificity and definition, and with much more information on mineralization in different sections and planes. These results could lay the foundation for new perspectives on bone callus evaluation and aging with cone-beam CT, a user-friendly and versatile technique which in some instances can also be used on the living (e.g., in cases of child abuse) with reduced exposure to radiation.

16. Fan-beam monochromatic x-ray CT using fluorescent x rays excited by synchrotron radiation
Toyofuku, Fukai; Tokumori, Kenji; Kanda, Shigenobu; Higashida, Yoshiharu; Ohki, Masafumi; Cho, Tetsuji; Nishimura, Katsuyuki; Hyodo, Kazuyuki; Ando, Masami; Uyama, Chikao
1999-10-01

Monochromatic x-ray CT has several advantages over conventional CT, which utilizes bremsstrahlung white x-rays from an x-ray tube. Although various types of monochromatic x-ray CT systems using synchrotron radiation have been developed using a parallel x-ray beam for imaging of small samples at high spatial resolution, imaging of large objects such as the human body has not been realized yet. We have developed a fan-beam monochromatic x-ray CT using fluorescent x-rays generated by irradiating metal targets with synchrotron radiation. A CdTe linear array detector with a 512 mm sensitive width was used in photon-counting mode. We performed phantom experiments using fluorescent x-rays ranging from 32 to 75 keV. Monochromatic x-ray CT images of a cylindrical lucite phantom filled with several contrast media have been obtained.
Measured CT numbers were compared with linear attenuation coefficients and showed good linearity over a wide range of contrast media concentrations.

17. Cone beam CT in orthodontics: the current picture
PubMed
Makdissi, Jimmy
2013-03-01

The introduction of cone beam computed tomography (CBCT) technology to dentistry and orthodontics revolutionized the diagnosis, treatment, and monitoring of orthodontic patients. This review article discusses the use of CBCT in diagnosis and treatment planning in orthodontics. The steps required to install and operate a CBCT facility within the orthodontic practice, as well as the challenges, are highlighted. The available guidelines on the clinical applications of CBCT in orthodontics are explored.

18. Tetrahedron beam computed tomography (TBCT): a new design of volumetric CT system
PubMed
Zhang, Tiezhi; Schulze, Derek; Xu, Xiaochao; Kim, Joshua
2009-06-07

Volumetric CT imaging systems usually comprise a point x-ray source and a 2D detector. Flat panel imager (FPI)-based cone beam CT (CBCT) has become an important online imaging modality for image-guided radiotherapy and intervention. However, due to excessive scattered photons and inferior detector performance, the image quality of current CBCT is significantly inferior to that of diagnostic fan-beam CT. We propose a novel tetrahedron beam computed tomography (TBCT) imaging system consisting of a linear-scan x-ray source and a linear x-ray detector array. The linear x-ray tube and detector array are aligned perpendicular and parallel to the rotation plane, respectively. The x-ray beams are narrowly collimated into fan beams and focused on the linear detector array. The linear detector and linear x-ray source form a 'tetrahedron' volume instead of a 'cone' volume.
TBCT is similar to CBCT in image reconstruction geometry; however, its image quality will be significantly superior to that of CBCT due to its scatter rejection mechanism and the use of high-performance discrete x-ray detectors. In this paper, we describe the design of the TBCT system for image-guided radiotherapy and some results of preliminary studies.

19. Cone beam CT for diagnosis and treatment planning in trauma cases
PubMed
Palomo, Leena; Palomo, J Martin
2009-10-01

Three-dimensional imaging offers many advantages in making diagnoses and planning treatment. This article focuses on cone beam CT (CBCT) for making diagnoses and planning treatment in trauma-related cases. CBCT equipment is smaller and less expensive than traditional medical CT equipment and is tailored to address challenges specific to the dentoalveolar environment. Like medical CT, CBCT offers a three-dimensional view that conventional two-dimensional dental radiography fails to provide. CBCT combines the strengths of medical CT with those of conventional dental radiography to accommodate unique diagnostic and treatment-planning applications that have particular utility in dentoalveolar trauma cases. CBCT is useful, for example, in identifying tooth fractures relative to surrounding alveolar bone, in determining alveolar fracture location and morphology, in analyzing ridge-defect height and width, and in imaging temporomandibular joints. Treatment-planning applications include those involving extraction of fractured teeth, placement of implants, exposure of impacted teeth, and analyses of airways.

20. SU-E-J-32: Dosimetric Evaluation Based on Pre-Treatment Cone Beam CT for Spine Stereotactic Body Radiotherapy: Does Region of Interest Focus Matter?
SciTech Connect
Magnelli, A; Xia, P
2015-06-15

Purpose: Spine stereotactic body radiotherapy requires very conformal dose distributions and precise delivery.
Prior to treatment, a kV cone-beam CT (kV-CBCT) is registered to the planning CT to provide image-guided positional corrections, which depend on the selection of the region of interest (ROI) because of imperfect patient positioning and anatomical deformation. Our objective is to determine the dosimetric impact of ROI selection.
Methods: Twelve patients were selected for this study, with treatment regions varying from C-spine to T-spine. For each patient, the kV-CBCT was registered to the planning CT three times using distinct ROIs: one encompassing the entire patient, a large ROI containing large bony anatomy, and a small target-focused ROI. Each registered CBCT volume, saved as an aligned dataset, was then sent to the planning system. The treated plan was applied to each dataset and the dose was recalculated. The tumor dose coverage (percentage of target volume receiving the prescription dose), maximum point dose to 0.03 cc of the spinal cord, and dose to 10% of the spinal cord volume (V10) for each alignment were compared to the original plan.
Results: The average magnitude of tumor coverage deviation was 3.9%±5.8% with the external contour, 1.5%±1.1% with the large ROI, and 1.3%±1.1% with the small ROI. Spinal cord V10 deviation from plan was 6.6%±6.6% with the external contour, 3.5%±3.1% with the large ROI, and 1.2%±1.0% with the small ROI. Spinal cord maximum point dose deviation from plan was 12.2%±13.3% with the external contour, 8.5%±8.4% with the large ROI, and 3.7%±2.8% with the small ROI.
Conclusion: A small ROI focused on the target results in the smallest deviation from the planned dose to target and cord, although rotations at large distances from the target were observed. It is recommended that image fusion during CBCT focus narrowly on the target volume to minimize dosimetric error. Improvement in patient setups may further reduce residual errors.

1. Cone-Beam Computed Tomography (CBCT) Versus CT in Lung Ablation Procedure: Which is Faster?
SciTech Connect
Cazzato, Roberto Luigi; Battistuzzi, Jean-Benoit; Catena, Vittorio; Grasso, Rosario Francesco; Zobel, Bruno Beomonte; Schena, Emiliano; Buy, Xavier; Palussiere, Jean
2015-10-15

Aim: To compare cone-beam CT (CBCT) versus computed tomography (CT) guidance in terms of the time needed to target and place the radiofrequency ablation (RFA) electrode in lung tumours.
Materials and Methods: Patients at our institution who received CBCT- or CT-guided RFA for primary or metastatic lung tumours were retrospectively included. The time required to target and place the RFA electrode within the lesion was recorded and compared across the two groups. Lesions were stratified into three groups according to their size (<10, 10–20, >20 mm). Occurrences of electrode repositioning, repositioning time, RFA complications, and local recurrence after RFA were also reported.
Results: Forty tumours (22 under CT, 18 under CBCT guidance) were treated in 27 patients (19 male, 8 female, median age 67.25 ± 9.13 years). Thirty RFA sessions (16 under CBCT and 14 under CT guidance) were performed. Multivariable linear regression analysis showed that CBCT was faster than CT in targeting and placing the electrode within the tumour, independently of its size (β = −9.45, t = −3.09, p = 0.004). Electrode repositioning was required for 10/22 (45.4 %) tumours under CT guidance and 5/18 (27.8 %) tumours under CBCT guidance. Pneumothoraces occurred in 6/14 (42.8 %) sessions under CT guidance and in 6/16 (37.5 %) sessions under CBCT guidance. Two recurrences were noted for tumours receiving CBCT-guided RFA (2/17, 11.7 %) and three after CT-guided RFA (3/19, 15.8 %).
Conclusion: CBCT with live 3D needle guidance is a useful technique for percutaneous lung ablation. Regardless of lesion size, CBCT allows faster lung RFA than CT.

2. Influence of anatomical location on CT numbers in cone beam computed tomography
PubMed
Oliveira, Matheus L; Tosoni, Guilherme M; Lindsey, David H; Mendoza, Kristopher; Tetradis, Sotirios; Mallya, Sanjay M
2013-04-01

To assess the influence of anatomical location on computed tomography (CT) numbers in mid- and full field of view (FOV) cone beam computed tomography (CBCT) scans. Polypropylene tubes with varying concentrations of dipotassium hydrogen phosphate (K₂HPO₄) solutions (50-1200 mg/mL) were imaged within the incisor, premolar, and molar dental sockets of a human skull phantom. CBCT scans were acquired using the NewTom 3G and NewTom 5G units. The CT numbers of the K₂HPO₄ phantoms were measured, and the relationship between CT numbers and K₂HPO₄ concentration was examined. The measured CT numbers of the K₂HPO₄ phantoms were compared between anatomical sites. At all six anatomical locations, there was a strong linear relationship between CT numbers and K₂HPO₄ concentration (R² > 0.93). However, the absolute CT numbers varied considerably with anatomical location. The relationship between CT numbers and object density is not uniform through the dental arch on CBCT scans. Copyright © 2013 Elsevier Inc. All rights reserved.

3. Determination of size-specific exposure settings in dental cone-beam CT
PubMed
Pauwels, Ruben; Jacobs, Reinhilde; Bogaerts, Ria; Bosmans, Hilde; Panmekiate, Soontra
2017-01-01

To estimate the possible reduction of tube output as a function of head size in dental cone-beam computed tomography (CBCT). A 16 cm PMMA phantom, containing a central and six peripheral columns filled with PMMA, was used to represent an average adult male head. The phantom was scanned using CBCT, with 0-6 peripheral columns having been removed in order to simulate varying head sizes. For five kV settings (70-90 kV), the mAs required to reach a predetermined image noise level was determined, and the corresponding radiation doses were derived. Results were expressed as a function of head size, age, and gender, based on growth reference charts.
The use of 90 kV consistently resulted in the largest relative dose reduction. A potential mAs reduction ranging from 7 % to 50 % was seen for the different simulated head sizes, showing an exponential relation between head size and mAs. An optimized exposure protocol based on head circumference or age/gender is proposed. A considerable dose reduction, through reduction of the mAs rather than the kV, is possible for small-sized patients in CBCT, including children and females. Size-specific exposure protocols should be clinically implemented.
• Fixed exposure settings in CBCT result in overexposure for smaller patients
• For children, considerable dose reduction is possible without compromising image quality
• A reduction in mAs is more dose-efficient than a kV reduction
• An optimized exposure protocol was proposed based on phantom measurements
• This protocol should be validated in a clinical setting

4. An index of beam hardening artifact for two-dimensional cone-beam CT tomographic images: establishment and preliminary evaluation
Yuan, Fusong; Lv, Peijun; Yang, Huifang; Wang, Yong; Sun, Yuchun
2015-07-01

Objectives: Based on pixel gray-value measurements, to establish a beam-hardening artifact index for cone-beam CT tomographic images and preliminarily evaluate its applicability.
Methods: A 5 mm diameter metal ball and a resin ball were each fixed on a light-cured resin base plate, with four extracted molars fixed above, below, left, and right of the ball at a distance of 10 mm from it. Cone beam CT was then used to scan the fixed base plate twice. Tomographic images of the same layer were selected from the two datasets and imported into Photoshop. A circular boundary was constructed by determining the center and radius of the circle from the artifact-free image sections.
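The exponential relation between head size and required mAs reported in the dose-optimization study above can be fitted with a simple log-linear least-squares regression; the numbers below are made up for illustration and are not the study's data:

```python
import numpy as np

def fit_exponential(size, mas):
    """Fit mas = a * exp(b * size) by taking logs and solving a
    linear least-squares problem (log-linear regression)."""
    b, log_a = np.polyfit(np.asarray(size, dtype=float), np.log(mas), 1)
    return np.exp(log_a), b

# hypothetical head sizes (cm) and required mAs, purely for illustration
a, b = fit_exponential([40.0, 45.0, 50.0, 55.0], [20.0, 28.0, 40.0, 56.0])
predicted_mas = a * np.exp(b * 47.5)  # interpolate at an intermediate size
```

A fit of this kind is what would let a size-specific protocol map a measured head circumference to an mAs setting.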
Grayscale measurement tools were used to measure the gray value G0 at the internal boundary, the gray values G1 and G2 of artifacts 1 mm and 20 mm outside the circular boundary, the length L1 of the arc with artifacts on the circular boundary, and the circumference L2. The hardening artifact index was defined as A = (G1/G0) * 0.5 + (G2/G1) * 0.4 + (L2/L1) * 0.1. The A values of the metal and resin materials were then calculated.
Results: The A value of the cobalt-chromium alloy material was 1, and that of the resin material was 0.
Conclusion: The A value comprehensively reflects the three factors by which hardening artifacts influence the sharpness of normal oral tissue images in cone beam CT: the relative gray value, and the decay rate and range of the artifacts.

5. Dynamic bowtie filter for cone-beam/multi-slice CT
PubMed
Liu, Fenglin; Yang, Qingsong; Cong, Wenxiang; Wang, Ge
2014-01-01

A pre-patient attenuator ("bowtie filter" or "bowtie") is used to modulate an incoming x-ray beam as a function of the angle of the x-ray with respect to the patient, to balance the photon flux on the detector array. While current dynamic bowtie designs focus on fan-beam geometry, in this study we propose a methodology for dynamic bowtie design in multi-slice/cone-beam geometry. The proposed 3D dynamic bowtie is an extension of the 2D prior art. The 3D bowtie consists of a highly attenuating bowtie (HB) filled with heavy liquid and a weakly attenuating bowtie (WB) immersed in the liquid of the HB. The HB targets a balanced flux distribution on the detector array when no object is in the field of view (FOV). The WB compensates for an object in the FOV, and hence is a scaled-down version of the object. The WB is rotated and translated in synchrony with the source rotation and patient translation so that overall flux balance is maintained on the detector array. First, mathematical models of different scanning modes are established for an elliptical water phantom.
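The beam-hardening index A defined in the study above is straightforward to compute once the gray values and arc lengths have been measured; a minimal sketch (the sample values are hypothetical, not from the paper):

```python
def hardening_index(g0, g1, g2, l1, l2):
    """Beam-hardening artifact index as defined in the study:
    A = (G1/G0)*0.5 + (G2/G1)*0.4 + (L2/L1)*0.1
    g0: gray value at the internal boundary; g1, g2: gray values of
    artifacts 1 mm and 20 mm outside the circular boundary;
    l1: arc length with artifacts; l2: circumference of the boundary."""
    return (g1 / g0) * 0.5 + (g2 / g1) * 0.4 + (l2 / l1) * 0.1

# hypothetical grayscale and length measurements, purely for illustration
a = hardening_index(g0=120.0, g1=96.0, g2=110.0, l1=40.0, l2=125.6)
```

The three terms map directly to the three factors named in the conclusion: relative gray value, artifact decay rate, and artifact range.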
Then, a numerical simulation study is performed to compare the performance of the scanning modes for the water phantom and a patient cross-section, without any bowtie and with the dynamic bowtie. The dynamic bowtie can equalize the numbers of detected photons in the case of the water phantom. In practical cases, the dynamic bowtie can effectively reduce the dynamic range of the detected signals inside the FOV. Furthermore, the WB can be individualized using a 3D printing technique as the gold standard. We have extended the dynamic bowtie concept from 2D to 3D by using a highly attenuating liquid and moving a scale-reduced negative copy of the object being scanned. Our methodology can be applied to reduce radiation dose and facilitate photon-counting detection.

6. Dynamic Bowtie Filter for Cone-Beam/Multi-Slice CT
PubMed Central
Liu, Fenglin; Yang, Qingsong; Cong, Wenxiang; Wang, Ge
2014-01-01
PMID:25051067

7. Evaluation of accuracy of 3D reconstruction images using multi-detector CT and cone-beam CT
PubMed Central
Kim, Mija; Yi, Won-Jin; Heo, Min-Suk; Lee, Sam-Sun; Choi, Soon-Chul
2012-01-01

Purpose: This study was performed to determine the accuracy of linear measurements on three-dimensional (3D) images using multi-detector computed tomography (MDCT) and cone-beam computed tomography (CBCT).
Materials and Methods: MDCT and CBCT were performed on 24 dry skulls. Twenty-one measurements were taken on the dry skulls using a digital caliper. Both types of CT data were imported into OnDemand software, and identification of landmarks on the 3D surface-rendered images and calculation of linear measurements were performed. Reproducibility of the measurements was assessed using repeated-measures ANOVA and the ICC, and the measurements were statistically compared using a Student t-test.
Results: All assessments, both the direct measurements and the image-based measurements on the 3D CT surface-rendered images using MDCT and CBCT, showed no statistical difference under the ICC examination.
The measurements showed no differences between the direct measurements of the dry skulls and the image-based measurements on the 3D CT surface-rendered images (P>.05).
Conclusion: Three-dimensional reconstructed surface-rendered images using MDCT and CBCT would be appropriate for 3D measurements. PMID:22474645

8. Kv5, Kv6, Kv8, and Kv9 subunits: No simple silent bystanders
PubMed Central
2016-01-01

Members of the electrically silent voltage-gated K+ (Kv) subfamilies (Kv5, Kv6, Kv8, and Kv9, collectively identified as electrically silent voltage-gated K+ channel [KvS] subunits) do not form functional homotetrameric channels but assemble with Kv2 subunits into heterotetrameric Kv2/KvS channels with unique biophysical properties. Unlike the ubiquitously expressed Kv2 subunits, KvS subunits show a more restricted expression. This raises the possibility that Kv2/KvS heterotetramers have tissue-specific functions, making them potential targets for the development of novel therapeutic strategies. Here, I provide an overview of the expression of KvS subunits in different tissues and discuss their proposed role in various physiological and pathophysiological processes. This overview demonstrates the importance of KvS subunits and Kv2/KvS heterotetramers in vivo and the importance of considering KvS subunits and Kv2/KvS heterotetramers in the development of novel treatments. PMID:26755771

9. Analytical cone-beam reconstruction using a multi-source inverse geometry CT system
Yin, Zhye; De Man, Bruno; Pack, Jed
2007-03-01

In a 3rd-generation CT system, a single source projects the entire field of view (FOV) onto a large detector opposite the source. In multi-source CT imaging, a multitude of sources sequentially project parts of the FOV onto a much smaller detector. These sources may be distributed in both the trans-axial and axial directions in order to jointly cover the entire FOV.
Scan data from multiple sources in the axial direction provide complementary information that is not available in a conventional single-source CT system. In this work, an analytical 3D cone-beam reconstruction algorithm for multi-source CT is proposed. This approach has three distinctive features. First, multi-source data are re-binned transaxially into multiple offset third-generation datasets. Second, data points in the sinograms from multiple source sets are either accepted or rejected for contribution to the backprojection of a given voxel. Third, instead of a ramp filter, a Hilbert transform is combined with a parallel derivative to form the filtering mechanism. Phantom simulations are performed using the multi-source CT geometry and compared to a conventional 3rd-generation CT geometry. We show that multi-source CT can extend the axial scan coverage to 120 mm without cone-beam artifacts, while a third-generation geometry results in compromised image quality at 60 mm of axial coverage. Moreover, given that the cone angle in the proposed geometry is limited to 7 degrees, there are no degrading effects such as the heel effect and scattered radiation, unlike in a third-generation geometry with comparable coverage. An additional benefit is the uniform flux profile, resulting in uniform image noise throughout the FOV and a uniform dose absorption profile.

10. FFT and cone-beam CT reconstruction on graphics hardware
Després, Philippe; Sun, Mingshan; Hasegawa, Bruce H.; Prevrhal, Sven
2007-03-01

Graphics processing units (GPUs) are increasingly used for general-purpose calculations. Their pipelined architecture can be exploited to accelerate various parallelizable algorithms. Medical imaging applications are inherently well suited to benefit from the development of GPU-based computational platforms. In this work we evaluate the potential of GPUs to improve the execution speed of two common medical imaging tasks, namely Fourier transforms and tomographic reconstructions.
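As a rough illustration of the FFT workload being benchmarked, here is a CPU-side 2D FFT round trip of the kind a GPU implementation would be compared against (NumPy stands in here for the paper's actual FFT routines):

```python
import time
import numpy as np

# 2D FFT of a 256x256 image: the kernel typically timed in CPU/GPU comparisons
img = np.random.default_rng(1).random((256, 256))

t0 = time.perf_counter()
spec = np.fft.fft2(img)          # forward transform
elapsed = time.perf_counter() - t0

recon = np.fft.ifft2(spec).real  # inverse transform recovers the image
```

A GPU benchmark would run the same transform through a device library and compare `elapsed` across implementations, averaging over many repetitions.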
A two-dimensional fast Fourier transform (FFT) algorithm was implemented on the GPU and compared, in terms of execution speed, to two popular CPU-based FFT routines. Similarly, the Feldkamp, Davis, and Kress (FDK) algorithm for cone-beam tomographic reconstruction was implemented on the GPU and its performance compared to a CPU version. Different reconstruction strategies were employed to assess the performance of various GPU memory layouts. For the specific hardware used, GPU implementations of the FFT were up to 20 times faster than their CPU counterparts, but slower than highly optimized CPU versions of the algorithm. Tomographic reconstructions were faster on the GPU by a factor of up to 30, allowing 256³-voxel reconstructions from 256 projections in about 20 seconds. Overall, GPUs are an attractive alternative to other imaging-dedicated computing hardware such as application-specific integrated circuits (ASICs) and field-programmable gate arrays (FPGAs) in terms of cost, simplicity, and versatility. With the development of simpler language extensions and programming interfaces, GPUs are likely to become essential tools in medical imaging.

11. Noise suppression in scatter correction for cone-beam CT
PubMed Central
Zhu, Lei; Wang, Jing; Xing, Lei
2009-01-01

Scatter correction is crucial to the quality of reconstructed images in x-ray cone-beam computed tomography (CBCT). Most existing scatter correction methods assume smooth scatter distributions, so high-frequency scatter noise remains in the projection images even after a perfect scatter correction. In this paper, using a clinical CBCT system and a measurement-based scatter correction, the authors show that scatter correction alone does not provide satisfactory image quality, and the loss of contrast-to-noise ratio (CNR) in the scatter-corrected image may outweigh the benefit of scatter removal.
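The contrast-to-noise ratio referred to here is commonly computed from ROI statistics; a minimal sketch using one standard definition (the synthetic data are illustrative, not from the paper):

```python
import numpy as np

def cnr(roi, background):
    """Contrast-to-noise ratio: contrast between an object ROI and a
    background ROI, normalized by the background noise (one common
    definition; conventions vary across papers)."""
    roi = np.asarray(roi, dtype=float)
    background = np.asarray(background, dtype=float)
    return abs(roi.mean() - background.mean()) / background.std()

# synthetic example: a 100 HU object on a 0 HU background, noise sigma = 10 HU
rng = np.random.default_rng(0)
background = rng.normal(0.0, 10.0, size=1000)
roi = rng.normal(100.0, 10.0, size=1000)
value = cnr(roi, background)  # close to 10 for these parameters
```

This makes the trade-off in the abstract concrete: scatter correction can raise contrast (the numerator) while also raising noise (the denominator), so CNR can fall even as accuracy improves.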
To circumvent this problem and truly gain from scatter correction, an effective scatter-noise suppression method must be in place. The authors analyze the noise properties of the projections after scatter correction and propose a penalized weighted least-squares (PWLS) algorithm to reduce the noise in the reconstructed images. Experimental results on an evaluation phantom (Catphan©600) show that the proposed algorithm further reduces the reconstruction error in a scatter-corrected image from 10.6% to 1.7% and increases the CNR by a factor of 3.6. Significant image quality improvement is also shown in the results on an anthropomorphic phantom, in which the global noise level is reduced and the local streaking artifacts around bones are suppressed. PMID:19378735

12. Poster — Thur Eve — 06: Dose assessment of cone beam CT imaging protocols as part of SPECT/CT examinations
SciTech Connect
Tonkopi, E; Ross, AA
2014-08-15

Purpose: To assess the radiation dose from the cone beam CT (CBCT) component of SPECT/CT studies and to compare it with other CT examinations performed in our institution.
Methods: We used an anthropomorphic chest phantom and a 6 cc ion chamber to measure the entrance breast dose for several CBCT and diagnostic CT acquisition protocols. The CBCT effective dose was calculated with the ImPACT software; the CT effective dose was evaluated from the DLP value and a conversion factor dependent on the anatomic region. The RADAR medical procedure radiation dose calculator was used to assess the dose from the nuclear medicine component of the exam.
Results: The entrance dose to the breast measured with the anthropomorphic phantom was 0.48 mGy and 9.41 mGy for the cardiac and chest CBCT scans, respectively, and 4.59 mGy for the diagnostic thoracic CT. The effective doses were 0.2 mSv, 3.2 mSv, and 2.8 mSv, respectively.
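The DLP-based effective dose estimate mentioned above follows E = k × DLP, where k is a tabulated, region-dependent conversion coefficient; a minimal sketch (k ≈ 0.014 mSv/(mGy·cm) is a commonly cited adult chest value, assumed here for illustration):

```python
def effective_dose_msv(dlp_mgy_cm, k):
    """Effective dose estimate from dose-length product: E = k * DLP,
    where k is a region-dependent conversion coefficient in mSv/(mGy*cm)
    taken from published tables (values differ by body region and age)."""
    return k * dlp_mgy_cm

# commonly tabulated adult chest coefficient, assumed for this example
e_chest = effective_dose_msv(dlp_mgy_cm=200.0, k=0.014)  # about 2.8 mSv
```

The same one-line conversion underlies the abstract's comparison of CBCT and diagnostic CT effective doses; only the choice of k changes with the anatomic region.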
For a small patient, as represented by the anthropomorphic phantom, the dose from the diagnostic CT was lower than that from the CBCT scan, a result of the exposure-reduction options available on modern CT scanners; the CBCT protocols used the same fixed scanning techniques. The diagnostic CT dose based on patient data was 35% higher than the phantom dose. For most SPECT/CT studies, the dose from the CBCT component was comparable to the dose from the radiopharmaceutical.
Conclusions: The patient radiation dose from a cone beam CT scan can be higher than that from a diagnostic CT and should be taken into consideration when evaluating the total SPECT/CT patient dose.

13. In vivo verification of proton beam path by using post-treatment PET/CT imaging
SciTech Connect
Hsi, Wen C.; Indelicato, Daniel J.; Vargas, Carlos; Duvvuri, Srividya; Li, Zuofeng; Palta, Jatinder
2009-09-15

Purpose: The purpose of this study is to establish in vivo verification of the proton beam path by using proton-activated positron emission distributions.
Methods: A total of 50 PET/CT imaging studies were performed on ten prostate cancer patients immediately after daily proton therapy delivered through a single lateral portal. The PET/CT and planning CT were registered by matching the pelvic bones, and the beam path of the delivered protons was defined in vivo by the positron emission distribution seen only within the pelvic bones, referred to as the PET-defined beam path. Because of the patient position correction at each fraction, the marker-defined beam path, determined by the centroid of the implanted markers seen in the post-treatment (post-Tx) CT, is used as the planned beam path. The angular variation and discordance between the PET- and marker-defined paths were derived to investigate intrafraction prostate motion. For studies with large discordance, the relative location between the centroid and the pelvic bones seen in the post-Tx CT was examined.
The PET/CT studies were categorized to distinguish prostate motion that occurred before beam delivery from motion that occurred after it. A post-PET CT was acquired after PET imaging to investigate prostate motion due to physiological changes during the extended PET acquisition.
Results: Less than 2° of angular variation indicates that patient roll was minimal within the immobilization device. Thirty of the 50 studies, with small discordance and referred to as good cases, show a consistent alignment between the field edges and the positron emission distributions from the entrance to the distal edge. For those good cases, average displacements are 0.6 and 1.3 mm along the anterior-posterior (D_AP) and superior-inferior (D_SI) directions, respectively, with 1.6 mm standard deviations in both directions. For the remaining 20 studies demonstrating a large discordance (more than 6 mm in either D_AP or D_SI),

14. TH-A-18C-06: A Scatter Elimination Scheme for Cone Beam CT Using An Oscillating Narrow Beam
SciTech Connect
Yan, H; Folkerts, M; Jia, X; Jiang, S; Xu, Y
2014-06-15

Purpose: While cone beam CT (CBCT) has been widely used in image-guided radiation therapy, its low image quality, primarily caused by scattered x-rays, hinders advanced clinical applications, e.g., CBCT-based on-line adaptive re-planning. We propose in this abstract a new scheme called oscillating narrow beam CBCT (ONB-CBCT) to eliminate scatter signals.
Methods: ONB-CBCT consists of two major components: 1) an oscillating narrow beam (ONB) scan, and 2) a partitioned flat panel containing multiple individual detector strips with their own readouts. Both the beam oscillation and the detector partition are along the superior-inferior (SI) direction. During data acquisition, at a given projection, the narrow beam sweeps through the detector region, and different portions of the detector acquire projection data in synchrony with the narrow beam.
ONB can be generated by a rotating slit collimator with a conventional single-focal-spot tube, or by directly using a new source with multiple focal spots. A proof-of-principle study via Monte Carlo simulation is conducted to demonstrate the feasibility of ONB-CBCT. Results: As the beam becomes narrower, more and more scatter signals are eliminated. For the case with a bowtie filter and using 15 ONBs, the maximum and the average intensity error due to scatter are below 20 and 10 HU, respectively. Conclusion: ONB yields a narrowed exposure field at each snapshot and hence an inherently negligible scatter effect. Meanwhile, the individualized detector units guarantee high frame rate detection and hence the same large volume coverage as conventional CBCT. In summary, ONB-CBCT is a promising design to achieve high-quality CBCT imaging. This study is supported in part by NIH (1R01CA154747-01) 15. MR cone-beam CT fusion image overlay for fluoroscopically guided percutaneous biopsies in pediatric patients. PubMed Thakor, Avnesh S; Patel, Premal A; Gu, Richard; Rea, Vanessa; Amaral, Joao; Connolly, Bairbre L 2016-03-01 Lesions only visible on magnetic resonance (MR) imaging cannot easily be targeted for image-guided biopsy using ultrasound or X-rays but instead require MR guidance with MR-compatible needles and long procedure times (acquisition of multiple MR sequences). We developed an alternative method for performing these difficult biopsies in a standard interventional suite, by fusing MR with cone-beam CT images. The MR cone-beam CT fusion image is then used as an overlay to guide a biopsy needle to the target area under live fluoroscopic guidance.
Advantages of this technique include (i) the ability to perform it in a conventional interventional suite, (ii) three-dimensional planning of the needle trajectory using cross-sectional imaging, (iii) real-time fluoroscopic guidance for needle trajectory correction and (iv) targeting within heterogeneous lesions based on MR signal characteristics to maximize the potential biopsy yield. 16. SU-E-J-135: Feasibility of Using Quantitative Cone Beam CT for Proton Adaptive Planning SciTech Connect Jingqian, W; Wang, Q; Zhang, X; Wen, Z; Zhu, X; Frank, S; Li, H; Tsui, T; Zhu, L; Wei, J 2015-06-15 Purpose: To investigate the feasibility of using scatter corrected cone beam CT (CBCT) for proton adaptive planning. Methods: A phantom study was used to evaluate the CT number difference between the planning CT (pCT), quantitative CBCT (qCBCT) with scatter correction and calibrated Hounsfield units using the adaptive scatter kernel superposition (ASKS) technique, and raw CBCT (rCBCT). After confirming the CT number accuracy, prostate patients, each with a pCT and several sets of weekly CBCT, were investigated for this study. Spot scanning proton treatment plans were independently generated on pCT, qCBCT and rCBCT. The treatment plans were then recalculated on all images. Dose-volume-histogram (DVH) parameters and gamma analysis were used to compare dose distributions. Results: The phantom study suggested that Hounsfield unit accuracy for different materials is within 20 HU for qCBCT and over 250 HU for rCBCT. For prostate patients, proton dose could be calculated accurately on qCBCT but not on rCBCT. When the original plan was recalculated on qCBCT, tumor coverage was maintained when the anatomy was consistent with pCT. However, large dose variance was observed when the patient anatomy changed. An adaptive plan using qCBCT was able to recover tumor coverage and reduce dose to normal tissue.
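The gamma analysis mentioned above combines a dose-difference and a distance-to-agreement criterion into one index per point. A minimal 1-D global-gamma sketch follows (3%/3 mm defaults); clinical implementations work in 2-D/3-D with sub-grid interpolation, so this is an illustration of the idea, not a validated tool.

```python
import numpy as np

def gamma_1d(ref, test, dx_mm, dose_tol=0.03, dist_tol_mm=3.0):
    """Simplified 1-D global gamma index. ref and test are dose profiles
    sampled on the same grid with spacing dx_mm; the dose criterion is
    normalized to the reference maximum (global normalization)."""
    ref = np.asarray(ref, float)
    tst = np.asarray(test, float)
    x = np.arange(ref.size) * dx_mm
    d_max = ref.max()
    out = np.empty(ref.size)
    for i in range(ref.size):
        dose_term = (tst - ref[i]) / (dose_tol * d_max)
        dist_term = (x - x[i]) / dist_tol_mm
        out[i] = np.sqrt(dose_term ** 2 + dist_term ** 2).min()
    return out

def pass_rate(gamma):
    """Fraction of points with gamma <= 1, the usual pass criterion."""
    return float((np.asarray(gamma) <= 1.0).mean())
```

Identical profiles give gamma = 0 everywhere and a 100% pass rate; agreement statements like "within 3%/3 mm for 95% of points" are pass rates of this index.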
Conclusion: It is feasible to use quantitative CBCT (qCBCT) with scatter correction and calibrated Hounsfield units for proton dose calculation and adaptive planning in proton therapy. Partly supported by Varian Medical Systems. 17. Three-rooted premolar analyzed by high-resolution and cone beam CT. PubMed Marca, Caroline; Dummer, Paul M H; Bryant, Susan; Vier-Pelisser, Fabiana Vieira; Só, Marcus Vinicius Reis; Fontanella, Vania; Dutra, Vinicius D'avila; de Figueiredo, José Antonio Poli 2013-07-01 The aim of this study was to analyze the variations in canal and root cross-sectional area in three-rooted maxillary premolars between high-resolution computed tomography (μCT) and cone beam computed tomography (CBCT). Sixteen extracted maxillary premolars with three distinct roots and fully formed apices were scanned using μCT and CBCT. Photoshop CS software was used to measure root and canal cross-sectional areas at the most cervical and the most apical points of each root third in images obtained using the two computed tomography (CT) techniques, and at 30 root sections equidistant from both root ends using μCT images. Canal and root areas were compared between the two methods using the Student t test for paired samples and 95% confidence intervals. Images using μCT were sharper than those obtained using CBCT. There were statistically significant differences in mean area measurements of roots and canals between the μCT and CBCT techniques (P < 0.05). Root and canal areas had similar variations in cross-sectional μCT images and became proportionally smaller in a cervical to apical direction as the cementodentinal junction was approached, from where the area then increased apically. Although variation was similar in the roots and canals under study, CBCT produced poorer image details than μCT.
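The paired Student t test used in comparisons like the one above reduces to a one-sample test on the per-specimen differences. A minimal sketch, where the critical t value for the confidence interval is supplied by the caller and the sample values in any usage are illustrative, not the study's data:

```python
import numpy as np

def paired_t(a, b):
    """Student t statistic for paired samples: a one-sample t test on the
    per-pair differences (e.g. the same root measured with two modalities)."""
    d = np.asarray(a, float) - np.asarray(b, float)
    return float(d.mean() / (d.std(ddof=1) / np.sqrt(d.size)))

def mean_diff_ci(a, b, t_crit):
    """Confidence interval for the mean paired difference; t_crit is the
    two-sided critical t value for n-1 degrees of freedom
    (e.g. 3.182 for n = 4 at the 95% level)."""
    d = np.asarray(a, float) - np.asarray(b, float)
    half = t_crit * d.std(ddof=1) / np.sqrt(d.size)
    return d.mean() - half, d.mean() + half
```

In practice `scipy.stats.ttest_rel` does the same computation and also returns the p-value.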
Although CBCT is a strong diagnostic tool, it still needs improvement to provide accuracy in details of the root canal system, especially in cases with anatomical variations, such as the three-rooted maxillary premolars. 18. Dental cone beam CT image quality possibly reduced by patient movement. PubMed Donaldson, K; O'Connor, S; Heath, N 2013-01-01 Patient artefacts in dental cone beam CT scans can happen for various reasons, ranging from artefacts from metal restorations to movement. An audit was carried out in the Glasgow Dental Hospital analysing how many scans showed signs of "motion artefact" and assessing whether there was any correlation between patient age and movement artefacts. Specific age demographics were then analysed to see if these cohorts were at a higher risk of "movement artefacts". 19. [The study of associated reconstruction using MV linear accelerator and cone-beam CT]. PubMed Liu, Zun-gang; Zhao, Jun; Zhuang, Tian-ge 2006-07-01 In this paper, we proposed a new scan mode and image reconstruction method, which combines the data from both the linear accelerator and the cone-beam CT to reconstruct the volume with a limited rotation angle and low sampling rate. The classical filtered backprojection method and the iterative method are utilized to reconstruct the volume. The reconstruction results of the two methods are compared with each other, with a relevant analysis given. 20. SparseCT: interrupted-beam acquisition and sparse reconstruction for radiation dose reduction Koesters, Thomas; Knoll, Florian; Sodickson, Aaron; Sodickson, Daniel K.; Otazo, Ricardo 2017-03-01 State-of-the-art low-dose CT methods reduce the x-ray tube current and use iterative reconstruction methods to denoise the resulting images. However, due to compromises between denoising and image quality, only moderate dose reductions of up to 30-40% are accepted in clinical practice.
An alternative approach is to reduce the number of x-ray projections and use compressed sensing to reconstruct the full-tube-current undersampled data. This idea was recognized in the early days of compressed sensing and proposals for CT dose reduction appeared soon afterwards. However, no practical means of undersampling has yet been demonstrated in the challenging environment of a rapidly rotating CT gantry. In this work, we propose a moving multislit collimator as a practical incoherent undersampling scheme for compressed sensing CT and evaluate its application for radiation dose reduction. The proposed collimator is composed of narrow slits and moves linearly along the slice dimension (z), to interrupt the incident beam in different slices for each x-ray tube angle (θ). The reduced projection dataset is then reconstructed using a sparse approach, where 3D image gradients are employed to enforce sparsity. The effects of the collimator slits on the beam profile were measured and represented as a continuous slice profile. SparseCT was tested using retrospective undersampling and compared against commercial current-reduction techniques on phantoms and in vivo studies. Initial results suggest that SparseCT may enable higher performance than current-reduction, particularly for high dose reduction factors. 1. Can fan beam iCT accurately predict indirect decompression in MISS fusion procedures? PubMed Janssen, Insa; Lang, Gernot; Navarro-Ramirez, Rodrigo; Jada, Ajit; Berlin, Connor; Hilis, Aaron; Zubkov, Micaella; Gandevia, Lena; Härtl, Roger 2017-08-07 2. Actively triggered 4d cone-beam CT acquisition SciTech Connect Fast, Martin F.; Wisotzky, Eric; Oelfke, Uwe; Nill, Simeon 2013-09-15 Purpose: 4d cone-beam computed tomography (CBCT) scans are usually reconstructed by extracting the motion information from the 2d projections or an external surrogate signal, and binning the individual projections into multiple respiratory phases. 
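Retrospective phase binning of the kind just described can be sketched as follows. A fixed, known breathing period is an illustrative assumption; real systems extract phase from the projections themselves or from a surrogate signal.

```python
import numpy as np

def bin_by_phase(angles, times, period, n_bins=10):
    """Retrospective ("after-the-fact") phase binning: assign the projection
    acquired at times[i] to one of n_bins respiratory phase bins, assuming
    a fixed known breathing period (an illustrative simplification)."""
    phase = (np.asarray(times, float) % period) / period          # in [0, 1)
    bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)
    return {b: [a for a, k in zip(angles, bins) if k == b] for b in range(n_bins)}
```

With free breathing the bins fill unevenly, which is exactly the dose-utilization problem the actively triggered acquisition described in this abstract is designed to avoid.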
In this “after-the-fact” binning approach, however, projections are unevenly distributed over respiratory phases resulting in inefficient utilization of imaging dose. To avoid excess dose in certain respiratory phases, and poor image quality due to a lack of projections in others, the authors have developed a novel 4d CBCT acquisition framework which actively triggers 2d projections based on the forward-predicted position of the tumor.Methods: The forward-prediction of the tumor position was independently established using either (i) an electromagnetic (EM) tracking system based on implanted EM-transponders which act as a surrogate for the tumor position, or (ii) an external motion sensor measuring the chest-wall displacement and correlating this external motion to the phase-shifted diaphragm motion derived from the acquired images. In order to avoid EM-induced artifacts in the imaging detector, the authors devised a simple but effective “Faraday” shielding cage. The authors demonstrated the feasibility of their acquisition strategy by scanning an anthropomorphic lung phantom moving on 1d or 2d sinusoidal trajectories.Results: With both tumor position devices, the authors were able to acquire 4d CBCTs free of motion blurring. For scans based on the EM tracking system, reconstruction artifacts stemming from the presence of the EM-array and the EM-transponders were greatly reduced using newly developed correction algorithms. By tuning the imaging frequency independently for each respiratory phase prior to acquisition, it was possible to harmonize the number of projections over respiratory phases. Depending on the breathing period (3.5 or 5 s) and the gantry rotation time (4 or 5 min), between ∼90 and 145 3. 
Accuracy of linear intraoral measurements using cone beam CT and multidetector CT: a tale of two CTs PubMed Central Patcas, R; Markic, G; Müller, L; Ullrich, O; Peltomäki, T; Kellenberger, C J; Karlo, C A 2012-01-01 Objectives The aim was to compare the accuracy of linear bone measurements of cone beam CT (CBCT) with multidetector CT (MDCT) and validate intraoral soft-tissue measurements in CBCT. Methods Comparable views of CBCT and MDCT were obtained from eight intact cadaveric heads. The anatomical positions of the gingival margin and the buccal alveolar bone ridge were determined. Image measurements (CBCT/MDCT) were performed upon multiplanar reformatted data sets and compared with the anatomical measurements; the number of non-assessable sites (NASs) was evaluated. Results Radiological measurements were accurate, with a mean difference from anatomical measurements of 0.14 mm (CBCT) and 0.23 mm (MDCT). These differences were not statistically significant, but the limits of agreement for bone measurements were broader in MDCT (−1.35 mm; 1.82 mm) than in CBCT (−0.93 mm; 1.21 mm). The limits of agreement for soft-tissue measurements in CBCT were smaller (−0.77 mm; 1.07 mm), indicating a slightly higher accuracy. More NASs occurred in MDCT (14.5%) than in CBCT (8.3%). Conclusions CBCT is slightly more reliable for linear measurements than MDCT and less affected by metal artefacts. CBCT accuracy of linear intraoral soft-tissue measurements is similar to the accuracy of bone measurements. PMID:22554987 4. Dose calculation accuracy using cone-beam CT (CBCT) for pelvic adaptive radiotherapy Guan, Huaiqun; Dong, Hang 2009-10-01 This study aims to evaluate the dose calculation accuracy using Varian's cone-beam CT (CBCT) for pelvic adaptive radiotherapy. We first calibrated the Hounsfield Unit (HU) to electron density (ED) for CBCT using a mini CT QC phantom embedded into an IMRT QA phantom. We then used a Catphan 500 with an annulus around it to check the calibration.
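An HU-to-electron-density calibration like the one described is, in practice, a piecewise-linear lookup between the measured insert HUs and their known relative electron densities. A sketch with hypothetical calibration points; the real curve comes from the scanned phantom inserts:

```python
import numpy as np

# Hypothetical calibration points (insert HU vs relative electron density).
# Real values are measured from the CT QC phantom inserts, not these.
HU_PTS  = [-1000.0, -100.0, 0.0, 300.0, 1200.0]
RED_PTS = [  0.001,   0.93, 1.0, 1.12,   1.70]

def hu_to_red(hu):
    """Piecewise-linear HU -> relative electron density lookup, the usual
    form of a CT/CBCT calibration curve used for dose calculation."""
    return np.interp(hu, HU_PTS, RED_PTS)
```

The dosimetric differences quoted in this abstract stem from the two modalities producing different HUs for the same tissue, which this lookup then maps to different densities.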
The combined CT QC and IMRT phantom provided correct HU calibration, but not Catphan with an annulus. For the latter, not only was the Teflon an incorrect substitute for bone, but the inserts were also too small to provide correct HUs for air and bone. For the former, three different scan ranges (6 cm, 12 cm and 20.8 cm) were used to investigate the HU dependence on the amount of scatter. To evaluate the dose calculation accuracy, CBCT and plan-CT for a pelvic phantom were acquired and registered. The single field plan, 3D conformal and IMRT plans were created on both CT sets. Without inhomogeneity correction, the two CT generated nearly the same plan. With inhomogeneity correction, the dosimetric difference between the two CT was mainly from the HU calibration difference. The dosimetric difference for 6 MV was found to be the largest for the single lateral field plan (maximum 6.7%), less for the 3D conformal plan (maximum 3.3%) and the least for the IMRT plan (maximum 2.5%). Differences for 18 MV were generally 1-2% less. For a single lateral field, calibration with 20.8 cm achieved the minimum dosimetric difference. For 3D and IMRT plans, calibration with a 12 cm range resulted in better accuracy. Because Catphan is the standard QA phantom for the on-board imager (OBI) device, we specifically recommend not using it for the HU calibration of CBCT. 6. Investigation of the dose distribution for a cone beam CT system dedicated to breast imaging. PubMed Lanconelli, Nico; Mettivier, Giovanni; Lo Meo, Sergio; Russo, Paolo 2013-06-01 Cone-beam breast Computed Tomography (bCT) is an X-ray imaging technique for breast cancer diagnosis, in principle capable of delivering a much more homogeneous dose spatial pattern to the breast volume than conventional mammography, at dose levels comparable to two-view mammography. We present an investigation of the three-dimensional dose distribution for a cone-beam CT system dedicated to breast imaging.
We employed Monte Carlo simulations for estimating the dose deposited within a breast phantom having a hemiellipsoidal shape placed on a cylinder of 3.5 cm thickness that simulates the chest wall. This phantom represents a pendulant breast in a bCT exam with the average diameter at the chest wall, assumed to correspond to a 5-cm-thick compressed breast in mammography. The phantom is irradiated in a circular orbit with an X-ray cone beam selected from four different techniques: 50, 60, 70, and 80 kVp from a tube with tungsten anode, 1.8 mm Al inherent filtration and additional filtration of 0.2 mm Cu. Using the Monte Carlo code GEANT4 we simulated a system similar to the experimental apparatus available in our lab. Simulations were performed at a constant free-in-air air kerma at the isocenter (1 μGy); the corresponding total number of photon histories per scan was 288 million at 80 kVp. We found that the more energetic beams provide a more uniform dose distribution than the low-energy ones: the 50 kVp beam presents a frequency distribution of absorbed dose values with a coefficient of variation almost double that of the 80 kVp beam. This is confirmed by the analysis of the relative dose profiles along the radial (i.e. parallel to the "chest wall") and longitudinal (i.e. from "chest wall" to "nipple") directions. Maximum radial deviations are on the order of 25% for the 80 kVp beam, whereas for the 50 kVp beam variations around 43% were observed, with the lowest dose values being found along the central longitudinal axis of the phantom. 7. Experimental Scatter Correction Methods in Industrial X-Ray Cone-Beam CT Schörner, K.; Goldammer, M.; Stephan, J. 2011-06-01 Scattered radiation presents a major source of image degradation in industrial cone-beam computed tomography systems. Scatter artifacts introduce streaks, cupping and a loss of contrast in the reconstructed CT-volumes.
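The beam-stop-array technique examined in this abstract estimates scatter from detector samples taken behind opaque stops, where only scattered radiation arrives, and interpolates that estimate across the projection before subtracting it. A 1-D sketch under the simplifying assumption that the stop exposure and the full projection share the same geometry:

```python
import numpy as np

def bsa_scatter_correct(projection, stop_cols):
    """1-D beam-stop-array idea: columns behind opaque stops record scatter
    only; linearly interpolate those samples across the detector row and
    subtract the resulting scatter estimate from the full projection."""
    proj = np.asarray(projection, float)
    stops = np.asarray(stop_cols, int)
    cols = np.arange(proj.size)
    scatter = np.interp(cols, stops.astype(float), proj[stops])
    return proj - scatter
```

The beam-hole-array variant inverts the geometry (apertures instead of stops), which per the abstract has practical and scatter-reducing advantages.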
In order to overcome scatter artifacts, we present two complementary experimental correction methods: the beam-stop array (BSA) and an inverse technique we call beam-hole array (BHA). Both correction methods are examined in comparative measurements where it is shown that the aperture-based BHA technique has practical and scatter-reducing advantages over the BSA. The proposed BHA correction method is successfully applied to a large-scale industrial specimen whereby scatter artifacts are reduced and contrast is enhanced significantly. 8. Electron stripping processes of H{sup −} ion beam in the 80 kV high voltage extraction column and low energy beam transport line at LANSCE SciTech Connect Draganic, I. N. 2016-02-15 Basic vacuum calculations were performed for various operating conditions of the Los Alamos National Neutron Science H{sup −} Cockcroft-Walton (CW) injector and the Ion Source Test Stand (ISTS). The vacuum pressure was estimated for both the CW and ISTS at five different points: (1) inside the H{sup −} ion source, (2) in front of the Pierce electrode, (3) at the extraction electrode, (4) at the column electrode, and (5) at the ground electrode. A static vacuum analysis of residual gases and the working hydrogen gas was completed for the normal ion source working regime. Gas density and partial pressure were estimated for the injected hydrogen gas. The attenuation of H{sup −} beam current and generation of electron current in the high voltage acceleration columns and low energy beam transport lines were calculated. The interaction of H{sup −} ions on molecular hydrogen (H{sub 2}) is discussed as a dominant collision process in describing electron stripping rates. These results are used to estimate the observed increase in the ratio of electrons to H{sup −} ion beam in the ISTS beam transport line. 10. Three-dimensional cone-beam region-of-interest (ROI) CT reconstruction Liu, Ruijie Rachel 2001-06-01 ROI cone-beam (CB) CT is proposed in this thesis as a new technique in which a set of partially filtered ROI images is used for CT reconstruction. It can be used in neurosurgery to provide fluoroscopic 3D CT images with reduced dose outside the ROI. Depending on the contrast resolution requirement of the CT images and therefore the thickness of the ROI filter, the dose can be reduced by 50%-70%. In this study, the shape of the ROI is rectangular in the middle of the image. In order to do CB reconstruction, the image distortion has to be corrected, because all the images were acquired by the image intensifier (II), where distortion occurs. We proposed a new method, named the super-global (SG) model, with a set of parameters, to do distortion correction for all the images in a rotational run.
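Image-intensifier distortion correction generally amounts to fitting a smooth warp between distorted and true control-point grids. The SG model's per-angle parameterization is not specified in this abstract, so the sketch below shows only the generic least-squares quadratic-warp step such methods build on:

```python
import numpy as np

def fit_quadratic_warp(xd, yd, xt, yt):
    """Least-squares fit of a 2-D quadratic warp mapping distorted control
    points (xd, yd) to their true positions (xt, yt): the generic building
    block of image-intensifier distortion correction. The SG model's
    per-angle parameterization is an extension not shown here."""
    A = np.column_stack([np.ones_like(xd), xd, yd, xd * yd, xd ** 2, yd ** 2])
    cx, *_ = np.linalg.lstsq(A, xt, rcond=None)
    cy, *_ = np.linalg.lstsq(A, yt, rcond=None)
    return cx, cy

def apply_warp(cx, cy, x, y):
    """Map distorted coordinates through the fitted warp."""
    A = np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])
    return A @ cx, A @ cy
```

Control points typically come from imaging a regular grid phantom at each C-arm angle; the fit is then applied to every projection before backprojection.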
The SG model is carefully tested and compared with the conventional global correction method. The study shows that it is an accurate, efficient method for the distortion correction of images acquired by rotational C-arms. The intensity of ROI images has to be equalized inside and outside the ROI after distortion correction. First, the edge is detected; then equalization factors are applied to the filtered areas. Now the distortion-free and intensity-equalized images are used for CB backprojection to reconstruct the 3D CT images by the Feldkamp algorithm. This algorithm is tested by a mathematical phantom reconstruction. Iso-center correction and uniform scattering subtraction are discussed and applied in the experimental phantom reconstruction. We took images of a phantom and a rabbit with and without contrast medium injection, and with and without ROI filtration. Their 3D CT images were obtained. For the rabbit, vascular CT images were obtained by subtracting CT images with contrast medium from those without contrast. The CT images of the ROI edges are in good shape, and no obvious artifact is observed. Though the filtered area has higher noise, its intensity is equalized well with that of the 11. GPU-accelerated regularized iterative reconstruction for few-view cone beam CT SciTech Connect Matenine, Dmitri; Goussard, Yves 2015-04-15 Purpose: The present work proposes an iterative reconstruction technique designed for x-ray transmission computed tomography (CT). The main objective is to provide a model-based solution to the cone-beam CT reconstruction problem, yielding accurate low-dose images via few-views acquisitions in clinically acceptable time frames. Methods: The proposed technique combines a modified ordered subsets convex (OSC) algorithm and the total variation minimization (TV) regularization technique and is called OSC-TV.
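The TV term that OSC-TV minimizes penalizes the summed gradient magnitude of the volume, which is small for piecewise-constant images and large for noisy ones. A minimal sketch of the 3-D isotropic TV functional (the OSC data-fidelity updates and the GPU parallelization are not shown):

```python
import numpy as np

def total_variation(vol):
    """Isotropic total variation of a 3-D volume: sum of gradient magnitudes
    computed from forward differences, cropped to a common shape. This is
    the quantity a TV-regularization step drives down."""
    gx = np.diff(vol, axis=0)[:, :-1, :-1]
    gy = np.diff(vol, axis=1)[:-1, :, :-1]
    gz = np.diff(vol, axis=2)[:-1, :-1, :]
    return float(np.sqrt(gx ** 2 + gy ** 2 + gz ** 2).sum())
```

In an OSC-TV-style loop, each data-fidelity pass is followed by a few gradient-descent steps on this functional, which suppresses the streaks characteristic of few-view reconstructions.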
The number of subsets of each OSC iteration follows a reduction pattern in order to ensure the best performance of the regularization method. Considering the high computational cost of the algorithm, it is implemented on a graphics processing unit, using parallelization to accelerate computations. Results: The reconstructions were performed on computer-simulated as well as human pelvic cone-beam CT projection data and image quality was assessed. In terms of convergence and image quality, OSC-TV performs well in reconstruction of low-dose cone-beam CT data obtained via a few-view acquisition protocol. It compares favorably to the few-view TV-regularized projections onto convex sets (POCS-TV) algorithm. It also appears to be a viable alternative to full-dataset filtered backprojection. Execution times are of 1–2 min and are compatible with the typical clinical workflow for nonreal-time applications. Conclusions: Considering the image quality and execution times, this method may be useful for reconstruction of low-dose clinical acquisitions. It may be of particular benefit to patients who undergo multiple acquisitions by reducing the overall imaging radiation dose and associated risks. 13. Enlarged longitudinal dose profiles in cone-beam CT and the need for modified dosimetry SciTech Connect Mori, Shinichiro; Endo, Masahiro; Nishizawa, Kanae; Tsunoo, Takanori; Aoyama, Takahiko; Fujiwara, Hideaki; Murase, Kenya 2005-04-01 In order to examine phantom length necessary to assess radiation dose delivered to patients in cone-beam CT with an enlarged beamwidth, we measured dose profiles in cylindrical phantoms of sufficient length using a prototype 256-slice CT-scanner developed at our institute. Dose profiles parallel to the rotation axis were measured at the central and peripheral positions in PMMA (polymethylmethacrylate) phantoms of 160 or 320 mm diameter and 900 mm length. For practical application, we joined unit cylinders (150 mm long) together to provide phantoms of 900 mm length. Dose profiles were measured with a pin photodiode sensor having a sensitive region of approximately 2.8x2.8 mm{sup 2} and 2.7 mm thickness.
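The dose-profile-integral (DPI) comparison this dosimetry study makes, namely how much of the full profile integral a finite integration range captures, can be sketched for a sampled longitudinal profile. With a uniform sampling grid the spacing cancels, so a plain ratio of sums suffices; the Gaussian profile used for checking is illustrative only.

```python
import numpy as np

def dpi_fraction(z_mm, dose, integration_range_mm):
    """Fraction of the full dose-profile integral (DPI) captured when the
    longitudinal profile dose(z) is integrated only over a centred window
    of the given length. Assumes uniform sampling along z."""
    z = np.asarray(z_mm, float)
    d = np.asarray(dose, float)
    inside = np.abs(z) <= integration_range_mm / 2.0
    return float(d[inside].sum() / d.sum())
```

The study's conclusion corresponds to requiring this fraction to exceed 0.9, which for wide cone beams forces integration ranges (and phantom lengths) beyond 300 mm.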
Beamwidths of the scanner were varied from 20 to 138 mm. Dose profile integrals (DPI) were calculated using the measured dose profiles for various beamwidths and integration ranges. For the body phantom (320-mm-diam phantom), 76% of the DPI was represented for a 20 mm beamwidth and 60% was represented for a 138 mm beamwidth if dose profiles were integrated over a 100 mm range, while more than 90% of the DPI was represented for beamwidths between 20 and 138 mm if integration was carried out over a 300 mm range. The phantom length and integration range for dosimetry of cone-beam CT needed to be more than 300 mm to represent more than 90% of the DPI for the body phantom with the beamwidth of more than 20 mm. Although we reached this conclusion using the prototype 256-slice CT-scanner, it may be applied to other multislice CT-scanners as well. 14. Effect of temperature on beam damage of asbestos fibers in the transmission electron microscope (TEM) at 100kV. PubMed Martin, Joannie; Beauparlant, Martin; Sauvé, Sébastien; L'Espérance, Gilles 2017-03-01 Damage to asbestos fibers by the transmission electron microscope (TEM) electron beam is a known limitation of this powerful method of analysis. Although it is often considered only in terms of loss of crystallinity, recent studies have shown that the damage may also change the elemental composition of fibers, thus causing significant identification errors. In this study, the main objective was to assess whether temperature is a factor influencing damage to asbestos fibers and, if so, how it can be used to minimize damage. It was found that lowering the temperature to 123K can inhibit, for a given time, the manifestation of the damage. The significant decrease of atom diffusion at low temperature momentarily prevents mass loss, greatly reducing the possibility of misidentification of anthophyllite asbestos fibers. 
The results obtained in this study strongly suggest that the predominant damage mechanism is probably related to the induced-electric-field model, relegating radiolysis to the status of a subsidiary damage mechanism. 15. 3D In Vivo Dosimetry Using Megavoltage Cone-Beam CT and EPID Dosimetry SciTech Connect Elmpt, Wouter van; Nijsten, Sebastiaan; Petit, Steven; Mijnheer, Ben; Lambin, Philippe; Dekker, Andre 2009-04-01 Purpose: To develop a method that reconstructs, independently of previous (planning) information, the dose delivered to patients by combining in-room imaging with transit dose measurements during treatment. Methods and Materials: A megavoltage cone-beam CT scan of the patient anatomy was acquired with the patient in treatment position. During treatment, delivered fields were measured behind the patient with an electronic portal imaging device. The dose information in these images was back-projected through the cone-beam CT scan and used for Monte Carlo simulation of the dose distribution inside the cone-beam CT scan. Validation was performed using various phantoms for conformal and IMRT plans. Clinical applicability is shown for a head-and-neck cancer patient treated with IMRT. Results: For single IMRT beams and a seven-field IMRT step-and-shoot plan, the dose distribution was reconstructed within 3%/3mm compared with the measured or planned dose. A three-dimensional conformal plan, verified using eight point-dose measurements, resulted in a difference of 1.3 {+-} 3.3% (1 SD) compared with the reconstructed dose. For the patient case, the planned and reconstructed dose distributions agreed within 3%/3mm for about 95% of the points within the 20% isodose line. Reconstructed mean dose values, obtained from dose-volume histograms, were within 3% of prescribed values for target volumes and normal tissues.
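Mean doses "obtained from dose-volume histograms", as above, come from the cumulative DVH, which for a sampled dose array is simply the fraction of the structure's voxels at or above each dose level. A minimal sketch:

```python
import numpy as np

def cumulative_dvh(dose_voxels, levels):
    """Cumulative dose-volume histogram: for each dose level, the fraction
    of the structure's volume receiving at least that dose."""
    d = np.asarray(dose_voxels, float)
    return np.array([(d >= lv).mean() for lv in levels])

def mean_dose(dose_voxels):
    """Mean structure dose, the DVH-derived quantity quoted in the abstract."""
    return float(np.mean(dose_voxels))
```

Comparisons like "reconstructed mean dose within 3% of prescribed" are then direct comparisons of `mean_dose` values for matching structures.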
Conclusions: We present a new method that verifies the dose delivered to a patient by combining in-room imaging with the transit dose measured during treatment. This verification procedure opens possibilities for offline adaptive radiotherapy and dose-guided radiotherapy strategies taking into account the dose distribution delivered during treatment sessions. 16. Method for measuring the intensity profile of a CT fan-beam filter Whiting, Bruce R.; Dohatcu, Andreea 2014-03-01 Research on CT systems often requires knowledge of intensity as a function of angle in the fan-beam, due to the presence of bowtie filters, for studies such as dose reduction simulation, Monte Carlo dose calculations, or statistical reconstruction algorithms. Since manufacturers consider the x-ray bowtie filter design to be proprietary information, several methods have been proposed to measure the beam intensity profile independently: 1) calculate statistical properties of noise in acquired sinograms (requires access to raw data files, which are also vendor proprietary); 2) measure the waveform of a dosimeter located away from the isocenter (requires dosimeter equipment costing > $10K). We present a novel method that is inexpensive (parts costing ~$100 from any hardware store, using Gafchromic film at ~$3 per measurement), requires no proprietary information, and can be performed in a few minutes. A fixture is built from perforated steel tubing, which forms an aperture that selectively samples the intensity at a particular fan-beam angle in a rotating gantry. Two exposures (1× and 2×) are made and self-developing radiochromic film (Gafchromic XR, Ashland Inc.) is then scanned on an inexpensive PC document scanner. An analysis method is described that linearizes the measurements for relative exposure.
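One plausible way to linearize the 1× / 2× film readings described above is a local power-law response model; this is an illustrative assumption, not the authors' published analysis method:

```python
import numpy as np

def relative_exposure(od_1x, od_2x):
    """Assume a local power-law film response, OD = k * E**g.  The known
    factor-of-2 exposure ratio between the two films gives the exponent
    g = log2(OD_2x / OD_1x) at each sample, and the relative exposure
    then follows (up to a constant scale) as E ~ OD_1x**(1/g)."""
    g = np.log(od_2x / od_1x) / np.log(2.0)
    return od_1x ** (1.0 / g)

# Synthetic bowtie-like profile with an assumed response exponent of 0.7
profile = np.linspace(0.2, 1.0, 50)             # true relative exposure
od_1x, od_2x = profile ** 0.7, (2 * profile) ** 0.7
recovered = relative_exposure(od_1x, od_2x)
```

Because the constant k is only recovered up to scale, the result is a relative exposure profile, which is all a bowtie-shape measurement requires.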
The resultant profile is corrected for geometric effects (1/L² fall-off, gantry dwell time) and background exposure, providing a noninvasive estimate of the CT fan-beam intensity present in an operational CT system. This method will allow researchers to conveniently measure parameters required for modeling the effects of bowtie filters in clinical scanners. 17. Investigation of dosimetric variations of liver radiotherapy using deformable registration of planning CT and cone-beam CT. PubMed Huang, Pu; Yu, Gang; Chen, Jinhu; Ma, Changsheng; Qin, Shaohua; Yin, Yong; Liang, Yueqiang; Li, Hongsheng; Li, Dengwang 2017-01-01 Many patients with technically unresectable or medically inoperable hepatocellular carcinoma (HCC) had hepatic anatomy variations as a result of interfraction deformation during fractionated radiotherapy. We conducted this retrospective study to investigate interfractional normal liver dosimetric consequences via reconstructing weekly dose in HCC patients. Twenty-three patients with HCC who received conventional fractionated three-dimensional conformal radiation therapy (3DCRT) were enrolled in this retrospective investigation. Among them, seven patients had been diagnosed with radiation-induced liver disease (RILD) and the other 16 patients had a good prognosis after the treatment course. The cone-beam CT (CBCT) scans were acquired once weekly for each patient throughout the treatment, deformable image registration (DIR) of planning CT (pCT) and CBCT was performed to acquire modified CBCT (mCBCT), and the structural contours were propagated by the DIR. The same plan was applied to mCBCT to perform dose calculation. Weekly dose distribution was displayed on the pCT dose space and compared using dose difference, target coverage, and dose volume histograms. Statistical analysis was performed to identify the significant dosimetric variations.
Among the 23 patients, the normal liver D50 at the three weekly time points increased by 0.2 Gy, 4.2 Gy, and 4.7 Gy, respectively, for patients with RILD, and 1.0 Gy, 2.7 Gy, and 3.1 Gy, respectively, for patients without RILD. Mean dose to the normal liver (Dmean) increased by 0.5 Gy, 2.6 Gy, and 4.0 Gy, respectively, for patients with RILD, and 0.4 Gy, 3.1 Gy, and 3.4 Gy, respectively, for patients without RILD. Regarding patients with RILD, the average values of the third weekly D50 and Dmean were both above hepatic radiation tolerance, while the values for patients without RILD were below it. The dosimetric consequences showed that the liver doses of patients with and without RILD differed relative to the planned dose, and the RILD patients suffered 18. Target delineation for radiosurgery of a small brain arteriovenous malformation using high-resolution contrast-enhanced cone beam CT. PubMed van der Bom, Imramsjah M J; Gounis, Matthew J; Ding, Linda; Kühn, Anna Luisa; Goff, David; Puri, Ajit S; Wakhloo, Ajay K 2014-06-01 Three years following endovascular embolization of a 3 mm ruptured arteriovenous malformation (AVM) of the left superior colliculus in a 42-year-old man, digital subtraction angiography showed continuous regrowth of the lesion. Thin-slice MRI acquired for treatment planning did not show the AVM nidus. The patient was brought back to the angiography suite for high-resolution contrast-enhanced cone beam CT (VasoCT) acquired using an angiographic c-arm system. The lesion and nidus were visualized with VasoCT. MRI, CT and VasoCT data were transferred to radiation planning software and mutually co-registered. The nidus was annotated for radiation on VasoCT data by an experienced neurointerventional radiologist and a dose/treatment plan was completed. Due to image registration, the treatment area could be directly adopted into the MRI and CT data. The AVM was completely obliterated 10 months following completion of the radiosurgery treatment. 19.
Target delineation for radiosurgery of a small brain arteriovenous malformation using high-resolution contrast-enhanced cone beam CT. PubMed van der Bom, Imramsjah M J; Gounis, Matthew J; Ding, Linda; Kühn, Anna Luisa; Goff, David; Puri, Ajit S; Wakhloo, Ajay K 2013-08-14 Three years following endovascular embolization of a 3 mm ruptured arteriovenous malformation (AVM) of the left superior colliculus in a 42-year-old man, digital subtraction angiography showed continuous regrowth of the lesion. Thin-slice MRI acquired for treatment planning did not show the AVM nidus. The patient was brought back to the angiography suite for high-resolution contrast-enhanced cone beam CT (VasoCT) acquired using an angiographic c-arm system. The lesion and nidus were visualized with VasoCT. MRI, CT and VasoCT data were transferred to radiation planning software and mutually co-registered. The nidus was annotated for radiation on VasoCT data by an experienced neurointerventional radiologist and a dose/treatment plan was completed. Due to image registration, the treatment area could be directly adopted into the MRI and CT data. The AVM was completely obliterated 10 months following completion of the radiosurgery treatment. 20. X-Ray Scatter Correction on Soft Tissue Images for Portable Cone Beam CT. PubMed Aootaphao, Sorapong; Thongvigitmanee, Saowapak S; Rajruangrabin, Jartuwat; Thanasupsombat, Chalinee; Srivongsa, Tanapon; Thajchayapong, Pairash 2016-01-01 Soft tissue images from portable cone beam computed tomography (CBCT) scanners can be used for diagnosis and detection of tumor, cancer, intracerebral hemorrhage, and so forth. Due to the large field of view, X-ray scattering, which is the main cause of artifacts, degrades image quality, causing cupping artifacts, CT number inaccuracy, and low contrast, especially on soft tissue images. In this work, we propose an X-ray scatter correction method for improving soft tissue images.
The X-ray scatter correction scheme to estimate X-ray scatter signals is based on the deconvolution technique using the maximum likelihood estimation maximization (MLEM) method. The scatter kernels are obtained by simulating a PMMA sheet in Monte Carlo simulation (MCS) software. In the experiment, we used the QRM phantom to quantitatively compare with fan-beam CT (FBCT) data in terms of CT number values, contrast-to-noise ratio, cupping artifacts, and low contrast detectability. Moreover, the PH3 angiography phantom was also used to mimic human soft tissues in the brain. The reconstructed images with our proposed scatter correction show significant improvement in image quality. Thus the proposed scatter correction technique has high potential to detect soft tissues in the brain. 1. Analysis of Cone-Beam Artifacts in off-Centered Circular CT for Four Reconstruction Methods PubMed Central Peyrin, F.; Sappey-Marinier, D. 2006-01-01 Cone-beam (CB) acquisition is increasingly used for truly three-dimensional X-ray computerized tomography (CT). However, tomographic reconstruction from data collected along a circular trajectory with the popular Feldkamp algorithm is known to produce the so-called CB artifacts. These artifacts result from the incompleteness of the source trajectory and the resulting missing data in Radon space, which increase with distance from the plane containing the source orbit. In the context of the development of integrated PET/CT microscanners, we introduced a novel off-centered circular CT cone-beam geometry. We proposed a generalized Feldkamp formula (α-FDK) adapted to this geometry, but reconstructions suffer from increased CB artifacts. In this paper, we evaluate and compare four different reconstruction methods for correcting CB artifacts in off-centered geometry. We consider the α-FDK algorithm, the shift-variant FBP method derived from the T-FDK, an FBP method based on the Grangeat formula, and an iterative algebraic method (SART).
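The kernel-based scatter estimation in the scatter-correction abstract above can be illustrated with a simpler fixed-point scheme (plain iterative subtraction rather than MLEM deconvolution); the projection and Gaussian kernel below are hypothetical:

```python
import numpy as np

def estimate_scatter(total_proj, kernel, n_iter=10):
    """Estimate scatter S in a measured projection T = P + S, assuming
    S is the primary P convolved with a shift-invariant kernel.  A plain
    fixed-point iteration is used here instead of MLEM deconvolution."""
    primary = total_proj.copy()
    for _ in range(n_iter):
        scatter = np.convolve(primary, kernel, mode="same")
        primary = np.clip(total_proj - scatter, 0.0, None)
    return scatter

# Hypothetical 1D primary projection plus broad, low-amplitude scatter
x = np.arange(200)
primary_true = np.exp(-(((x - 100) / 30.0) ** 2))
kernel = 0.002 * np.exp(-((np.arange(-50, 51) / 20.0) ** 2))
total = primary_true + np.convolve(primary_true, kernel, mode="same")
recovered = total - estimate_scatter(total, kernel)
```

Because the scatter fraction here is small, the iteration contracts quickly; real scatter kernels are object- and position-dependent, which is why the study derives them from Monte Carlo simulation.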
The results show that the low contrast artifacts can be efficiently corrected by the shift-variant method and the SART method to achieve good quality images at the expense of increased computation time, but the geometrical deformations are still not compensated for by these techniques. PMID:23165048 2. SU-E-T-416: VMAT Dose Calculations Using Cone Beam CT Images: A Preliminary Study SciTech Connect Yu, S; Sehgal, V; Kuo, J; Daroui, P; Ramsinghani, N; Al-Ghazi, M 2014-06-01 Purpose: Cone beam CT (CBCT) images have been used routinely for patient positioning throughout the treatment course. However, use of CBCT for dose calculation is still investigational. The purpose of this study is to assess the utility of CBCT images for Volumetric Modulated Arc Therapy (VMAT) plan dose calculation. Methods: A CATPHAN 504 phantom (The Phantom Laboratory, Salem, NY) was used to compare the dosimetric and geometric accuracy between conventional CT and CBCT (in both full and half fan modes). Hounsfield unit (HU) profiles at different density areas were evaluated. A C-shaped target that surrounds a central avoidance structure was created and a VMAT plan was generated on the CT images and copied to the CBCT phantom images. Patient studies included three brain patients and one head-and-neck (H&N) patient. VMAT plans generated on the patients' treatment planning CTs were applied to CBCT images obtained during the first treatment. Isodose distributions and dose-volume histograms (DVHs) were compared. Results: For the phantom study, the HU difference between CT and CBCT is within 100 HU (maximum 96 HU for Teflon CBCT images in full fan mode). The impact of these differences on the calculated dose distributions was clinically insignificant. In both phantom and patient studies, target DVHs based on CBCT images were in excellent agreement with those based on planning CT images. Mean, median, near-minimum (D98%), and near-maximum (D2%) doses agreed within 0-2.5%.
A slightly larger discrepancy is observed in the patient studies compared to that seen in the phantom study (0-1% vs. 0-2.5%). Conclusion: CBCT images can be used to accurately predict dosimetric results, without any HU correction. It is feasible to use CBCT to evaluate the actual dose delivered at each fraction. The dosimetric consequences resulting from tumor response and patient geometry changes could be monitored. 3. Experimental realization of fluence field modulated CT using digital beam attenuation PubMed Central Szczykutowicz, TP; Mistretta, CA 2014-01-01 Purpose: Tailoring CT scan acquisition parameters to individual patients is a topic of much research in the CT imaging community. It is now commonplace to find automatically adjusted tube current options for modern CT scanners. In addition, the use of beam shaping filters, commonly called bowtie filters, is available on most CT systems and allows for different body regions to receive different incident x-ray fluence distributions. However, no method currently exists which allows the form of the incident x-ray fluence distribution to change as a function of view angle. This study represents the first experimental realization of fluence field modulated CT (FFMCT) for a c-arm geometry CT scan. Methods: X-ray fluence modulation is accomplished using a digital beam attenuator (DBA). The device is composed of 10 iron wedge pairs that modulate the thickness of iron that x-rays must traverse before reaching a patient. Using this device, experimental data were taken using a Siemens Zeego c-arm scanner. Scans were performed on a cylindrical polyethylene phantom and on two different sections of an anthropomorphic phantom. The DBA was used to equalize the x-ray fluence striking the detector for each scan. Non-DBA, or “flat field”, scans were also acquired of the same phantom objects for comparison. In addition, a scan was performed in which the DBA was used to enable volume of interest (VOI) imaging.
In VOI imaging, only a small sub-volume within a patient receives the full dose and the rest of the patient receives a much lower dose. Data corrections unique to using a piece-wise constant modulator were also developed. Results: The feasibility of FFMCT implemented using a DBA device has been demonstrated. Initial results suggest dose reductions of up to 3.6 times relative to “flat field” CT. In addition to dose reduction, the DBA enables a large improvement in image noise uniformity and the ability to provide regionally enhanced signal to noise using VOI imaging techniques. Conclusions 4. SU-D-207-06: Clinical Validations of Shading Correction for Cone-Beam CT Using Planning CT as a Prior SciTech Connect Tsui, T; Zhu, L; Wei, J 2015-06-15 Purpose: Current cone-beam CT (CBCT) images contain severe shading artifacts, mainly due to scatter, hindering their quantitative use in current radiation therapy. We have previously proposed an effective shading correction method for CBCT using the planning CT (pCT) as prior knowledge. In this work, we investigate the method's robustness via statistical analyses of studies on a large patient group and compare the performance with that of a state-of-the-art method implemented on a current commercial radiation therapy machine, the Varian Truebeam system. Methods: Since radiotherapy patients routinely undergo multiple-detector CT (MDCT) scans in the planning procedure, we use the high-quality pCT as “free” prior knowledge for CBCT image improvement. The CBCT image with no correction is first spatially registered with the pCT. Primary CBCT projections are estimated via forward projections of the registered image. The low-frequency errors in the projections, which stem mainly from scatter, are estimated by filtering the difference between the original line integrals and the estimated primary projections. The corrected CBCT image is then reconstructed from the scatter-corrected projections. The proposed method is evaluated on 40 cancer patients.
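The projection-domain correction just described, where the low-frequency part of the difference between measured line integrals and the pCT forward projections is taken as the shading error, can be sketched as follows; the moving-average low-pass filter and the synthetic data are illustrative assumptions:

```python
import numpy as np

def lowpass(signal, width):
    """Moving-average low-pass filter (an illustrative choice)."""
    kernel = np.ones(width) / width
    return np.convolve(signal, kernel, mode="same")

def shading_correct(cbct_lineint, pct_forward_lineint, width=31):
    """Take only the low-frequency part of the CBCT-vs-prior difference
    as the shading error, then subtract it from the measurement."""
    error = lowpass(cbct_lineint - pct_forward_lineint, width)
    return cbct_lineint - error

# Synthetic line integrals corrupted by a smooth shading-like bias
x = np.linspace(0.0, 1.0, 400)
clean = 2.0 + 0.2 * np.sin(40 * x)      # stands in for true line integrals
bias = 0.5 * np.cos(2 * np.pi * x)      # smooth, low-frequency error
corrected = shading_correct(clean + bias, clean)
```

Keeping only the low-frequency difference means small registration or anatomy mismatches (high-frequency) do not leak into the correction, which is the rationale for filtering rather than subtracting the difference outright.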
Results: On all patient images, we compare errors in CT number, spatial non-uniformity (SNU), and image contrast, using the pCT as the ground truth. T-tests show that our algorithm improves over the Varian method on CBCT accuracies of CT number and SNU with 90% confidence. The average CT number error is reduced from 54.8 HU with the Varian method to 40.9 HU, and the SNU error is reduced from 7.7% to 3.8%. There is no obvious improvement in image contrast. Conclusion: Large-group patient studies show that the proposed pCT-based algorithm outperforms the Varian method of the Truebeam system on CBCT shading correction, by providing CBCT images with higher CT number accuracy and greater image uniformity. 5. Passive breath gating equipment for cone beam CT-guided RapidArc gastric cancer treatments. PubMed Hu, Weigang; Li, Guichao; Ye, Jinsong; Wang, Jiazhou; Peng, Jiayuan; Gong, Min; Yu, Xiaoli; Studentski, Matthew T; Xiao, Ying; Zhang, Zhen 2015-01-01 To report preliminary results of passive breath gating (PBG) equipment for cone-beam CT image-guided gated RapidArc gastric cancer treatments. Home-developed PBG equipment integrated with the real-time position management system (RPM) for passive patient breath hold was used in CT simulation, online partial breath hold (PBH) CBCT acquisition, and breath-hold gating (BHG) RapidArc delivery. The treatment was discontinuously delivered with the beam on during breath hold and the beam off during free breathing (FB). Pretreatment verification PBH CBCT was obtained with the PBG-RPM system. Additionally, the reproducibility of the gating accuracy was evaluated. A total of 375 fractions of breath-hold gating RapidArc treatments were successfully delivered and 233 PBH CBCTs were available for analysis. The PBH CBCT images were acquired with 2-3 breath holds and 1-2 FB breaks. The imaging time was the same for PBH CBCT and conventional FB CBCT (60 s). Compared to FB CBCT, the motion artifacts seen in PBH CBCT images were remarkably reduced.
The average BHG RapidArc delivery time was 103 s for one 270-degree arc and 269 s for two full arcs. The PBG-RPM-based PBH CBCT verification and BHG RapidArc delivery were successfully implemented clinically. The BHG RapidArc treatment was accomplished using a conventional RapidArc machine with high delivery efficiency. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved. 6. Individualized volume CT dose index determined by cross-sectional area and mean density of the body to achieve uniform image noise of contrast-enhanced pediatric chest CT obtained at variable kV levels and with combined tube current modulation. PubMed Goo, Hyun Woo 2011-07-01 A practical body-size adaptive protocol providing uniform image noise at various kV levels is not available for pediatric CT. To develop a practical contrast-enhanced pediatric chest CT protocol providing uniform image noise by using an individualized volume CT dose index (CTDIvol) determined by the cross-sectional area and density of the body at variable kV levels and with combined tube current modulation. A total of 137 patients (mean age, 7.6 years) underwent contrast-enhanced pediatric chest CT based on body weight. From the CTDIvol, image noise, and area and mean density of the cross-section at the lung base in the weight-based group, the best-fit equation was estimated with a very high correlation coefficient (r² = 0.86, P < 0.001). For the next study, 177 patients (mean age, 7.9 years; the CTDIvol group) underwent contrast-enhanced pediatric chest CT with the CTDIvol determined individually by the best-fit equation. CTDIvol values on the dose report after CT scanning, noise differences from the target noise, areas, and mean densities were compared between these two groups.
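A best-fit equation of the kind described above can be obtained by ordinary least squares; the log-linear model form and every number below are hypothetical, not the published fit:

```python
import numpy as np

# Hypothetical model: ln(CTDIvol) = a*area + b*density + c at a fixed target noise
rng = np.random.default_rng(0)
area = rng.uniform(100.0, 500.0, 137)       # cm^2, cross-section at lung base
density = rng.uniform(-400.0, -100.0, 137)  # HU, mean cross-sectional density
a_true, b_true, c_true = 0.004, 0.003, 0.5
ctdi = np.exp(a_true * area + b_true * density + c_true)  # synthetic mGy values

# Ordinary least squares on the log-transformed model
A = np.column_stack([area, density, np.ones_like(area)])
coef, *_ = np.linalg.lstsq(A, np.log(ctdi), rcond=None)
predicted_ctdi = np.exp(A @ coef)
```

Once fitted, such an equation yields an individualized CTDIvol from just the cross-sectional area and mean density of a new patient, which is the workflow the study describes.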
The CTDIvol values (mean ± standard deviation, 1.6 ± 0.7 mGy) and the noise differences from the target noise (1.1 ± 0.9 HU) of the CTDIvol group were significantly lower than those of the weight-based group (2.0 ± 1.0 mGy, 1.8 ± 1.4 HU) (P < 0.001). In contrast, no statistically significant difference was found in area (317.0 ± 136.8 cm² vs. 326.3 ± 124.8 cm²), mean density (-212.9 ± 53.1 HU vs. -221.1 ± 56.3 HU), and image noise (13.8 ± 2.3 vs. 13.6 ± 1.7 HU) between the weight-based and the CTDIvol groups (P > 0.05). Contrast-enhanced pediatric chest CT with the CTDIvol determined individually by the cross-sectional area and density of the body provides more uniform noise and better dose adaptation to body habitus than does weight-based CT at variable kV levels and with combined tube current modulation. 7. SimDoseCT: dose reporting software based on Monte Carlo simulation for a 320 detector-row cone-beam CT scanner and ICRP computational adult phantoms NASA Astrophysics Data System (ADS) Cros, Maria; Joemai, Raoul M. S.; Geleijns, Jacob; Molina, Diego; Salvadó, Marçal 2017-08-01 This study aims to develop and test software for assessing and reporting doses for standard patients undergoing computed tomography (CT) examinations in a 320 detector-row cone-beam scanner. The software, called SimDoseCT, is based on the Monte Carlo (MC) simulation code, which was developed to calculate organ doses and effective doses in ICRP anthropomorphic adult reference computational phantoms for acquisitions with the Aquilion ONE CT scanner (Toshiba). MC simulation was validated by comparing CTDI measurements within standard CT dose phantoms with results from simulation under the same conditions.
SimDoseCT consists of a graphical user interface connected to a MySQL database, which contains the look-up-tables that were generated with MC simulations for volumetric acquisitions at different scan positions along the phantom using any tube voltage, bow tie filter, focal spot and nine different beam widths. Two different methods were developed to estimate organ doses and effective doses from acquisitions using other available beam widths in the scanner. A correction factor was used to estimate doses in helical acquisitions. Hence, the user can select any available protocol in the Aquilion ONE scanner for a standard adult male or female and obtain the dose results through the software interface. Agreement within 9% between CTDI measurements and simulations allowed the validation of the MC program. Additionally, the algorithm for dose reporting in SimDoseCT was validated by comparing dose results from this tool with those obtained from MC simulations for three volumetric acquisitions (head, thorax and abdomen). The comparison was repeated using eight different collimations and also for another collimation in a helical abdomen examination. The results showed differences of 0.1 mSv or less for absolute dose in most organs and also in the effective dose calculation. The software provides a suitable tool for dose assessment in standard adult patients undergoing CT 8. SimDoseCT: dose reporting software based on Monte Carlo simulation for a 320 detector-row cone-beam CT scanner and ICRP computational adult phantoms. PubMed Cros, Maria; Joemai, Raoul M S; Geleijns, Jacob; Molina, Diego; Salvadó, Marçal 2017-07-17 This study aims to develop and test software for assessing and reporting doses for standard patients undergoing computed tomography (CT) examinations in a 320 detector-row cone-beam scanner. 
The software, called SimDoseCT, is based on the Monte Carlo (MC) simulation code, which was developed to calculate organ doses and effective doses in ICRP anthropomorphic adult reference computational phantoms for acquisitions with the Aquilion ONE CT scanner (Toshiba). MC simulation was validated by comparing CTDI measurements within standard CT dose phantoms with results from simulation under the same conditions. SimDoseCT consists of a graphical user interface connected to a MySQL database, which contains the look-up-tables that were generated with MC simulations for volumetric acquisitions at different scan positions along the phantom using any tube voltage, bow tie filter, focal spot and nine different beam widths. Two different methods were developed to estimate organ doses and effective doses from acquisitions using other available beam widths in the scanner. A correction factor was used to estimate doses in helical acquisitions. Hence, the user can select any available protocol in the Aquilion ONE scanner for a standard adult male or female and obtain the dose results through the software interface. Agreement within 9% between CTDI measurements and simulations allowed the validation of the MC program. Additionally, the algorithm for dose reporting in SimDoseCT was validated by comparing dose results from this tool with those obtained from MC simulations for three volumetric acquisitions (head, thorax and abdomen). The comparison was repeated using eight different collimations and also for another collimation in a helical abdomen examination. The results showed differences of 0.1 mSv or less for absolute dose in most organs and also in the effective dose calculation. The software provides a suitable tool for dose assessment in standard adult patients undergoing CT 9. 
Characterization and correction of cupping effect artefacts in cone beam CT PubMed Central Hunter, AK; McDavid, WD 2012-01-01 Objective The purpose of this study was to demonstrate and correct the cupping effect artefact that occurs owing to the presence of beam hardening and scatter radiation during image acquisition in cone beam CT (CBCT). Methods A uniform aluminium cylinder (6061) was used to demonstrate the cupping effect artefact on the Planmeca Promax 3D CBCT unit (Planmeca OY, Helsinki, Finland). The cupping effect was studied using a line profile plot of the grey level values using ImageJ software (National Institutes of Health, Bethesda, MD). A hardware-based correction method using copper pre-filtration was used to address this artefact caused by beam hardening and a software-based subtraction algorithm was used to address scatter contamination. Results The hardware-based correction used to address the effects of beam hardening suppressed the cupping effect artefact but did not eliminate it. The software-based correction used to address the effects of scatter resulted in elimination of the cupping effect artefact. Conclusion Compensating for the presence of beam hardening and scatter radiation improves grey level uniformity in CBCT. PMID:22378754 10. Characterization and correction of cupping effect artefacts in cone beam CT. PubMed Hunter, A K; McDavid, W D 2012-03-01 The purpose of this study was to demonstrate and correct the cupping effect artefact that occurs owing to the presence of beam hardening and scatter radiation during image acquisition in cone beam CT (CBCT). A uniform aluminium cylinder (6061) was used to demonstrate the cupping effect artefact on the Planmeca Promax 3D CBCT unit (Planmeca OY, Helsinki, Finland). The cupping effect was studied using a line profile plot of the grey level values using ImageJ software (National Institutes of Health, Bethesda, MD). 
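A cupping metric derived from such a line profile can be sketched as the relative dip of the central grey value below the near-edge values; this is a generic illustration, not the ImageJ workflow used in the study:

```python
import numpy as np

def cupping_percent(profile, edge_margin=10):
    """Quantify cupping as the percentage drop of the central grey value
    relative to the mean grey value near the two profile edges."""
    profile = np.asarray(profile, dtype=float)
    edges = np.r_[profile[:edge_margin], profile[-edge_margin:]]
    centre = profile[profile.size // 2]
    return 100.0 * (edges.mean() - centre) / edges.mean()

# Synthetic line profile across a uniform cylinder with a parabolic "cup"
x = np.linspace(-1.0, 1.0, 201)
cupped = 1000.0 - 150.0 * (1.0 - x ** 2)   # centre depressed, edges near 1000
flat = np.full_like(x, 1000.0)             # ideal artefact-free profile
```

A perfectly uniform phantom should score near zero; residual positive values after a correction indicate the artefact was suppressed but not eliminated, as reported for the copper pre-filtration result.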
A hardware-based correction method using copper pre-filtration was used to address this artefact caused by beam hardening and a software-based subtraction algorithm was used to address scatter contamination. The hardware-based correction used to address the effects of beam hardening suppressed the cupping effect artefact but did not eliminate it. The software-based correction used to address the effects of scatter resulted in elimination of the cupping effect artefact. Compensating for the presence of beam hardening and scatter radiation improves grey level uniformity in CBCT. 11. Evaluation of flat panel detector cone beam CT breast imaging with different sizes of breast phantoms NASA Astrophysics Data System (ADS) Ning, Ruola; Conover, David; Lu, Xianghua; Zhang, Yan; Yu, Yong; Schiffhauer, Linda; Cullinan, Jeanne 2005-04-01 The sensitivity to detect small breast cancers and the specificity of conventional mammography (CM) remain limited owing to an overlap in the appearances of lesions and surrounding structure. We propose to address the limitations accompanying CM using flat panel detector (FPD)-based cone beam CT breast imaging (CBCTBI). The purpose of the study is to determine optimal x-ray operation ranges for different sizes of normal breasts and corresponding glandular dose levels. The current CBCT prototype consists of a modified GE HighSpeed Advantage CT gantry, an x-ray tube, a Varian PaxScan 4030CB FPD, a CT table and a PC. Two uncompressed breast phantoms, with diameters of 10.8 and 13.8 cm, consist of three inserts: a layer of silicone gel simulating a background structure, a lucite plate on which five simulated carcinomas are mounted, and a plate on which six calcifications are attached. With a single scan, 300 projections were acquired for all phantom scans. The optimal x-ray techniques for different phantom sizes were determined. The total mean glandular doses for different size phantoms were measured using a CT pencil ionization chamber.
With the optimal x-ray techniques that result in the maximal dose efficiency for the different tissue thicknesses, the image quality with the two different phantoms was evaluated. The results demonstrate that the CBCTBI can detect simulated carcinomas a few millimeters in size and calcifications of ~0.2 mm with clinically acceptable mean glandular doses for different size breasts. 12. [Periinterventional cone-beam CT: application in transarterial chemoembolization of liver tumors]. PubMed Adamus, R; Uder, M; Wilhelm, M; Loose, R W 2011-07-01 Periinterventional cone-beam CT (CBCT) is today a valuable tool in complex radiological interventions, but little experience exists with CBCT in transarterial chemoembolisation (TACE) of liver tumors. 25 patients underwent periinterventional CBCT. We used a C-arm DSA system with a 30 × 40 cm flat-panel detector. Image data with axial, coronal, and 3D reconstructions were acquired over a 217° rotation in 8 seconds. In all 25 cases CBCT influenced the TACE with regard to the decision which vessels to catheterize, the amount of embolisation agent retained, or abortion of the procedure because of insufficient vascularisation. In comparison with DSA alone, CBCT allows better visualisation of tumour vessels, simplifies selective catheterisation and the decision whether embolisation is possible, and enables good visualisation of Lipiodol retention. Hence, CBCT is a helpful periinterventional tool but cannot substitute for CT and MRI in follow-up. 13. Implant planning and placement using optical scanning and cone beam CT technology. PubMed van der Zel, Jef M 2008-08-01 There is a growing interest in minimally invasive implant therapy as a standard prosthodontic treatment, providing complete restoration of occlusal function. A new treatment method (CADDIMA), which combines both computerized tomographic (CT) and optical laser-scan data for planning and design of surgical guides, implant abutments, and prosthetic devices, is described.
Imaging using a "NewTom 3G" cone beam CT scanner and a modified laser triangulation scanner "D200c" is discussed, as are impression and surgical guide fabrication, which allow for flapless, precise implant placement and an accurate provisional prosthesis. The new approach gives the operator full control over the design of the implant prosthesis for planning of proper occlusal relations and shows promise for further evaluation. 14. Scattered radiation in flat-detector based cone-beam CT: analysis of voxelized patient simulations NASA Astrophysics Data System (ADS) Wiegert, Jens; Bertram, Matthias 2006-03-01 This paper presents a systematic assessment of scattered radiation in flat-detector based cone-beam CT. The analysis is based on simulated scatter projections of voxelized CT images of different body regions, allowing accurate quantification of scattered radiation for realistic and clinically relevant patient geometries. Using analytically computed primary projection data of high spatial resolution in combination with Monte-Carlo simulated scattered radiation, practically noise-free reference data sets are computed with and without inclusion of scatter. The impact of scatter is studied both in the projection data and in the reconstructed volume for the head, thorax, and pelvis regions. Currently available anti-scatter grid geometries do not sufficiently compensate for scatter-induced cupping and streak artifacts, requiring additional software-based scatter correction. The required accuracy of scatter compensation approaches increases with increasing patient size. 15. Motion compensation for cone-beam CT using Fourier consistency conditions NASA Astrophysics Data System (ADS) Berger, M.; Xia, Y.; Aichinger, W.; Mentl, K.; Unberath, M.; Aichert, A.; Riess, C.; Hornegger, J.; Fahrig, R.; Maier, A. 2017-09-01 In cone-beam CT, involuntary patient motion and inaccurate or irreproducible scanner motion substantially degrade image quality.
To avoid artifacts, this motion needs to be estimated and compensated during image reconstruction. In previous work we showed that Fourier consistency conditions (FCC) can be used in fan-beam CT to estimate motion in the sinogram domain. This work extends the FCC to 3D cone-beam CT. We derive an efficient cost function to compensate for 3D motion using 2D detector translations. The extended FCC method has been tested with five translational motion patterns, using a challenging numerical phantom. We evaluated the root-mean-square error and the structural-similarity index between motion-corrected and motion-free reconstructions. Additionally, we computed the mean absolute difference (MAD) between the estimated and the ground-truth motion. The practical applicability of the method is demonstrated by application to respiratory motion estimation in rotational angiography, and to motion correction for weight-bearing imaging of knees, where the latter makes use of a specifically modified FCC version which is robust to axial truncation. The results show a great reduction of motion artifacts. Accurate estimation results were achieved with a maximum MAD value of 708 μm and 1184 μm for motion along the vertical and horizontal detector direction, respectively. The image quality of reconstructions obtained with the proposed method is close to that of motion-corrected reconstructions based on the ground-truth motion. Simulations using noise-free and noisy data demonstrate that FCC are robust to noise. Even high-frequency motion was accurately estimated, leading to a considerable reduction of streaking artifacts. The method is purely image-based and therefore independent of any auxiliary data. 16.
SU-E-J-43: Deformed Planning CT as An Electron Density Substitute for Cone-Beam CT SciTech Connect Mishra, K; Godley, A 2014-06-01 Purpose: To confirm that deforming the planning CT to the daily Cone-Beam CTs (CBCT) can provide suitable electron density for adaptive planning. We quantify the dosimetric difference between plans calculated on deformed planning CTs (DPCT) and daily CT-on-rails images (CTOR). CTOR is used as a test of the method, as CTOR already contains accurate electron density to compare against. Methods: Five prostate-only IMRT patients, each with five CTOR images, were selected and re-planned on Panther (Prowess Inc.) with a uniform 5 mm PTV expansion, prescribed 78 Gy. The planning CT was deformed to match each CTOR using ABAS (Elekta Inc.). Contours were drawn on the CTOR and copied to the DPCT. The original treatment plan was copied to both the CTOR and DPCT, keeping the center of the prostate as the isocenter. The plans were then calculated using the collapsed cone heterogeneous dose engine of Prowess, and typical DVH planning parameters were used to compare them. Results: Each DPCT was visually compared to its CTOR with no differences observed. The agreement of the copied CTOR contours with the DPCT anatomy further demonstrated the deformation accuracy. The plans calculated using CTOR and DPCT were compared. Over the 25 plan pairs, the average differences between them for prostate D100, D98 and D95 were 0.5%, 0.2%, and 0.2%; PTV D98, D95 and mean dose: 0.3%, 0.2% and 0.3%; bladder V70, V60 and mean dose: 1.1%, 0.7%, and 0.2%; and rectum mean dose: 0.3%. (D100 is the dose covering 100% of the target; V70 is the volume of the organ receiving 70 Gy). Conclusion: We observe negligible difference between the dose calculated on the DPCT and the CTOR, implying that deformed planning CTs are a suitable electron density substitute. The method can now be applied to CBCTs. Research version of Panther provided by Prowess Inc.
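The DVH metrics quoted in this abstract (D100, V70, etc.) follow the standard definitions given in the parenthetical note. As a generic illustration of those definitions (not the Prowess implementation; the dose values below are invented), Dxx and Vxx can be computed from a structure's voxel dose array like this:

```python
import numpy as np

def d_xx(dose, xx):
    """Dose received by at least xx% of the structure (e.g. D98, D100)."""
    d = np.sort(np.asarray(dose, dtype=float))[::-1]   # hottest voxels first
    n = max(1, int(np.ceil(xx / 100.0 * d.size)))      # voxels making up xx% of volume
    return d[n - 1]

def v_xx(dose, threshold, voxel_volume_cc=1.0):
    """Volume (cc) of the structure receiving at least `threshold` Gy."""
    return np.count_nonzero(np.asarray(dose, dtype=float) >= threshold) * voxel_volume_cc

# toy structure: five 1 cc voxels
dose = np.array([80.0, 79.0, 78.0, 60.0, 50.0])
print(d_xx(dose, 100))   # 50.0 -- minimum dose anywhere in the structure
print(v_xx(dose, 70.0))  # 3.0  -- cc receiving at least 70 Gy
```

Clinical systems interpolate cumulative DVHs rather than ranking raw voxels, so values may differ slightly from this discrete sketch.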
Research version of ABAS provided by Elekta Inc. 17. Development of high-resolution x-ray CT system using parallel beam geometry SciTech Connect Yoneyama, Akio; Baba, Rika; Hyodo, Kazuyuki; Takeda, Tohoru; Nakano, Haruhisa; Maki, Koutaro; Sumitani, Kazushi; Hirai, Yasuharu 2016-01-28 For fine three-dimensional observations of large biomedical and organic material samples, we developed a high-resolution X-ray CT system. The system consists of a sample positioner, a 5-μm scintillator, microscopy lenses, and a water-cooled sCMOS detector. Parallel beam geometry was adopted to attain a field of view of a few mm square. A fine three-dimensional image of a birch branch was obtained using a 9-keV X-ray at BL16XU of SPring-8 in Japan. The spatial resolution estimated from the line profile of a sectional image was about 3 μm. 18. Automated volume of interest delineation and rendering of cone beam CT images in interventional cardiology NASA Astrophysics Data System (ADS) Lorenz, Cristian; Schäfer, Dirk; Eshuis, Peter; Carroll, John; Grass, Michael 2012-02-01 Interventional C-arm systems allow the efficient acquisition of 3D cone beam CT images. They can be used for intervention planning, navigation, and outcome assessment. We present a fast and completely automated volume of interest (VOI) delineation for cardiac interventions, covering the whole visceral cavity including mediastinum and lungs but leaving out rib cage and spine. The problem is addressed in a model-based approach. The procedure has been evaluated on 22 patient cases and achieves an average surface error below 2 mm. The method is able to cope with varying image intensities, varying truncations due to the limited reconstruction volume, and partially with heavy metal and motion artifacts. 19.
Local filtration based scatter correction for cone-beam CT using primary modulation. PubMed Zhu, Lei 2016-11-01 Excessive scatter contamination fundamentally limits the image quality of cone-beam CT (CBCT), hindering its quantitative use in clinical applications. The author has previously proposed an effective scatter correction method for CBCT using primary modulation. A Fourier transform-based algorithm (FTPM) was implemented to estimate scatter from modulated projections, with a few limitations including the assumption of uniform modulation frequency and magnitude that becomes less accurate in the presence of beam-hardening and other nonideal effects. This paper aims to overcome the above drawbacks by developing a new algorithm for the primary modulation method with improved accuracy and reliability. Incident x-ray intensities for each detector pixel with and without the interception of the modulator blocker are estimated from a modulated flat-field image. A new signal relationship is then developed to obtain a first scatter estimate from a modulated projection using a spatially varying modulation distribution.
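The core signal relationship behind primary modulation can be illustrated with a toy model: if a detector pixel's primary signal is attenuated by a known transmission factor c under a modulator blocker while the (smooth) scatter is unchanged, primary and scatter separate algebraically from the blocked/unblocked pair. This is a deliberately simplified two-exposure sketch of the idea, not Zhu's LFPM algorithm; the transmission factor, the signals, and the per-pixel pairing are all assumptions:

```python
import numpy as np

# Toy detector row: each pixel is considered once unblocked and once under a
# modulator blocker with known primary transmission c; the scatter s is
# assumed unchanged between the two states (all values here are invented).
p_true = np.array([100.0, 90.0, 80.0, 70.0])  # primary signal
s_true = np.array([30.0, 30.0, 29.0, 29.0])   # smooth scatter signal
c = 0.7                                        # assumed blocker transmission

m_unblocked = p_true + s_true       # measurement without the blocker
m_blocked = c * p_true + s_true     # measurement with the blocker

# Solve the two-equation system per pixel:
#   m_unblocked = p + s,   m_blocked = c*p + s
p_est = (m_unblocked - m_blocked) / (1.0 - c)
s_est = m_unblocked - p_est
```

In the real method a single modulated projection is acquired; the blocked/unblocked information comes from the checkerboard modulator pattern together with the smoothness of scatter across neighbouring pixels, so the algebra above only isolates the underlying signal model.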
The method empirically adjusts the effective modulation magnitude for each projection ray to account for the beam-hardening effects. Estimated scatter signals with high expected errors are discarded in the generation of the final scatter distribution. The author proposes a technique of local filtration to accelerate major portions of the signal processing, and the new algorithm is referred to as local filtration based primary modulation (LFPM). The study on the Catphan® 600 phantom shows that LFPM effectively removes scatter-induced cupping artifacts on CBCT images and reduces the CT image error from 222 to 15 HU. In addition, the image contrast on eight contrast rods of the phantom is enhanced by a factor of 2 on average. On an anthropomorphic head phantom, LFPM reduces the CT image error from 153 to 18 HU and eliminates the streak artifacts observed on the result of FTPM with substantially improved image uniformity. On the Rando® phantom, LFPM reduces the CT 1. Investigation of uncertainties in image registration of cone beam CT to CT on an image-guided radiotherapy system. PubMed Sykes, J R; Brettle, D S; Magee, D R; Thwaites, D I 2009-12-21 Methods of measuring uncertainties in rigid body image registration of fan beam computed tomography (FBCT) to cone beam CT (CBCT) have been developed for automatic image registration algorithms in a commercial image guidance system (Synergy, Elekta, UK). The relationships between image registration uncertainty and both imaging dose and image resolution have been investigated with an anthropomorphic skull phantom and further measurements performed with patient images of the head. A new metric of target registration error is proposed. The metric calculates the mean distance traversed by a set of equi-spaced points on the surface of a 5 cm sphere, centred at the isocentre when transformed by the residual error of registration. 
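The sphere-based registration error metric described above lends itself to a compact sketch: generate approximately equi-spaced points on a sphere centred at the isocentre, apply the residual rigid transform, and average the point displacements. The sketch below is illustrative only; the Fibonacci-lattice point generation and the 25 mm radius (reading the "5 cm sphere" as a diameter) are assumptions:

```python
import numpy as np

def sphere_points(n, radius):
    """Approximately equi-spaced points on a sphere (Fibonacci lattice)."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i          # golden-angle increments
    z = 1.0 - 2.0 * (i + 0.5) / n
    r = np.sqrt(1.0 - z * z)
    return radius * np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

def mean_tre(rotation, translation, radius=25.0, n=1000):
    """Mean distance traversed by sphere-surface points under a residual rigid transform."""
    pts = sphere_points(n, radius)
    moved = pts @ rotation.T + translation
    return float(np.mean(np.linalg.norm(moved - pts, axis=1)))

# a pure 0.5 mm translational residual moves every surface point by exactly 0.5 mm
print(mean_tre(np.eye(3), np.array([0.5, 0.0, 0.0])))  # 0.5
```

For a pure translation the metric reduces to the translation magnitude; its value over point-wise errors is that it also converts residual rotations into a millimetre-scale displacement at a clinically relevant distance from the isocentre.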
Studies aimed at giving practical guidance on the use of the Synergy automated image registration, including choice of algorithm and use of the Clipbox, are reported. The chamfer-matching algorithm was found to be highly robust to the increased noise induced by low-dose acquisitions. This would allow the imaging dose to be reduced from the current clinical norm of 2 mGy to 0.2 mGy without a clinically significant loss of accuracy. A study of the effect of FBCT slice thickness/spacing and CBCT voxel size showed that 2.5 mm and 1 mm, respectively, gave acceptable image registration performance. Registration failures were highly infrequent if the misalignment was typical of normal clinical set-up errors, and these were easily identified. The standard deviation of translational registration errors, measured with patient images, was 0.5 mm on the surface of a 5 cm sphere centred on the treatment centre. The chamfer algorithm is suitable for routine clinical use with minimal need for close inspection of image misalignment. 3. Improving image accuracy of region-of-interest in cone-beam CT using prior image. PubMed Lee, Jiseoc; Kim, Jin Sung; Cho, Seungryong 2014-03-06 In diagnostic follow-ups of diseases, such as calcium scoring in kidney or fat content assessment in liver using repeated CT scans, quantitatively accurate and consistent CT values are desirable at a low cost of radiation dose to the patient. The region-of-interest (ROI) imaging technique is considered a reasonable dose reduction method in CT scans for its shielding geometry outside the ROI. However, image artifacts in the reconstructed images caused by missing data outside the ROI may degrade overall image quality and, more importantly, can decrease image accuracy of the ROI substantially. In this study, we propose a method to increase image accuracy of the ROI and to reduce imaging radiation dose by utilizing the outside-ROI data from prior scans in repeated CT applications.
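The essence of the proposed prior-data substitution can be sketched in projection space: keep the measured data for detector bins inside the unshielded ROI, and fill the shielded region with the corresponding prior-scan projections before reconstruction. A minimal sketch with synthetic sinograms (the array shapes, mask, and data are all invented; the actual method works with real repeated-scan projections):

```python
import numpy as np

n_views, n_bins = 180, 256
rng = np.random.default_rng(0)
prior = rng.random((n_views, n_bins))    # sinogram from the prior scan
current = prior + 0.05                   # today's (slightly changed) anatomy

roi = np.zeros(n_bins, dtype=bool)
roi[96:160] = True                       # detector bins seeing the unshielded ROI

measured = np.where(roi, current, 0.0)       # collimation zeroes data outside the ROI
completed = np.where(roi, measured, prior)   # fill the missing region from the prior
```

The completed sinogram can then be fed to a standard reconstruction; the alternative mentioned in the abstract, extrapolating the truncated data, has no access to the true outside-ROI anatomy and therefore biases the CT values inside the ROI.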
We performed both numerical and experimental studies to validate our proposed method. In a numerical study, we used an XCAT phantom with its liver and stomach changing their sizes from one scan to another. Image accuracy of the liver was improved by the proposed method, with the error decreasing from 44.4 HU to -0.1 HU, compared to an existing method of data extrapolation to compensate for the missing data outside the ROI. Repeated cone-beam CT (CBCT) images of a patient who went through daily CBCT scans for radiation therapy were also used to demonstrate the performance of the proposed method experimentally. The results showed improved image accuracy inside the ROI. The magnitude of error decreased from -73.2 HU to 18 HU, and image artifacts were effectively reduced throughout the entire image. 4. Design and optimization of a dedicated cone-beam CT system for musculoskeletal extremities imaging NASA Astrophysics Data System (ADS) Zbijewski, W.; De Jean, P.; Prakash, P.; Ding, Y.; Stayman, J. W.; Packard, N.; Senn, R.; Yang, D.; Yorkston, J.; Machado, A.; Carrino, J. A.; Siewerdsen, J. H. 2011-03-01 The design, initial imaging performance, and model-based optimization of a dedicated cone-beam CT (CBCT) scanner for musculoskeletal extremities is presented. The system offers a compact scanner that complements conventional CT and MR by providing sub-mm isotropic spatial resolution, the ability to image weight-bearing extremities, and the capability for integrated real-time fluoroscopy and digital radiography. The scanner employs a flat-panel detector and a fixed-anode x-ray source and has a field of view of ~(20×20×20) cm³. The gantry allows a "standing" configuration for imaging of weight-bearing lower extremities and a "sitting" configuration for imaging of upper extremities and unloaded lower extremities.
Cascaded systems analysis guided the selection of x-ray technique (e.g., kVp, filtration, and dose) and system design (e.g., magnification factor), yielding input-quantum-limited performance at a detector signal of 100 times the electronic noise, while maintaining patient dose below 5 mGy (a factor of ~2-3 less than conventional CT). A magnification of 1.3 optimized tradeoffs between source and detector blur for a 0.5 mm focal spot. A custom antiscatter grid demonstrated significant reduction of artifacts without loss of contrast-to-noise ratio or increase in dose. Image quality in cadaveric specimens was assessed on a CBCT bench, demonstrating exquisite bone detail, visualization of intra-articular morphology, and soft-tissue visibility approaching that of diagnostic CT. The capability to image loaded extremities and conduct multi-modality CBCT/fluoroscopy with improved workflow compared to whole-body CT could be of value in a broad spectrum of applications, including orthopaedics, rheumatology, surgical planning, and treatment assessment. A clinical prototype has been constructed for deployment in pilot study trials. 5. SU-E-T-161: Evaluation of Dose Calculation Based On Cone-Beam CT SciTech Connect Abe, T; Nakazawa, T; Saitou, Y; Nakata, A; Yano, M; Tateoka, K; Fujimoto, K; Sakata, K 2014-06-01 Purpose: The purpose of this study is to convert pixel values in cone-beam CT (CBCT) using histograms of pixel values in the simulation CT (sim-CT) and the CBCT images, and to evaluate the accuracy of dose calculation based on the CBCT. Methods: The sim-CT and CBCT images immediately before the treatment of 10 prostate cancer patients were acquired. Because of insufficient calibration of the pixel values in the CBCT, they are difficult to use directly for dose calculation. The pixel values in the CBCT images were converted using an in-house program.
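The in-house conversion program is not described in detail; quantile-based histogram matching is one common way to map CBCT pixel values onto a sim-CT value distribution, so the following is a generic sketch of that approach (an assumption for illustration, not the authors' code):

```python
import numpy as np

def histogram_match(cbct, sim_ct):
    """Map CBCT pixel values onto the sim-CT distribution by quantile matching."""
    flat = cbct.ravel()
    order = np.argsort(flat)                  # rank of each CBCT pixel
    ref = np.sort(sim_ct.ravel())             # sim-CT values in ascending order
    # assign each CBCT pixel the sim-CT value of equal quantile
    ranks = np.linspace(0, ref.size - 1, flat.size).round().astype(int)
    matched = np.empty_like(flat, dtype=float)
    matched[order] = ref[ranks]
    return matched.reshape(cbct.shape)
```

Quantile matching forces the converted CBCT histogram onto the sim-CT histogram; a practical implementation would likely match per tissue class or per region, since scatter in CBCT shifts different tissues by different amounts.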
Seven-field treatment plans (original plans) created on the sim-CT images were applied to the CBCT images, and the dose distributions were re-calculated with the same monitor units (MUs). These prescription doses were compared with those of the original plans. Results: After the pixel value conversion in the CBCT images, the mean differences of pixel values for the prostate, subcutaneous adipose tissue, muscle and right femur were −10.78 ± 34.60, 11.78 ± 41.06, 29.49 ± 36.99 and 0.14 ± 31.15, respectively. For the calculated doses, the mean differences of prescription doses for the 7 fields were 4.13 ± 0.95%, 0.34 ± 0.86%, −0.05 ± 0.55%, 1.35 ± 0.98%, 1.77 ± 0.56%, 0.89 ± 0.69% and 1.69 ± 0.71%, respectively, and as a whole, the difference of prescription dose was 1.54 ± 0.4%. Conclusion: The dose calculation on the CBCT images achieves an accuracy of <2% when using this pixel value conversion program. This may enable implementation of efficient adaptive radiotherapy. 6. Optimal slice thickness for cone-beam CT with on-board imager PubMed Central Seet, KYT; Barghi, A; Yartsev, S; Van Dyk, J 2010-01-01 Purpose: To find the optimal slice thickness (Δτ) setting for patient registration with kilovoltage cone-beam CT (kVCBCT) on the Varian On Board Imager (OBI) system by investigating the relationship of slice thickness to automatic registration accuracy and contrast-to-noise ratio. Materials and method: Automatic registration was performed on kVCBCT studies of the head and pelvis of a RANDO anthropomorphic phantom. Images were reconstructed with 1.0 ≤ Δτ (mm) ≤ 5.0 at 1.0 mm increments. The phantoms were offset by a known amount, and the suggested shifts were compared to the known shifts by calculating the residual error. A uniform cylindrical phantom with cylindrical inserts of various known CT numbers was scanned with kVCBCT at 1.0 ≤ Δτ (mm) ≤ 5.0 at increments of 0.5 mm. The contrast-to-noise ratios for the inserts were measured at each Δτ.
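Contrast-to-noise ratio for an insert against a uniform background is conventionally computed as |mean_insert − mean_background| / σ_background. A minimal sketch with synthetic ROI samples (the formula is standard; the HU values and noise level are invented):

```python
import numpy as np

def cnr(insert_roi, background_roi):
    """Contrast-to-noise ratio of an insert ROI against a background ROI."""
    insert_roi = np.asarray(insert_roi, dtype=float)
    background_roi = np.asarray(background_roi, dtype=float)
    contrast = abs(insert_roi.mean() - background_roi.mean())
    return contrast / background_roi.std()

rng = np.random.default_rng(1)
background = rng.normal(0.0, 10.0, 5000)   # water-equivalent background, sigma = 10 HU
insert = rng.normal(50.0, 10.0, 5000)      # insert at +50 HU, same noise
print(round(cnr(insert, background), 1))   # close to 5
```

Averaging over thicker slices reduces σ_background, which is why CNR in the study grows with Δτ until other effects (partial volume at 2.5 mm here) take over.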
Results: For the planning CT slice thickness used in this study, there was no significant difference in residual error below a threshold equal to the planning CT slice thickness. For Δτ > 3.0 mm, residual error increased for both the head and pelvis phantom studies. The contrast-to-noise ratio is proportional to slice thickness until Δτ = 2.5 mm. Beyond this point, the contrast-to-noise ratio was not affected by Δτ. Conclusion: Automatic registration accuracy is greatest when 1.0 ≤ Δτ (mm) ≤ 3.0 is used. Contrast-to-noise ratio is optimal for the 2.5 ≤ Δτ (mm) ≤ 5.0 range. Therefore 2.5 ≤ Δτ (mm) ≤ 3.0 is recommended for kVCBCT patient registration where the planning CT slice thickness is 3.0 mm. PMID:21611047 7. Poster — Thur Eve — 10: Partial kV CBCT, complete kV CBCT and EPID in breast treatment: a dose comparison study for skin, breasts, heart and lungs SciTech Connect Roussin, E; Archambault, L K; Wierzbicki, W 2014-08-15 The advantages of kilovoltage cone beam CT (kV CBCT) imaging over electronic portal imaging devices (EPID), such as accurate 3D anatomy, soft-tissue visualization, fast rigid registration and enhanced precision in patient positioning, have led to its increasing use in clinics. The benefits of this imaging technique come at the cost of increased dose to surrounding healthy organs. Our center has moved toward the use of daily partial-rotation kV CBCT to restrict the dose to healthy tissues. This study aims to better quantify radiation doses from different image-guidance techniques such as tangential EPID, complete and partial kV CBCT for breast treatments. Cross-calibrated ionization chambers and kV-calibrated Gafchromic films were used to measure the dose to the heart, lungs, breasts and skin. It was found that performing partial kV CBCT decreases the heart dose by about 36%, the lung dose by 31%, the contralateral breast dose by 41% and the ipsilateral breast dose by 43% when compared to a full-rotation CBCT.
The skin dose measured for a full-rotation CBCT was about 0.8 cGy for the contralateral breast and about 0.3 cGy for the ipsilateral breast. The study is still ongoing, and results on skin doses for partial-rotation kV CBCT as well as for tangential EPID images are upcoming. 8. Quantitative evaluation of beam-hardening artefact correction in dual-energy CT myocardial perfusion imaging. PubMed Bucher, Andreas M; Wichmann, Julian L; Schoepf, U Joseph; Wolla, Christopher D; Canstein, Christian; McQuiston, Andrew D; Krazinski, Aleksander W; De Cecco, Carlo N; Meinel, Felix G; Vogl, Thomas J; Geyer, Lucas L 2016-09-01 To quantitatively assess the impact of a novel reconstruction algorithm ("kernel") with beam-hardening correction (BHC) on beam-hardening artefacts of the myocardium in dual-energy CT myocardial perfusion imaging (DE-CTMPI). Rest series of DE-CTMPI examinations from 14 patients were retrospectively analyzed. Six image series were reconstructed for each patient: a) 100 kV, b) 140 kV, and c) linearly blended MIX0.5, each with BHC (D33f kernel) and without (D30f kernel). Seven hundred and fifty-six myocardial regions were assessed. Seven equal regions of interest divided the myocardium in the axial section. Three subdivisions were created within these regions in areas prone to BHA. Reports of SPECT studies performed within 30 days of the CT examination were used to confirm the presence and location of true perfusion defects. A paired Student's t-test was used for statistical evaluation. Overall mean myocardial attenuation was lower using BHC (D30f: 87.3 ± 24.1 HU; D33f: 85.5 ± 21.5 HU; p = 0.009). Overall relative difference from average myocardial attenuation (RDMA) was more homogeneous using BHC (D30f: -0.3 ± 11.4 %; D33f: 0.1 ± 10.1 %; p < 0.001). Changes in RDMA were greatest in the posterobasal myocardium (D30f: -16.2 ± 10.0 %; D33f: 3.4 ± 10.7 %; p < 0.001).
A dedicated reconstruction algorithm with BHC can significantly reduce beam-hardening artefacts in DE-CTMPI. • Beam-hardening artefacts (BHA) cause interference with attenuation-based CT myocardial perfusion assessment (CTMPI). • BHA occur mostly in the posterobasal left ventricular wall. • Beam-hardening correction homogenized and decreased mean myocardial attenuation. • BHC can help avoid false-positive findings and increase the specificity of static CTMPI. 9. Enhancement of breast calcification visualization and detection using a modified PG method in Cone Beam Breast CT. PubMed Liu, Jiangkun; Ning, Ruola; Cai, Weixing; Benitez, Ricardo Betancourt 2012-01-01 Cone Beam Breast CT is a promising diagnostic modality in breast imaging. Its isotropic 3D spatial resolution enhances the characterization of micro-calcifications in breasts that might not be easily distinguishable in mammography. However, due to dose level considerations, it is beneficial to further enhance the visualization of calcifications in Cone Beam Breast CT images that might be masked by noise. In this work, the Papoulis-Gerchberg (PG) method was modified and implemented in Cone Beam Breast CT images to improve the visualization and detectability of calcifications. First, the PG method was modified and applied to the projections acquired during the scanning process; its effects on the reconstructed images were analyzed by measuring the Modulation Transfer Function and the Noise Power Spectrum. Second, Cone Beam Breast CT images acquired at different dose levels were pre-processed using this technique to enhance the visualization of calcifications. Finally, a computer-aided diagnostic algorithm was utilized to evaluate the efficacy of this method in improving calcification detectability. The results demonstrated that this technique can effectively improve image quality by improving the Modulation Transfer Function with a minor increase in noise level.
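The classical Papoulis-Gerchberg iteration alternates between band-limiting the signal in Fourier space and re-imposing the known samples. The paper's modification is not spelled out in the abstract, so the following is a generic 1D sketch of the unmodified iteration (the signal, gap, and band limit are all invented):

```python
import numpy as np

def pg_extrapolate(known, mask, band, n_iter=300):
    """Papoulis-Gerchberg: recover a band-limited signal from partial samples.

    known: signal values (valid where mask is True); mask: known-sample mask;
    band: boolean mask of retained FFT bins (the assumed band limit).
    """
    x = np.where(mask, known, 0.0)
    for _ in range(n_iter):
        x = np.fft.ifft(np.fft.fft(x) * band).real   # project onto band-limited set
        x = np.where(mask, known, x)                 # re-impose the known samples
    return x

# band-limited test signal with a gap of missing samples
n = 128
t = np.arange(n)
sig = np.cos(2 * np.pi * 3 * t / n) + 0.5 * np.sin(2 * np.pi * 5 * t / n)
mask = np.ones(n, dtype=bool)
mask[56:72] = False                                  # 16 missing samples
band = np.zeros(n, dtype=bool)
band[:8] = True
band[-7:] = True                                     # keep only low frequencies
rec = pg_extrapolate(sig, mask, band)
```

Each step is a projection onto a convex set containing the true signal, so the error is non-increasing; convergence slows as the gap grows relative to the assumed bandwidth, which is presumably part of what the authors' modification addresses.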
Consequently, the visualization and detectability of calcifications were improved in Cone Beam Breast CT images. This technique also proved useful in reducing the x-ray dose without degrading visualization and detectability of calcifications. 11.
Development of a quality control program for a cone beam CT imaging system NASA Astrophysics Data System (ADS) Betancourt Benítez, Ricardo; Ning, Ruola; Conover, David; Zhang, Yan; Cai, Weixing 2008-03-01 Routine quality control assessments of medical equipment are crucial for accurate patient medical treatment as well as for the safety of the patient and staff involved. These regular evaluations become especially important when dealing with radiation-emitting equipment. Therefore, a quality control (QC) program has been developed to quantitatively evaluate imaging systems by measuring standard parameters related to image quality such as the Modulation Transfer Function (MTF), the Noise Power Spectrum (NPS), uniformity, linearity and noise level, among others. First, the methods of evaluating the aforementioned parameters were investigated using a cone beam CT imaging system. Different exposure techniques, phantoms, acquisition modes of the flat panel detector (FPD) and reconstruction algorithms relevant to a clinical environment were all included in this investigation. Second, using the results of the first part of this study, a set of parameters for the QC program was established that yields both an accurate depiction of the system image quality and an integrated program for easy and practical implementation. Lastly, this QC program will be implemented and practiced on our cone beam CT imaging system. The results using our available phantoms demonstrate that the QC program is adequate to evaluate the stability and image quality of this system, since it provides parameters comparable to other QC programs. 12. Cone beam CT evaluation of the presence of anatomic accessory canals in the jaws PubMed Central Eshak, M; Brooks, S; Abdel-Wahed, N 2014-01-01 Objectives: To assess the prevalence, location and anatomical course of accessory canals of the jaws using cone beam CT.
Methods: A retrospective analysis of 4200 successive cone beam CT scans, for patients of both genders and ages ranging from 7 to 88 years, was performed. The scans had been acquired at the School of Dentistry, University of Michigan, Ann Arbor, MI. After applying the exclusion criteria (the presence of severe ridge resorption, pre-existing implants, a previously reported history of craniofacial malformations or syndromes, a previous history of trauma or surgery, inadequate image quality and subsequent scans from the same individuals), 4051 scans were ultimately included in this study. Results: Of the 4051 scans (2306 females and 1745 males) that qualified for inclusion in this study, accessory canals were identified in 1737 cases (42.9%; 1004 females and 733 males). Canals were found in the maxilla in 532 scans (13.1%; 296 females and 236 males) and in the mandible in 1205 scans (29.8%; 708 females and 497 males). Conclusions: A network of accessory canals bringing the inner and outer cortical plates of the jaws into communication was identified. In light of these findings, clinicians should carefully assess for the presence of accessory canals prior to any surgical intervention to decrease the risk of complications. PMID:24670010 13. Does cone beam CT actually ameliorate stab wound analysis in bone? PubMed Gaudio, D; Di Giancamillo, M; Gibelli, D; Galassi, A; Cerutti, E; Cattaneo, C 2014-01-01 This study aims at verifying the potential of a recent radiological technology, cone beam CT (CBCT), for producing digital 3D models which may allow the user to examine the inner morphology of sharp force wounds within the bone tissue. Several sharp force wounds were produced by both single and double cutting edge weapons on cancellous and cortical bone, and then acquired by cone beam CT scan. The lesions were analysed with different software (a DICOM file viewer and reverse engineering software).
Results verified the limited performance of this technology for lesions made in cortical bone, whereas for cancellous bone reliable models were obtained and the precise morphology within the bone tissue was visible. On the basis of these results, a method for differential diagnosis between cutmarks made by sharp tools with one and with two cutting edges can be proposed. On the other hand, the computerised metric analysis of lesions highlights a clear increase of the error range for measurements under 3 mm. Metric data taken by different operators show strong dispersion (high percent relative standard deviation). This pilot study shows that the use of CBCT technology can improve the investigation of the morphology of stab wounds in cancellous bone. Conversely, metric analysis of the lesions, as well as morphological analysis of wound dimensions under 3 mm, does not seem to be reliable. 14. Accurate Coregistration between Ultra-High-Resolution Micro-SPECT and Circular Cone-Beam Micro-CT Scanners. PubMed Ji, Changguo; van der Have, Frans; Gratama van Andel, Hugo; Ramakers, Ruud; Beekman, Freek 2010-01-01 Introduction. Spatially registering SPECT with CT makes it possible to anatomically localize SPECT tracers. In this study, an accurate method for the coregistration of ultra-high-resolution SPECT volumes and multiple cone-beam CT volumes is developed and validated, which does not require markers during animal scanning. Methods. Transferable animal beds were developed with an accurate mounting interface. Simple calibration phantoms make it possible to obtain both the spatial transformation matrix for stitching multiple CT scans of different parts of the animal and to register SPECT and CT. The spatial transformation for image coregistration is calculated once using Horn's matching algorithm. Animal images can then be coregistered without using markers. Results.
For mouse-sized objects, average coregistration errors between SPECT and CT in X, Y, and Z directions are within 0.04 mm, 0.10 mm, and 0.19 mm, respectively. For rat-sized objects, these numbers are 0.22 mm, 0.14 mm, and 0.28 mm. Average 3D coregistration errors were within 0.24 mm and 0.42 mm for mouse and rat imaging, respectively. Conclusion. Extending the field-of-view of cone-beam CT by stitching is improved by prior registration of the CT volumes. The accuracy of registration between SPECT and CT is typically better than the image resolution of current ultra-high-resolution SPECT. 15. Simulations and experimental feasibility study of fan-beam coherent-scatter CT NASA Astrophysics Data System (ADS) Harding, Adrian; Schlomka, Jens-Peter; Harding, Geoffrey L. 2002-11-01 Fan-beam coherent scatter computer tomography (CSCT) has been employed to obtain 2-dimensional images of spatially resolved diffraction patterns in order to supplement CT images in material discrimination. A Monte Carlo simulation tool DiPhoS (Diagnostic Photon Simulation) was used to create 2-dimensional scatter projection data sets of high-contrast water and Lucite phantom objects with plastic inserts. The results were used as input to a reconstruction routine based on a novel simultaneous iterative reconstruction technique (SIRT). At the same time an experimental demonstrator was assembled to confirm the simulations by measurements and to show the feasibility of coherent scatter CT. It consisted of a 4.5kW constant power X-ray tube, a rotatable object plate and a vertical detector column that could be panned around the object. Spatial resolution was ensured by mechanical collimation. Phantoms similar to those simulated were measured and reconstructed and the contrast achieved by CSCT between the materials under examination substantially exceeded that achieved in CT. 
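The simultaneous iterative reconstruction technique (SIRT) used for the coherent-scatter reconstruction above updates the image by back-projecting row-normalized residuals. The following is a minimal textbook SIRT sketch on a toy linear system, not the paper's specific variant; matrix and data values are hypothetical:

```python
def sirt(A, b, iters=100):
    """Textbook SIRT: x <- x + C A^T R (b - A x), where C and R are the
    diagonal inverses of the column sums and row sums of A."""
    m, n = len(A), len(A[0])
    row_sum = [sum(row) for row in A]
    col_sum = [sum(A[i][j] for i in range(m)) for j in range(n)]
    x = [0.0] * n
    for _ in range(iters):
        # forward-project and normalize the residual per ray (row)
        resid = [(b[i] - sum(A[i][j] * x[j] for j in range(n))) / row_sum[i]
                 for i in range(m)]
        # back-project, normalizing per pixel (column)
        for j in range(n):
            x[j] += sum(A[i][j] * resid[i] for i in range(m)) / col_sum[j]
    return x

# Toy system: two "pixels" observed by three "rays"
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 2.0, 3.0]   # consistent with the true image x = [1, 2]
x = sirt(A, b)
```

For this consistent system the iteration contracts the error geometrically, so after 100 sweeps the estimate has converged to the true pixel values.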
A further step was taken by examining an animal tissue sample in the same way; the results show remarkable contrast between muscle, cartilage and fat, suggesting that CSCT could also be used in a medical setting. 16. Tooth labeling in cone-beam CT using deep convolutional neural network for forensic identification NASA Astrophysics Data System (ADS) Miki, Yuma; Muramatsu, Chisako; Hayashi, Tatsuro; Zhou, Xiangrong; Hara, Takeshi; Katsumata, Akitoshi; Fujita, Hiroshi 2017-03-01 In large disasters, dental records play an important role in forensic identification. However, filing dental charts for corpses is not an easy task for general dentists, and it is laborious and time-consuming work in large-scale disasters. We have been investigating a tooth labeling method for dental cone-beam CT images with the aim of automatically filing dental charts. In our method, individual teeth in CT images are detected and classified into seven tooth types using a deep convolutional neural network. We employed a fully convolutional network based on the AlexNet architecture to detect each tooth, and applied our previous method, using the regular AlexNet, to classify the detected teeth into seven tooth types. From 52 CT volumes obtained with two imaging systems, five images from each were randomly selected as test data, and the remaining 42 cases were used as training data. The results showed a tooth detection accuracy of 77.4%, with an average of 5.8 false detections per image. This indicates the potential utility of the proposed method for automatic recording of dental information. 17.
Improving the accuracy of CT dimensional metrology by a novel beam hardening correction method NASA Astrophysics Data System (ADS) Zhang, Xiang; Li, Lei; Zhang, Feng; Xi, Xiaoqi; Deng, Lin; Yan, Bin 2015-01-01 The powerful nondestructive character of computed tomography (CT) is attracting growing research interest in its use for dimensional metrology, where it offers a practical alternative to common measurement methods. However, inaccuracy and uncertainty, arising from many factors among which the beam hardening (BH) effect plays a vital role, severely limit the further utilization of CT for dimensional metrology. This paper focuses on eliminating the influence of the BH effect on the accuracy of CT dimensional metrology. To correct the BH effect, a novel exponential correction model is proposed. The parameters of the model are determined by minimizing the gray entropy of the reconstructed volume. In order to maintain the consistency and contrast of the corrected volume, a penalty term is added to the cost function, enabling more accurate measurement results to be obtained by the simple global threshold method. The proposed method is efficient, and especially suited to cases where there is a large difference in gray value between material and background. Different spheres with known diameters are used to verify the accuracy of dimensional measurement. Both simulation and real experimental results demonstrate the improvement in measurement precision. Moreover, a more complex workpiece is also tested to show that the proposed method is generally applicable. 18. An investigation into factors affecting electron density calibration for a megavoltage cone-beam CT system. PubMed Hughes, Jessica; Holloway, Lois C; Quinn, Alexandra; Fielding, Andrew 2012-09-06 There is growing interest in the use of megavoltage cone-beam computed tomography (MV CBCT) data for radiotherapy treatment planning.
To calculate accurate dose distributions, knowledge of the electron density (ED) of the tissues being irradiated is required. In the case of MV CBCT, it is necessary to determine a calibration relating CT number to ED, utilizing the photon beam produced for MV CBCT. A number of different parameters can affect this calibration. This study was undertaken on the Siemens MV CBCT system, MVision, to evaluate the effect of the following parameters on the reconstructed CT pixel value to ED calibration: the number of monitor units (MUs) used (5, 8, 15 and 60 MUs), the image reconstruction filter (head and neck, and pelvis), reconstruction matrix size (256 by 256 and 512 by 512), and the addition of extra solid water surrounding the ED phantom. A Gammex electron density CT phantom containing EDs from 0.292 to 1.707 was imaged under each of these conditions. The linear relationship between MV CBCT pixel value and ED was demonstrated for all MU settings and over the range of EDs. Changes in MU number did not dramatically alter the MV CBCT ED calibration. The use of different reconstruction filters was found to affect the MV CBCT ED calibration, as was the addition of solid water surrounding the phantom. Dose distributions calculated on a 15 MU head-and-neck reconstruction filter MV CBCT image, once using an ED calibration curve matched to those image parameters and once using a curve from a 15 MU pelvis reconstruction filter, showed small and clinically insignificant differences. Thus, the use of a single MV CBCT ED calibration curve is unlikely to result in any clinical differences. However, to ensure minimal uncertainties in dose reporting, MV CBCT ED calibration could be carried out using parameter-specific calibration measurements. 19. Optimizing 4D cone-beam CT acquisition protocol for external beam radiotherapy SciTech Connect Li Tianfang; Xing Lei 2007-03-15 Purpose: Four-dimensional cone-beam computed tomography (4D-CBCT) imaging is sensitive to parameters such as gantry rotation speed, number of gantry rotations, X-ray pulse rate, and tube current, as well as a patient's breathing pattern. The aim of this study is to optimize the image acquisition on a patient-specific basis while minimizing the scan time and the radiation dose. Methods and Materials: More than 60 sets of 4D-CBCT images, each with a temporal resolution of 10 phases, were acquired using multiple-gantry rotation and slow-gantry rotation techniques. The image quality was quantified with a relative root mean-square error (RE) and correlated with various acquisition settings; specifically, varying gantry rotation speed, varying both the rotation speed and the number of rotations, and varying both the rotation speed and tube current to keep the radiation exposure constant. These experiments were repeated for three different respiratory periods. Results: At similar radiation dose, 4D-CBCT images acquired with low current and low rotation speed have better quality than images obtained with high current and high rotation speed. In general, a one-rotation low-speed scan is superior to a two-rotation double-speed scan, even though they provide the same number of projections. Furthermore, it is found that the image quality behaves monotonically with the relative speed as defined by the gantry rotation speed and the patient respiratory period. Conclusions: The RE curves established in this work can be used to predict the 4D-CBCT image quality before a scan. This allows the acquisition protocol to be optimized individually to balance the desired quality with the associated scanning time and patient radiation dose. 20.
4D-Imaging of the Lung: Reproducibility of Lesion Size and Displacement on Helical CT, MRI, and Cone Beam CT in a Ventilated Ex Vivo System SciTech Connect Biederer, Juergen; Dinkel, Julien; Remmert, Gregor; Jetter, Siri; Nill, Simeon; Moser, Torsten; Bendl, Rolf; Thierfelder, Carsten; Fabel, Michael; Oelfke, Uwe; Bock, Michael; Plathow, Christian; Bolte, Hendrik; Welzel, Thomas; Hoffmann, Beata; Hartmann, Guenter; Schlegel, Wolfgang; Debus, Juergen; Heller, Martin 2009-03-01 Purpose: Four-dimensional (4D) imaging is a key to motion-adapted radiotherapy of lung tumors. We evaluated in a ventilated ex vivo system how size and displacement of artificial pulmonary nodules are reproduced with helical 4D-CT, 4D-MRI, and linac-integrated cone beam CT (CBCT). Methods and Materials: Four porcine lungs with 18 agarose nodules (mean diameters 1.3-1.9 cm) were ventilated inside a chest phantom at 8/min and subjected to 4D-CT (collimation 24 x 1.2 mm, pitch 0.1, slice/increment 24 x 10(2)/1.5/0.8 mm, temporal resolution 0.5 s), 4D-MRI (echo-shared dynamic three-dimensional-flash; repetition/echo time 2.13/0.72 ms, voxel size 2.7 x 2.7 x 4.0 mm, temporal resolution 1.4 s) and linac-integrated 4D-CBCT (720 projections, 3-min rotation, temporal resolution approximately 1 s). Static CT without respiration served as control. Three observers recorded lesion size (RECIST-diameters x/y/z) and axial displacement. Interobserver- and interphase-variation coefficients (IO/IP VC) of the measurements indicated reproducibility. Results: Mean x/y/z lesion diameters in cm were equal on static and dynamic CT (1.88/1.87; 1.30/1.39; 1.71/1.73; p > 0.05), but appeared larger on MRI and CBCT (2.06/1.95 [p < 0.05 vs. CT]; 1.47/1.28 [MRI vs. CT/CBCT p < 0.05]; 1.86/1.83 [CT vs. CBCT p < 0.05]). Interobserver-VC for lesion sizes were 2.54-4.47% (CT), 2.29-4.48% (4D-CT); 5.44-6.22% (MRI) and 4.86-6.97% (CBCT). Interphase-VC for lesion sizes ranged from 2.28% (4D-CT) to 10.0% (CBCT).
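The interobserver and interphase variation coefficients quoted above are, on the usual definition, relative standard deviations of repeated measurements. A minimal sketch (the reading values below are hypothetical, not from the study):

```python
import statistics

def variation_coefficient(values):
    """Relative standard deviation in percent: 100 * SD / mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical diameters (cm) of one lesion as read by three observers
readings = [1.85, 1.90, 1.88]
vc = variation_coefficient(readings)   # roughly 1.3%
```

Computed once per lesion across observers (or across respiratory phases), and then summarized over lesions, this yields IO-VC and IP-VC figures of the kind reported.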
Mean displacement in cm decreased from static CT (1.65) to 4D-CT (1.40), CBCT (1.23) and MRI (1.16). Conclusions: Lesion sizes are exactly reproduced with 4D-CT but overestimated on 4D-MRI and CBCT with a larger variability due to limited temporal and spatial resolution. All 4D-modalities underestimate lesion displacement. 2. Validation of a deformable image registration technique for cone beam CT-based dose verification SciTech Connect Moteabbed, M.; Sharp, G. C.; Wang, Y.; Trofimov, A.; Efstathiou, J. A.; Lu, H.-M. 2015-01-15 Purpose: As radiation therapy evolves toward more adaptive techniques, image guidance plays an increasingly important role, not only in patient setup but also in monitoring the delivered dose and adapting the treatment to patient changes. This study aimed to validate a method for evaluation of delivered intensity modulated radiotherapy (IMRT) dose based on multimodal deformable image registration (DIR) for prostate treatments. Methods: A pelvic phantom was scanned with CT and cone-beam computed tomography (CBCT). Both images were digitally deformed using two realistic patient-based deformation fields. The original CT was then registered to the deformed CBCT resulting in a secondary deformed CT. The registration quality was assessed as the ability of the DIR method to recover the artificially induced deformations. The primary and secondary deformed CT images as well as vector fields were compared to evaluate the efficacy of the registration method and its suitability to be used for dose calculation.
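Comparing an induced deformation field against the recovered one, as just described, reduces to an error statistic over corresponding displacement vectors. Below is a generic endpoint-error sketch, not the study's specific metric; the 2D field values are hypothetical:

```python
import math

def mean_endpoint_error(field_a, field_b):
    """Mean Euclidean distance between corresponding displacement vectors."""
    errs = [math.dist(a, b) for a, b in zip(field_a, field_b)]
    return sum(errs) / len(errs)

# Induced (ground-truth) and recovered displacement vectors, in mm
induced   = [(1.0, 0.0), (0.0, 2.0), (1.0, 1.0)]
recovered = [(1.1, 0.0), (0.0, 1.8), (1.0, 1.0)]
err = mean_endpoint_error(induced, recovered)   # mean of 0.1, 0.2, 0.0 mm
```

In practice the same statistic is evaluated voxel-wise over the full 3D grid, often alongside Hounsfield-unit difference maps as in the study.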
PLASTIMATCH, a free and open-source software package, was used for deformable image registration. A B-spline algorithm with optimized parameters was used to achieve the best registration quality. Geometric image evaluation was performed through voxel-based Hounsfield unit (HU) and vector field comparison. For dosimetric evaluation, IMRT treatment plans were created and optimized on the original CT image and recomputed on the two warped images to be compared. The dose volume histograms were compared for the warped structures that were identical in both warped images. This procedure was repeated for the phantom with full, half full, and empty bladder. Results: The results indicated mean HU differences of up to 120 between registered and ground-truth deformed CT images. However, when the CBCT intensities were calibrated using a region of interest (ROI)-based calibration curve, these differences were reduced by up to 60%. Similarly, the mean differences in average vector field 4. WE-G-18A-06: Sinogram Restoration in Helical Cone-Beam CT SciTech Connect Little, K; Riviere, P La 2014-06-15 Purpose: To extend CT sinogram restoration, which has been shown in 2D to reduce noise and to correct for geometric effects and other degradations at a low computational cost, from 2D to a 3D helical cone-beam geometry. Methods: A method for calculating sinogram degradation coefficients for a helical cone-beam geometry was proposed. These values were used to perform penalized-likelihood sinogram restoration on simulated data that were generated from the FORBILD thorax phantom. Sinogram restorations were performed using both a quadratic penalty and the edge-preserving Huber penalty.
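The edge-preserving Huber penalty contrasted with the quadratic penalty above is quadratic for small neighbor differences and linear for large ones, so sharp transitions (edges) are penalized less severely. A minimal sketch; the threshold value δ is hypothetical:

```python
def huber(t, delta=1.0):
    """Huber penalty: quadratic for |t| <= delta, linear beyond."""
    a = abs(t)
    if a <= delta:
        return 0.5 * t * t
    return delta * (a - 0.5 * delta)

def quadratic(t):
    """Pure quadratic penalty, for comparison."""
    return 0.5 * t * t
```

For a small difference, t = 0.5, both give 0.125; for an edge-sized difference, t = 3, the quadratic charges 4.5 while Huber charges only 2.5, which is why the quadratic tends to oversmooth edges.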
After sinogram restoration, Fourier-based analytical methods were used to obtain reconstructions. Resolution-variance trade-offs were investigated for several locations within the reconstructions for the purpose of comparing sinogram restoration to no restoration. In order to compare potential differences, reconstructions were performed using different groups of neighbors in the penalty, two analytical reconstruction methods (Katsevich and single-slice rebinning), and differing helical pitches. Results: The resolution-variance properties of reconstructions restored using sinogram restoration with a Huber penalty outperformed those of reconstructions with no restoration. However, the use of a quadratic sinogram restoration penalty did not lead to an improvement over performing no restoration at the outer regions of the phantom. Application of the Huber penalty to neighbors both within a view and across views did not perform as well as only applying the penalty to neighbors within a view. General improvements in resolution-variance properties using sinogram restoration with the Huber penalty were not dependent on the reconstruction method used or the magnitude of the helical pitch. Conclusion: Sinogram restoration for noise and degradation effects for helical cone-beam CT is feasible and should be able to be applied to clinical data. When applied with the edge-preserving Huber penalty 5. Effect of voxel size on the accuracy of 3D reconstructions with cone beam CT PubMed Central Maret, D; Telmon, N; Peters, O A; Lepage, B; Treil, J; Inglèse, J M; Peyre, A; Kahn, J L; Sixou, M 2012-01-01 Objectives The various types of cone beam CT (CBCT) differ in several technical characteristics, notably their spatial resolution, which is defined by the acquisition voxel size. However, data are still lacking on the effects of voxel size on the metric accuracy of three-dimensional (3D) reconstructions. 
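Why voxel size bears on metric accuracy is largely a partial-volume effect at the object surface: coarser voxels resolve the boundary less finely. An entirely hypothetical toy sketch, estimating a sphere's volume by counting voxel centers on two grids:

```python
import math

def sphere_volume_estimate(radius, voxel):
    """Count voxels whose centers lie inside a sphere at the origin,
    then convert the count to a volume."""
    n = int(radius / voxel) + 2
    count = 0
    for i in range(-n, n):
        for j in range(-n, n):
            for k in range(-n, n):
                # coordinates of this voxel's center
                x = (i + 0.5) * voxel
                y = (j + 0.5) * voxel
                z = (k + 0.5) * voxel
                if x * x + y * y + z * z <= radius * radius:
                    count += 1
    return count * voxel ** 3

true_vol = 4.0 / 3.0 * math.pi * 5.0 ** 3       # 5 mm radius sphere
coarse = sphere_volume_estimate(5.0, 0.5)       # "large voxel" scan
fine = sphere_volume_estimate(5.0, 0.25)        # "small voxel" scan
```

Both estimates land close to the analytic volume, with the discrepancy driven by how the surface is sampled; real CBCT segmentation adds blurring and thresholding effects on top of this, which is what the study quantifies.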
This study was designed to assess the effect of isotropic voxel size on the 3D reconstruction accuracy and reproducibility of CBCT data. Methods The study sample comprised 70 teeth (from the Institut d'Anatomie Normale, Strasbourg, France). The teeth were scanned with a KODAK 9500 3D® CBCT (Carestream Health, Inc., Marne-la-Vallée, France), which has two voxel sizes: 200 µm (CBCT 200 µm group) and 300 µm (CBCT 300 µm group). These teeth had also been scanned with the KODAK 9000 3D® CBCT (Carestream Health, Inc.) (CBCT 76 µm group) and the SCANCO Medical micro-CT XtremeCT (SCANCO Medical, Brüttisellen, Switzerland) (micro-CT 41 µm group) considered as references. After semi-automatic segmentation with AMIRA® software (Visualization Sciences Group, Burlington, MA), tooth volumetric measurements were obtained. Results The Bland–Altman method showed no difference in tooth volumes despite a slight underestimation for the CBCT 200 µm and 300 µm groups compared with the two reference groups. The underestimation was statistically significant for the volumetric measurements of the CBCT 300 µm group relative to the two reference groups (Passing–Bablok method). Conclusions CBCT is not only a tool that helps in diagnosis and detection; it has the complementary advantage of being a measuring instrument, the accuracy of which appears connected to the size of the voxels. Future applications of such measurements with CBCT are discussed. PMID:23166362 7. Classification of teeth in cone-beam CT using deep convolutional neural network. PubMed Miki, Yuma; Muramatsu, Chisako; Hayashi, Tatsuro; Zhou, Xiangrong; Hara, Takeshi; Katsumata, Akitoshi; Fujita, Hiroshi 2017-01-01 Dental records play an important role in forensic identification. To this end, postmortem dental findings and teeth conditions are recorded in a dental chart and compared with those of antemortem records.
However, most dentists are inexperienced at recording the dental chart for corpses, and it is a physically and mentally laborious task, especially in large-scale disasters. Our goal is to automate the dental filing process by using dental x-ray images. In this study, we investigated the application of a deep convolutional neural network (DCNN) for classifying tooth types on dental cone-beam computed tomography (CT) images. Regions of interest (ROIs) including single teeth were extracted from CT slices. Fifty-two CT volumes were randomly divided into 42 training and 10 test cases, and the ROIs obtained from the training cases were used for training the DCNN. To examine the sampling effect, random sampling was performed three times, and training and testing were repeated. We used the AlexNet network architecture provided in the Caffe framework, which consists of 5 convolution layers, 3 pooling layers, and 2 fully connected layers. To reduce overtraining, we augmented the data by image rotation and intensity transformation. The test ROIs were classified into 7 tooth types by the trained network. The average classification accuracy using the training data augmented by image rotation and intensity transformation was 88.8%. Compared with the result without data augmentation, data augmentation resulted in an approximately 5% improvement in classification accuracy. This indicates that further improvement can be expected by expanding the CT dataset. Unlike conventional methods, the proposed method has the advantage of achieving high classification accuracy without the need for precise tooth segmentation. The proposed tooth classification method can be useful for the automatic filing of dental charts for forensic identification. 8.
Design and development of C-arm based cone-beam CT for image-guided interventions: initial results NASA Astrophysics Data System (ADS) Chen, Guang-Hong; Zambelli, Joseph; Nett, Brian E.; Supanich, Mark; Riddell, Cyril; Belanger, Barry; Mistretta, Charles A. 2006-03-01 X-ray cone-beam computed tomography (CBCT) is of importance in image-guided intervention (IGI) and image-guided radiation therapy (IGRT). In this paper, we present a cone-beam CT data acquisition system using a GE INNOVA 4100 (GE Healthcare Technologies, Waukesha, Wisconsin) clinical system. This new cone-beam data acquisition mode was developed for research purposes without interfering with any clinical function of the system. It provides us with a basic imaging pipeline for more advanced cone-beam data acquisition methods, as well as a platform to study and overcome limiting factors such as cone-beam artifacts and the limited low-contrast resolution of current C-arm based cone-beam CT systems. A geometrical calibration method was developed to experimentally determine the parameters of the scanning geometry and correct the image reconstruction for geometric non-idealities. Extensive phantom studies and some small animal studies have been conducted to evaluate the performance of our cone-beam CT data acquisition system. 9. Usefulness of a lead shielding device for reducing the radiation dose to tissues outside the primary beams during CT. PubMed Chung, Jae-Joon; Cho, Eun-Suk; Kang, Sung Min; Yu, Jeong-Sik; Kim, Dae Jung; Kim, Joo Hee 2014-12-01 This study investigated the efficacy of a lead shield in protecting tissues outside the primary beams, such as the breast and thyroid, by measuring the entrance skin dose during CT of the brain, neck, abdomen, and lumbar spine. Institutional Review Board approval was obtained. This study included 150 patients (male:female 25:125, age range 15-45 years).
In females, brain, lumbar spine, and abdominal CT scans, pre-/post-contrast neck CT scans, and post-contrast liver dynamic CT scans were performed; in males, only brain CT scans were performed. Breast shielding was performed in all females, and thyroid shielding was conducted in patients undergoing brain CT. During all CT studies, the left breast or left thyroid was shielded using a lead shield, and the contralateral side was left unshielded; thus, each breast or thyroid measurement had its own control with the same demographic data. The efficacy of shielding both breasts and thyroids during CT was assessed. The breast skin dose was reduced by 33.5%, 26.0%, 17.4%, 26.5%, and 16.2% during brain, abdominal, lumbar, pre-/post-contrast neck, and post-contrast liver dynamic CT, respectively. During brain CT, the thyroid skin dose was reduced by 17.9% (females) and 20.6% (males). There were statistically significant differences in the skin doses of shielded organs (p < 0.05). Breast shielding during neck and liver dynamic CT was the most effective of the shielding scenarios examined. We recommend breast shielding during neck and liver dynamic CT in young female patients to avoid unnecessary radiation exposure. 10. SU-E-J-103: Setup Errors Analysis by Cone-Beam CT (CBCT)-Based Image-Guided Intensity Modulated Radiotherapy for Esophageal Cancer SciTech Connect Yang, H; Wang, W; Hu, W; Chen, X; Wang, X; Yu, C 2014-06-01 Purpose: To quantify setup errors from pretreatment kilovolt cone-beam computed tomography (KV-CBCT) scans in middle or distal esophageal carcinoma patients. Methods: Fifty-two consecutive middle or distal esophageal carcinoma patients who underwent IMRT were included in this study. A planning CT scan using a big-bore CT simulator was performed in the treatment position and was used as the reference scan for image registration with CBCT. CBCT scans (On-Board Imaging v1.5 system, Varian Medical Systems) were acquired daily during the first treatment week. A total of 260 CBCT scans were assessed (nine CBCTs per patient), with a registration clip box defined around the PTV-thorax in the reference scan, based on bony anatomy, using Offline Review software v10.0 (Varian Medical Systems). The anterior-posterior (AP), left-right (LR), and superior-inferior (SI) corrections were recorded, and the systematic and random errors were calculated. The CTV-to-PTV margins for each CBCT frequency were based on the Van Herk formula (2.5Σ + 0.7σ). Results: The SD of the systematic error (Σ) was 2.0 mm, 2.3 mm, and 3.8 mm in the AP, LR and SI directions, respectively. The average random error (σ) was 1.6 mm, 2.4 mm, and 4.1 mm in the AP, LR and SI directions, respectively. The CTV-to-PTV safety margin based on the Van Herk formula was 6.1 mm, 7.5 mm, and 12.3 mm in the AP, LR and SI directions. Conclusion: Our data recommend setup margins of 6 mm, 8 mm, and 12 mm for esophageal carcinoma patients in the AP, LR, and SI directions, respectively. 11. The value of cone beam CT in assessing and managing a dilated odontome of a maxillary canine. PubMed Wall, Aoibheann; Ng, Suk; Djemal, Serpil 2015-03-01 A case of an unusual anomaly in a maxillary canine is described. A deep enamel invagination resulted in pulpal necrosis, longstanding infection and development of an associated radicular cyst. Diagnostic X-ray imaging was invaluable in demonstrating the complex root anatomy of the dilated odontome. In particular, a cone beam CT scan helped in the formulation of an appropriate treatment plan. Clinical Relevance: Three-dimensional imaging using cone beam CT was valuable in this case to demonstrate the complicated anatomy of a rare dental anomaly, and to help plan treatment. 12. Bone Forming Potential of An-Organic Bovine Bone Graft: A Cone Beam CT study.
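The Van Herk recipe used in the esophageal setup-error study above (2.5Σ + 0.7σ) can be checked directly against the reported systematic (Σ) and random (σ) errors; a quick sketch (the function name is mine, the numbers are from that abstract):

```python
def van_herk_margin(sigma_systematic, sigma_random):
    """CTV-to-PTV margin recipe: 2.5 * Sigma + 0.7 * sigma (mm)."""
    return 2.5 * sigma_systematic + 0.7 * sigma_random

# (direction, Sigma, sigma) in mm, as reported for AP, LR, SI
margins = {d: van_herk_margin(s, r)
           for d, s, r in [("AP", 2.0, 1.6), ("LR", 2.3, 2.4), ("SI", 3.8, 4.1)]}
```

This gives about 6.1, 7.4, and 12.4 mm, agreeing with the quoted 6.1, 7.5, and 12.3 mm to within rounding, and with the 6/8/12 mm recommendation after rounding up.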
PubMed Uzbek, Usman Haider; Rahman, Shaifulizan Ab; Alam, Mohammad Khursheed; Gillani, Syed Wasif 2014-12-01 An-organic bovine bone graft is a xenograft with the potential for bone formation. The aim of this study was to use cone beam computed tomography scans to evaluate the bone density around functional endosseous implants, both in the maxillary sinus region augmented with the an-organic bovine bone graft and in the alveolar bone over which the graft was placed to provide space for the implants. Sterile freeze-dried bovine bone graft produced by the National Tissue Bank, Universiti Sains Malaysia, was used for stage-1 implant placement with maxillary sinus augmentation in a total of 19 subjects with 19 implants. The age of the subjects ranged between 40 and 60 years, with a mean age of 51 ± 4.70 years. All subjects underwent a follow-up CT scan using a Planmeca ProMax 3D® cone beam computed tomography scanner at the Radiology Department, Hospital Universiti Sains Malaysia. The collected data were then analysed to evaluate bone density in Hounsfield units using Planmeca Romexis® Imaging Software 2.2, the specialized software accompanying the cone beam computed tomography machine. Bone formation was seen at the site of the augmented sinus, and a significant increase (p < 0.005) in bone density was reported at the augmented site compared to the bone density of the existing alveolar bone. An-organic bovine bone graft is an osteoconductive material that can be used for the purpose of maxillary sinus augmentation. 13. Dose distribution for dental cone beam CT and its implication for defining a dose index PubMed Central Pauwels, R; Theodorakou, C; Walker, A; Bosmans, H; Jacobs, R; Horner, K; Bogaerts, R 2012-01-01 Objectives To characterize the dose distribution for a range of cone beam CT (CBCT) units, investigating different field of view sizes, central and off-axis geometries, full or partial rotations of the X-ray tube and different clinically applied beam qualities.
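For context on the dose-index question raised above: the conventional weighted CT dose index of multidetector CT combines a central and a peripheral chamber reading as CTDIw = (1/3)·center + (2/3)·periphery, an average that assumes a roughly rotationally symmetric dose distribution, which the asymmetric CBCT geometries described in this study can violate. A minimal sketch with hypothetical readings:

```python
def ctdi_w(center, periphery):
    """Weighted CT dose index (mGy) from the central reading and the
    mean of the peripheral readings."""
    return center / 3.0 + 2.0 * periphery / 3.0

dose = ctdi_w(10.0, 13.0)   # hypothetical chamber readings, mGy
```

With an off-axis isocentre or a partial source rotation, the peripheral readings are no longer interchangeable, which is one reason the study finds no single index optimal for CBCT.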
The implications of the dose distributions on the definition and practicality of a CBCT dose index were assessed. Methods Dose measurements on CBCT devices were performed by scanning cylindrical head-size water and polymethyl methacrylate phantoms, using thermoluminescent dosemeters, a small-volume ion chamber and radiochromic films. Results It was found that the dose distribution can be asymmetrical for dental CBCT exposures throughout a homogeneous phantom, owing to an asymmetrical positioning of the isocentre and/or partial rotation of the X-ray source. Furthermore, the scatter tail along the z-axis was found to have a distinct shape, generally resulting in a strong drop (90%) in absorbed dose outside the primary beam. Conclusions There is no optimal dose index available owing to the complicated exposure geometry of CBCT and the practical aspects of quality control measurements. Practical validation of different possible dose indices is needed, as well as the definition of conversion factors to patient dose. PMID:22752320
15. Beam-specific planning target volumes incorporating 4D CT for pencil beam scanning proton therapy of thoracic tumors. PubMed Lin, Liyong; Kang, Minglei; Huang, Sheng; Mayer, Rulon; Thomas, Andrew; Solberg, Timothy D; McDonough, James E; Simone, Charles B 2015-11-08 The purpose of this study is to determine whether organ sparing and target coverage can be simultaneously maintained for pencil beam scanning (PBS) proton therapy treatment of thoracic tumors in the presence of motion, stopping power uncertainties, and patient setup variations. Ten consecutive patients that were previously treated with proton therapy to 66.6/1.8 Gy (RBE) using double scattering (DS) were replanned with PBS. Minimum and maximum intensity images from 4D CT were used to introduce flexible smearing in the determination of the beam specific PTV (BSPTV). Datasets from eight 4D CT phases, using ± 3% uncertainty in stopping power and ± 3 mm uncertainty in patient setup in each direction, were used to create 8 × 12 × 10 = 960 PBS plans for the evaluation of 10 patients. Plans were normalized to provide identical coverage between DS and PBS.
The average lung V20, V5, and mean doses were reduced from 29.0%, 35.0%, and 16.4 Gy with DS to 24.6%, 30.6%, and 14.1 Gy with PBS, respectively. The average heart V30 and V45 were reduced from 10.4% and 7.5% in DS to 8.1% and 5.4% for PBS, respectively. Furthermore, the maximum spinal cord, esophagus, and heart doses were decreased from 37.1 Gy, 71.7 Gy, and 69.2 Gy with DS to 31.3 Gy, 67.9 Gy, and 64.6 Gy with PBS. The conformity index (CI), homogeneity index (HI), and global maximal dose were improved from 3.2, 0.08, 77.4 Gy with DS to 2.8, 0.04, and 72.1 Gy with PBS. All differences are statistically significant, with p-values <0.05, with the exception of the heart V45 (p = 0.146). PBS with BSPTV achieves better organ sparing and improves target coverage using a repainting method for the treatment of thoracic tumors. Incorporating motion-related uncertainties is essential.
17. Relationship between low tube voltage (70 kV) and the iodine delivery rate (IDR) in CT angiography: An experimental in-vivo study PubMed Central Pietsch, Hubertus; Korporaal, Johannes G.; Haberland, Ulrike; Mahnken, Andreas H.; Flohr, Thomas G.; Uder, Michael; Jost, Gregor 2017-01-01 Objective Very short acquisition times and the use of low-kV protocols in CTA demand modifications in the contrast media (CM) injection regimen. The aim of this study was to optimize the use of CM delivery parameters in thoraco-abdominal CTA in a porcine model. Materials and methods Six pigs (55–68 kg) were examined with a dynamic CTA protocol (454 mm scan length, 2.5 s temporal resolution, 70 s total acquisition time). Four CM injection protocols were applied in a randomized order. 120 kV CTA protocol: (A) 300 mg iodine/kg bodyweight (bw), IDR = 1.5 g/s (flow = 5 mL/s), injection time (ti) 12 s (60 kg bw).
70 kV CTA protocols: 150 mg iodine/kg bw: (B) IDR = 0.75 g/s (flow = 2.5 mL/s), ti = 12 s (60 kg bw); (C) IDR = 1.5 g/s (flow = 5 mL/s), ti = 12 s (60 kg bw); (D) IDR = 3.0 g/s (flow = 10 mL/s), ti = 3 s (60 kg bw). The complete CM bolus shape was monitored by creating time attenuation curves (TAC) in different vascular territories. Based on the TAC, the time to peak (TTP) and the peak enhancement were determined. The diagnostic window (relative enhancement > 300 HU) was calculated and compared to visual inspection of the corresponding CTA data sets. Results The average relative arterial peak enhancements after baseline correction were 358.6 HU (A), 356.6 HU (B), 464.0 HU (C), and 477.6 HU (D). The TTP decreased with increasing IDR and decreasing ti; protocols A and B did not differ significantly (systemic arteries, p = 0.843; pulmonary arteries, p = 0.183). The delay time for bolus tracking (trigger level 100 HU; target enhancement 300 HU) for single-phase CTA was comparable for protocols A and B (3.9, 4.3 s) and C and D (2.4, 2.0 s). The scan window time frame was comparable for the different protocols by visual inspection of the different CTA data sets and by analyzing the TAC. Conclusions All protocols provided sufficient arterial enhancement. The use of a 70 kV CTA protocol is recommended because of a 50% reduction of total CM volume and a 50% reduced 18. Empirical binary tomography calibration (EBTC) for the precorrection of beam hardening and scatter for flat panel CT SciTech Connect Grimmer, Rainer; Kachelriess, Marc 2011-04-15 Purpose: Scatter and beam hardening are prominent artifacts in x-ray CT. Currently, there is no precorrection method that inherently accounts for tube voltage modulation and shaped prefiltration. Methods: A method for self-calibration based on binary tomography of homogeneous objects, which was proposed by B. Li et al.
["A novel beam hardening correction method for computed tomography," in Proceedings of the IEEE/ICME International Conference on Complex Medical Engineering CME 2007, pp. 891-895, 23-27 May 2007], has been generalized in order to use this information to preprocess scans of other, nonbinary objects, e.g., to reduce artifacts in medical CT applications. Further on, the method was extended to handle scatter besides beam hardening and to allow for detector pixel-specific and ray-specific precorrections. This implies that the empirical binary tomography calibration (EBTC) technique is sensitive to spectral effects as they are induced by the heel effect, by shaped prefiltration, or by scanners with tube voltage modulation. The presented method models the beam hardening correction by using a rational function, while the scatter component is modeled using the pep model of B. Ohnesorge et al. ["Efficient object scatter correction algorithm for third and fourth generation CT scanners," Eur. Radiol. 9(3), 563-569 (1999)]. A smoothness constraint is applied to the parameter space to regularize the underdetermined system of nonlinear equations. The parameters determined are then used to precorrect CT scans. Results: EBTC was evaluated using simulated data of a flat panel cone-beam CT scanner with tube voltage modulation and bow-tie prefiltration and using real data of a flat panel cone-beam CT scanner. In simulation studies, where the ground truth is known, the authors' correction model proved to be highly accurate and was able to reduce beam hardening by 97% and scatter by about 75%. Reconstructions of measured data showed significantly less artifacts than the standard reconstruction. 20.
Dacryocystography using cone beam CT in patients with lacrimal drainage system obstruction. PubMed Tschopp, Markus; Bornstein, Michael M; Sendi, Pedram; Jacobs, Reinhilde; Goldblum, David 2014-01-01 To assess the usefulness of cone beam CT (CBCT) for dacryocystography (DCG) using either direct syringing or passive application of contrast medium. Ten consecutive patients with epiphora who had CBCT-DCG in a sitting position were retrospectively analyzed. CBCT-DCGs were performed using 2 techniques: direct syringing with contrast medium or using the passive technique, where patients received 3 drops of contrast medium into the conjunctival sac before CBCT-DCG. Clinical and radiologic diagnoses were compared for both groups. The 10 patients (men = 3) had a mean age of 63.2 years. Both techniques proved to be simple procedures with good delineation of the bone, soft tissue, and the contrast medium in the lacrimal system. No side effects were noted. CBCT-DCG is a useful alternative to determine the localization of stenosis in patients with chronic epiphora. 1. Limited-angle reverse helical cone-beam CT for pipeline with low rank decomposition NASA Astrophysics Data System (ADS) Wu, Dong; Zeng, Li 2014-10-01 In this paper, tomographic imaging of a pipeline in service by cone-beam computed tomography (CBCT) is studied. With the developed scanning strategy and image model, the quality of the reconstructed image is improved. First, a limited-angle reverse helical scanning strategy based on C-arm computed tomography (C-arm CT) is developed for the projection data acquisition of a pipeline in service. Then, an image model that considers the resemblance among slices of the pipeline is developed. Finally, a split Bregman-based algorithm is implemented to solve this model.
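Algebraic baselines such as SART treat reconstruction as the iterative solution of the projection equations Ax = b. A minimal Kaczmarz (ART) sketch on a toy 2×2 system; this illustrates the row-by-row projection principle only, not the split Bregman solver of the study above:

```python
# Minimal Kaczmarz (ART) iteration for Ax = b: each step projects the
# current estimate onto the hyperplane defined by one row. SART differs
# in averaging the updates over all rays per iteration, but the idea is
# the same. Toy 2x2 system for illustration only.
def kaczmarz(A, b, sweeps=200):
    n = len(A[0])
    x = [0.0] * n
    for _ in range(sweeps):
        for row, bi in zip(A, b):
            dot = sum(r * xi for r, xi in zip(row, x))
            norm2 = sum(r * r for r in row)
            lam = (bi - dot) / norm2
            x = [xi + lam * r for xi, r in zip(x, row)]
    return x

A = [[1.0, 1.0],   # "ray sums" through a 2-pixel image
     [1.0, -1.0]]
b = [3.0, 1.0]     # exact solution: x = [2, 1]
print(kaczmarz(A, b))   # prints [2.0, 1.0]
```

Because the two rows here are orthogonal, the iteration converges in a single sweep; for realistic CT systems many sweeps over many rays are needed.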
Preliminary results of simulation experiments show that the projection data acquisition strategy and reconstruction method are efficient and feasible, and that our method is superior to the Feldkamp-Davis-Kress (FDK) algorithm and the simultaneous algebraic reconstruction technique (SART). 2. [Application of elastic registration based on Demons algorithm in cone beam CT]. PubMed Pang, Haowen; Sun, Xiaoyang 2014-02-01 We applied the Demons and accelerated Demons elastic registration algorithms to radiotherapy cone beam CT (CBCT) images, providing software support for real-time assessment of organ changes during radiotherapy. We wrote a 3D CBCT elastic registration program in Matlab and tested it on 3D CBCT images of two patients with cervical cancer. With the classic Demons algorithm, the minimum mean square error (MSE) decreased by 59.7% and the correlation coefficient (CC) increased by 11.0%; with the accelerated Demons algorithm, MSE decreased by 40.1% and CC increased by 7.2%. Both variants of the Demons algorithm achieved the desired results, but registration precision was limited and the total registration time was relatively long; both accuracy and speed need further improvement. 3. Reconstruction of a cone-beam CT image via forward iterative projection matching SciTech Connect Brock, R. Scott; Docef, Alen; Murphy, Martin J. 2010-12-15 Purpose: To demonstrate the feasibility of reconstructing a cone-beam CT (CBCT) image by deformably altering a prior fan-beam CT (FBCT) image such that it matches the anatomy portrayed in the CBCT projection data set. Methods: A prior FBCT image of the patient is assumed to be available as a source image. A CBCT projection data set is obtained and used as a target image set.
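The similarity figures quoted for the Demons registrations above, mean squared error (MSE) and correlation coefficient (CC), are standard image-comparison metrics; a minimal sketch on hypothetical flattened intensity arrays:

```python
# Mean squared error (MSE) and Pearson correlation coefficient (CC)
# between two images, the metrics used to score the Demons registrations
# above. Images are flat lists of voxel intensities (hypothetical data).
from math import sqrt

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def cc(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sqrt(sum((x - ma) ** 2 for x in a))
    vb = sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

fixed  = [10.0, 20.0, 30.0, 40.0]   # hypothetical voxel values
moving = [12.0, 19.0, 33.0, 38.0]
print(mse(fixed, moving), cc(fixed, moving))
```

A good registration drives MSE toward 0 and CC toward 1, which is why the study reports the former decreasing and the latter increasing.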
A parametrized deformation model is applied to the source FBCT image, digitally reconstructed radiographs (DRRs) that emulate the CBCT projection image geometry are calculated and compared to the target CBCT projection data, and the deformation model parameters are adjusted iteratively until the DRRs optimally match the CBCT projection data set. The resulting deformed FBCT image is hypothesized to be an accurate representation of the patient's anatomy imaged by the CBCT system. The process is demonstrated via numerical simulation. A known deformation is applied to a prior FBCT image and used to create a synthetic set of CBCT target projections. The iterative projection matching process is then applied to reconstruct the deformation represented in the synthetic target projections; the reconstructed deformation is then compared to the known deformation. The sensitivity of the process to the number of projections and the DRR/CBCT projection mismatch is explored by systematically adding noise to and perturbing the contrast of the target projections relative to the iterated source DRRs and by reducing the number of projections. Results: When there is no noise or contrast mismatch in the CBCT projection images, a set of 64 projections allows the known deformed CT image to be reconstructed to within a nRMS error of 1% and the known deformation to within a nRMS error of 7%. A CT image nRMS error of less than 4% is maintained at noise levels up to 3% of the mean projection intensity, at which the deformation error is 13%. At 1% noise level, the number of projections can be reduced to 8 while maintaining 4. A quality assurance program for image quality of cone-beam CT guidance in radiation therapy SciTech Connect Bissonnette, Jean-Pierre; Moseley, Douglas J.; Jaffray, David A. 
2008-05-15 The clinical introduction of volumetric x-ray image-guided radiotherapy systems necessitates formal commissioning of the hardware and image-guided processes to be used and drafts quality assurance (QA) for both hardware and processes. Satisfying both requirements provides confidence on the system's ability to manage geometric variations in patient setup and internal organ motion. As these systems become a routine clinical modality, the authors present data from their QA program tracking the image quality performance of ten volumetric systems over a period of 3 years. These data are subsequently used to establish evidence-based tolerances for a QA program. The volumetric imaging systems used in this work combines a linear accelerator with conventional x-ray tube and an amorphous silicon flat-panel detector mounted orthogonally from the accelerator central beam axis, in a cone-beam computed tomography (CBCT) configuration. In the spirit of the AAPM Report No. 74, the present work presents the image quality portion of their QA program; the aspects of the QA protocol addressing imaging geometry have been presented elsewhere. Specifically, the authors are presenting data demonstrating the high linearity of CT numbers, the uniformity of axial reconstructions, and the high contrast spatial resolution of ten CBCT systems (1-2 mm) from two commercial vendors. They are also presenting data accumulated over the period of several months demonstrating the long-term stability of the flat-panel detector and of the distances measured on reconstructed volumetric images. Their tests demonstrate that each specific CBCT system has unique performance. In addition, scattered x rays are shown to influence the imaging performance in terms of spatial resolution, axial reconstruction uniformity, and the linearity of CT numbers. 6.
Automatic tracking of implanted fiducial markers in cone beam CT projection images SciTech Connect Marchant, T. E.; Skalski, A.; Matuszewski, B. J. 2012-03-15 Purpose: This paper describes a novel method for simultaneous intrafraction tracking of multiple fiducial markers. Although the proposed method is generic and can be adopted for a number of applications, including fluoroscopy-based patient position monitoring and gated radiotherapy, the tracking results presented in this paper are specific to tracking fiducial markers in a sequence of cone beam CT projection images. Methods: The proposed method is accurate and robust owing to the use of the mean shift and random sampling principles, respectively. The performance of the proposed method was evaluated with qualitative and quantitative methods, using data from two pancreatic cancer patients, one prostate cancer patient, and a moving phantom. The ground truth, for quantitative evaluation, was calculated based on manual tracking performed by three observers. Results: The average dispersion of marker position error calculated from the tracking results for pancreas data (six markers tracked over 640 frames, 3840 marker identifications) was 0.25 mm (at isocenter), compared with an average dispersion for the manual ground truth estimated at 0.22 mm. For prostate data (three markers tracked over 366 frames, 1098 marker identifications), the average error was 0.34 mm. The estimated tracking error in the pancreas data was < 1 mm (2 pixels) in 97.6% of cases where nearby image clutter was detected and in 100.0% of cases with no nearby image clutter. Conclusions: The proposed method has accuracy comparable to that of manual tracking and, in combination with the proposed batch postprocessing, superior robustness.
Marker tracking in cone beam CT (CBCT) projections is useful for a variety of purposes, such as providing data for assessment of intrafraction motion, target tracking during rotational treatment delivery, motion correction of CBCT, and phase sorting for 4D CBCT. 7. Modulation transfer function determination using the edge technique for cone-beam micro-CT NASA Astrophysics Data System (ADS) Rong, Junyan; Liu, Wenlei; Gao, Peng; Liao, Qimei; Lu, Hongbing 2016-03-01 Evaluating spatial resolution is essential for cone-beam computed tomography (CBCT) manufacturers, prototype designers or equipment users. To investigate the cross-sectional spatial resolution for different transaxial slices with CBCT, the slanted edge technique with a 3D slanted edge phantom is proposed and implemented on a prototype cone-beam micro-CT. Three transaxial slices with different cone angles are under investigation. An over-sampled edge response function (ERF) is first generated from the intensity of the slightly tilted air-to-plastic edge in each row of the transaxial reconstruction image. Then the oversampled ERF is binned and smoothed. The derivative of the binned and smoothed ERF gives the line spread function (LSF). Finally, the presampled modulation transfer function (MTF) is calculated by taking the modulus of the Fourier transform of the LSF. The spatial resolution is quantified with the spatial frequency at the 10% MTF level and the full-width-half-maximum (FWHM) value. The spatial frequencies at 10% of the MTF are 3.1 ± 0.08 mm⁻¹, 3.0 ± 0.05 mm⁻¹, and 3.2 ± 0.04 mm⁻¹ for the three transaxial slices at cone angles of 3.8°, 0°, and -3.8°, respectively. The corresponding FWHMs are 252.8 μm, 261.7 μm and 253.6 μm. Results indicate that the cross-sectional spatial resolution differs little for transaxial slices up to 3.8° away from the z = 0 plane for the prototype cone-beam micro-CT. 8.
The influence of bowtie filtration on x-ray photons distribution in cone beam CT NASA Astrophysics Data System (ADS) Jiang, Shanghai; Feng, Peng; Wei, Biao; He, Peng; Deng, Luzhen; Zhang, Wei 2015-10-01 Bowtie filters are used to modulate an incoming x-ray beam as a function of the angle of the x-ray to balance the photon flux on a detector array. Because of their key role in radiation dose reduction and multi-energy imaging, bowtie filters have attracted major attention in modern X-ray computed tomography (CT). However, few studies have examined the effects of the structure and material of the bowtie filter in cone beam CT (CBCT). In this study, the influence of the bowtie filter's structure and material on the X-ray photon distribution is analyzed using Monte Carlo (MC) simulations with the MCNP5 code. In the current model, the phantom was irradiated by a virtual X-ray source (its energy spectrum calculated by the SpekCalc program) filtered by the bowtie, and all photons were collected by an array of photon-counting detectors. Two bowtie filter parameters, the center thickness (B) and the edge thickness (controlled by A), were varied in turn. Two cases were simulated: 1) A = 0.036, B = 1, 2, 3, 4, 5, 6 mm with aluminum as the material; 2) A = 0.016, 0.036, 0.056, 0.076, 0.096, B = 2 mm with aluminum as the material. The X-ray photon distributions were measured with MCNP. The results show that reductions in center thickness and edge thickness can reduce the number of background photons in CBCT. Our preliminary research shows that the structural parameters of the bowtie filter influence the X-ray photon distribution, and hence the radiation dose distribution, providing evidence for the design of bowtie filters to reduce radiation dose in CBCT. 9. A patient set-up protocol based on partially blocked cone-beam CT.
PubMed Zhu, Lei; Wang, Jing; Xie, Yaoqin; Starman, Jared; Fahrig, Rebecca; Xing, Lei 2010-04-01 Three-dimensional x-ray cone-beam CT (CBCT) is being increasingly used in radiation therapy. Since the whole treatment course typically lasts several weeks, the repetitive x-ray imaging results in a large radiation dose delivered to the patient. In current radiation therapy treatment, CBCT is mainly used for patient set-up, and a rigid transformation of the CBCT data from the planning CT data is also assumed. For an accurate rigid registration, it is not necessary to acquire a full 3D image. In this paper, we propose a patient set-up protocol based on partially blocked CBCT. A sheet of lead strips is inserted between the x-ray source and the scanned patient. From the incomplete projection data, only several axial slices are reconstructed and used in the image registration for patient set-up. Since the radiation is partially blocked, the dose delivered to the patient is significantly reduced, with an additional benefit of reduced scatter signals. The proposed approach is validated using experiments on two anthropomorphic phantoms. As the x-ray beam blocking ratio increases, more dose reduction is achieved, while the patient set-up error also increases. To investigate this tradeoff, two lead sheets with different strip widths are implemented, corresponding to radiation dose reduction factors of approximately 6 and 11, respectively. We compare the registration results using the partially blocked CBCT with those using the regular CBCT. Both lead sheets achieve high patient set-up accuracy. It is seen that, using the lead sheet with a radiation dose reduction factor of approximately 11, the patient set-up error is still less than 1 mm in translation and less than 0.2 degrees in rotation.
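The translational and rotational set-up errors quoted above come from fitting a rigid transform between imaged and planned anatomy. In 2-D, the least-squares rigid fit from matched landmarks has a closed form; a sketch with hypothetical marker coordinates in mm:

```python
# Closed-form 2-D rigid registration (rotation + translation) from
# matched point pairs, the kind of transform a set-up protocol estimates.
# Marker coordinates below are hypothetical.
from math import atan2, cos, sin, degrees, radians

def rigid_fit_2d(src, dst):
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    num = den = 0.0
    for (sx, sy), (dx_, dy_) in zip(src, dst):
        ax, ay = sx - csx, sy - csy        # centred source point
        bx, by = dx_ - cdx, dy_ - cdy      # centred target point
        num += ax * by - ay * bx           # cross terms -> sin(theta)
        den += ax * bx + ay * by           # dot terms   -> cos(theta)
    theta = atan2(num, den)
    tx = cdx - (cos(theta) * csx - sin(theta) * csy)
    ty = cdy - (sin(theta) * csx + cos(theta) * csy)
    return degrees(theta), (tx, ty)

src = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
# Same markers rotated by 0.15 degrees and shifted by (0.8, -0.5) mm:
th = radians(0.15)
dst = [(cos(th) * x - sin(th) * y + 0.8,
        sin(th) * x + cos(th) * y - 0.5) for x, y in src]
print(rigid_fit_2d(src, dst))   # recovers ~0.15 deg and (0.8, -0.5)
```

For noiseless rigid data the fit recovers the transform exactly; with real markers, the residuals after the fit quantify set-up error of the kind reported above.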
The comparison of the reconstructed images also shows that the image quality of the illuminated slices in the partially blocked CBCT is much improved over that in the regular CBCT. 10. Iterative reconstruction of cone-beam CT data on a cluster NASA Astrophysics Data System (ADS) Benson, Thomas M.; Gregor, Jens 2007-02-01 Three-dimensional iterative reconstruction of large CT data sets poses several challenges in terms of the associated computational and memory requirements. In this paper, we present results obtained by implementing a computational framework for reconstructing axial cone-beam CT data using a cluster of inexpensive dual-processor PCs. In particular, we discuss our parallelization approach, which uses POSIX threads and message passing (MPI) for local and remote load distribution, as well as the interaction of that load distribution with the implementation of ordered-subset-based algorithms. We also consider a heuristic data-driven 3D focus of attention algorithm that reduces the amount of data that must be considered for many data sets. Furthermore, we present a modification to the SIRT algorithm that reduces the amount of data that must be communicated between processes. Finally, we introduce a method of separating the work in such a way that some computation can be overlapped with the MPI communication, thus further reducing the overall run-time. We summarize the performance results using reconstructions of experimental data. 11. Dose and image quality for a cone-beam C-arm CT system SciTech Connect Fahrig, Rebecca; Dixon, Robert; Payne, Thomas; Morin, Richard L.; Ganguly, Arundhuti; Strobel, Norbert 2006-12-15 We assess the dose and image quality of a state-of-the-art angiographic C-arm system (Axiom Artis dTA, Siemens Medical Solutions, Forchheim, Germany) for three-dimensional neuro-imaging at various dose levels and tube voltages, and describe an associated measurement method.
Unlike conventional CT, the beam length covers the entire phantom; hence, the concept of the computed tomography dose index (CTDI) is not the metric of choice, and one can revert to conventional dosimetry methods by directly measuring the dose at various points using a small ion chamber. This method allows us to define and compute a new dose metric that is appropriate for a direct comparison with the familiar CTDIw of conventional CT. A perception study involving the CATPHAN 600 indicates that one can expect to see at least the 9 mm inset with 0.5% nominal contrast at the recommended head-scan dose (60 mGy) when using tube voltages ranging from 70 kVp to 125 kVp. When analyzing the impact of tube voltage on image quality at a fixed dose, we found that lower tube voltages gave improved low contrast detectability for small-diameter objects. The relationships between kVp, image noise, dose, and contrast perception are discussed. 12. Flat panel detector-based cone beam CT for dynamic imaging: system evaluation NASA Astrophysics Data System (ADS) Ning, Ruola; Conover, David; Yu, Yong; Zhang, Yan; Cai, Weixing; Yang, Dong; Lu, Xianghua 2006-03-01 The purpose of this study is to characterize a newly built flat panel detector (FPD)-based cone beam CT (CBCT) prototype for dynamic imaging. A CBCT prototype has been designed and constructed by completely modifying a GE HiSpeed Advantage (HSA) CT gantry, incorporating a newly acquired large-size real-time FPD (Varian PaxScan 4030CB), a new x-ray generator and a dual focal spot angiography x-ray tube that allows full coverage of the detector. During data acquisition, the x-ray tube and the FPD can be rotated on the gantry over N×360 degrees thanks to integrated slip ring technology, at a rotation speed of one second per revolution. With a single scan time of up to 40 seconds, multiple sets of reconstructions can be performed for dynamic studies. The upgrade of this system has been completed.
The prototype was used for a series of preliminary phantom studies: different sizes of breast phantoms, a Humanoid chest phantom and scatter correction studies. The results of the phantom studies demonstrate that good image quality can be achieved with this newly built prototype. 13. Deriving Hounsfield units using grey levels in cone beam CT: a clinical application. PubMed Reeves, T E; Mah, P; McDavid, W D 2012-09-01 To present a clinical study demonstrating a method to derive Hounsfield units from grey levels in cone beam CT (CBCT). An acrylic intraoral reference object with aluminium, outer bone equivalent material (cortical bone), inner bone equivalent material (trabecular bone), polymethylmethacrylate and water equivalent material was used. Patients were asked if they would be willing to have an acrylic bite plate with the reference object placed in their mouth during a routine CBCT scan. There were 31 scans taken on the Asahi Alphard 3030 (Belmont Takara, Kyoto, Japan) and 30 scans taken on the Planmeca ProMax 3D (Planmeca, Helsinki, Finland) CBCT. Linear regression between the grey levels of the reference materials and their linear attenuation coefficients was performed for various photon energies. The energy with the highest regression coefficient was chosen as the effective energy. The attenuation coefficients for the five materials at the effective energy were scaled as Hounsfield units using the standard Hounsfield units equation and compared to those derived from the measured grey levels of the materials using the regression equation. In general, there was a satisfactory linear relation between the grey levels and the attenuation coefficients. This made it possible to calculate Hounsfield units from the measured grey levels. Uncertainty in determining effective energies resulted in unrealistic effective energies and significant variability of calculated CT numbers.
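The grey-level-to-Hounsfield-unit procedure just described — regress reference-material grey levels against tabulated attenuation coefficients, pick the candidate energy with the best fit, then apply the standard HU definition — can be sketched as below. The helper name and all array contents are illustrative, not the paper's data:

```python
import numpy as np

def hounsfield_from_grey(grey, mu_candidates, mu_water_candidates):
    """grey: reference-material grey levels, shape (n_materials,).
    mu_candidates: attenuation coefficients of those materials at each
    candidate photon energy, shape (n_energies, n_materials).
    mu_water_candidates: water attenuation at each candidate energy.
    Picks the energy whose linear fit of mu vs grey has the highest R^2,
    then converts grey levels to HU = 1000 * (mu - mu_water) / mu_water."""
    grey = np.asarray(grey, float)
    best_r2, best = -np.inf, None
    for mu, mu_w in zip(np.asarray(mu_candidates, float),
                        np.asarray(mu_water_candidates, float)):
        slope, intercept = np.polyfit(grey, mu, 1)
        pred = slope * grey + intercept
        ss_res = float(np.sum((mu - pred) ** 2))
        ss_tot = float(np.sum((mu - mu.mean()) ** 2))
        r2 = 1.0 - ss_res / ss_tot
        if r2 > best_r2:
            best_r2, best = r2, (slope, intercept, mu_w)
    slope, intercept, mu_w = best
    hu = 1000.0 * ((slope * grey + intercept) - mu_w) / mu_w
    return hu, best_r2
```

By construction, the water-equivalent material maps to 0 HU at the selected effective energy.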
Linear regression from grey levels directly to Hounsfield units at specified energies resulted in greater consistency. The clinical application of a method for deriving Hounsfield units from grey levels in CBCT was demonstrated. 14. CT metal artifact reduction method correcting for beam hardening and missing projections NASA Astrophysics Data System (ADS) Verburg, Joost M.; Seco, Joao 2012-05-01 We present and validate a computed tomography (CT) metal artifact reduction method that is effective for a wide spectrum of clinical implant materials. Projections through low-Z implants such as titanium were corrected using a novel physics correction algorithm that reduces beam hardening errors. In the case of high-Z implants (dental fillings, gold, platinum), projections through the implant were considered missing and regularized iterative reconstruction was performed. Both algorithms were combined if multiple implant materials were present. For comparison, a conventional projection interpolation method was implemented. In a blinded and randomized evaluation, ten radiation oncologists ranked the quality of patient scans on which the different methods were applied. For scans that included low-Z implants, the proposed method was ranked as the best method in 90% of the reviews. It was ranked superior to the original reconstruction (p = 0.0008), conventional projection interpolation (p < 0.0001) and regularized limited data reconstruction (p = 0.0002). All reviewers ranked the method first for scans with high-Z implants, and better as compared to the original reconstruction (p < 0.0001) and projection interpolation (p = 0.004). We conclude that effective reduction of CT metal artifacts can be achieved by combining algorithms tailored to specific types of implant materials. 15. Technical note: cone beam CT imaging for 3D image guided brachytherapy for gynecological HDR brachytherapy. 
PubMed Reniers, Brigitte; Verhaegen, Frank 2011-05-01 This paper focuses on a novel image guidance technique for gynecological brachytherapy treatment. The present standard technique is orthogonal x-ray imaging to reconstruct the 3D position of the applicator when the availability of CT or MR is limited. Our purpose is to introduce 3D planning in the brachytherapy suite using a cone beam CT (CBCT) scanner dedicated to brachytherapy. This would avoid moving the patient between imaging and treatment procedures which may cause applicator motion. This could be used to replace the x-ray images or to verify the treatment position immediately prior to dose delivery. The sources of CBCT imaging artifacts in the case of brachytherapy were identified and removed where possible. The image quality was further improved by modifying the x-ray tube voltage, modifying the compensator bowtie filter and optimizing technical parameters such as the detector gain or tube current. The image quality was adequate to reconstruct the applicators in the treatment planning system. The position of points A and the localization of the organs at risk (OAR) ICRU points is easily achieved. This allows identification of cases where the rectum had moved with respect to the ICRU point which would require asymmetrical source loading. A better visualization is a first step toward a better sparing of the OAR. Treatment planning for gynecological brachytherapy is aided by CBCT images. CBCT presents advantages over CT: acquisition in the treatment room and in the treatment position due to the larger clearance of the CBCT, thereby reducing problems associated with moving patients between rooms. 16. Reduction of radiation exposure by lead curtain shielding in dedicated extremity cone beam CT. PubMed Lee, C-H; Ryu, J H; Lee, Y-H; Yoon, K-H 2015-06-01 A dedicated extremity cone beam CT (CBCT) was introduced recently, and is rapidly becoming an attractive modality for extremity imaging.
This study aimed to evaluate the effectiveness of a curtain-shaped lead shielding in reducing the exposure of patients to scattered radiation in dedicated extremity CBCT. A dedicated extremity CBCT scanner was used. The lead shielding curtain was 42 × 60 cm with 0.5-mm lead equivalent. Scattered radiation dose from CBCT was measured using thermoluminescence dosimetry chips at 20 points, at different distances and directions from the CT gantry. Two sets of scattered radiation dose measurements were performed before and after installation of curtain-shaped lead shield, and the percentage reduction in dose in air was calculated. Mean radiation exposure dose at measured points was 34.46 ± 48.40 μGy without curtains and 9.67 ± 4.53 μGy with curtains, exhibiting 71.94% reduction (p = 0.000). The use of lead shielding curtains significantly reduced scattered radiation at 0.5, 1.0 and 1.5 m from the CT gantry, with percent reductions of 84.8%, 58.0% and 35.5%, respectively (p = 0.000, 0.000 and 0.002). The percent reduction in the diagonal (+45°, -45°) and vertical forward (0°) directions were 86.3%, 83.1% and 77.7%, respectively, and were statistically significant (p = 0.029, 0.020 and 0.041). Shielding with lead curtains suggests an easy and effective method for reducing patient exposure to radiation in extremity CBCT imaging. Lead shielding curtains are an effective technique to reduce scattered radiation dose in dedicated extremity CBCT, with higher dose reduction closer to the gantry opening. 17. Directional sinogram interpolation for motion weighted 4D cone-beam CT reconstruction NASA Astrophysics Data System (ADS) Zhang, Hua; Kruis, Matthijs; Sonke, Jan-Jakob 2017-03-01 The image quality of respiratory sorted four-dimensional (4D) cone-beam (CB) computed tomography (CT) is often limited by streak artifacts due to insufficient projections. A motion weighted reconstruction (MWR) method is proposed to decrease streak artifacts and improve image quality. 
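The 71.94% figure reported in the lead-curtain shielding study above is simply the relative reduction of the mean scattered dose, which can be reproduced directly from the two reported means:

```python
def percent_reduction(dose_before, dose_after):
    """Relative dose reduction in percent, as reported for the lead curtain
    (mean 34.46 uGy without vs 9.67 uGy with shielding)."""
    return 100.0 * (dose_before - dose_after) / dose_before
```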
Firstly, respiratory correlated CBCT projections were interpolated by directional sinogram interpolation (DSI) to generate additional CB projections for each phase and subsequently reconstructed. Secondly, local motion was estimated by deformable image registration of the interpolated 4D CBCT. Thirdly, a regular 3D FDK CBCT was reconstructed from the non-interpolated projections. Finally, weights were assigned to each voxel, based on the local motion, and then were used to combine the 3D FDK CBCT and interpolated 4D CBCT to generate the final 4D image. MWR method was compared with regular 4D CBCT scans as well as McKinnon and Bates (MKB) based reconstructions. Comparisons were made in terms of (1) comparing the steepness of an extracted profile from the boundary of the region-of-interest (ROI), (2) contrast-to-noise ratio (CNR) inside certain ROIs, and (3) the root-mean-square-error (RMSE) between the planning CT and CBCT inside a homogeneous moving region. Comparisons were made for both a phantom and four patient scans. In a 4D phantom, RMSE were reduced by 24.7% and 38.7% for MKB and MWR respectively, compared to conventional 4D CBCT. Meanwhile, interpolation induced blur was minimal in static regions for MWR based reconstructions. In regions with considerable respiratory motion, image blur using MWR is less than the MKB and 3D Feldkamp (FDK) methods. In the lung cancer patients, average CNRs of MKB, DSI and MWR improved by a factor 1.7, 2.8 and 3.5 respectively relative to 4D FDK. MWR effectively reduces RMSE in 4D cone-beam CT and improves the image quality in both the static and respiratory moving regions compared to 4D FDK and MKB methods. 18. 
Assessment of the effective doses from two dental cone beam CT devices PubMed Central Schilling, R; Geibel, M-A 2013-01-01 Objectives: This study compares the effective dose for different fields of view (FOVs), resolutions and X-ray parameters from two cone beam CT units: the KaVo 3D (three-dimensional) eXam and the KaVo Pan eXam Plus 3D (KaVo Dental, Biberach, Germany). Methods: Measurements were made using thermoluminescent dosemeter chips in a radiation analog dosimetry head and neck phantom. The calculations of effective doses are based on the ICRP 60 and ICRP 103 recommendations of the International Commission on Radiological Protection. Results: Effective doses from the 3D eXam ranged between 32.8 µSv and 169.8 µSv, and for the Pan eXam Plus effective doses ranged between 40.2 µSv and 183.7 µSv; these were measured using ICRP 103 weighting factors in each case. The increase in effective dose between ICRP 60 and ICRP 103 recommendations averaged 157% for all measurements. Conclusions: Effective doses can be reduced significantly with the choice of lower resolutions and mAs settings as well as smaller FOVs to avoid tissues sensitive to radiation being inside the direct beam. Larger FOVs do not necessarily lead to higher effective doses. PMID:23420855 20. Reconstruction algorithm for polychromatic CT imaging: application to beam hardening correction NASA Technical Reports Server (NTRS) Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Yen, S. Y.; Napel, S. 2000-01-01 This paper presents a new reconstruction algorithm for both single- and dual-energy computed tomography (CT) imaging. By incorporating the polychromatic characteristics of the X-ray beam into the reconstruction process, the algorithm is capable of eliminating beam hardening artifacts. The single energy version of the algorithm assumes that each voxel in the scan field can be expressed as a mixture of two known substances, for example, a mixture of trabecular bone and marrow, or a mixture of fat and flesh. These assumptions are easily satisfied in a quantitative computed tomography (QCT) setting. We have compared our algorithm to three commonly used single-energy correction techniques. Experimental results show that our algorithm is much more robust and accurate. We have also shown that QCT measurements obtained using our algorithm are five times more accurate than that from current QCT systems (using calibration). The dual-energy mode does not require any prior knowledge of the object in the scan field, and can be used to estimate the attenuation coefficient function of unknown materials.
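The beam-hardening effect that polychromatic reconstruction algorithms model arises because the log of a spectrum-weighted Beer-Lambert transmission is nonlinear in material thickness. A small demonstration (the two-bin spectrum and attenuation coefficients below are made up):

```python
import numpy as np

def poly_projection(thickness, fluence, mu):
    """-log of the spectrum-weighted transmitted intensity for each path
    length in `thickness`; `fluence` and `mu` hold the per-energy-bin
    weights and attenuation coefficients."""
    w = np.asarray(fluence, float)
    w = w / w.sum()
    intensity = np.sum(w * np.exp(-np.outer(thickness, mu)), axis=1)
    return -np.log(intensity)
```

For a single energy bin this reduces to the linear mu*t; with two bins the projection of double the thickness is less than twice the single-thickness projection, which is exactly the nonlinearity that produces cupping and streaks.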
We have tested the dual-energy setup to obtain an accurate estimate for the attenuation coefficient function of K2HPO4 solution. 1. Differences between panoramic and Cone Beam-CT in the surgical evaluation of lower third molars PubMed Central Rodriguez y Baena, Ruggero; Beltrami, Riccardo; Tagliabo, Angelo; Rizzo, Silvana 2017-01-01 Background The aim of this study was to evaluate the ability to identify the contiguity between the root of the mandibular third molar and the mandibular canal (MC) in panoramic radiographs compared with Cone Beam-CT. Material and Methods Panoramic radiographs of 326 third molars and CBCT radiographs of 86 cases indicated for surgery and considered at risk were evaluated. The following signs were assessed in panoramic radiographs as risk factors: radiolucent band, loss of MC border, change in MC direction, MC narrowing, root narrowing, root deviation, bifid apex, superimposition, and contact between the root third molar and the MC. Results Radiographic signs associated with absence of MC cortical bone are: radiolucent band, loss of MC border, change in MC direction, and superimposition. The number of risk factors was significantly increased with an increasing depth of inclusion. CBCT revealed a significant association between the absence of MC cortical bone and a lingual or interradicular position of the MC. Conclusions In cases in which panoramic radiographs do not exclude contiguity between the MC and tooth, careful assessment of the signs and risks on CBCT radiographs is indicated for proper identification of the relationships between anatomic structures. Key words: Panoramic radiography, Cone-Beam computed tomography, third molar, mandibular nerve. PMID:28210446 2. Development of a Beam Hardening Correction Method for a microCT Scanner Prototype SciTech Connect Kikushima, J.; Rodriguez-Villafuerte, M.; Martinez-Davalos, A.
2010-12-07 The radiographic projections acquired with a microCT were simulated and then corrected for beam hardening effects using the linearized signal to equivalent thickness (LSET) method. This procedure requires a calibration signal for each pixel obtained from a set of images with filters of increasing thickness. The projections are corrected by converting the signal to an equivalent thickness using interpolation over the calibration images. The method was validated using simulated projections of different phantoms. Two calibration sets were simulated using aluminum and water filters of thicknesses ranging from 0 to 5 mm and from 0 to 50 mm, respectively. A simulation of the phantoms' projections using a monoenergetic beam was also obtained to establish the relative intensity on the tomographic images when no cupping artifacts are present. Comparison between corrected and uncorrected tomographic images shows that the LSET method effectively corrects the cupping artifact. Streaking artifacts correction with the LSET method shows better results than with the traditional water correction method. Results are independent of the two calibration materials used. 3. A Method to Improve Electron Density Measurement of Cone-Beam CT Using Dual Energy Technique PubMed Central Men, Kuo; Dai, Jian-Rong; Li, Ming-Hui; Chen, Xin-Yuan; Zhang, Ke; Tian, Yuan; Huang, Peng; Xu, Ying-Jie 2015-01-01 Purpose. To develop a dual energy imaging method to improve the accuracy of electron density measurement with a cone-beam CT (CBCT) device. Materials and Methods. The imaging system is the XVI CBCT system on Elekta Synergy linac. Projection data were acquired with the high and low energy X-ray, respectively, to set up a basis material decomposition model. Virtual phantom simulation and phantoms experiments were carried out for quantitative evaluation of the method. Phantoms were also scanned twice with the high and low energy X-ray, respectively. 
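The LSET correction described in item 2 reduces, per pixel, to a table lookup: map each measured polychromatic signal back to an equivalent filter thickness via the calibration curve acquired with filters of increasing thickness. A one-pixel sketch (the exponential calibration model below is made up for the demo):

```python
import numpy as np

def linearize_projection(p_meas, thickness_cal, p_cal):
    """Map a measured polychromatic projection value back to an equivalent
    material thickness by interpolating the calibration curve (signal
    measured through filters of known thickness)."""
    p_cal = np.asarray(p_cal, float)
    thickness_cal = np.asarray(thickness_cal, float)
    order = np.argsort(p_cal)                  # np.interp needs ascending x
    return np.interp(p_meas, p_cal[order], thickness_cal[order])
```

Because the corrected projection is linear in thickness again, the cupping artifact disappears on reconstruction, which is what the simulated comparisons in the abstract show.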
The data were decomposed into projections of the two basis material coefficients according to the model set up earlier. The two sets of decomposed projections were used to reconstruct CBCT images of the basis material coefficients. Then, the images of electron densities were calculated with these CBCT images. Results. The difference between the calculated and theoretical values was within 2% and their correlation coefficient was about 1.0. The dual energy imaging method obtained more accurate electron density values and noticeably reduced the beam hardening artifacts. Conclusion. A novel dual energy CBCT imaging method to calculate the electron densities was developed. It can acquire more accurate values and provide a platform potentially for dose calculation. PMID:26346510 4. An experimental study on the influence of scatter and beam hardening in x-ray CT for dimensional metrology NASA Astrophysics Data System (ADS) Lifton, J. J.; Malcolm, A. A.; McBride, J. W. 2016-01-01 Scattered radiation and beam hardening introduce artefacts that degrade the quality of data in x-ray computed tomography (CT). It is unclear how these artefacts influence dimensional measurements evaluated from CT data. Understanding and quantifying the influence of these artefacts on dimensional measurements is required to evaluate the uncertainty of CT-based dimensional measurements. In this work the influence of scatter and beam hardening on dimensional measurements is investigated using the beam stop array scatter correction method and spectrum pre-filtration for the measurement of an object with internal and external cylindrical dimensional features.
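In its simplest linearized form, the basis-material model behind the dual-energy electron-density method above is a per-ray 2x2 solve; real systems calibrate a polynomial rather than this toy linear model, and all numbers in the example are illustrative:

```python
import numpy as np

def decompose_basis(p_low, p_high, M):
    """Recover basis-material line integrals a1, a2 from log projections at
    two spectra, assuming the linear model [p_low, p_high]^T = M @ [a1, a2]^T
    with M the 2x2 matrix of effective basis attenuation coefficients."""
    rhs = np.stack([np.atleast_1d(np.asarray(p_low, float)),
                    np.atleast_1d(np.asarray(p_high, float))])
    return np.linalg.solve(np.asarray(M, float), rhs)
```

Electron density then follows as a known weighted sum of the reconstructed basis-coefficient images.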
On the other hand, a gradient-based surface determination method is found to be robust to the influence of artefacts and leads to more accurate dimensional measurements than those evaluated using the ISO50 method. In addition to these observations the GUM method for evaluating standard measurement uncertainties is applied and the standard measurement uncertainty due to scatter and beam hardening is estimated. 5. High-fidelity artifact correction for cone-beam CT imaging of the brain NASA Astrophysics Data System (ADS) Sisniega, A.; Zbijewski, W.; Xu, J.; Dang, H.; Stayman, J. W.; Yorkston, J.; Aygun, N.; Koliatsos, V.; Siewerdsen, J. H. 2015-02-01 CT is the frontline imaging modality for diagnosis of acute traumatic brain injury (TBI), involving the detection of fresh blood in the brain (contrast of 30-50 HU, detail size down to 1 mm) in a non-contrast-enhanced exam. A dedicated point-of-care imaging system based on cone-beam CT (CBCT) could benefit early detection of TBI and improve direction to appropriate therapy. However, flat-panel detector (FPD) CBCT is challenged by artifacts that degrade contrast resolution and limit application in soft-tissue imaging. We present and evaluate a fairly comprehensive framework for artifact correction to enable soft-tissue brain imaging with FPD CBCT. The framework includes a fast Monte Carlo (MC)-based scatter estimation method complemented by corrections for detector lag, veiling glare, and beam hardening. The fast MC scatter estimation combines GPU acceleration, variance reduction, and simulation with a low number of photon histories and reduced number of projection angles (sparse MC) augmented by kernel de-noising to yield a runtime of ~4 min per scan. Scatter correction is combined with two-pass beam hardening correction. Detector lag correction is based on temporal deconvolution of the measured lag response function. 
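The ISO50 surface determination that scatter and beam hardening were found to perturb is just a threshold midway between the background and material grey-value peaks of the histogram. A crude sketch (splitting the histogram at its half-range to separate the two peaks is an assumption for the demo, not the standard's prescription):

```python
import numpy as np

def iso50_threshold(grey_values, bins=256):
    """ISO50 surface determination: threshold midway between the background
    and material grey-value peaks (simple two-peak histogram sketch)."""
    hist, edges = np.histogram(grey_values, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mid = bins // 2
    bg_peak = centers[np.argmax(hist[:mid])]
    mat_peak = centers[mid + np.argmax(hist[mid:])]
    return 0.5 * (bg_peak + mat_peak)
```

Because artifacts shift both peaks, the ISO50 threshold moves with them, which is consistent with the gradient-based method being the more robust choice in the abstract.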
The effects of detector veiling glare are reduced by deconvolution of the glare response function representing the long range tails of the detector point-spread function. The performance of the correction framework is quantified in experiments using a realistic head phantom on a testbench for FPD CBCT. Uncorrected reconstructions were non-diagnostic for soft-tissue imaging tasks in the brain. After processing with the artifact correction framework, image uniformity was substantially improved, and artifacts were reduced to a level that enabled visualization of ~3 mm simulated bleeds throughout the brain. Non-uniformity (cupping) was reduced by a factor of 5, and contrast of simulated bleeds was improved from ~7 to 49.7 HU, in good agreement 6. A One-Step Cone-Beam CT-Enabled Planning-to-Treatment Model for Palliative Radiotherapy-From Development to Implementation SciTech Connect Wong, Rebecca K.S.; Letourneau, Daniel; Varma, Anita; Bissonnette, Jean Pierre; Fitzpatrick, David; Grabarz, Daniel; Elder, Christine; Martin, Melanie; Bezjak, Andrea; Panzarella, Tony; Gospodarowicz, Mary; Jaffray, David A. 2012-11-01 Purpose: To develop a cone-beam computed tomography (CT)-enabled one-step simulation-to-treatment process for the treatment of bone metastases. Methods and Materials: A three-phase prospective study was conducted. Patients requiring palliative radiotherapy to the spine, mediastinum, or abdomen/pelvis suitable for treatment with simple beam geometry (≤2 beams) were accrued. Phase A established the accuracy of cone-beam CT images for the purpose of gross tumor target volume (GTV) definition. Phase B evaluated the feasibility of implementing the cone-beam CT-enabled planning process at the treatment unit. Phase C evaluated the online cone-beam CT-enabled process for the planning and treatment of patients requiring radiotherapy for bone metastases. Results: Eighty-four patients participated in this study.
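The lag and veiling-glare corrections in the artifact-correction abstract (item 5) amount to deconvolving a measured response function. A 1-D regularized Fourier-division sketch (the kernel, signal and regularization constant are illustrative, and the real correction operates on 2-D detector frames):

```python
import numpy as np

def deconvolve_glare(signal, glare_kernel, eps=1e-6):
    """Remove long-range detector glare by regularized (Wiener-style)
    Fourier deconvolution of the measured glare response function."""
    n = len(signal)
    H = np.fft.rfft(glare_kernel, n)
    S = np.fft.rfft(signal, n)
    return np.fft.irfft(S * np.conj(H) / (np.abs(H) ** 2 + eps), n)
```

The `eps` term keeps the division stable where the response function has little energy, at the cost of slightly incomplete correction there.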
Phase A (n = 9) established the adequacy of cone-beam CT images for target definition. Phase B (n = 45) established the quality of treatment plans to be adequate for clinical implementation for bone metastases. When the process was applied clinically in bone metastases (Phase C), the degree of overlap between planning computed tomography (PCT) and cone-beam CT for GTV and between PCT and cone-beam CT for treatment field was 82% ± 11% and 97% ± 4%, respectively. The oncologist's decision to accept the plan under a time-pressured environment remained of high quality, with the cone-beam CT-generated treatment plan delivering at least 90% of the prescribed dose to 100% ± 0% of the cone-beam CT planning target volume (PTV). With the assumption that the PCT PTV is the gold-standard target, the cone-beam CT-generated treatment plan delivered at least 90% and at least 95% of dose to 98% ± 2% and 97% ± 5% of the PCT PTV, respectively. The mean time for the online planning and treatment process was 32.7 ± 4.0 minutes. Patient satisfaction was high, with a trend for superior satisfaction with the cone-beam CT-enabled process. Conclusions: The cone-beam CT-enabled palliative treatment process is feasible and is ready for 8. TU-A-9A-09: Proton Beam X-Ray Fluorescence CT SciTech Connect Bazalova, M; Ahmad, M; Fahrig, R; Xing, L 2014-06-15 Purpose: To evaluate x-ray fluorescence computed tomography induced with proton beams (pXFCT) for imaging of gold contrast agent.
Methods: Proton-induced x-ray fluorescence was studied by means of Monte Carlo (MC) simulations using TOPAS, a MC code based on GEANT4. First, proton-induced K-shell and L-shell fluorescence was studied as a function of proton beam energy and 1) depth in water and 2) size of contrast object. Second, pXFCT images of a 2-cm diameter cylindrical phantom with four 5- mm diameter contrast vials and of a 20-cm diameter phantom with 1-cm diameter vials were simulated. Contrast vials were filled with water and water solutions with 1-5% gold per weight. Proton beam energies were varied from 70-250MeV. pXFCT sinograms were generated based on the net number of gold K-shell or L-shell x-rays determined by interpolations from the neighboring 0.5keV energy bins of spectra collected with an idealized 4π detector. pXFCT images were reconstructed with filtered-back projection, and no attenuation correction was applied. Results: Proton induced x-ray fluorescence spectra showed very low background compared to x-ray induced fluorescence. Proton induced L-shell fluorescence had a higher cross-section compared to K-shell fluorescence. Excitation of L-shell fluorescence was most efficient for low-energy protons, i.e. at the Bragg peak. K-shell fluorescence increased with increasing proton beam energy and object size. The 2% and 5% gold contrast vials were accurately reconstructed in K-shell pXFCT images of both the 2-cm and 20-cm diameter phantoms. Small phantom L-shell pXFCT image required attenuation correction and had a higher sensitivity for 70MeV protons compared to 250MeV protons. With attenuation correction, L-shell pXFCT might be a feasible option for imaging of small size (∼2cm) objects. Imaging doses for all simulations were 5-30cGy. Conclusion: Proton induced x-ray fluorescence CT promises to be an alternative quantitative imaging technique to 9. 
High-quality four-dimensional cone-beam CT by deforming prior images NASA Astrophysics Data System (ADS) Wang, Jing; Gu, Xuejun 2013-01-01 Due to a limited number of projections at each phase, severe view aliasing artifacts are present in four-dimensional cone beam computed tomography (4D-CBCT) when reconstruction is performed using conventional algorithms. In this work, we aim to obtain high-quality 4D-CBCT of lung cancer patients in radiation therapy by deforming the planning CT. The deformation vector fields (DVF) to deform the planning CT are estimated through matching the forward projection of the deformed prior image and measured on-treatment CBCT projection. The estimation of the DVF is formulated as an unconstrained optimization problem, where the objective function to be minimized is the sum of the squared difference between the forward projection of the deformed planning CT and the measured 4D-CBCT projection. A nonlinear conjugate gradient method is used to solve the DVF. As the number of the variables in the DVF is much greater than the number of measurements, the solution to such a highly ill-posed problem is very sensitive to the initial values used during the optimization process. To improve the estimation accuracy of the DVF, we proposed a new strategy to obtain better initial values for the optimization. In this strategy, 4D-CBCT is first reconstructed by total variation minimization. Demons deformable registration is performed to register the planning CT and the 4D-CBCT reconstructed by total variation minimization. The resulting DVF from demons registration is then used as the initial parameters in the optimization process. A 4D nonuniform rotational B-spline-based cardiac-torso (NCAT) phantom and a patient 4D-CBCT are used to evaluate the algorithm. Image quality of 4D-CBCT is substantially improved by using the proposed strategy in both NCAT phantom and patient studies. The proposed method has the potential to improve the temporal resolution of 4D-CBCT.
Improved 4D-CBCT can better characterize the motion of lung tumors and will be a valuable tool for image-guided adaptive radiation therapy. 10. A hybrid reconstruction algorithm for fast and accurate 4D cone-beam CT imaging SciTech Connect Yan, Hao; Folkerts, Michael; Jiang, Steve B. E-mail: [email protected]; Jia, Xun E-mail: [email protected]; Zhen, Xin; Li, Yongbao; Pan, Tinsu; Cervino, Laura 2014-07-15 Purpose: 4D cone beam CT (4D-CBCT) has been utilized in radiation therapy to provide 4D image guidance in lung and upper abdomen area. However, clinical application of 4D-CBCT is currently limited due to the long scan time and low image quality. The purpose of this paper is to develop a new 4D-CBCT reconstruction method that restores volumetric images based on the 1-min scan data acquired with a standard 3D-CBCT protocol. Methods: The model optimizes a deformation vector field that deforms a patient-specific planning CT (p-CT), so that the calculated 4D-CBCT projections match measurements. A forward-backward splitting (FBS) method is invented to solve the optimization problem. It splits the original problem into two well-studied subproblems, i.e., image reconstruction and deformable image registration. By iteratively solving the two subproblems, FBS gradually yields correct deformation information, while maintaining high image quality. The whole workflow is implemented on a graphic-processing-unit to improve efficiency. Comprehensive evaluations have been conducted on a moving phantom and three real patient cases regarding the accuracy and quality of the reconstructed images, as well as the algorithm robustness and efficiency. Results: The proposed algorithm reconstructs 4D-CBCT images from highly under-sampled projection data acquired with 1-min scans. 
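Forward-backward splitting, which the hybrid 4D-CBCT algorithm above uses to alternate between its reconstruction and registration subproblems, is easiest to see on a toy composite objective. Below is the classic proximal-gradient (ISTA) instance for min 0.5*||Ax-b||^2 + lam*||x||_1 — the same splitting pattern, not the authors' imaging code:

```python
import numpy as np

def fbs_lasso(A, b, lam, t, n_iter=500):
    """Forward-backward splitting on 0.5*||Ax-b||^2 + lam*||x||_1:
    a gradient (forward) step on the smooth term, then the proximal
    (backward) step on the l1 term, i.e. soft-thresholding."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - t * A.T @ (A @ x - b)                           # forward step
        x = np.sign(x) * np.maximum(np.abs(x) - t * lam, 0.0)   # backward step
    return x
```

Each half-step solves a well-studied subproblem, which is exactly why the 4D-CBCT paper can reuse standard reconstruction and registration machinery inside the loop.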
Regarding the anatomical structure location accuracy, an average difference of 0.204 mm and a maximum difference of 0.484 mm are found for the phantom case, and maximum differences of 0.3–0.5 mm for patients 1–3 are observed. As for the image quality, intensity errors below 5 and 20 HU compared to the planning CT are achieved for the phantom and the patient cases, respectively. Signal-to-noise ratio values are improved by 12.74 and 5.12 times compared to results from the FDK algorithm using the 1-min data and 4-min data, respectively. The computation time of the algorithm on an NVIDIA GTX590 card is 1–1.5 min per phase. 11. High-quality four-dimensional cone-beam CT by deforming prior images. PubMed Wang, Jing; Gu, Xuejun 2013-01-21 12. [Impact of planning CT slice thickness on the accuracy of automatic target registration using the on-board cone-beam CT]. PubMed Inoue, Hiroyuki; Tanooka, Masao; Doi, Hiroshi; Miura, Hideharu; Nakagawa, Hideo; Sakai, Toshiyuki; Oda, Masahiko; Yasumasa, Katsumi; Sakamoto, Kiyoshi; Kamikonya, Norihiko; Hirota, Shozo 2011-01-01 We have evaluated the relationship between planning CT slice thickness and the accuracy of automatic target registration using cone-beam CT (CBCT). Planning CT images were acquired with reconstructed slice thicknesses of 1, 2, 3, 5, and 10 mm for three different phantoms: a Penta-Guide phantom, an acrylic ball phantom, and a pelvic phantom. After correctly placing the phantom at the isocenter using an in-room laser, we purposely displaced it by moving the treatment couch and then obtained CBCT images. Registration between the planning CT and the CBCT was performed using automatic target registration software, and the registration errors were recorded for each planning CT data set with a different slice thickness.
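The error bookkeeping in a slice-thickness study of this kind reduces to per-axis means and standard deviations of the registration residuals; a minimal sketch with invented numbers (not the study's data):

```python
import numpy as np

# Hypothetical bookkeeping: rows are repeated set-ups, columns are the
# lateral/longitudinal/vertical axes; values are registration errors in mm
# (illustrative numbers only).
errors = np.array([
    [0.4, 0.6, 0.1],
    [0.3, 0.5, 0.2],
    [0.5, 0.4, 0.3],
])
mean = errors.mean(axis=0)
sd = errors.std(axis=0, ddof=1)
print(mean, sd)  # per-axis mean and SD, all well below 1 mm here
```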
The respective averages and standard deviations of errors for the 10 mm slice thickness CT in the lateral, longitudinal, and vertical directions (n=15 data sets) were: 0.7 ± 0.2 mm, 0.8 ± 0.2 mm, and 0.2 ± 0.2 mm for the Penta-Guide phantom; 0.5 ± 0.4 mm, 0.6 ± 0.3 mm, and 0.4 ± 0.3 mm for the acrylic ball phantom; and 0.6 ± 0.2 mm, 0.9 ± 0.2 mm, and 0.2 ± 0.2 mm for the pelvic phantom. We found that the mean registration errors were always less than 1 mm regardless of the slice thickness tested. The results suggest that there is no obvious correlation between the planning CT slice thickness and the registration errors. 13. Data consistency-driven scatter kernel optimization for x-ray cone-beam CT NASA Astrophysics Data System (ADS) Kim, Changhwan; Park, Miran; Sung, Younghun; Lee, Jaehak; Choi, Jiyoung; Cho, Seungryong 2015-08-01 Accurate and efficient scatter correction is essential for acquisition of high-quality x-ray cone-beam CT (CBCT) images for various applications. This study was conducted to demonstrate the feasibility of using the data consistency condition (DCC) as a criterion for scatter kernel optimization in scatter deconvolution methods in CBCT. As data consistency in the mid-plane of a CBCT scan is primarily challenged by scatter, we utilized data consistency to confirm the degree of scatter correction and to steer the update in iterative kernel optimization. By means of the parallel-beam DCC via fan-parallel rebinning, we iteratively optimized the scatter kernel parameters, using a particle swarm optimization algorithm for its computational efficiency and excellent convergence. The proposed method was validated by a simulation study using the XCAT numerical phantom and also by experimental studies using the ACS head phantom and the pelvic part of the Rando phantom. The results showed that the proposed method can effectively improve the accuracy of deconvolution-based scatter correction.
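Particle swarm optimization of a two-parameter scatter kernel can be sketched as follows. The Gaussian kernel, toy "scatter" signal, and least-squares fitting objective are illustrative assumptions; the paper's actual objective is a data-consistency measure:

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.arange(-32, 33, dtype=float)

def gaussian_kernel(amp, sigma):
    """Normalized Gaussian scaled so its integral equals amp."""
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return amp * k / k.sum()

primary = np.where(np.abs(x) < 12, 1.0, 0.0)           # toy primary fluence
target = np.convolve(primary, gaussian_kernel(0.3, 4.0), mode="same")

def objective(p):
    amp, sigma = p
    est = np.convolve(primary, gaussian_kernel(amp, sigma), mode="same")
    return np.sum((est - target) ** 2)

# Minimal particle swarm: inertia + cognitive + social velocity terms.
n_p, iters = 20, 60
lo, hi = np.array([0.0, 0.5]), np.array([1.0, 10.0])   # (amp, sigma) bounds
pos = lo + (hi - lo) * rng.random((n_p, 2))
vel = np.zeros((n_p, 2))
pbest, pbest_f = pos.copy(), np.array([objective(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()
for _ in range(iters):
    r1, r2 = rng.random((n_p, 2)), rng.random((n_p, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([objective(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[np.argmin(pbest_f)].copy()

print(gbest)  # near the true kernel parameters (0.3, 4.0)
```

Because the search space is low-dimensional and the objective is smooth, the swarm converges quickly — the property the authors cite for choosing PSO.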
Quantitative assessments of image quality parameters revealed that the optimally selected scatter kernel restores the contrast to up to 99.5%, 94.4%, and 84.4% of the scatter-free values, and the structural similarity (SSIM) to up to 96.7%, 90.5%, and 87.8%, in the XCAT study, the ACS head phantom study, and the pelvis phantom study, respectively. The proposed method can achieve accurate and efficient scatter correction from a single cone-beam scan without the need for any auxiliary hardware or additional experimentation. 14. Comparison measurements of DQE for two flat panel detectors: fluoroscopic detector vs. cone beam CT detector NASA Astrophysics Data System (ADS) Betancourt Benítez, Ricardo; Ning, Ruola; Conover, David 2006-03-01 The physical performance of two flat panel detectors (FPDs) has been evaluated using a standard x-ray beam quality set by the IEC, namely RQA5. The FPDs evaluated in this study are based on an amorphous silicon photodiode array that is coupled to a thallium-doped cesium iodide scintillator and to a thin film transistor (TFT) array. One detector is the PaxScan 2520, which is designed for fluoroscopic imaging and has a small dynamic range and a large image lag. The other detector is the PaxScan 4030CB, which is designed for cone beam CT and has a large dynamic range (>16-bit), a reduced image lag and many imaging modes. Varian Medical Systems manufactured both detectors. The linearity of the FPDs was investigated by using an ionization chamber and aluminum filtration in order to obtain the beam quality. Since the FPDs are used in fluoroscopic mode, the image lag of each FPD was measured in order to investigate its effect on this study, especially its effect on DQE. The spatial resolution of the FPDs was determined by obtaining the pre-sampling modulation transfer function for each detector. A sharp edge was used in accordance with IEC 62220-1.
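The pre-sampling MTF measurement from a sharp edge follows a standard chain — sample the edge-spread function, differentiate to get the line-spread function, then Fourier transform — which can be sketched on a synthetic Gaussian-blurred edge (illustrative, not the detector data):

```python
import numpy as np

# Synthetic edge-spread function: a step edge blurred by a Gaussian of
# known width sigma, finely sampled across the detector.
dx = 10.0 / 511                       # sample spacing (e.g. mm)
x = np.linspace(-5.0, 5.0, 512)
sigma = 0.5
lsf_true = np.exp(-0.5 * (x / sigma) ** 2)
esf = np.cumsum(lsf_true)
esf /= esf[-1]                        # normalized edge profile, 0 -> 1

# ESF -> LSF by differentiation, LSF -> MTF by Fourier transform.
lsf = np.gradient(esf, x)
mtf = np.abs(np.fft.fft(lsf))
mtf /= mtf[0]                         # normalize to unity at zero frequency
freqs = np.fft.fftfreq(512, d=dx)     # cycles per mm

# For a Gaussian LSF the MTF should follow exp(-2*pi^2*sigma^2*f^2).
analytic = np.exp(-2 * np.pi**2 * sigma**2 * freqs[10] ** 2)
print(mtf[10], analytic)  # numerically close
```

Real edge measurements add steps this sketch omits: a slightly slanted edge to beat the pixel sampling, and noise suppression before differentiation.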
Next, the Normalized Noise Power Spectrum (NNPS) was calculated for various exposure levels at the RQA5 radiation quality. Finally, the DQE of each FPD was obtained with a modified version of the international standard set by IEC 62220-1. The results show that the physical performance in DQE and MTF of the PaxScan 4030CB is superior to that of the PaxScan 2520. 15. Influence of electron density spatial distribution and X-ray beam quality during CT simulation on dose calculation accuracy. PubMed Nobah, Ahmad; Moftah, Belal; Tomic, Nada; Devic, Slobodan 2011-04-06 The impact of the various kVp settings used during computed tomography (CT) simulation, which provides data for heterogeneity-corrected dose distribution calculations in patients undergoing external beam radiotherapy with either high-energy photon or electron beams, has been investigated. The change of the Hounsfield unit (HU) values due to the influence of kVp settings and the geometrical distribution of various tissue substitute materials has also been studied. The impact of various kVp settings and electron density (ED) distribution on the accuracy of dose calculation in high-energy photon beams was found to be well within 2%. In the case of dose distributions obtained with a commercially available Monte Carlo dose calculation algorithm for electron beams, differences of more than 10% were observed for different geometrical setups and kVp settings. Dose differences for the electron beams are relatively small at shallow depths but increase with depth around lower isodose values. 16. SU-E-J-72: Dosimetric Study of Cone-Beam CT-Based Radiation Treatment Planning Using a Patient-Specific Stepwise CT-Density Table SciTech Connect Chen, S; Le, Q; Mutaf, Y; Yi, B; D’Souza, W 2015-06-15 Purpose: To assess dose calculation accuracy of cone-beam CT (CBCT) based treatment plans using a patient-specific stepwise CT-density conversion table in comparison to conventional CT-based treatment plans.
Methods: Unlike CT-based treatment planning, which uses a fixed CT-density table, this study used a patient-specific CT-density table to minimize the errors in reconstructed mass densities due to the effects of CBCT Hounsfield unit (HU) uncertainties. The patient-specific CT-density table was a stepwise function which maps HUs to only 6 classes of materials with different mass densities: air (0.00121 g/cm3), lung (0.26 g/cm3), adipose (0.95 g/cm3), tissue (1.05 g/cm3), cartilage/bone (1.6 g/cm3), and other (3 g/cm3). HU thresholds to define the different materials were adjusted for each CBCT via best match with the known tissue types in these images. Dose distributions were compared between CT-based plans and CBCT-based plans (IMRT/VMAT) for four types of treatment sites: head and neck (HN), lung, pancreas, and pelvis. For dosimetric comparison, the PTV mean dose in the two plans was compared. A gamma analysis was also performed to directly compare dosimetry in the two plans. Results: Compared to CT-based plans, the differences in PTV mean dose were 0.1% for pelvis, 1.1% for pancreas, 1.8% for lung, and −2.5% for HN in CBCT-based plans. The gamma passing rate was 99.8% for pelvis, 99.6% for pancreas, and 99.3% for lung with 3%/3mm criteria, and 80.5% for head and neck with 5%/3mm criteria. Different dosimetric accuracy levels were observed: 1% for pelvis, 3% for lung and pancreas, and 5% for head and neck. Conclusion: By converting CBCT data to 6 classes of materials for dose calculation, 3% dose calculation accuracy can be achieved for the anatomical sites studied here, except HN, which had 5% accuracy. CBCT-based treatment planning using a patient-specific stepwise CT-density table can facilitate the evaluation of dosimetry changes resulting from variation in patient anatomy. 17. Comparison of micro-CT and cone beam CT on the feasibility of assessing trabecular structures in mandibular condyle.
PubMed Liang, Xin; Zhang, Zuyan; Gu, Jianping; Wang, Zhihui; Vandenberghe, Bart; Jacobs, Reinhilde; Yang, Jie; Ma, Guowu; Ling, Haibin; Ma, Xuchen 2017-07-01 To evaluate the accuracy of CBCT in assessing trabecular structures. Two human mandibles were scanned by micro-CT (Skyscan 1173 high-energy spiral scan micro-CT; Skyscan NV, Kontich, Belgium) and CBCT (3D Accuitomo 170; Morita, Japan). The CBCT images were reconstructed with 0.5 and 1 mm thicknesses. The condylar images were selected for registration. A parallel algorithm for histogram computation was introduced to perform the registration. A mutual information (MI) value was used to evaluate the match between the images obtained from micro-CT and CBCT. In comparison with the micro-CT images for the two samples, the CBCT image with 0.5 mm thickness has MI values of 0.873 and 0.903, while that with 1.0 mm thickness has MI values of 0.741 and 0.752. The CBCT images with 0.5 mm thickness were better matched with the micro-CT images. CBCT shows comparable accuracy with high-resolution micro-CT in assessing trabecular structures. CBCT can be a feasible tool to evaluate osseous changes of jaw bones. 18. Low kV settings CT angiography (CTA) with low dose contrast medium volume protocol in the assessment of thoracic and abdominal aorta disease: a feasibility study PubMed Central Talei Franzesi, C; Fior, D; Bonaffini, P A; Minutolo, O; Sironi, S 2015-01-01 Objective: To assess the diagnostic quality of low-dose (100 kV) CT angiography (CTA), using an ultra-low contrast medium volume (30 ml), for thoracic and abdominal aorta evaluation. Methods: 67 patients with thoracic or abdominal vascular disease underwent a multidetector CT study using a 256-slice scanner, with a low-dose radiation protocol (automated tube current modulation, 100 kV) and a low contrast medium volume (30 ml; 4 ml/s).
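The mutual-information match score used in the micro-CT/CBCT comparison above is computed from a joint intensity histogram; a minimal sketch with synthetic images (not the study's data):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """MI between two images, estimated from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)      # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)      # marginal of image b
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(2)
img = rng.random((64, 64))
noisy = img + 0.05 * rng.standard_normal((64, 64))       # well-matched pair
scrambled = rng.permutation(img.ravel()).reshape(64, 64)  # mismatched pair

# A well-matched pair shares far more information than a scrambled one.
print(mutual_information(img, noisy) > mutual_information(img, scrambled))  # True
```

In registration, this score is maximized over candidate transforms; the abstract's "parallel algorithm for histogram computation" speeds up exactly the `histogram2d` step.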
Density measurements were performed on the ascending, arch, and descending thoracic aorta, the anonymous branch, the abdominal aorta, and the renal and common iliac arteries. Radiation dose exposure [dose–length product (DLP)] was calculated. A control group of 35 patients with thoracic or abdominal vascular disease was evaluated with a standard CTA protocol (automated tube current modulation, 120 kV; contrast medium, 80 ml). Results: In all patients, we correctly visualized and evaluated the main branches of the thoracic and abdominal aorta. No difference in density measurements was observed between the low-tube-voltage protocol (mean attenuation value of the thoracic aorta, 304 HU; abdominal, 343 HU; renal arteries, 331 HU) and the control group (mean attenuation value of the thoracic aorta, 320 HU; abdominal, 339 HU; renal arteries, 303 HU). Radiation dose exposure was significantly lower in the low-tube-voltage thoracic and abdominal studies (DLP 490 and 324, respectively) than in the control group (thoracic DLP, 1032; abdominal DLP, 1078). Conclusion: The low-tube-voltage protocol may provide diagnostic performance comparable with that of the standard protocol, decreasing radiation dose exposure and contrast material volume. Advances in knowledge: A low-tube-voltage-setting protocol combined with an ultra-low contrast agent volume (30 ml), using new multidetector-row CT scanners, represents a feasible diagnostic tool to significantly reduce the radiation dose delivered to patients and to preserve renal function. 19. Hounsfield unit recovery in clinical cone beam CT images of the thorax acquired for image guided radiation therapy. PubMed Thing, Rune Slot; Bernchou, Uffe; Mainegra-Hing, Ernesto; Hansen, Olfred; Brink, Carsten 2016-08-07 A comprehensive artefact correction method for clinical cone beam CT (CBCT) images acquired for image guided radiation therapy (IGRT) on a commercial system is presented.
The method is demonstrated to reduce artefacts and recover CT-like Hounsfield units (HU) in reconstructed CBCT images of five lung cancer patients. Projection image based artefact corrections of image lag, detector scatter, body scatter and beam hardening are described and applied to CBCT images of five lung cancer patients. Image quality is evaluated through the visual appearance of the reconstructed images, HU correspondence with the planning CT images, and total volume HU error. Artefacts are reduced and CT-like HUs are recovered in the artefact corrected CBCT images. Visual inspection confirms that artefacts are indeed suppressed by the proposed method, and the HU root mean square difference between reconstructed CBCTs and the reference CT images is reduced by 31% when using the artefact corrections compared to the standard clinical CBCT reconstruction. A versatile artefact correction method for clinical CBCT images acquired for IGRT has been developed. HU values are recovered in the corrected CBCT images. The proposed method relies on post processing of clinical projection images, and does not require patient specific optimisation. It is thus a powerful tool for image quality improvement of large numbers of CBCT images. 20. Hounsfield unit recovery in clinical cone beam CT images of the thorax acquired for image guided radiation therapy NASA Astrophysics Data System (ADS) Slot Thing, Rune; Bernchou, Uffe; Mainegra-Hing, Ernesto; Hansen, Olfred; Brink, Carsten 2016-08-01 1. Cone-beam CT breast imaging with a flat panel detector: a simulation study NASA Astrophysics Data System (ADS) Chen, Lingyun; Shaw, Chris C.; Tu, Shu-Ju; Altunbas, Mustafa C.; Wang, Tianpeng; Lai, Chao-Jen; Liu, Xinming; Kappadath, S. C. 2005-04-01 This paper investigates the feasibility of using a flat-panel-based cone-beam computed tomography (CT) system for 3-D breast imaging with computer simulation and imaging experiments. In our simulation study, 3-D phantoms were analytically modeled to simulate a breast loosely compressed into a cylindrical shape with embedded soft tissue masses and calcifications. Attenuation coefficients were estimated to represent various types of breast tissue, soft tissue masses and calcifications to generate realistic image signal and contrast. Projection images were computed to incorporate x-ray attenuation, geometric magnification, x-ray detection, detector blurring, image pixelization and digitization.
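The projection-noise step in the breast-CBCT simulation described above — estimate the per-pixel fluence, then add a matching random fluctuation — can be sketched on a 1-D profile. Poisson noise stands in for the paper's fluence-derived Gaussian fluctuations, and the fluence and attenuation values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy line integrals through a uniform region of the object (illustrative).
mu_t = np.full(20000, 1.0)              # attenuation coefficient * path length
n0 = 1e5                                # incident photons per detector pixel
fluence = n0 * np.exp(-mu_t)            # expected detected photons

# Photon-counting statistics set the per-pixel noise level.
detected = rng.poisson(fluence).astype(float)
proj = -np.log(detected / n0)           # noisy log projection

# In the log domain the noise standard deviation is ~ 1/sqrt(fluence).
expected_std = 1.0 / np.sqrt(fluence[0])
print(proj.std(), expected_std)  # both around 0.005
```

This is why lower dose (lower fluence) translates directly into noisier projections and, after reconstruction, noisier slices.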
Based on a dose level on the central axis of the phantom (also the rotation axis) comparable to that of two-view mammography, together with the x-ray kVp/filtration, transmittance through the phantom, detective quantum efficiency (DQE), exposure level, and imaging geometry, the photon fluence was estimated and used to estimate the phantom noise level on a pixel-by-pixel basis. This estimated noise level was then used with a random number generator to produce and add a fluctuation component to the noiseless transmitted image signal. The noise-carrying projection images were then convolved with a Gaussian-like kernel, computed from a measured 1-D line spread function (LSF), to simulate detector blurring. An additional 2-D Gaussian-like kernel was designed to suppress the noise fluctuation that inherently originates from the projection images, so that the detectability of low-contrast masses in the reconstructed images can be improved. Image reconstruction was performed using the Feldkamp algorithm. All simulations were performed on a 24-PC cluster (2.4 GHz dual-Xeon CPUs) with MPI parallel programming. With a 600 mrad mean glandular dose (MGD) at the phantom center, soft tissue masses as small as 1 mm in diameter can be detected in a 10 cm diameter 50% glandular/50% adipose or fatter breast, and 2 mm or larger 2. Patient dose considerations for routine megavoltage cone-beam CT imaging SciTech Connect Morin, Olivier; Gillis, Amy; Descovich, Martina; Chen, Josephine; Aubin, Michele; Aubry, Jean-Francois; Chen Hong; Gottschalk, Alexander R.; Xia Ping; Pouliot, Jean 2007-05-15 Megavoltage cone-beam CT (MVCBCT), the recent addition to the family of in-room CT imaging systems for image-guided radiation therapy (IGRT), uses a conventional treatment unit equipped with a flat panel detector to obtain a three-dimensional representation of the patient in treatment position.
MVCBCT has been used for more than two years in our clinic for anatomy verification and to improve patient alignment prior to dose delivery. The objective of this research is to evaluate the image acquisition dose delivered to patients for MVCBCT and to develop a simple method to reduce the additional dose resulting from routine MVCBCT imaging. Conventional CT scans of phantoms and patients were imported into a commercial treatment planning system (TPS: Philips Pinnacle), and an arc treatment mimicking the MVCBCT acquisition process was generated to compute the delivered acquisition dose. To validate the dose obtained from the TPS, a simple water-equivalent cylindrical phantom with spaces for MOSFETs and an ion chamber was used to measure the MVCBCT image acquisition dose. Absolute dose distributions were obtained by simulating MVCBCTs of 9 and 5 monitor units (MU) on pelvis and head and neck patients, respectively. A compensation factor was introduced to generate composite plans of treatment and MVCBCT imaging dose. The article provides a simple equation to compute the compensation factor. The developed imaging compensation method was tested on routinely used clinical plans for prostate and head and neck patients. The quantitative differences between the dose calculated by the TPS and the measurement points on the cylindrical phantom were all within 3%. The dose percentage difference for the ion chamber placed in the center of the phantom was only 0.2%. For a typical MVCBCT, the dose delivered to patients forms a small anterior-posterior gradient ranging from 0.6 to 1.2 cGy per MVCBCT MU. MVCBCT acquisitions in the pelvis and head and neck areas deliver slightly more dose than 3.
Improved image quality of cone beam CT scans for radiotherapy image guidance using fiber-interspaced antiscatter grid SciTech Connect Stankovic, Uros; Herk, Marcel van; Ploeger, Lennert S.; Sonke, Jan-Jakob 2014-06-15 Purpose: A medical linear accelerator mounted cone beam CT (CBCT) scanner provides useful soft tissue contrast for purposes of image guidance in radiotherapy. The presence of extensive scattered radiation has a negative effect on the soft tissue visibility and uniformity of CBCT scans. Antiscatter grids (ASGs) are used in the field of diagnostic radiography to mitigate the scatter. They usually do increase the contrast of the scan, but simultaneously increase the noise. Therefore, and considering other scatter mitigation mechanisms present in a CBCT scanner, the applicability of ASGs with aluminum interspacing for a wide range of imaging conditions has been inconclusive in previous studies. In recent years, grids using fiber interspacers have appeared, providing higher scatter rejection while maintaining reasonable transmission of primary radiation. The purpose of this study was to evaluate the impact of one such grid on CBCT image quality. Methods: The grid used (Philips Medical Systems) had a grid ratio of 21:1, a frequency of 36 lp/cm, and a nominal selectivity of 11.9. It was mounted on the kV flat panel detector of an Elekta Synergy linear accelerator and tested in a phantom and a clinical study. Due to the flex of the linac and the presence of gridline artifacts, an angle-dependent gain correction algorithm was devised to mitigate the resulting artifacts. Scan reconstruction was performed using XVI4.5 augmented with in-house-developed image lag correction and Hounsfield unit calibration. To determine the necessary Hounsfield unit calibration and software scatter correction parameters, the Catphan 600 (The Phantom Laboratory) phantom was used.
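An angle-dependent gain correction of this kind amounts to dividing each projection by a flat-field (air) image acquired at the same gantry angle, so that gridline patterns which move with linac flex are cancelled. A toy sketch with synthetic gain patterns (not the authors' algorithm):

```python
import numpy as np

n_angles, n_pix = 8, 128
pix = np.arange(n_pix)

# Gridline gain pattern whose phase drifts slightly with gantry angle,
# mimicking linac flex (illustrative model).
def gain_map(angle_idx):
    flex = 0.3 * np.sin(2 * np.pi * angle_idx / n_angles)
    return 1.0 + 0.2 * np.sin(0.8 * pix + flex)

true_signal = np.exp(-((pix - 64.0) / 30.0) ** 2)   # toy primary profile

# Air (flat-field) scans and object scans both carry the per-angle gain.
flats = np.array([gain_map(a) for a in range(n_angles)])
raws = np.array([true_signal * gain_map(a) for a in range(n_angles)])

corrected = raws / flats     # per-angle flat-field division
print(np.allclose(corrected, true_signal))  # True
```

A single averaged flat field would leave residual gridlines here, because the pattern is not the same at every angle; the per-angle division removes them exactly in this idealized model.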
Image quality parameters were evaluated using the CIRS CBCT Image Quality and Electron Density Phantom in two different geometries: one modeling the head and neck region and the other the pelvic region. Phantoms were acquired with and without the grid and reconstructed with and without the software correction, which was adapted for the different 4. Radiation dose saving through the use of cone-beam CT in hearing-impaired patients. PubMed Faccioli, N; Barillari, M; Guariglia, S; Zivelonghi, E; Rizzotti, A; Cerini, R; Mucelli, R Pozzi 2009-12-01 Bionic ear implants provide a solution for deafness. Patients treated with these hearing devices are often children who require close follow-up with frequent functional and radiological examinations; in particular, multislice computed tomography (MSCT). Dental volumetric cone-beam CT (CBCT) has been reported as a reliable technique for acquiring images of the temporal bone while delivering low radiation doses and containing costs. The aim of this study was to assess, in terms of radiation dose and image quality, the possibility of using CBCT as an alternative to MSCT in patients with bionic ear implants. One hundred patients (mean age 26 years, range 7-43) with Vibrant SoundBridge implants on the round window underwent follow-up: 85 with CBCT and 15 with MSCT. We measured the average tissue-absorbed doses during both MSCT and CBCT scans. Each scan was focused on the temporal bone with the smallest field of view and a low-dose protocol. In order to estimate image quality, we obtained data about slice thickness, high- and low-contrast resolution, uniformity and noise by using an AAPM CT performance phantom. Although the CBCT images were qualitatively inferior to those of MSCT, they were sufficiently diagnostic to allow evaluation of the position of the implants. The effective dose of MSCT was almost three times higher than that of CBCT.
Owing to the low radiation dose and sufficient image quality, CBCT could be considered an adequate technique for postoperative imaging and follow-up of patients with bionic ear implants. 5. Volume-of-change cone-beam CT for image-guided surgery NASA Astrophysics Data System (ADS) Lee, Junghoon; Webster Stayman, J.; Otake, Yoshito; Schafer, Sebastian; Zbijewski, Wojciech; Khanna, A. Jay; Prince, Jerry L.; Siewerdsen, Jeffrey H. 2012-08-01 C-arm cone-beam CT (CBCT) can provide intraoperative 3D imaging capability for surgical guidance, but workflow and radiation dose are significant barriers to broad utilization. One main reason is that each 3D image acquisition requires a complete scan with a full radiation dose to present a completely new 3D image every time. In this paper, we propose to utilize a patient-specific CT or CBCT as prior knowledge to accurately reconstruct the aspects of the region that have been changed by the surgical procedure from only a sparse set of x-rays. The proposed method consists of a 3D-2D registration between the prior volume and a sparse set of intraoperative x-rays, creating digitally reconstructed radiographs (DRRs) from the registered prior volume, computing difference images by subtracting the DRRs from the intraoperative x-rays, a penalized likelihood reconstruction of the volume of change (VOC) from the difference images, and finally a fusion of the VOC reconstruction with the prior volume to visualize the entire surgical field. When the surgical changes are local and relatively small, the VOC reconstruction involves only a small volume size and a small number of projections, allowing less computation and lower radiation dose than is needed to reconstruct the entire surgical field. We applied this approach to sacroplasty phantom data obtained from a CBCT test bench and vertebroplasty data with a fresh cadaver acquired from a C-arm CBCT system with a flat-panel detector.
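The pipeline — register the prior, form DRRs, subtract them from the new x-rays, reconstruct only the change, and fuse — can be reduced to a 1-D toy. Here the changed region is assumed known and solved by plain least squares, standing in for the paper's penalized-likelihood step:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 64
prior = rng.random(n)                      # registered prior volume (1-D toy)
changed = prior.copy()
changed[20:25] += 1.0                      # small, local surgical change

A = rng.standard_normal((12, n))           # sparse set of "x-ray" views
diff = A @ changed - A @ prior             # measured x-rays minus DRRs

# Reconstruct only the volume of change: far fewer unknowns than the full
# volume, so a sparse set of views suffices. (The paper estimates the
# region and uses penalized likelihood; the known mask here is an assumption.)
voc = np.zeros(n)
sol, *_ = np.linalg.lstsq(A[:, 20:25], diff, rcond=None)
voc[20:25] = sol
fused = prior + voc                        # fuse change back into the prior
print(np.allclose(fused, changed))  # True
```

The key economy is visible in the shapes: 12 measurements recover 5 unknowns exactly, whereas reconstructing all 64 voxels from 12 views would be hopelessly underdetermined.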
The VOCs were reconstructed from a varying number of images (10-66 images) and compared to the CBCT ground truth using four different metrics (mean squared error, correlation coefficient, structural similarity index and perceptual difference model). The results show promising reconstruction quality, with structural similarity to the ground truth close to 1 even when only 15-20 images were used, allowing dose reduction by a factor of 10-20. 6. Volume-of-change cone-beam CT for image-guided surgery. PubMed Lee, Junghoon; Stayman, J Webster; Otake, Yoshito; Schafer, Sebastian; Zbijewski, Wojciech; Khanna, A Jay; Prince, Jerry L; Siewerdsen, Jeffrey H 2012-08-07 7. Soft-tissue imaging with C-arm cone-beam CT using statistical reconstruction NASA Astrophysics Data System (ADS) Wang, Adam S.; Webster Stayman, J.; Otake, Yoshito; Kleinszig, Gerhard; Vogt, Sebastian; Gallia, Gary L.; Khanna, A. Jay; Siewerdsen, Jeffrey H. 2014-02-01 The potential for statistical image reconstruction methods such as penalized-likelihood (PL) to improve C-arm cone-beam CT (CBCT) soft-tissue visualization for intraoperative imaging over conventional filtered backprojection (FBP) is assessed in this work by making a fair comparison in relation to soft-tissue performance. A prototype mobile C-arm was used to scan anthropomorphic head and abdomen phantoms as well as a cadaveric torso at doses substantially lower than typical values in diagnostic CT, and the effects of dose reduction via tube current reduction and sparse sampling were also compared. Matched spatial resolution between PL and FBP was determined by the edge spread function of low-contrast (~40-80 HU) spheres in the phantoms, which were representative of soft-tissue imaging tasks. PL using the non-quadratic Huber penalty was found to substantially reduce noise relative to FBP, especially at lower spatial resolution, where PL provides a contrast-to-noise ratio increase of up to 1.4-2.2× over FBP at 50% dose reduction across all objects.
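The edge-preserving behavior of the non-quadratic Huber penalty mentioned above can be shown in a minimal 1-D smoothing sketch — a denoising surrogate for the full penalized-likelihood reconstruction, with made-up contrast and noise levels:

```python
import numpy as np

def huber_grad(t, delta):
    """Derivative of the Huber penalty: quadratic core, linear tails."""
    return np.clip(t, -delta, delta)

rng = np.random.default_rng(6)
truth = np.zeros(100)
truth[40:60] = 1.0                                 # a low-contrast "lesion"
noisy = truth + 0.2 * rng.standard_normal(100)

x = noisy.copy()
beta, delta, step = 2.0, 0.1, 0.1
for _ in range(300):
    data_grad = x - noisy                          # quadratic data-fit term
    d = huber_grad(np.diff(x), delta)              # penalized neighbor diffs
    rough_grad = np.zeros_like(x)
    rough_grad[:-1] -= d
    rough_grad[1:] += d
    x -= step * (data_grad + beta * rough_grad)

# Noise shrinks in flat regions while the large lesion edges, which fall in
# the penalty's linear tail, are not smoothed away as a quadratic penalty would.
print(np.mean(np.abs(x - truth)) < np.mean(np.abs(noisy - truth)))  # True
```

The `delta` threshold is the tuning knob: differences below it are treated as noise and smoothed, differences above it as genuine edges and largely preserved.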
Comparison of sampling strategies indicates that soft-tissue imaging benefits from fully sampled acquisitions at dose above ~1.7 mGy and benefits from 50% sparsity at dose below ~1.0 mGy. Therefore, an appropriate sampling strategy along with the improved low-contrast visualization offered by statistical reconstruction demonstrates the potential for extending intraoperative C-arm CBCT to applications in soft-tissue interventions in neurosurgery as well as thoracic and abdominal surgeries by overcoming conventional tradeoffs in noise, spatial resolution, and dose. 8. Regularization design for high-quality cone-beam CT of intracranial hemorrhage using statistical reconstruction NASA Astrophysics Data System (ADS) Dang, H.; Stayman, J. W.; Xu, J.; Sisniega, A.; Zbijewski, W.; Wang, X.; Foos, D. H.; Aygun, N.; Koliatsos, V. E.; Siewerdsen, J. H. 2016-03-01 Intracranial hemorrhage (ICH) is associated with pathologies such as hemorrhagic stroke and traumatic brain injury. Multi-detector CT is the current front-line imaging modality for detecting ICH (fresh blood contrast 40-80 HU, down to 1 mm). Flat-panel detector (FPD) cone-beam CT (CBCT) offers a potential alternative with a smaller scanner footprint, greater portability, and lower cost potentially well suited to deployment at the point of care outside standard diagnostic radiology and emergency room settings. Previous studies have suggested reliable detection of ICH down to 3 mm in CBCT using high-fidelity artifact correction and penalized weighted least-squares (PWLS) image reconstruction with a post-artifact-correction noise model. However, ICH reconstructed by traditional image regularization exhibits nonuniform spatial resolution and noise due to interaction between the statistical weights and regularization, which potentially degrades the detectability of ICH. In this work, we propose three regularization methods designed to overcome these challenges.
The first two compute spatially varying certainty for uniform spatial resolution and noise, respectively. The third computes spatially varying regularization strength to achieve uniform "detectability," combining both spatial resolution and noise in a manner analogous to a delta-function detection task. Experiments were conducted on a CBCT test-bench, and image quality was evaluated for simulated ICH in different regions of an anthropomorphic head. The first two methods improved the uniformity in spatial resolution and noise compared to traditional regularization. The third exhibited the highest uniformity in detectability among all methods and best overall image quality. The proposed regularization provides a valuable means to achieve uniform image quality in CBCT of ICH and is being incorporated in a CBCT prototype for ICH imaging. 9. Adaptive planning using megavoltage fan-beam CT for radiation therapy with testicular shielding SciTech Connect Yadav, Poonam; Kozak, Kevin; Tolakanahalli, Ranjini; Ramasubramanian, V.; Paliwal, Bhudatt R.; Welsh, James S.; Rong, Yi 2012-07-01 This study highlights the use of adaptive planning to accommodate testicular shielding in helical tomotherapy for malignancies of the proximal thigh. Two cases of young men with large soft tissue sarcomas of the proximal thigh are presented. After multidisciplinary evaluation, preoperative radiation therapy was recommended. Both patients were referred for sperm banking and lead shields were used to minimize testicular dose during radiation therapy. To minimize imaging artifacts, kilovoltage CT (kVCT) treatment planning was conducted without shielding. Generous hypothetical contours were generated on each 'planning scan' to estimate the location of the lead shield and generate a directionally blocked helical tomotherapy plan. 
To ensure the accuracy of each plan, megavoltage fan-beam CT (MVCT) scans were obtained at the first treatment and adaptive planning was performed to account for lead shield placement. Two important regions of interest in these cases were femurs and femoral heads. During adaptive planning for the first patient, it was observed that the virtual lead shield contour on kVCT planning images was significantly larger than the actual lead shield used for treatment. However, for the second patient, it was noted that the size of the virtual lead shield contoured on the kVCT image was significantly smaller than the actual shield size. Thus, new adaptive plans based on MVCT images were generated and used for treatment. The planning target volume was underdosed up to 2% and had higher maximum doses without adaptive planning. In conclusion, the treatment of the upper thigh, particularly in young men, presents several clinical challenges, including preservation of gonadal function. In such circumstances, adaptive planning using MVCT can ensure accurate dose delivery even in the presence of high-density testicular shields. 10. Small field of view cone beam CT temporomandibular joint imaging dosimetry PubMed Central Lukat, T D; Wong, J C M; Lam, E W N 2013-01-01 Objectives: Cone beam CT (CBCT) is generally accepted as the imaging modality of choice for visualisation of the osseous structures of the temporomandibular joint (TMJ). The purpose of this study was to compare the radiation dose of a protocol for CBCT TMJ imaging using a large field of view Hitachi CB MercuRay™ unit (Hitachi Medical Systems, Tokyo, Japan) with an alternative approach that utilizes two CBCT acquisitions of the right and left TMJs using the Kodak 9000® 3D system (Carestream, Rochester, NY). Methods: 25 optically stimulated luminescence dosemeters were placed in various locations of an anthropomorphic RANDO® Man phantom (Alderson Research Laboratories, Stanford, CT). 
Dosimetric measurements were performed for each technique, and effective doses were calculated using the 2007 International Commission on Radiological Protection tissue weighting factor recommendations for all protocols. Results: The radiation effective dose for the CB MercuRay technique was 223.6 ± 1.1 μSv compared with 9.7 ± 0.1 μSv (child), 13.5 ± 0.9 μSv (adolescent/small adult) and 20.5 ± 1.3 μSv (adult) for the bilateral Kodak acquisitions. Conclusions: Acquisitions of individual right and left TMJ volumes using the Kodak 9000 3D CBCT imaging system resulted in a more than ten-fold reduction in the effective dose compared with the larger single field acquisition with the Hitachi CB MercuRay. This decrease is made even more significant when lower tube potential and tube current settings are used. PMID:24048693 11. Small field of view cone beam CT temporomandibular joint imaging dosimetry. PubMed Lukat, T D; Wong, J C M; Lam, E W N 2013-01-01 Cone beam CT (CBCT) is generally accepted as the imaging modality of choice for visualisation of the osseous structures of the temporomandibular joint (TMJ). The purpose of this study was to compare the radiation dose of a protocol for CBCT TMJ imaging using a large field of view Hitachi CB MercuRay™ unit (Hitachi Medical Systems, Tokyo, Japan) with an alternative approach that utilizes two CBCT acquisitions of the right and left TMJs using the Kodak 9000(®) 3D system (Carestream, Rochester, NY). 25 optically stimulated luminescence dosemeters were placed in various locations of an anthropomorphic RANDO(®) Man phantom (Alderson Research Laboratories, Stanford, CT). Dosimetric measurements were performed for each technique, and effective doses were calculated using the 2007 International Commission on Radiological Protection tissue weighting factor recommendations for all protocols. 
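The effective-dose calculation in the methods above is a weighted sum of organ equivalent doses, E = Σ w_T · H_T. A minimal sketch follows, using a small subset of the ICRP 103 (2007) tissue weighting factors and hypothetical organ doses; the study's dosemeter-to-organ mapping is more involved.

```python
# Effective dose as the ICRP weighted sum E = sum_T w_T * H_T over tissues.
# Weights below are a subset of the ICRP 103 (2007) recommendations.
ICRP103_WEIGHTS = {
    "red bone marrow": 0.12,
    "thyroid": 0.04,
    "salivary glands": 0.01,
    "brain": 0.01,
    "skin": 0.01,
}

def effective_dose(organ_doses_usv):
    """Effective dose (uSv) from per-tissue equivalent doses (uSv)."""
    return sum(ICRP103_WEIGHTS[t] * h for t, h in organ_doses_usv.items())

# Hypothetical equivalent doses, loosely evoking a small-FOV dental CBCT scan:
doses = {"red bone marrow": 40.0, "thyroid": 25.0,
         "salivary glands": 300.0, "brain": 60.0, "skin": 15.0}
print(round(effective_dose(doses), 2))  # -> 9.55
```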
The radiation effective dose for the CB MercuRay technique was 223.6 ± 1.1 μSv compared with 9.7 ± 0.1 μSv (child), 13.5 ± 0.9 μSv (adolescent/small adult) and 20.5 ± 1.3 μSv (adult) for the bilateral Kodak acquisitions. Acquisitions of individual right and left TMJ volumes using the Kodak 9000 3D CBCT imaging system resulted in a more than ten-fold reduction in the effective dose compared with the larger single field acquisition with the Hitachi CB MercuRay. This decrease is made even more significant when lower tube potential and tube current settings are used. 12. Volume-of-Change Cone-Beam CT for Image-Guided Surgery PubMed Central Lee, Junghoon; Stayman, J. Webster; Otake, Yoshito; Schafer, Sebastian; Zbijewski, Wojciech; Khanna, A. Jay; Prince, Jerry L.; Siewerdsen, Jeffrey H. 2012-01-01 C-arm cone-beam CT (CBCT) can provide intraoperative 3D imaging capability for surgical guidance, but workflow and radiation dose are significant barriers to broad utilization. One main reason is that each 3D image acquisition requires a complete scan with a full radiation dose to present a completely new 3D image every time. In this paper, we propose to utilize patient-specific CT or CBCT as prior knowledge to accurately reconstruct the aspects of the region that have been changed by the surgical procedure from only a sparse set of x-rays. The proposed methods consist of a 3D-2D registration between the prior volume and a sparse set of intraoperative x-rays, creating digitally reconstructed radiographs (DRRs) from the registered prior volume, computing difference images by subtracting DRRs from the intraoperative x-rays, a penalized likelihood reconstruction of the volume of change (VOC) from the difference images, and finally a fusion of VOC reconstruction with the prior volume to visualize the entire surgical field.
When the surgical changes are local and relatively small, the VOC reconstruction involves only a small volume size and a small number of projections, allowing less computation and lower radiation dose than is needed to reconstruct the entire surgical field. We applied this approach to sacroplasty phantom data obtained from a CBCT test bench and vertebroplasty data with a fresh cadaver acquired from a C-arm CBCT system with a flat-panel detector (FPD). The VOCs were reconstructed from a varying number of images (10–66 images) and compared to the CBCT ground truth using four different metrics (mean squared error, correlation coefficient, structural similarity index, and perceptual difference model). The results show promising reconstruction quality with structural similarity to the ground truth close to 1 even when only 15–20 images were used, allowing dose reduction by a factor of 10–20. PMID:22801026 13. Reducing metal artifacts in cone-beam CT images by preprocessing projection data SciTech Connect Zhang Yongbin; Zhang Lifei; Zhu, X. Ronald; Lee, Andrew K.; Chambers, Mark; Dong Lei 2007-03-01 Purpose: Computed tomography (CT) streak artifacts caused by metallic implants remain a challenge for the automatic processing of image data. The impact of metal artifacts in the soft-tissue region is magnified in cone-beam CT (CBCT), because the soft-tissue contrast is usually lower in CBCT images. The goal of this study was to develop an effective offline processing technique to minimize the effect. Methods and Materials: The geometry calibration cue of the CBCT system was used to track the position of the metal object in projection views. The three-dimensional (3D) representation of the object can be established from only two user-selected viewing angles. The position of the shadowed region in other views can be tracked by projecting the 3D coordinates of the object.
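The view-to-view tracking step just described (projecting known 3D coordinates into each projection view) can be sketched as a standard perspective projection with a 3×4 matrix in homogeneous coordinates. The matrix values below are a toy geometry, not a calibrated CBCT system.

```python
# Perspective projection of a 3D point into a 2D view using a 3x4 matrix P
# in homogeneous coordinates: (u, v) = (h0/h2, h1/h2) with h = P @ [x y z 1].
def project(P, X):
    x, y, z = X
    h = [P[r][0] * x + P[r][1] * y + P[r][2] * z + P[r][3] for r in range(3)]
    return (h[0] / h[2], h[1] / h[2])

# Toy geometry (hypothetical): focal length 1000, principal point (256, 256).
P = [[1000,    0, 256, 0],
     [   0, 1000, 256, 0],
     [   0,    0,   1, 0]]
print(project(P, (10.0, -20.0, 500.0)))  # -> (276.0, 216.0)
```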
Automatic image segmentation was used, followed by a Laplacian diffusion method to replace the pixels inside the metal object with the boundary pixels. The modified projection data were then used to reconstruct a new CBCT image. The procedure was tested in phantoms, prostate cancer patients with implanted gold markers and metal prostheses, and a head-and-neck patient with dental amalgam in the teeth. Results: Both phantom and patient studies demonstrated that the procedure was able to minimize the metal artifacts. Soft-tissue visibility was improved both near and away from the metal object. The processing time was 1-2 s per projection. Conclusion: We have implemented an effective metal artifact-suppressing algorithm to improve the quality of CBCT images. 14. Flat panel detector-based cone-beam volume CT angiography imaging: system evaluation. PubMed Ning, R; Chen, B; Yu, R; Conover, D; Tang, X; Ning, Y 2000-09-01 Preliminary evaluation of recently developed large-area flat panel detectors (FPDs) indicates that FPDs have some potential advantages: compactness, absence of geometric distortion and veiling glare with the benefits of high resolution, high detective quantum efficiency (DQE), high frame rate and high dynamic range, small image lag (< 1%), and excellent linearity (approximately 1%). The advantages of the new FPD make it a promising candidate for cone-beam volume computed tomography (CT) angiography (CBVCTA) imaging. The purpose of this study is to characterize a prototype FPD-based imaging system for CBVCTA applications. A prototype FPD-based CBVCTA imaging system has been designed and constructed around a modified GE 8800 CT scanner. This system is evaluated for a CBVCTA imaging task in the head and neck using four phantoms and a frozen rat. The system is first characterized in terms of linearity and dynamic range of the detector.
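The metal-pixel replacement step in the metal-artifact abstract above (Laplacian diffusion filling the segmented metal region from its boundary) can be sketched as iterative Jacobi relaxation over a mask. This is an illustrative stand-in, not the authors' implementation.

```python
# Iterative Laplacian (Jacobi) relaxation: masked "metal" pixels are repeatedly
# replaced by the average of their 4 neighbours while unmasked pixels stay
# fixed, so interior values diffuse in from the region boundary.
def diffuse_fill(img, mask, iters=100):
    rows, cols = len(img), len(img[0])
    out = [row[:] for row in img]
    for _ in range(iters):
        nxt = [row[:] for row in out]
        for r in range(1, rows - 1):
            for c in range(1, cols - 1):
                if mask[r][c]:
                    nxt[r][c] = 0.25 * (out[r-1][c] + out[r+1][c]
                                        + out[r][c-1] + out[r][c+1])
        out = nxt
    return out

# A single bright "metal" pixel in a uniform 10.0 field relaxes to 10.0.
img = [[10.0] * 5 for _ in range(5)]
img[2][2] = 500.0
mask = [[r == 2 and c == 2 for c in range(5)] for r in range(5)]
filled = diffuse_fill(img, mask)
print(filled[2][2])  # -> 10.0
```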
Then, the optimal selection of kVps for CBVCTA is determined and the effect of image lag and scatter on the image quality of the CBVCTA system is evaluated. Next, low-contrast resolution and high-contrast spatial resolution are measured. Finally, the example reconstruction images of a frozen rat are presented. The results indicate that the FPD-based CBVCT can achieve 2.75-lp/mm spatial resolution at 0% modulation transfer function (MTF) and provide more than enough low-contrast resolution for intravenous CBVCTA imaging in the head and neck with clinically acceptable entrance exposure level. The results also suggest that to use an FPD for large cone-angle applications, such as body angiography, further investigations are required. 15. Clinical implementation of intraoperative cone-beam CT in head and neck surgery NASA Astrophysics Data System (ADS) Daly, M. J.; Chan, H.; Nithiananthan, S.; Qiu, J.; Barker, E.; Bachar, G.; Dixon, B. J.; Irish, J. C.; Siewerdsen, J. H. 2011-03-01 A prototype mobile C-arm for cone-beam CT (CBCT) has been translated to a prospective clinical trial in head and neck surgery. The flat-panel CBCT C-arm was developed in collaboration with Siemens Healthcare, and demonstrates both sub-mm spatial resolution and soft-tissue visibility at low radiation dose (e.g., <1/5th of a typical diagnostic head CT). CBCT images are available ~15 seconds after scan completion (~1 min acquisition) and reviewed at bedside using custom 3D visualization software based on the open-source Image-Guided Surgery Toolkit (IGSTK). The CBCT C-arm has been successfully deployed in 15 head and neck cases and streamlined into the surgical environment using human factors engineering methods and expert feedback from surgeons, nurses, and anesthetists. Intraoperative imaging is implemented in a manner that maintains operating field sterility, reduces image artifacts (e.g., carbon fiber OR table) and minimizes radiation exposure. 
Image reviews conducted with surgical staff indicate bony detail and soft-tissue visualization sufficient for intraoperative guidance, with additional artifact management (e.g., metal, scatter) promising further improvements. Clinical trial deployment suggests a role for intraoperative CBCT in guiding complex head and neck surgical tasks, including planning mandible and maxilla resection margins, guiding subcranial and endonasal approaches to skull base tumours, and verifying maxillofacial reconstruction alignment. Ongoing translational research into complementary image-guidance subsystems includes novel methods for real-time tool tracking, fusion of endoscopic video and CBCT, and deformable registration of preoperative volumes and planning contours with intraoperative CBCT. 16. Adaptive planning using megavoltage fan-beam CT for radiation therapy with testicular shielding. PubMed Yadav, Poonam; Kozak, Kevin; Tolakanahalli, Ranjini; Ramasubramanian, V; Paliwal, Bhudatt R; Welsh, James S; Rong, Yi 2012-01-01 This study highlights the use of adaptive planning to accommodate testicular shielding in helical tomotherapy for malignancies of the proximal thigh. Two cases of young men with large soft tissue sarcomas of the proximal thigh are presented. After multidisciplinary evaluation, preoperative radiation therapy was recommended. Both patients were referred for sperm banking and lead shields were used to minimize testicular dose during radiation therapy. To minimize imaging artifacts, kilovoltage CT (kVCT) treatment planning was conducted without shielding. Generous hypothetical contours were generated on each "planning scan" to estimate the location of the lead shield and generate a directionally blocked helical tomotherapy plan. To ensure the accuracy of each plan, megavoltage fan-beam CT (MVCT) scans were obtained at the first treatment and adaptive planning was performed to account for lead shield placement.
Two important regions of interest in these cases were femurs and femoral heads. During adaptive planning for the first patient, it was observed that the virtual lead shield contour on kVCT planning images was significantly larger than the actual lead shield used for treatment. However, for the second patient, it was noted that the size of the virtual lead shield contoured on the kVCT image was significantly smaller than the actual shield size. Thus, new adaptive plans based on MVCT images were generated and used for treatment. The planning target volume was underdosed up to 2% and had higher maximum doses without adaptive planning. In conclusion, the treatment of the upper thigh, particularly in young men, presents several clinical challenges, including preservation of gonadal function. In such circumstances, adaptive planning using MVCT can ensure accurate dose delivery even in the presence of high-density testicular shields. 17. High-performance C-arm cone-beam CT guidance of thoracic surgery NASA Astrophysics Data System (ADS) Schafer, Sebastian; Otake, Yoshito; Uneri, Ali; Mirota, Daniel J.; Nithiananthan, Sajendra; Stayman, J. W.; Zbijewski, Wojciech; Kleinszig, Gerhard; Graumann, Rainer; Sussman, Marc; Siewerdsen, Jeffrey H. 2012-02-01 Localizing sub-palpable nodules in minimally invasive video-assisted thoracic surgery (VATS) presents a significant challenge. To overcome inherent problems of preoperative nodule tagging using CT fluoroscopic guidance, an intraoperative C-arm cone-beam CT (CBCT) image-guidance system has been developed for direct localization of subpalpable tumors in the OR, including real-time tracking of surgical tools (including thoracoscope), and video-CBCT registration for augmentation of the thoracoscopic scene. Acquisition protocols for nodule visibility in the inflated and deflated lung were delineated in phantom and animal/cadaver studies. 
Motion compensated reconstruction was implemented to account for motion induced by the ventilated contralateral lung. Experience in CBCT-guided targeting of simulated lung nodules included phantoms, porcine models, and cadavers. Phantom studies defined low-dose acquisition protocols providing contrast-to-noise ratio sufficient for lung nodule visualization, confirmed in porcine specimens with simulated nodules (3-6mm diameter PE spheres, ~100-150HU contrast, 2.1mGy). Nodule visibility in CBCT of the collapsed lung, with reduced contrast according to air volume retention, was more challenging, but initial studies confirmed visibility using scan protocols at slightly increased dose (~4.6-11.1mGy). Motion compensated reconstruction employing a 4D deformation map in the backprojection process reduced artifacts associated with motion blur. Augmentation of thoracoscopic video with renderings of the target and critical structures (e.g., pulmonary artery) showed geometric accuracy consistent with camera calibration and the tracking system (2.4mm registration error). Initial results suggest a potentially valuable role for CBCT guidance in VATS, improving precision in minimally invasive, lung-conserving surgeries, avoiding critical structures, obviating the burdens of preoperative localization, and improving patient safety. 18. Deriving Hounsfield units using grey levels in cone beam CT: a clinical application PubMed Central Reeves, TE; Mah, P; McDavid, WD 2012-01-01 Objective To present a clinical study demonstrating a method to derive Hounsfield units from grey levels in cone beam CT (CBCT). Methods An acrylic intraoral reference object with aluminium, outer bone equivalent material (cortical bone), inner bone equivalent material (trabecular bone), polymethylmethacrylate and water equivalent material was used. Patients were asked if they would be willing to have an acrylic bite plate with the reference object placed in their mouth during a routine CBCT scan.
There were 31 scans taken on the Asahi Alphard 3030 (Belmont Takara, Kyoto, Japan) and 30 scans taken on the Planmeca ProMax 3D (Planmeca, Helsinki, Finland) CBCT. Linear regression between the grey levels of the reference materials and their linear attenuation coefficients was performed for various photon energies. The energy with the highest regression coefficient was chosen as the effective energy. The attenuation coefficients for the five materials at the effective energy were scaled as Hounsfield units using the standard Hounsfield units equation and compared to those derived from the measured grey levels of the materials using the regression equation. Results In general, there was a satisfactory linear relation between the grey levels and the attenuation coefficients. This made it possible to calculate Hounsfield units from the measured grey levels. Uncertainty in determining effective energies resulted in unrealistic effective energies and significant variability of calculated CT numbers. Linear regression from grey levels directly to Hounsfield units at specified energies resulted in greater consistency. Conclusions The clinical application of a method for deriving Hounsfield units from grey levels in CBCT was demonstrated. PMID:22752324 19. Reconstruction-plane-dependent weighted FDK algorithm for cone beam volumetric CT NASA Astrophysics Data System (ADS) Tang, Xiangyang; Hsieh, Jiang 2005-04-01 The original FDK algorithm has been extensively employed in medical and industrial imaging applications. With an increased cone angle, cone beam (CB) artifacts in images reconstructed by the original FDK algorithm deteriorate, since the circular trajectory does not satisfy the so-called data sufficiency condition (DSC). A few "circular plus" trajectories have been proposed in the past to reduce CB artifacts by meeting the DSC. 
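The grey-level-to-Hounsfield-unit mapping described in the calibration study above reduces to a linear regression from grey levels to attenuation coefficients, followed by the standard HU equation. A minimal sketch with invented calibration points follows; the study used five reference materials and energy-dependent attenuation coefficients.

```python
# Grey level -> linear attenuation coefficient via least-squares line fit,
# then Hounsfield units via HU = 1000 * (mu - mu_water) / mu_water.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Invented calibration points: CBCT grey levels vs attenuation (1/cm).
grey = [200.0, 700.0, 1400.0]    # e.g., water, trabecular, cortical material
mu = [0.20, 0.25, 0.32]
a, b = fit_line(grey, mu)

def grey_to_hu(g, mu_water=0.20):
    return 1000.0 * ((a * g + b) - mu_water) / mu_water

print(round(grey_to_hu(200.0)), round(grey_to_hu(1400.0)))  # -> 0 600
```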
However, the circular trajectory has distinct advantages over other scanning trajectories in practical CT imaging, such as cardiac, vascular and perfusion applications. In addition to looking into the DSC, another insight into the CB artifacts of the original FDK algorithm is the inconsistency between conjugate rays that are 180° apart in view angle. The inconsistency between conjugate rays is pixel dependent, i.e., it varies dramatically over pixels within the image plane to be reconstructed. However, the original FDK algorithm treats all conjugate rays equally, resulting in CB artifacts that can be avoided if an appropriate view weighting strategy is exercised. In this paper, a modified FDK algorithm is proposed, along with an experimental evaluation and verification, in which the helical body phantom and a humanoid head phantom scanned by a volumetric CT (64 x 0.625 mm) are utilized. Without extra trajectories supplemental to the circular trajectory, the modified FDK algorithm applies reconstruction-plane-dependent view weighting on projection data before 3D backprojection, which reduces the inconsistency between conjugate rays by suppressing the contribution of one of the conjugate rays with a larger cone angle. Both computer-simulated and real phantom studies show that, up to a moderate cone angle, the CB artifacts can be substantially suppressed by the modified FDK algorithm, while advantages of the original FDK algorithm, such as the filtered backprojection algorithm structure, 1D ramp filtering, and data manipulation efficiency, can be preserved. 20. Low-dose and scatter-free cone-beam CT imaging: a preliminary study NASA Astrophysics Data System (ADS) Dong, Xue; Jia, Xun; Niu, Tianye; Zhu, Lei 2012-03-01 Clinical applications of CBCT imaging are still limited by excessive imaging dose from repeated scans and poor image quality mainly due to scatter contamination.
Compressed sensing (CS) reconstruction algorithms have shown promise in recovering faithful signals from low-dose projection data, but do not serve the needs of accurate CBCT imaging well if effective scatter correction is not in place. Scatter can be accurately measured and removed using measurement-based methods. However, in conventional FDK reconstruction, these approaches are considered impractical since they require multiple scans or moving the beam blocker during the data acquisition to compensate for the inevitable primary loss. In this work, we combine the measurement-based scatter correction and CS-based iterative reconstruction algorithm, such that scatter-free images can be obtained from low-dose data. We lower the CBCT dose by reducing the projection number and inserting lead strips between the x-ray source and the object. The insertion of lead strips also enables scatter measurement on the measured samples inside the strip shadows. CS-based iterative reconstruction is finally carried out to obtain scatter-free and low-dose CBCT images. Simulation studies are designed to optimize the lead strip geometry for a certain dose reduction ratio. After optimization, our approach reduces the CT number error from over 220HU to below 5HU on the Shepp-Logan phantom, with a dose reduction of ~80%. With the same dose reduction and the optimized method parameters, the CT number error is reduced from 242HU to 20HU in the selected region of interest on the Catphan 600 phantom. 1. Technical Note: FreeCT_wFBP: A robust, efficient, open-source implementation of weighted filtered backprojection for helical, fan-beam CT PubMed Central Hoffman, John; Young, Stefano; Noo, Frédéric 2016-01-01 Purpose: With growing interest in quantitative imaging, radiomics, and CAD using CT imaging, the need to explore the impacts of acquisition and reconstruction parameters has grown.
This usually requires extensive access to the scanner on which the data were acquired, whose workflow is not designed for large-scale reconstruction projects. Therefore, the authors have developed a freely available, open-source software package implementing a common reconstruction method, weighted filtered backprojection (wFBP), for helical fan-beam CT applications. Methods: FreeCT_wFBP is a low-dependency, GPU-based reconstruction program utilizing C for the host code and Nvidia CUDA C for GPU code. The software is capable of reconstructing helical scans acquired with arbitrary pitch values and sampling techniques such as flying focal spots and a quarter-detector offset. In this work, the software has been described and evaluated for reconstruction speed, image quality, and accuracy. Speed was evaluated based on acquisitions of the ACR CT accreditation phantom under four different flying focal spot configurations. Image quality was assessed using the same phantom by evaluating CT number accuracy, uniformity, and contrast to noise ratio (CNR). Finally, reconstructed mass-attenuation coefficient accuracy was evaluated using a simulated scan of a FORBILD thorax phantom and comparing reconstructed values to the known phantom values. Results: The average reconstruction time evaluated under all flying focal spot configurations was found to be 17.4 ± 1.0 s for a 512 row × 512 column × 32 slice volume. Reconstructions of the ACR phantom were found to meet all CT Accreditation Program criteria including CT number, CNR, and uniformity tests. Finally, reconstructed mass-attenuation coefficient values of water within the FORBILD thorax phantom agreed with original phantom values to within 0.0001 mm2/g (0.01%). Conclusions: FreeCT_wFBP is a fast, highly configurable reconstruction package for third-generation CT. 2. Technical Note: FreeCT_wFBP: A robust, efficient, open-source implementation of weighted filtered backprojection for helical, fan-beam CT.
PubMed Hoffman, John; Young, Stefano; Noo, Frédéric; McNitt-Gray, Michael 2016-03-01 With growing interest in quantitative imaging, radiomics, and CAD using CT imaging, the need to explore the impacts of acquisition and reconstruction parameters has grown. This usually requires extensive access to the scanner on which the data were acquired, whose workflow is not designed for large-scale reconstruction projects. Therefore, the authors have developed a freely available, open-source software package implementing a common reconstruction method, weighted filtered backprojection (wFBP), for helical fan-beam CT applications. FreeCT_wFBP is a low-dependency, GPU-based reconstruction program utilizing C for the host code and Nvidia CUDA C for GPU code. The software is capable of reconstructing helical scans acquired with arbitrary pitch values and sampling techniques such as flying focal spots and a quarter-detector offset. In this work, the software has been described and evaluated for reconstruction speed, image quality, and accuracy. Speed was evaluated based on acquisitions of the ACR CT accreditation phantom under four different flying focal spot configurations. Image quality was assessed using the same phantom by evaluating CT number accuracy, uniformity, and contrast to noise ratio (CNR). Finally, reconstructed mass-attenuation coefficient accuracy was evaluated using a simulated scan of a FORBILD thorax phantom and comparing reconstructed values to the known phantom values. The average reconstruction time evaluated under all flying focal spot configurations was found to be 17.4 ± 1.0 s for a 512 row × 512 column × 32 slice volume. Reconstructions of the ACR phantom were found to meet all CT Accreditation Program criteria including CT number, CNR, and uniformity tests. Finally, reconstructed mass-attenuation coefficient values of water within the FORBILD thorax phantom agreed with original phantom values to within 0.0001 mm(2)/g (0.01%).
FreeCT_wFBP is a fast, highly configurable reconstruction package for third-generation CT available under 3. Evaluation of On-Board kV Cone Beam Computed Tomography–Based Dose Calculation With Deformable Image Registration Using Hounsfield Unit Modifications SciTech Connect Onozato, Yusuke; Kadoya, Noriyuki; Fujita, Yukio; Arai, Kazuhiro; Dobashi, Suguru; Takeda, Ken; Kishi, Kazuma; Umezawa, Rei; Matsushita, Haruo; Jingu, Keiichi 2014-06-01 Purpose: The purpose of this study was to estimate the accuracy of the dose calculation of On-Board Imager (Varian, Palo Alto, CA) cone beam computed tomography (CBCT) with deformable image registration (DIR), using the multilevel-threshold (MLT) algorithm and histogram matching (HM) algorithm in pelvic radiation therapy. Methods and Materials: One pelvis phantom and 10 patients with prostate cancer treated with intensity modulated radiation therapy were studied. To minimize the effect of organ deformation and different Hounsfield unit values between planning CT (PCT) and CBCT, we modified CBCT (mCBCT) with DIR by using the MLT (mCBCT(MLT)) and HM (mCBCT(HM)) algorithms. To evaluate the accuracy of the dose calculation, we compared dose differences in dosimetric parameters (mean dose [D(mean)], minimum dose [D(min)], and maximum dose [D(max)]) for planning target volume, rectum, and bladder between PCT (reference) and CBCTs or mCBCTs. Furthermore, we investigated the effect of organ deformation compared with DIR and rigid registration (RR). We determined whether dose differences between PCT and mCBCTs were significantly lower than in CBCT by using Student t test. Results: For patients, the average dose differences in all dosimetric parameters of CBCT with DIR were smaller than those of CBCT with RR (eg, rectum; 0.54% for DIR vs 1.24% for RR). For the mCBCTs with DIR, the average dose differences in all dosimetric parameters were less than 1.0%.
Conclusions: We evaluated the accuracy of the dose calculation in CBCT, mCBCT(MLT), and mCBCT(HM) with DIR for 10 patients. The results showed that dose differences in D(mean), D(min), and D(max) in mCBCTs were within 1%, which were significantly better than those in CBCT, especially for the rectum (P<.05). Our results indicate that the mCBCT(MLT) and mCBCT(HM) can be useful for improving the dose calculation for adaptive radiation therapy.
4. Evaluation of on-board kV cone beam computed tomography-based dose calculation with deformable image registration using Hounsfield unit modifications.
PubMed
Onozato, Yusuke; Kadoya, Noriyuki; Fujita, Yukio; Arai, Kazuhiro; Dobashi, Suguru; Takeda, Ken; Kishi, Kazuma; Umezawa, Rei; Matsushita, Haruo; Jingu, Keiichi
2014-06-01
The purpose of this study was to estimate the accuracy of the dose calculation of On-Board Imager (Varian, Palo Alto, CA) cone beam computed tomography (CBCT) with deformable image registration (DIR), using the multilevel-threshold (MLT) algorithm and histogram matching (HM) algorithm in pelvic radiation therapy. One pelvis phantom and 10 patients with prostate cancer treated with intensity modulated radiation therapy were studied. To minimize the effect of organ deformation and different Hounsfield unit values between planning CT (PCT) and CBCT, we modified CBCT (mCBCT) with DIR by using the MLT (mCBCT(MLT)) and HM (mCBCT(HM)) algorithms. To evaluate the accuracy of the dose calculation, we compared dose differences in dosimetric parameters (mean dose [D(mean)], minimum dose [D(min)], and maximum dose [D(max)]) for planning target volume, rectum, and bladder between PCT (reference) and CBCTs or mCBCTs. Furthermore, we investigated the effect of organ deformation compared with DIR and rigid registration (RR). We determined whether dose differences between PCT and mCBCTs were significantly lower than in CBCT by using Student t test.
For patients, the average dose differences in all dosimetric parameters of CBCT with DIR were smaller than those of CBCT with RR (eg, rectum: 0.54% for DIR vs 1.24% for RR). For the mCBCTs with DIR, the average dose differences in all dosimetric parameters were less than 1.0%. We evaluated the accuracy of the dose calculation in CBCT, mCBCT(MLT), and mCBCT(HM) with DIR for 10 patients. The results showed that dose differences in D(mean), D(min), and D(max) in mCBCTs were within 1%, which were significantly better than those in CBCT, especially for the rectum (P<.05). Our results indicate that the mCBCT(MLT) and mCBCT(HM) can be useful for improving the dose calculation for adaptive radiation therapy. Copyright © 2014 Elsevier Inc. All rights reserved.
5. Task-driven image acquisition and reconstruction in cone-beam CT
PubMed Central
Gang, Grace J.; Stayman, J. Webster; Ehtiati, Tina; Siewerdsen, Jeffrey H.
2015-01-01
This work introduces a task-driven imaging framework that incorporates a mathematical definition of the imaging task, a model of the imaging system, and a patient-specific anatomical model to prospectively design image acquisition and reconstruction techniques to optimize task performance. The framework is applied to joint optimization of tube current modulation, view-dependent reconstruction kernel, and orbital tilt in cone-beam CT. The system model considers a cone-beam CT system incorporating a flat-panel detector and 3D filtered backprojection and accurately describes the spatially varying noise and resolution over a wide range of imaging parameters and in the presence of a realistic anatomical model. Task-based detectability index (d') is incorporated as the objective function in a task-driven optimization of image acquisition and reconstruction techniques. The orbital tilt was optimized through an exhaustive search across tilt angles ranging ±30°.
For each tilt angle, the view-dependent tube current and reconstruction kernel (i.e., the modulation profiles) that maximized detectability were identified via an alternating optimization. The task-driven approach was compared with conventional unmodulated and automatic exposure control (AEC) strategies for a variety of imaging tasks and anthropomorphic phantoms. The task-driven strategy outperformed the unmodulated and AEC cases for all tasks. For example, d' for a sphere detection task in a head phantom was improved by 30% compared to the unmodulated case by using smoother kernels for noisy views and distributing mAs across less noisy views (at fixed total mAs) in a manner that was beneficial to task performance. Similarly for detection of a line-pair pattern, the task-driven approach increased d' by 80% compared to no modulation by means of view-dependent mA and kernel selection that yields modulation transfer function and noise-power spectrum optimal to the task. Optimization of orbital tilt identified the
6. Task-driven image acquisition and reconstruction in cone-beam CT
NASA Astrophysics Data System (ADS)
Gang, Grace J.; Webster Stayman, J.; Ehtiati, Tina; Siewerdsen, Jeffrey H.
2015-04-01
This work introduces a task-driven imaging framework that incorporates a mathematical definition of the imaging task, a model of the imaging system, and a patient-specific anatomical model to prospectively design image acquisition and reconstruction techniques to optimize task performance. The framework is applied to joint optimization of tube current modulation, view-dependent reconstruction kernel, and orbital tilt in cone-beam CT. The system model considers a cone-beam CT system incorporating a flat-panel detector and 3D filtered backprojection and accurately describes the spatially varying noise and resolution over a wide range of imaging parameters in the presence of a realistic anatomical model.
Task-based detectability index (d′) is incorporated as the objective function in a task-driven optimization of image acquisition and reconstruction techniques. The orbital tilt was optimized through an exhaustive search across tilt angles ranging ±30°. For each tilt angle, the view-dependent tube current and reconstruction kernel (i.e., the modulation profiles) that maximized detectability were identified via an alternating optimization. The task-driven approach was compared with conventional unmodulated and automatic exposure control (AEC) strategies for a variety of imaging tasks and anthropomorphic phantoms. The task-driven strategy outperformed the unmodulated and AEC cases for all tasks. For example, d′ for a sphere detection task in a head phantom was improved by 30% compared to the unmodulated case by using smoother kernels for noisy views and distributing mAs across less noisy views (at fixed total mAs) in a manner that was beneficial to task performance. Similarly for detection of a line-pair pattern, the task-driven approach increased d′ by 80% compared to no modulation by means of view-dependent mA and kernel selection that yields modulation transfer function and noise-power spectrum optimal to the task. Optimization of orbital tilt identified the
7. Task-driven image acquisition and reconstruction in cone-beam CT.
PubMed
Gang, Grace J; Stayman, J Webster; Ehtiati, Tina; Siewerdsen, Jeffrey H
2015-04-21
This work introduces a task-driven imaging framework that incorporates a mathematical definition of the imaging task, a model of the imaging system, and a patient-specific anatomical model to prospectively design image acquisition and reconstruction techniques to optimize task performance. The framework is applied to joint optimization of tube current modulation, view-dependent reconstruction kernel, and orbital tilt in cone-beam CT.
The system model considers a cone-beam CT system incorporating a flat-panel detector and 3D filtered backprojection and accurately describes the spatially varying noise and resolution over a wide range of imaging parameters in the presence of a realistic anatomical model. Task-based detectability index (d') is incorporated as the objective function in a task-driven optimization of image acquisition and reconstruction techniques. The orbital tilt was optimized through an exhaustive search across tilt angles ranging ±30°. For each tilt angle, the view-dependent tube current and reconstruction kernel (i.e., the modulation profiles) that maximized detectability were identified via an alternating optimization. The task-driven approach was compared with conventional unmodulated and automatic exposure control (AEC) strategies for a variety of imaging tasks and anthropomorphic phantoms. The task-driven strategy outperformed the unmodulated and AEC cases for all tasks. For example, d' for a sphere detection task in a head phantom was improved by 30% compared to the unmodulated case by using smoother kernels for noisy views and distributing mAs across less noisy views (at fixed total mAs) in a manner that was beneficial to task performance. Similarly for detection of a line-pair pattern, the task-driven approach increased d' by 80% compared to no modulation by means of view-dependent mA and kernel selection that yields modulation transfer function and noise-power spectrum optimal to the task. Optimization of orbital tilt identified the tilt
8. Stereotactic radiosurgery for intradural spine tumors using cone-beam CT image guidance.
PubMed
Monserrate, Andrés; Zussman, Benjamin; Ozpinar, Alp; Niranjan, Ajay; Flickinger, John C; Gerszten, Peter C
2017-01-01
OBJECTIVE Cone-beam CT (CBCT) image guidance technology has been widely adopted for spine radiosurgery delivery. There is relatively little experience with spine radiosurgery for intradural tumors using CBCT image guidance.
This study prospectively evaluated a series of intradural spine tumors treated with radiosurgery. Patient setup accuracy for spine radiosurgery delivery using CBCT image guidance for intradural spine tumors was determined. METHODS Eighty-two patients with intradural tumors were treated and prospectively evaluated. The positioning deviations of the spine radiosurgery treatments in patients were recorded. Radiosurgery was delivered using a linear accelerator with a beam modulator and CBCT image guidance combined with a robotic couch that allows positioning correction in 3 translational and 3 rotational directions. To measure patient movement, 3 quality assurance CBCTs were performed and recorded in 30 patients: before, halfway through, and after the radiosurgery treatment. The positioning data and fused images of planning CT and CBCT from the treatments were analyzed to determine intrafraction patient movements. From each of 3 CBCTs, 3 translational and 3 rotational coordinates were obtained. RESULTS The radiosurgery procedure was successfully completed for all patients. Lesion locations included cervical (22), thoracic (17), lumbar (38), and sacral (5). Tumor histologies included schwannoma (27), neurofibroma (18), meningioma (16), hemangioblastoma (8), and ependymoma (5). The mean prescription dose was 17 Gy (range 12-27 Gy) delivered in 1-3 fractions. At the halfway point of the radiation, the translational variations and standard deviations were 0.4 ± 0.5, 0.5 ± 0.8, and 0.4 ± 0.5 mm in the lateral (x), longitudinal (y), and anteroposterior (z) directions, respectively. Similarly, the variations immediately after treatment were 0.5 ± 0.4, 0.5 ± 0.6, and 0.6 ± 0.5 mm along the x, y, and z directions, respectively. The mean rotational angles were 0
9. Determining superficial dosimetry for the internal canthus from the Monte Carlo simulation of kV photon and MeV electron beams.
PubMed
Currie, B E
2009-06-01
This paper presents the findings of an investigation into the Monte Carlo simulation of superficial cancer treatments of an internal canthus site using both kilovoltage photons and megavoltage electrons. The EGSnrc system of codes for the Monte Carlo simulation of the transport of electrons and photons through a phantom representative of either a water phantom or a treatment site in a patient is utilised. Two clinical treatment units are simulated: the Varian Medical Systems Clinac 2100C accelerator for 6 MeV electron fields and the Pantak Therapax SXT 150 X-ray unit for 100 kVp photon fields. Depth dose, profile, and isodose curves for these simulated units are compared against those measured by ion chamber in a PTW Freiburg MP3 water phantom. Good agreement was achieved away from the surface of the phantom between simulated and measured data. Dose distributions are determined for both kV photon and MeV electron fields in the internal canthus site containing lead and tungsten shielding, rapidly sloping surfaces, and different density interfaces. There is a relatively high level of dose deposition at tissue-bone and tissue-cartilage interfaces in the kV photon fields in contrast to the MeV electron fields. This is reflected in the maximum doses in the PTV of the internal canthus field being 12 Gy for kV photons and 4.8 Gy for MeV electrons. From the dose distributions, DVH and dose comparators are used to assess the simulated treatment fields. Any indication as to which modality is preferable to treat the internal canthus requires careful consideration of many different factors; this investigation provides further perspective in being able to assess which modality is appropriate.
10. A dual modality phantom for cone beam CT and ultrasound image fusion in prostate implant
SciTech Connect
Ng, Angela; Beiki-Ardakan, Akbar; Tong, Shidong; Moseley, Douglas; Siewerdsen, Jeffrey; Jaffray, David; Yeung, Ivan W. T.
2008-05-15
In transrectal ultrasound (TRUS) guided prostate seed brachytherapy, TRUS provides good delineation of the prostate while x-ray imaging, e.g., C-arm, gives excellent contrast for seed localization. With the recent availability of cone beam CT (CBCT) technology, the combination of the two imaging modalities may provide an ideal system for intraoperative dosimetric feedback during implantation. A dual modality phantom made of acrylic and copper wire was designed to measure the accuracy and precision of image coregistration between a C-arm based CBCT and 3D TRUS. The phantom was scanned with TRUS and CBCT under the same setup condition. Successive parallel transverse ultrasound (US) images were acquired through manual stepping of the US probe across the phantom at an increment of 1 mm over 7.5 cm. The CBCT imaging was done with three reconstructed slice thicknesses (0.4, 0.8, and 1.6 mm) as well as at three different tilt angles (0°, 15°, 30°), and the coregistration between CBCT and US images was done using the Variseed system based on four fiducial markers. Fiducial localization error (FLE), fiducial registration error (FRE), and target registration error (TRE) were calculated for all registered image sets. Results showed that FLE were typically less than 0.4 mm, FRE were less than 0.5 mm, and TRE were typically less than 1 mm within the range of operation for prostate implant (i.e., <6 cm to the surface of the US probe). An analysis of variance test showed no significant difference in TRE for the CBCT-US fusion among the three slice thicknesses (p=0.37). As a comparison, the experiment was repeated with a US-conventional CT scanner combination. No significant difference in TRE was noted between the US-conventional CT fusion and that for all three CBCT image slice thicknesses (p=0.21). CBCT imaging was also performed at three different C-arm tilt angles of 0°, 15°, and 30° and reconstructed at a slice thickness of 0.8 mm.
There is no significant
11. A dual cone-beam CT system for image guided radiotherapy: Initial performance characterization
SciTech Connect
Li, Hao; Bowsher, James; Yin, Fangfang; Giles, William
2013-02-15
Purpose: The purpose of this study is to evaluate the performance of a recently developed benchtop dual cone-beam computed tomography (CBCT) system with two orthogonally placed tube/detector sets. Methods: The benchtop dual CBCT system consists of two orthogonally placed 40 × 30 cm flat-panel detectors and two conventional x-ray tubes with two individual high-voltage generators sharing the same rotational axis. The x-ray source to detector distance is 150 cm and the x-ray source to rotational axis distance is 100 cm for both subsystems. The objects are scanned through 200° of rotation. The dual CBCT system utilized 110° of projection data from one detector and 90° from the other, while the two individual single CBCTs utilized 200° data from each detector. The system performance was characterized in terms of uniformity, contrast, spatial resolution, noise power spectrum, and CT number linearity. The uniformities, within the axial slice and along the longitudinal direction, and noise power spectrum were assessed by scanning a water bucket; the contrast and CT number linearity were measured using the Catphan phantom; and the spatial resolution was evaluated using a tungsten wire phantom. A skull phantom and a ham were also scanned to provide qualitative evaluation of high- and low-contrast resolution. Each measurement was compared between dual and single CBCT systems.
Results: Compared to single CBCT, the dual CBCT presented: (1) a decrease in uniformity by 1.9% in the axial view and 1.1% in the longitudinal view, as averaged over four energies (80, 100, 125, and 150 kVp); (2) comparable or slightly better contrast (0~25 HU) for low-contrast objects and comparable contrast for high-contrast objects; (3) comparable spatial resolution; (4) comparable CT number linearity with R² ≥ 0.99 for all four tested energies; (5) lower noise power spectrum in magnitude. Dual CBCT images of the skull phantom and the
12. A model-based scatter artifacts correction for cone beam CT
SciTech Connect
Zhao, Wei; Zhu, Jun; Wang, Luyao; Vernekohl, Don; Xing, Lei
2016-04-15
Purpose: Due to the increased axial coverage of multislice computed tomography (CT) and the introduction of flat detectors, the size of x-ray illumination fields has grown dramatically, causing an increase in scatter radiation. For CT imaging, scatter is a significant issue that introduces shading artifacts, streaks, as well as reduced contrast and Hounsfield Unit (HU) accuracy. The purpose of this work is to provide a fast and accurate scatter artifacts correction algorithm for cone beam CT (CBCT) imaging. Methods: The method starts with an estimation of coarse scatter profiles for a set of CBCT data in either the image domain or the projection domain. A denoising algorithm designed specifically for Poisson signals is then applied to derive the final scatter distribution. Qualitative and quantitative evaluations using thorax and abdomen phantoms with Monte Carlo (MC) simulations, experimental Catphan phantom data, and in vivo human data acquired for a clinical image guided radiation therapy were performed. Scatter correction in both the projection domain and the image domain was conducted, and the influences of segmentation method, mismatched attenuation coefficients, and spectrum model as well as parameter selection were also investigated.
Results: The results show that the proposed algorithm can significantly reduce scatter artifacts and recover the correct HU in either the projection domain or the image domain. For the MC thorax phantom study, four-component segmentation yields the best results, while the results of three-component segmentation are still acceptable. The parameters (iteration number K and weight β) affect the accuracy of the scatter correction, and the results improve as K and β increase. It was found that variations in attenuation coefficient accuracy only slightly impact the performance of the proposed processing. For the Catphan phantom data, the mean value over all pixels in the residual image is reduced from −21.8 to −0.2 HU and 0.7 HU for projection
13. A model-based scatter artifacts correction for cone beam CT
PubMed Central
Zhao, Wei; Vernekohl, Don; Zhu, Jun; Wang, Luyao; Xing, Lei
2016-01-01
Purpose: Due to the increased axial coverage of multislice computed tomography (CT) and the introduction of flat detectors, the size of x-ray illumination fields has grown dramatically, causing an increase in scatter radiation. For CT imaging, scatter is a significant issue that introduces shading artifacts, streaks, as well as reduced contrast and Hounsfield Unit (HU) accuracy. The purpose of this work is to provide a fast and accurate scatter artifacts correction algorithm for cone beam CT (CBCT) imaging. Methods: The method starts with an estimation of coarse scatter profiles for a set of CBCT data in either the image domain or the projection domain. A denoising algorithm designed specifically for Poisson signals is then applied to derive the final scatter distribution. Qualitative and quantitative evaluations using thorax and abdomen phantoms with Monte Carlo (MC) simulations, experimental Catphan phantom data, and in vivo human data acquired for a clinical image guided radiation therapy were performed.
Scatter correction in both the projection domain and the image domain was conducted, and the influences of segmentation method, mismatched attenuation coefficients, and spectrum model as well as parameter selection were also investigated. Results: The results show that the proposed algorithm can significantly reduce scatter artifacts and recover the correct HU in either the projection domain or the image domain. For the MC thorax phantom study, four-component segmentation yields the best results, while the results of three-component segmentation are still acceptable. The parameters (iteration number K and weight β) affect the accuracy of the scatter correction, and the results improve as K and β increase. It was found that variations in attenuation coefficient accuracy only slightly impact the performance of the proposed processing. For the Catphan phantom data, the mean value over all pixels in the residual image is reduced from −21.8 to −0.2 HU and 0.7 HU for projection
14. Deformable image registration with local rigidity constraints for cone-beam CT-guided spine surgery
NASA Astrophysics Data System (ADS)
Reaungamornrat, S.; Wang, A. S.; Uneri, A.; Otake, Y.; Khanna, A. J.; Siewerdsen, J. H.
2014-07-01
Image-guided spine surgery (IGSS) is associated with reduced co-morbidity and improved surgical outcome. However, precise localization of target anatomy and adjacent nerves and vessels relative to planning information (e.g., device trajectories) can be challenged by anatomical deformation. Rigid registration alone fails to account for deformation associated with changes in spine curvature, and conventional deformable registration fails to account for rigidity of the vertebrae, causing unrealistic distortions in the registered image that can confound high-precision surgery. We developed and evaluated a deformable registration method capable of preserving rigidity of bones while resolving the deformation of surrounding soft tissue.
The method aligns preoperative CT to intraoperative cone-beam CT (CBCT) using free-form deformation (FFD) with constraints on rigid body motion imposed according to a simple intensity threshold of bone intensities. The constraints enforced three properties of a rigid transformation, namely constraints on affinity (AC), orthogonality (OC), and properness (PC). The method also incorporated an injectivity constraint (IC) to preserve topology. Physical experiments involving phantoms, an ovine spine, and a human cadaver as well as digital simulations were performed to evaluate the sensitivity to registration parameters, preservation of rigid body morphology, and overall registration accuracy of constrained FFD in comparison to conventional unconstrained FFD (uFFD) and Demons registration. FFD with orthogonality and injectivity constraints (denoted FFD+OC+IC) demonstrated improved performance compared to uFFD and Demons. Affinity and properness constraints offered little or no additional improvement. The FFD+OC+IC method preserved rigid body morphology at near-ideal values of zero dilatation (D = 0.05, compared to 0.39 and 0.56 for uFFD and Demons, respectively) and shear (S = 0.08, compared to 0.36 and 0.44 for uFFD and Demons
15. Self-calibration of a cone-beam micro-CT system
SciTech Connect
Patel, V.; Chityala, R. N.; Hoffmann, K. R.; Ionita, C. N.; Bednarek, D. R.; Rudin, S.
2009-01-15
Use of cone-beam computed tomography (CBCT) is becoming more frequent. For proper reconstruction, the geometry of the CBCT systems must be known. While the system can be designed to reduce errors in the geometry, calibration measurements must still be performed and corrections applied. Investigators have proposed techniques using calibration objects for system calibration.
In this study, the authors present methods to calibrate a rotary-stage CB micro-CT (CBμCT) system using only the images acquired of the object to be reconstructed, i.e., without the use of calibration objects. Projection images are acquired using a CBμCT system constructed in the authors' laboratories. Dark- and flat-field corrections are performed. Exposure variations are detected and quantified using analysis of image regions with an unobstructed view of the x-ray source. Translations that occur during the acquisition in the horizontal direction are detected, quantified, and corrected based on sinogram analysis. The axis of rotation is determined using registration of antiposed projection images. These techniques were evaluated using data obtained with calibration objects and phantoms. The physical geometric axis of rotation is determined and aligned with the rotational axis (assumed to be the center of the detector plane) used in the reconstruction process. The parameters describing this axis agree to within 0.1 mm and 0.3° with those determined using other techniques. Blurring due to residual calibration errors has a point-spread function in the reconstructed planes with a full-width-at-half-maximum of less than 125 μm in a tangential direction and essentially zero in the radial direction for the rotating object. The authors have used this approach on over 100 acquisitions over the past 2 years and have regularly obtained high-quality reconstructions, i.e., without artifacts and no detectable blurring of the reconstructed objects. This self-calibrating approach not only obviates
16. Deformable Image Registration for Cone-Beam CT Guided Transoral Robotic Base of Tongue Surgery
PubMed Central
Reaungamornrat, S.; Liu, W. P.; Wang, A. S.; Otake, Y.; Nithiananthan, S.; Uneri, A.; Schafer, S.; Tryggestad, E.; Richmon, J.; Sorger, J. M.; Siewerdsen, J. H.; Taylor, R. H.
2013-01-01
Transoral robotic surgery (TORS) offers a minimally invasive approach to resection of base of tongue tumors. However, precise localization of the surgical target and adjacent critical structures can be challenged by the highly deformed intraoperative setup. We propose a deformable registration method using intraoperative cone-beam CT (CBCT) to accurately align preoperative CT or MR images with the intraoperative scene. The registration method combines a Gaussian mixture (GM) model followed by a variation of the Demons algorithm. First, following segmentation of the volume of interest (i.e., the volume of the tongue extending to the hyoid), a GM model is applied to surface point clouds for rigid initialization (GM rigid) followed by nonrigid deformation (GM nonrigid). Second, the registration is refined using the Demons algorithm applied to distance map transforms of the (GM-registered) preoperative image and intraoperative CBCT. Performance was evaluated in repeat cadaver studies (25 image pairs) in terms of target registration error (TRE), entropy correlation coefficient (ECC), and normalized pointwise mutual information (NPMI). Retraction of the tongue in the TORS operative setup induced gross deformation >30 mm. The mean TRE following the GM rigid, GM nonrigid, and Demons steps was 4.6, 2.1, and 1.7 mm, respectively. The respective ECC was 0.57, 0.70, and 0.73 and NPMI was 0.46, 0.57, and 0.60. Registration accuracy was best across the superior aspect of the tongue and in proximity to the hyoid (by virtue of GM registration of surface points on these structures). The Demons step refined the registration primarily in deeper portions of the tongue further from the surface and the hyoid bone. Since the method does not use image intensities directly, it is suitable for multi-modality registration of preoperative CT or MR with intraoperative CBCT.
Extending the 3D image registration to the fusion of image and planning data in stereo-endoscopic video is anticipated to support
17. Homogeneous and inhomogeneous material effect in gamma index evaluation of IMRT technique based on fan beam and Cone Beam CT patient images
NASA Astrophysics Data System (ADS)
Wibowo, W. E.; Waliyyulhaq, M.; Pawiro, S. A.
2017-05-01
Patient-specific Quality Assurance (QA) for the Intensity-Modulated Radiation Therapy (IMRT) technique in lung cases is traditionally limited to homogeneous material, despite the fact that the planning is carried out with inhomogeneous material present. Moreover, the chest area contains much inhomogeneous material, such as lung, soft tissue, and bone, which requires special attention to avoid inaccuracies in dose calculation in the Treatment Planning System (TPS). Recent preliminary studies have shown that Cone Beam CT (CBCT) can be used not only to position the patient prior to irradiation but also to serve as a planning modality. Our study examined the influence of homogeneous and inhomogeneous materials, using Fan Beam CT and Cone Beam CT modalities in the IMRT technique, on the Gamma Index (GI) value, with variation of the segments and the Calculation Grid Resolution (CGR). The results showed that the deviation of the averaged GI value between CGR 0.2 cm and 0.4 cm ranged from -0.44% to 1.46% for homogeneous material and from -1.74% to 0.98% for inhomogeneous material. In performing patient-specific IMRT QA for lung cancer, homogeneous material can be used in evaluating the gamma index.
18. Comparison of human and Hotelling observer performance for a fan-beam CT signal detection task
PubMed Central
Sanchez, Adrian A.; Sidky, Emil Y.; Reiser, Ingrid; Pan, Xiaochuan
2013-01-01
Purpose: A human observer study was performed for a signal detection task for the case of fan-beam x-ray computed tomography.
Hotelling observer (HO) performance was calculated for the same detection task without the use of efficient channels. By considering the full image covariance produced by the filtered backprojection (FBP) algorithm and avoiding the use of channels in the computation of HO performance, the authors establish an absolute upper bound on signal detectability. Therefore, this study serves as a baseline for relating human and ideal observer performance in the case of fan-beam CT. Methods: Eight human observers participated in a two-alternative forced choice experiment where the signal of interest was a small simulated ellipsoid in the presence of independent, identically distributed Gaussian detector noise. Theoretical performance of the HO, which is equivalent to the ideal observer in this case (see Sec. 13.2.12 in Barrett and Myers [Foundations of Image Science (Wiley, Hoboken, NJ, 2004)], was also computed and compared to the performance of the human observers. In addition to a reference FBP implementation, two FBP implementations with inherent loss of HO signal detectability (e.g., by apodizing the ramp filter) were also investigated. Each of these latter two implementations takes the form of a discrete-to-discrete linear operator (i.e., a matrix), which has a nontrivial null-space resulting in the loss of detectability. Results: Estimated observer detectability index (\\documentclass[12pt]{minimal}\\begin{document}\\hat{d}_A\$\\end{document}d^A) values for the human observers and SNR values for the HO were obtained. While Hanning filtering in the FBP implementation with a cutoff frequency of 1/4 of the Nyquist frequency reduces HO SNR (due to the reconstruction matrix's nontrivial null-space), this filtering was shown to consistently improve human observer performance. By contrast, increasing the image pixel size was seen to have a comparable 19. 
TU-AB-204-00: Advances in Cone-Beam CT and Emerging Applications SciTech Connect 2015-06-15 This symposium highlights advanced cone-beam CT (CBCT) technologies in four areas of emerging application in diagnostic imaging and image-guided interventions. Each area includes research that extends the spatial, temporal, and/or contrast resolution characteristics of CBCT beyond conventional limits through advances in scanner technology, acquisition protocols, and 3D image reconstruction techniques. Dr. G. Chen (University of Wisconsin) will present on the topic: Advances in C-arm CBCT for Brain Perfusion Imaging. Stroke is a leading cause of death and disability, and a fraction of people having an acute ischemic stroke are suitable candidates for endovascular therapy. Critical factors that affect both the likelihood of successful revascularization and good clinical outcome are: 1) the time between stroke onset and revascularization; and 2) the ability to distinguish patients who have a small volume of irreversibly injured brain (ischemic core) and a large volume of ischemic but salvageable brain (penumbra) from patients with a large ischemic core and little or no penumbra. Therefore, “time is brain” in the care of the stroke patients. C-arm CBCT systems widely available in angiography suites have the potential to generate non-contrast-enhanced CBCT images to exclude the presence of hemorrhage, time-resolved CBCT angiography to evaluate the site of occlusion and collaterals, and CBCT perfusion parametric images to assess the extent of the ischemic core and penumbra, thereby fulfilling the imaging requirements of a “one-stop-shop” in the angiography suite to reduce the time between onset and revascularization therapy. The challenges and opportunities to advance CBCT technology to fully enable the one-stop-shop C-arm CBCT platform for brain imaging will be discussed. Dr. R. 
Fahrig (Stanford University) will present on the topic: Advances in C-arm CBCT for Cardiac Interventions. With the goal of providing functional information during cardiac interventions 20. Iterative image-domain ring artifact removal in cone-beam CT Liang, Xiaokun; Zhang, Zhicheng; Niu, Tianye; Yu, Shaode; Wu, Shibin; Li, Zhicheng; Zhang, Huailing; Xie, Yaoqin 2017-07-01 Ring artifacts in cone beam computed tomography (CBCT) images are caused by pixel gain variations in flat-panel detectors, and may lead to structured non-uniformities and deterioration of image quality. The purpose of this study is to propose a method of general ring artifact removal in CBCT images. This method is based on the polar coordinate system, where the ring artifacts manifest as stripe artifacts. Using relative total variation, the CBCT images are first smoothed to generate template images with fewer image details and ring artifacts. By subtracting the template images from the CBCT images, residual images with image details and ring artifacts are generated. As the ring artifact manifests as a stripe artifact in a polar coordinate system, the artifact image can be extracted by mean filtering from the residual image; the image details are generated by subtracting the artifact image from the residual image. Finally, the image details are compensated to the template image to generate the corrected images. The proposed framework is iterated until the differences in the extracted ring artifacts are minimized. We use a 3D Shepp-Logan phantom, Catphan©504 phantom, uniform acrylic cylinder, and images from a head patient to evaluate the proposed method. In the experiments using simulated data, the spatial uniformity is increased by 1.68 times and the structural similarity index is increased from 87.12% to 95.50% using the proposed method. In the experiment using clinical data, our method shows high efficiency in ring artifact removal while preserving the image structure and detail.
The iterative approach we propose for ring artifact removal in cone-beam CT is practical and attractive for CBCT guided radiation therapy. 1. Evaluation of bone changes in the temporomandibular joint using cone beam CT PubMed Central dos Anjos Pontual, ML; Freire, JSL; Barbosa, JMN; Frazão, MAG; dos Anjos Pontual, A; Fonseca da Silveira, MM 2012-01-01 Objective The aim of this study was to assess bone changes and mobility in temporomandibular joints (TMJs) using cone beam CT (CBCT) in a population sample in Recife, PE, Brazil. Methods The TMJ images of patients treated by a radiologist at a private dental radiology service over a period of 1 year were retrieved from the computer database and assessed using a computer with a 21-inch monitor and the iCAT Cone Beam 3D Dental Imaging System Workstation program (Imaging Sciences International, Hatfield, PA). The Pearson χ2 test was used to analyse the differences in percentage of bone changes among the categories of mobility (p ≤ 0.05). The McNemar test was used to compare the presence of bone changes in TMJs on the right and left sides (p ≤ 0.05). Results An adjusted logistic regression model was used to assess the effect of age and gender on the occurrence of bone changes (p ≤ 0.05). Bone changes were present in 227 (71%) patients. Age group and gender showed a statistically significant association with presence of bone changes (p ≤ 0.05). There was no significant difference between the right and left sides (p = 0.556) and in condylar mobility (p = 0.925) with regard to the presence of degenerative bone changes. Conclusions There is a high prevalence of degenerative bone alteration in TMJs, which is more frequent in women and mostly located in the condyle. The prevalence of degenerative bone changes increases with age. There is no correlation between condylar mobility and the presence of degenerative bony changes in TMJs. PMID:22184625 2. 
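The paired right-versus-left comparison in the TMJ study above relies on the McNemar test. As a generic illustration of that test (not the authors' SPSS workflow), only the two discordant-pair counts b and c are needed; the p-value below uses the chi-square(1 dof) survival function, which equals erfc(sqrt(x/2)).

```python
import math

def mcnemar(b, c, correction=True):
    """McNemar test for paired binary outcomes (e.g. right vs left TMJ findings).

    b, c: counts of discordant pairs (finding present on one side only).
    Returns (chi2 statistic, two-sided p-value). With correction=True the
    Yates continuity correction (|b - c| - 1)^2 / (b + c) is applied.
    """
    if b + c == 0:
        return 0.0, 1.0
    num = (abs(b - c) - 1) ** 2 if correction else (b - c) ** 2
    chi2 = num / (b + c)
    # survival function of chi-square with 1 dof: P(X > x) = erfc(sqrt(x/2))
    p = math.erfc(math.sqrt(chi2 / 2.0))
    return chi2, p
```

For small b + c an exact binomial version is preferred; the chi-square form shown here is the common large-sample approximation.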
Noise suppression in reconstruction of low-Z target megavoltage cone-beam CT images SciTech Connect Wang Jing; Robar, James; Guan Huaiqun 2012-08-15 Purpose: To improve the image contrast-to-noise ratio (CNR) for low-Z target megavoltage cone-beam CT (MV CBCT) using a statistical projection noise suppression algorithm based on the penalized weighted least-squares (PWLS) criterion. Methods: Projection images of a contrast phantom, a CatPhan® 600 phantom and a head phantom were acquired by a Varian 2100EX LINAC with a low-Z (Al) target and low energy x-ray beam (2.5 MeV) at a low-dose level and at a high-dose level. The projections were then processed by minimizing the PWLS objective function. The weighted least square (WLS) term models the noise of the measured projections and the penalty term enforces the smoothing constraints of the projection image. The variance of projection data was chosen as the weight for the PWLS objective function and it determined the contribution of each measurement. An anisotropic quadratic form penalty that incorporates the gradient information of the projection image was used to preserve edges during noise reduction. Low-Z target MV CBCT images were reconstructed by the FDK algorithm after each projection was processed by the PWLS smoothing. Results: Noise in low-Z target MV CBCT images was greatly suppressed after the PWLS projection smoothing, without noticeable sacrifice of the spatial resolution. Depending on the choice of smoothing parameter, the CNR of selected regions of interest in the PWLS processed low-dose low-Z target MV CBCT image can be higher than in the corresponding high-dose image. Conclusion: The CNR of low-Z target MV CBCT images was substantially improved by using PWLS projection smoothing. The PWLS projection smoothing algorithm allows the reconstruction of high contrast low-Z target MV CBCT images with a total dose of as low as 2.3 cGy. 3.
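The PWLS idea in the abstract above, a data-fidelity term weighted by the projection variance plus a smoothing penalty, can be sketched in 1D. This is a toy illustration with an isotropic quadratic penalty solved by Gauss-Seidel updates, not the authors' anisotropic, edge-preserving version: each x_i has a closed-form minimizer given its neighbours.

```python
def pwls_smooth(y, weights, beta=1.0, iters=200):
    """1D penalized weighted least-squares smoothing (illustrative sketch).

    Minimizes  sum_i w_i (x_i - y_i)^2 + beta * sum_i (x_i - x_{i+1})^2
    by Gauss-Seidel sweeps. Setting the per-pixel derivative to zero gives
    x_i = (w_i y_i + beta * sum of neighbours) / (w_i + beta * #neighbours).
    Low-weight (high-variance) measurements are smoothed more strongly.
    """
    x = list(y)
    n = len(y)
    for _ in range(iters):
        for i in range(n):
            nbrs = []
            if i > 0:
                nbrs.append(x[i - 1])
            if i < n - 1:
                nbrs.append(x[i + 1])
            x[i] = (weights[i] * y[i] + beta * sum(nbrs)) / (
                weights[i] + beta * len(nbrs)
            )
    return x
```

Choosing the weights as the inverse projection variance is what makes the estimator statistical rather than a plain low-pass filter.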
Three-dimensional focus of attention for iterative cone-beam micro-CT reconstruction Benson, T. M.; Gregor, J. 2006-09-01 Three-dimensional iterative reconstruction of high-resolution, circular orbit cone-beam x-ray CT data is often considered impractical due to the demand for vast amounts of computer cycles and associated memory. In this paper, we show that the computational burden can be reduced by limiting the reconstruction to a small, well-defined portion of the image volume. We first discuss using the support region defined by the set of voxels covered by all of the projection views. We then present a data-driven preprocessing technique called focus of attention that heuristically separates both image and projection data into object and background before reconstruction, thereby further reducing the reconstruction region of interest. We present experimental results for both methods based on mouse data and a parallelized implementation of the SIRT algorithm. The computational savings associated with the support region are substantial. However, the results for focus of attention are even more impressive in that only about one quarter of the computer cycles and memory are needed compared with reconstruction of the entire image volume. The image quality is not compromised by either method. 4. Clinical Implementation Of Megavoltage Cone Beam CT As Part Of An IGRT Program Gonzalez, Albin; Bauer, Lisa; Kinney, Vicki; Crooks, Cheryl 2008-03-01 Knowing where the tumor is at all times during treatment is the next challenge in the field of radiation therapy. This issue has become more important because with treatments such as Intensity Modulated Radiation Therapy (IMRT), healthy tissue is spared by using very tight margins around the tumor. These tight margins leave very small room for patient setup errors. The use of an imaging modality in the treatment room as a way to localize the tumor for patient set up is generally known as "Image Guided Radiation Therapy" or IGRT. 
This article deals with a form of IGRT known as Megavoltage Cone Beam Computed Tomography (MCBCT) using a Siemens Oncor linear accelerator currently in use at Firelands Regional Medical Center. With MCBCT, we are capable of acquiring CT images right before the treatment of the patient and then using this information to position the patient's tumor according to the treatment plan. This article presents the steps followed in order to clinically implement this system, as well as some of the quality assurance tests suggested by the manufacturer and some tests developed in house. 5. Descriptive study of the bifid mandibular canals and retromolar foramina: cone beam CT vs panoramic radiography PubMed Central Muinelo-Lorenzo, J; Suárez-Quintanilla, J A; Fernández-Alonso, A; Marsillas-Rascado, S 2014-01-01 Objectives: To examine the presence and morphologic characteristics of bifid mandibular canals (BMCs) and retromolar foramens (RFs) using cone beam CT (CBCT) and to determine their visualization on panoramic radiographs (PANs). Methods: A sample of 225 CBCT examinations was analysed for the presence of BMCs, as well as length, height, diameter and angle. The diameter of the RF was also determined. Subsequently, corresponding PANs were analysed to determine whether the BMCs and RFs were visible or not. Results: The BMCs were observed on CBCT in 83 out of the 225 patients (36.8%). With respect to gender, statistically significant differences were found in the number of BMCs. There were also significant differences in anatomical characteristics of the types of BMCs. Only 37.8% of the BMCs and 32.5% of the RFs identified on CBCT were also visible on PANs. The diameter had a significant effect on the capability of PANs to visualize BMCs and RFs (B = 0.791, p = 0.035; B = 1.900, p = 0.017, respectively). Conclusions: PANs are unable to sufficiently identify BMCs and RFs. The diameter of these anatomical landmarks represents a relevant factor for visualization on PANs.
Pre-operative images using only PANs may lead to underestimation of the presence of BMCs and to surgical complications and anaesthetic failures, which could have been avoided. For true determination of BMCs, a CBCT device should be considered better than a PAN. PMID:24785820 6. 3D Alternating Direction TV-Based Cone-Beam CT Reconstruction with Efficient GPU Implementation PubMed Central Cai, Ailong; Zhang, Hanming; Li, Lei; Xi, Xiaoqi; Guan, Min; Li, Jianxin 2014-01-01 Iterative image reconstruction (IIR) with sparsity-exploiting methods, such as total variation (TV) minimization, claims potentially large reductions in sampling requirements. However, the computation complexity becomes a heavy burden, especially in 3D reconstruction situations. In order to improve the performance of iterative reconstruction, an efficient IIR algorithm for cone-beam computed tomography (CBCT) with GPU implementation has been proposed in this paper. In the first place, an algorithm based on alternating direction total variation using local linearization and proximity technique is proposed for CBCT reconstruction. The applied proximal technique avoids the costly pseudoinverse computation of a large matrix, which makes the proposed algorithm applicable and efficient for CBCT imaging. The iteration for this algorithm is simple but convergent. The simulation and real CT data reconstruction results indicate that the proposed algorithm is both fast and accurate. The GPU implementation shows an excellent acceleration ratio of more than 100 compared with CPU computation without losing numerical accuracy. The runtime for the new 3D algorithm is about 6.8 seconds per loop with the image size of 256 × 256 × 256 and 36 projections of size 512 × 512. PMID:25045400 7. How I Do It: Cone-Beam CT during Transarterial Chemoembolization for Liver Cancer PubMed Central Tacher, Vania; Radaelli, Alessandro; Lin, MingDe 2015-01-01 8. Breast density quantification with cone-beam CT: a post-mortem study.
PubMed Johnson, Travis; Ding, Huanjun; Le, Huy Q; Ducote, Justin L; Molloi, Sabee 2013-12-07 Forty post-mortem breasts were imaged with a flat-panel based cone-beam x-ray CT system at 50 kVp. The feasibility of breast density quantification has been investigated using standard histogram thresholding and an automatic segmentation method based on the fuzzy c-means algorithm (FCM). The breasts were chemically decomposed into water, lipid, and protein immediately after image acquisition was completed. The per cent fibroglandular volume (%FGV) from chemical analysis was used as the gold standard for breast density comparison. Both image-based segmentation techniques showed good precision in breast density quantification with high linear coefficients between the right and left breast of each pair. When comparing with the gold standard using %FGV from chemical analysis, Pearson's r-values were estimated to be 0.983 and 0.968 for the FCM clustering and the histogram thresholding techniques, respectively. The standard error of the estimate was also reduced from 3.92% to 2.45% by applying the automatic clustering technique. The results of the postmortem study suggested that breast tissue can be characterized in terms of water, lipid and protein contents with high accuracy by using chemical analysis, which offers a gold standard for breast density studies comparing different techniques. In the investigated image segmentation techniques, the FCM algorithm had high precision and accuracy in breast density quantification. In comparison to conventional histogram thresholding, it was more efficient and reduced inter-observer variation. 9. 
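The fuzzy c-means (FCM) segmentation used in the breast-density study above can be illustrated on 1D intensity values. This is a minimal sketch of the standard FCM update rules (two clusters, fuzzifier m); the study's actual implementation details are not given in the abstract.

```python
def fcm_1d(values, c=2, m=2.0, iters=50, eps=1e-9):
    """Minimal 1D fuzzy c-means clustering (illustrative sketch).

    Returns (cluster centers, per-sample membership rows). Membership of
    sample i in cluster k is proportional to 1 / d_ik^(2/(m-1)); centers
    are membership^m-weighted means. eps guards against zero distances.
    """
    lo, hi = min(values), max(values)
    # spread initial centers evenly across the intensity range
    centers = [lo + (hi - lo) * (k + 1) / (c + 1) for k in range(c)]
    expo = 2.0 / (m - 1.0)
    for _ in range(iters):
        u = []
        for v in values:
            inv = [1.0 / max(abs(v - ck), eps) ** expo for ck in centers]
            s = sum(inv)
            u.append([ik / s for ik in inv])
        centers = [
            sum(u[i][k] ** m * values[i] for i in range(len(values)))
            / sum(u[i][k] ** m for i in range(len(values)))
            for k in range(c)
        ]
    return centers, u
```

For density quantification, each voxel's fibroglandular fraction can then be read off from its membership row instead of a hard threshold, which is why FCM reduces inter-observer variation relative to histogram thresholding.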
Comparative study between conventional and cone beam CT-synthesized half and total skull cephalograms PubMed Central Liedke, GS; Delamare, EL; Vizzotto, MB; da Silveira, HLD; Prietsch, JR; Dutra, V; da Silveira, HED 2012-01-01 Objectives The aim of this study was to compare cephalometric measurements obtained from conventional cephalograms with total and half-skull synthesized cone beam CT (CBCT) cephalograms. Methods Cephalometric analyses of 30 clinically symmetric patients were conducted by a calibrated examiner on conventional and CBCT-synthesized cephalograms (total, right and left). Reproducibility was investigated using the intraclass correlation coefficient (ICC). The Bland–Altman analysis was used to assess the agreement of the measurements from each factor obtained by conventional, total, right and left CBCT-synthesized cephalograms. Results The ICC was above 0.9 for most of the 40 cephalometric factors analysed, revealing similar levels of reproducibility. When the measurements obtained from conventional and CBCT-synthesized cephalograms were compared, the Bland–Altman analysis showed a strong agreement between them. Conclusions Half-skull CBCT-synthesized cephalograms offer the same diagnostic performance and equivalent reproducibility in terms of cephalometric analysis as observed in conventional and total CBCT-synthesized cephalograms. PMID:22301638 10. Observer Reliability of Three-Dimensional Cephalometric Landmark Identification on Cone-Beam CT PubMed Central de Oliveira, Ana Emilia F.; Cevidanes, Lucia Helena S.; Phillips, Ceib; Motta, Alexandre; Burke, Brandon; Tyndall, Donald 2009-01-01 Objective To evaluate reliability in 3D landmark identification using Cone-Beam CT. Study Design Twelve pre-surgery CBCTs were randomly selected from 159 orthognathic surgery patients. Three observers independently repeated three times the identification of 30 landmarks in the sagittal, coronal, and axial slices. 
A mixed effects ANOVA model estimated the Intraclass Correlations (ICC) and assessed systematic bias. Results The ICC was >0.9 for 86% of intra-observer assessments and 66% of inter-observer assessments. Only 1% of intra-observer and 3% of inter-observer coefficients were <0.45. The systematic difference among observers was greater in X and Z than in Y dimensions, but the maximum mean difference was quite small. Conclusion Overall, the intra- and inter-observer reliability was excellent. 3D landmark identification using CBCT can offer consistent and reproducible data, if a protocol for operator training and calibration is followed. This is particularly important for landmarks not easily specified in all three planes of space. PMID:18718796 11. Effect of anatomical backgrounds on detectability in volumetric cone beam CT images Han, Minah; Park, Subok; Baek, Jongduk 2016-03-01 As anatomical noise is often a dominating factor affecting signal detection in medical imaging, we investigate the effects of anatomical backgrounds on signal detection in volumetric cone beam CT images. Signal detection performances are compared between transverse and longitudinal planes with either uniform or anatomical backgrounds. Sphere objects with diameters of 1 mm, 5 mm, 8 mm, and 11 mm are used as the signals. Three-dimensional (3D) anatomical backgrounds are generated using an anatomical noise power spectrum, 1/f^β, with β = 3, equivalent to mammographic background [1]. The mean voxel value of the 3D anatomical backgrounds is used as an attenuation coefficient of the uniform background. Noisy projection data are acquired by the forward projection of the uniform and anatomical 3D backgrounds with/without sphere lesions and by the addition of quantum noise. Then, images are reconstructed by an FDK algorithm [2].
For each signal size, signal detection performances in transverse and longitudinal planes are measured by calculating the task SNR of a channelized Hotelling observer with Laguerre-Gauss channels. In the uniform background case, transverse planes yield higher task SNR values for all sphere diameters but 1 mm. In the anatomical background case, longitudinal planes yield higher task SNR values for all signal diameters. The results indicate that it is beneficial to use longitudinal planes to detect spherical signals in anatomical backgrounds. 12. Investigation on location dependent detectability in cone beam CT images with uniform and anatomical backgrounds Han, Minah; Baek, Jongduk 2017-03-01 We investigate location dependent lesion detectability of cone beam computed tomography images for different background types (i.e., uniform and anatomical), image planes (i.e., transverse and longitudinal) and slice thicknesses. Anatomical backgrounds are generated using a power law spectrum of breast anatomy, 1/f^3. A spherical object with a 5 mm diameter is used as the signal. CT projection data are acquired by the forward projection of uniform and anatomical backgrounds with and without the signal. Then, projection data are reconstructed using the FDK algorithm. Detectability is evaluated by a channelized Hotelling observer with dense difference-of-Gaussian channels. For uniform background, off-centered images yield higher detectability than iso-centered images for the transverse plane, while for the longitudinal plane, detectability of iso-centered and off-centered images is similar. For anatomical background, off-centered images yield higher detectability for the transverse plane, while iso-centered images yield higher detectability for the longitudinal plane, when the slice thickness is smaller than 1.9 mm.
The optimal slice thickness is 3.8 mm for all tasks. The transverse plane at the off-center produces the highest detectability for the uniform background, and the transverse plane at both the iso-center and the off-center for the anatomical background. 13. Task-Based Regularization Design for Detection of Intracranial Hemorrhage in Cone-Beam CT PubMed Central Dang, H.; Stayman, J. W.; Xu, J.; Sisniega, A.; Zbijewski, W.; Wang, X.; Foos, D. H.; Aygun, N.; Koliatsos, V. E.; Siewerdsen, J. H. 2016-01-01 Prompt and reliable detection of acute intracranial hemorrhage (ICH) is critical to treatment of a number of neurological disorders. Cone-beam CT (CBCT) systems are potentially suitable for detecting ICH (contrast 40-80 HU, size down to 1 mm) at the point of care but face major challenges in image quality requirements. Statistical reconstruction demonstrates improved noise-resolution tradeoffs in CBCT head imaging, but its capability in improving image quality with respect to the task of ICH detection remains to be fully investigated. Moreover, statistical reconstruction typically exhibits nonuniform spatial resolution and noise characteristics, leading to spatially varying detectability of ICH for a conventional penalty. In this work, we propose a spatially varying penalty design that maximizes detectability of ICH at each location throughout the image. We leverage theoretical analysis of spatial resolution and noise for a penalized weighted least-squares (PWLS) estimator, and employ a task-based imaging performance descriptor in terms of detectability index using a nonprewhitening observer model. Performance prediction was validated using a 3D anthropomorphic head phantom. The proposed penalty achieved superior detectability throughout the head and improved detectability in regions adjacent to the skull base by ~10% compared to a conventional uniform penalty.
PWLS reconstruction with the proposed penalty demonstrated excellent visualization of simulated ICH in different regions of the head and provides further support for development of dedicated CBCT head scanning at the point-of-care in the neuro ICU and OR. 14. 4D cone-beam CT reconstruction using multi-organ meshes for sliding motion modeling PubMed Central Zhong, Zichun; Gu, Xuejun; Mao, Weihua; Wang, Jing 2016-01-01 A simultaneous motion estimation and image reconstruction (SMEIR) strategy was proposed for 4D cone-beam CT (4D-CBCT) reconstruction and showed excellent results in both phantom and lung cancer patient studies. In the original SMEIR algorithm, the deformation vector field (DVF) was defined on a voxel grid and estimated by enforcing a global smoothness regularization term on the motion fields. The objective of this work is to improve the computation efficiency and motion estimation accuracy of SMEIR for 4D-CBCT through developing a multi-organ meshing model. Feature-based adaptive meshes were generated to reduce the number of unknowns in the DVF estimation and accurately capture the organ shapes and motion. Additionally, the discontinuity in the motion fields between different organs during respiration was explicitly considered in the multi-organ mesh model. This will help with the accurate visualization and motion estimation of the tumor on the organ boundaries in 4D-CBCT. To further improve the computational efficiency, a GPU-based parallel implementation was designed. The performance of the proposed algorithm was evaluated on a synthetic sliding motion phantom, a 4D NCAT phantom, and four lung cancer patients. The proposed multi-organ mesh-based strategy outperformed the conventional Feldkamp–Davis–Kress, iterative total variation minimization, original SMEIR and single meshing method based on both qualitative and quantitative evaluations. PMID:26758496 15.
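Several of the detectability studies above (items 11 and 12) generate anatomical backgrounds by shaping white noise so its power spectrum falls off as 1/f^β. A 1D sketch of that idea follows; it is illustrative only (the studies use 2D/3D fields and FFTs), and the naive O(n^2) DFT is used purely to keep the example self-contained.

```python
import cmath
import random

def power_law_noise(n, beta=3.0, seed=0):
    """1D power-law ('anatomical') noise sketch.

    White Gaussian noise is transformed to the frequency domain, each bin is
    scaled by 1/f^(beta/2) (so the *power* spectrum falls as 1/f^beta), and
    the result is transformed back. The DC bin is zeroed to fix the mean at 0.
    """
    rng = random.Random(seed)
    white = [rng.gauss(0.0, 1.0) for _ in range(n)]
    # forward DFT (naive O(n^2) for self-containment; real code uses an FFT)
    spec = [
        sum(white[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
        for k in range(n)
    ]
    shaped = [0j] * n
    for k in range(1, n):
        f = min(k, n - k)  # symmetric frequency index keeps the output real
        shaped[k] = spec[k] / (f ** (beta / 2.0))
    # inverse DFT; the imaginary parts cancel up to rounding error
    return [
        sum(shaped[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
        for t in range(n)
    ]
```

Because the scaling factor depends only on |f|, the Hermitian symmetry of the spectrum of a real signal is preserved, which is what guarantees a real-valued background.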
Measurement of inter and intra fraction organ motion in radiotherapy using cone beam CT projection images Marchant, T. E.; Amer, A. M.; Moore, C. J. 2008-02-01 A method is presented for extraction of intra and inter fraction motion of seeds/markers within the patient from cone beam CT (CBCT) projection images. The position of the marker is determined on each projection image and fitted to a function describing the projection of a fixed point onto the imaging panel at different gantry angles. The fitted parameters provide the mean marker position with respect to the isocentre. Differences between the theoretical function and the actual projected marker positions are used to estimate the range of intra fraction motion and the principal motion axis in the transverse plane. The method was validated using CBCT projection images of a static marker at known locations and of a marker moving with known amplitude. The mean difference between actual and measured motion range was less than 1 mm in all directions, although errors of up to 5 mm were observed when large amplitude motion was present in an orthogonal direction. In these cases it was possible to calculate the range of motion magnitudes consistent with the observed marker trajectory. The method was shown to be feasible using clinical CBCT projections of a pancreas cancer patient. 16. Motion artefacts in cone beam CT: an in vitro study about the effects on the images PubMed Central Molteni, Roberto; Lorini, Chiara; Taliani, Gian G; Matteuzzi, Benedetta; Mazzoni, Elisa; Colagrande, Stefano 2016-01-01 Objective: In cone beam CT (CBCT), imperfect patient immobility, caused by involuntary movements, is one of the most important causes of artefacts and image quality degradation. Various works in literature address this topic, but seldom is the nature of the movement correlated with the type of artefact and the image degradation in a systematic manner, and the correlation analyzed and explained. 
Methods: All three types of movements that can occur during a scan—nodding, tilting and rolling—were applied to a dry skull, in various manners from abrupt to gradual through the entire scan, at different times and angles, over a wide range of displacements. 84 scans were performed, with different skull movements, and the resulting images were examined by two skilled radiologists, rated on a four-point scale and statistically analyzed. A commercial CBCT machine was used, featuring supine patient positioning. Results: Different types of movements induce different artefacts, in different parts of the anatomy. In general, movement of short duration may lead to double contours (bilateral or monolateral depending upon the angle of the scan at which they occur), whereas gradual movements result in blurring. Conclusion: Not all movements cause motion artefacts that equally jeopardize the image. Rolling is the type of movement that most severely affects the image diagnostic value. Advances in knowledge: These findings may help practitioners to identify the causes of motion artefacts and the resulting image degradation, and remediate them, and manufacturers to improve the patient-positioning devices. 17. Extra projection data identification method for fast-continuous-rotation industrial cone-beam CT. PubMed Yang, Min; Duan, Shengling; Duan, Jinghui; Wang, Xiaolong; Li, Xingdong; Meng, Fanyong; Zhang, Jianhai 2013-01-01 Fast-continuous-rotation is an effective measure to improve the scanning speed and decrease the radiation dose for cone-beam CT. However, because of acceleration and deceleration of the motor, as well as the response lag of the scanning control terminals to the host PC, unevenly distributed and redundant projections are inevitably created, which seriously decrease the quality of the reconstruction images. In this paper, we first analyzed the aspects of the theoretical sequence chart of the fast-continuous-rotation mode.
Then, an optimized sequence chart was proposed by extending the rotation angle span to ensure the effective 2π-span projections were situated in the stable rotation stage. In order to match the rotation angle with the projection image accurately, the structural similarity (SSIM) index was used as a control parameter for extraction of the effective projection sequence, which constituted exactly the complete projection data for image reconstruction. The experimental results showed that the SSIM-based method located projection views with high accuracy and was easy to implement. 18. Pseudo super-resolution for improved calcification characterization for cone beam breast CT (CBBCT) Liu, Jiangkun; Ning, Ruola; Cai, Weixing 2010-04-01 Cone Beam Breast CT imaging (CBBCT) is a promising tool for diagnosis of breast tumors and calcifications. However, as the sizes of calcifications in early stages are very small, it is not easy to distinguish them from background tissues because of the relatively high noise level. Therefore, it is necessary to enhance the visualization of calcifications for accurate detection. In this work, the Papoulis-Gerchberg (PG) method was introduced and modified to improve calcification characterization. The PG method is an iterative signal-extrapolation algorithm and has been demonstrated to be very effective in image restoration tasks like super-resolution (SR) and inpainting. The projection images were zoomed by the bicubic interpolation method; the modified PG method was then applied to improve the image quality. The reconstruction from processed projection images showed that this approach can effectively improve the image quality by improving the Modulation Transfer Function (MTF) with a limited increase in noise level. As a result, the detectability of calcifications was improved in CBBCT images. 19. The effect of cone beam CT (CBCT) on therapeutic decision-making in endodontics.
PubMed Mota de Almeida, F J; Knutsson, K; Flygare, L 2014-01-01 The aim was to assess to what extent cone beam CT (CBCT) used in accordance with current European Commission guidelines in a normal clinical setting has an impact on therapeutic decisions in a population referred for endodontic problems. The study includes data of consecutively examined patients collected from October 2011 to December 2012. From 2 different endodontic specialist clinics, 57 patients were referred for a CBCT examination using criteria in accordance with current European guidelines. The CBCT examinations were performed using similar equipment and standardized among clinics. After a thorough clinical examination, but before CBCT, the examiner made a preliminary therapy plan which was recorded. After the CBCT examination, the same examiner made a new therapy plan. Therapy plans both before and after the CBCT examination were plotted for 53 patients and 81 teeth. As four patients had incomplete protocols, they were not included in the final analysis. 4% of the patients referred to endodontic clinics during the study period were examined with CBCT. The most frequent reason for referral to CBCT examination was to differentiate pathology from normal anatomy, this was the case in 24 patients (45% of the cases). The primary outcome was therapy plan changes that could be attributed to CBCT examination. There were changes in 28 patients (53%). CBCT has a significant impact on therapeutic decision efficacy in endodontics when used in concordance with the current European Commission guidelines. 20. The effect of cone beam CT (CBCT) on therapeutic decision-making in endodontics PubMed Central Knutsson, K; Flygare, L 2014-01-01 Objectives: The aim was to assess to what extent cone beam CT (CBCT) used in accordance with current European Commission guidelines in a normal clinical setting has an impact on therapeutic decisions in a population referred for endodontic problems. 
Methods: The study includes data of consecutively examined patients collected from October 2011 to December 2012. From 2 different endodontic specialist clinics, 57 patients were referred for a CBCT examination using criteria in accordance with current European guidelines. The CBCT examinations were performed using similar equipment and standardized among clinics. After a thorough clinical examination, but before CBCT, the examiner made a preliminary therapy plan which was recorded. After the CBCT examination, the same examiner made a new therapy plan. Therapy plans both before and after the CBCT examination were plotted for 53 patients and 81 teeth. As four patients had incomplete protocols, they were not included in the final analysis. Results: 4% of the patients referred to endodontic clinics during the study period were examined with CBCT. The most frequent reason for referral to CBCT examination was to differentiate pathology from normal anatomy; this was the case in 24 patients (45% of the cases). The primary outcome was therapy plan changes that could be attributed to CBCT examination. There were changes in 28 patients (53%). Conclusions: CBCT has a significant impact on therapeutic decision efficacy in endodontics when used in concordance with the current European Commission guidelines. PMID:24766060 1. A web-based instruction module for interpretation of craniofacial cone beam CT anatomy. PubMed Hassan, B A; Jacobs, R; Scarfe, W C; Al-Rawi, W T 2007-09-01 To develop a web-based module for learner instruction in the interpretation and recognition of osseous anatomy on craniofacial cone-beam CT (CBCT) images. Volumetric datasets from three CBCT systems were acquired (i-CAT, NewTom 3G and AccuiTomo FPD) for various subjects using equipment-specific scanning protocols. The datasets were processed using multiple software packages to provide two-dimensional (2D) multiplanar reformatted (MPR) images (e.g.
sagittal, coronal and axial) and three-dimensional (3D) visual representations (e.g. maximum intensity projection, minimum intensity projection, ray sum, surface and volume rendering). Distinct didactic modules which illustrate the principles of CBCT systems, guided navigation of the volumetric dataset, and anatomic correlation of 3D models and 2D MPR graphics were developed using a hybrid combination of web authoring and image analysis techniques. Interactive web multimedia instruction was facilitated by the use of dynamic highlighting and labelling, and rendered video illustrations, supplemented with didactic textual material. HTML coding and Java scripting were heavily implemented for the blending of the educational modules. An interactive, multimedia educational tool for visualizing the morphology and interrelationships of osseous craniofacial anatomy, as depicted on CBCT MPR and 3D images, was designed and implemented. The present design of a web-based instruction module may assist radiologists and clinicians in learning how to recognize and interpret the craniofacial anatomy of CBCT based images more efficiently. 2. [Cone-beam CT study of bone septa during maxillary sinus lift among Changzhou population]. PubMed Chen, Min-zhen; Xie, Yong-fu; Xie, Hui; Wang, Guo-hai; He, Jia-cai 2016-02-01 To observe the incidence, location and morphological characteristics of sinus septa among the Changzhou population, to investigate the relationship between maxillary posterior tooth loss and bony septa, and to assess the guiding significance for sinus lift. One hundred and twenty-four subjects were selected; the preoperative cone-beam CT (CBCT) data were analyzed with NNT software, which provided a three-dimensional measurement of the maxillary sinus septa. SPSS 13.0 software package was used for statistical analysis. 33.87% (42/124) of subjects had sinus septa, and 27.42% (68/248) of sinuses had septa.
66.18% (45/68) of the septa were located in the middle region, 22.06% (15/68) in the posterior region, and 11.76% (8/68) in the anterior region. The occurrence of sinus septa had no relation to gender, age or loss of teeth. CBCT can be used to observe the position and pattern of sinus septa, to predict the difficulty of the surgery, and to enhance the success rate. 3. A motion-compensated cone-beam CT using electrical impedance tomography imaging. PubMed Pengpan, T; Smith, N D; Qiu, W; Yao, A; Mitchell, C N; Soleimani, M 2011-01-01 Cone-beam CT (CBCT) is an imaging technique used in conjunction with radiation therapy. For example, CBCT is used to verify the position of lung cancer tumours just prior to radiation treatment. The accuracy of the radiation treatment of thoracic and upper abdominal structures is heavily affected by respiratory movement. Such movement typically blurs the CBCT reconstruction and ideally should be removed. Hence motion-compensated CBCT has recently been researched for correcting image artefacts due to breathing motion. This paper presents a new dual-modality approach where CBCT is aided by using electrical impedance tomography (EIT) for motion compensation. EIT can generate images of contrasts in electrical properties. The main advantage of using EIT is its high temporal resolution. In this paper motion information is extracted from EIT images and incorporated directly in the CBCT reconstruction. In this study synthetic moving data are generated using simulated and experimental phantoms. The paper demonstrates that image blur, created as a result of motion, can be reduced through motion compensation with EIT. 4.
4D cone-beam CT reconstruction using multi-organ meshes for sliding motion modeling Zhong, Zichun; Gu, Xuejun; Mao, Weihua; Wang, Jing 2016-02-01 A simultaneous motion estimation and image reconstruction (SMEIR) strategy was proposed for 4D cone-beam CT (4D-CBCT) reconstruction and showed excellent results in both phantom and lung cancer patient studies. In the original SMEIR algorithm, the deformation vector field (DVF) was defined on voxel grid and estimated by enforcing a global smoothness regularization term on the motion fields. The objective of this work is to improve the computation efficiency and motion estimation accuracy of SMEIR for 4D-CBCT through developing a multi-organ meshing model. Feature-based adaptive meshes were generated to reduce the number of unknowns in the DVF estimation and accurately capture the organ shapes and motion. Additionally, the discontinuity in the motion fields between different organs during respiration was explicitly considered in the multi-organ mesh model. This will help with the accurate visualization and motion estimation of the tumor on the organ boundaries in 4D-CBCT. To further improve the computational efficiency, a GPU-based parallel implementation was designed. The performance of the proposed algorithm was evaluated on a synthetic sliding motion phantom, a 4D NCAT phantom, and four lung cancer patients. The proposed multi-organ mesh based strategy outperformed the conventional Feldkamp-Davis-Kress, iterative total variation minimization, original SMEIR and single meshing method based on both qualitative and quantitative evaluations. 5. Breast density quantification with cone-beam CT: A post-mortem study PubMed Central Johnson, Travis; Ding, Huanjun; Le, Huy Q.; Ducote, Justin L.; Molloi, Sabee 2014-01-01 Forty post-mortem breasts were imaged with a flat-panel based cone-beam x-ray CT system at 50 kVp. 
The feasibility of breast density quantification has been investigated using standard histogram thresholding and an automatic segmentation method based on the fuzzy c-means algorithm (FCM). The breasts were chemically decomposed into water, lipid, and protein immediately after image acquisition was completed. The percent fibroglandular volume (%FGV) from chemical analysis was used as the gold standard for breast density comparison. Both image-based segmentation techniques showed good precision in breast density quantification with high linear coefficients between the right and left breast of each pair. When comparing with the gold standard using %FGV from chemical analysis, Pearson’s r-values were estimated to be 0.983 and 0.968 for the FCM clustering and the histogram thresholding techniques, respectively. The standard error of the estimate (SEE) was also reduced from 3.92% to 2.45% by applying the automatic clustering technique. The results of the postmortem study suggested that breast tissue can be characterized in terms of water, lipid and protein contents with high accuracy by using chemical analysis, which offers a gold standard for breast density studies comparing different techniques. In the investigated image segmentation techniques, the FCM algorithm had high precision and accuracy in breast density quantification. In comparison to conventional histogram thresholding, it was more efficient and reduced inter-observer variation. PMID:24254317 6. Implementation of the FDK algorithm for cone-beam CT on the cell broadband engine architecture Scherl, Holger; Koerner, Mario; Hofmann, Hannes; Eckert, Wieland; Kowarschik, Markus; Hornegger, Joachim 2007-03-01 In most of today's commercially available cone-beam CT scanners, the well known FDK method is used for solving the 3D reconstruction task. The computational complexity of this algorithm prohibits its use for many medical applications without hardware acceleration. 
The brand-new Cell Broadband Engine Architecture (CBEA), with its high level of parallelism, is a cost-efficient processor for performing the FDK reconstruction according to the medical requirements. The programming scheme, however, is quite different from any standard personal computer hardware. In this paper, we present an innovative implementation of the most time-consuming parts of the FDK algorithm: filtering and back-projection. We also explain the required transformations to parallelize the algorithm for the CBEA. Our software framework allows the filtering and back-projection to be computed in parallel, making on-the-fly reconstruction possible. The achieved results demonstrate that a complete FDK reconstruction is computed with the CBEA in less than seven seconds for a standard clinical scenario. Given the fact that scan times are usually much longer, we conclude that reconstruction is finished right after the end of data acquisition. This enables us to present the reconstructed volume to the physician in real time, immediately after the last projection image has been acquired by the scanning device. 7. Prevalence of apical periodontitis detected in cone beam CT images of a Brazilian subpopulation PubMed Central Paes da Silva Ramos Fernandes, LM; Ordinola-Zapata, R; Húngaro Duarte, MA; Alvares Capelozza, AL 2013-01-01 Objectives The aim of this study was to determine the prevalence of apical periodontitis (AP) detected in cone beam CT (CBCT) images from a database. Methods CBCT images of 300 Brazilian patients were assessed. AP images were measured in three dimensions. Age, gender, number and location of total teeth in each patient were considered. AP location was considered according to tooth groups. The extent of AP was determined by the largest diameter in any of the three dimensions. Percentages and the χ2 test were used for statistical analysis. Results AP was found in 51.4% of the patients and in 3.4% of the teeth.
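The Pearson χ2 test named in the Methods above can be computed by hand for a 2 × 2 contingency table. The counts below are hypothetical and only illustrate the mechanics of the test, not the study's data:

```python
# Pearson chi-square statistic for a 2x2 contingency table, computed by hand.
# Counts are hypothetical (e.g. presence/absence of AP in two age groups).
table = [[30, 70],   # group 1: with AP, without AP
         [20, 80]]   # group 2: with AP, without AP

row_totals = [sum(r) for r in table]
col_totals = [sum(c) for c in zip(*table)]
grand_total = sum(row_totals)

chi2 = 0.0
for i in range(2):
    for j in range(2):
        # Expected count under independence of row and column factors.
        expected = row_totals[i] * col_totals[j] / grand_total
        chi2 += (table[i][j] - expected) ** 2 / expected
```

In practice a library routine (e.g. a contingency-table test from a statistics package) would also return the p-value; the loop above shows only the statistic itself.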
Higher prevalence of AP was found in 60- to 69-year-olds (73.1%) and in mandibular molars (5.9%) (p < 0.05). Teeth with inadequate endodontic treatment presented a higher prevalence of AP (78.1%). Conclusions AP can be frequently found in CBCT examinations. The presence of AP has a significant association with patients' age, and tooth type and condition. CBCT databases are useful for cross-sectional studies about AP prevalence in a population. PMID:22752318 8. Diagnostic value of cone-beam CT in histologically confirmed otosclerosis. PubMed Liktor, Balázs; Révész, Péter; Csomor, Péter; Gerlinger, Imre; Sziklai, István; Karosi, Tamás 2014-08-01 This retrospective case review was performed with the aim to assess the value of cone-beam computed tomography (CBCT) in the preoperative diagnosis of otosclerosis. A total of 32 patients with histologically confirmed stapedial otosclerosis, who underwent unilateral stapedectomies, were analyzed. Preoperative temporal bone CBCT scans were performed in all cases. CBCT imaging was characterized by a slice thickness of 0.3 mm and multiplanar image reconstruction. Histopathologic examination of the removed stapes footplates was performed in all cases. Findings of CBCT were categorized according to Marshall's grading system (from grade 0 to grade 3). Histopathologic results were correlated with multiplanar reconstructed CBCT scans. Histologically active foci of otosclerosis (n = 21) were identified by CBCT in all cases with a sensitivity of 100%. However, CBCT was unable to detect histologically inactive otosclerosis (n = 11, sensitivity = 0%). According to CBCT scans, no retrofenestral lesions were found and all positive cases were recruited into the grade 1 group, indicating solely fenestral lesions at the anterior pole of the stapes footplates. In conclusion, CBCT is a reliable imaging method with a considerably lower radiation dose than high-resolution CT (HRCT) in the preoperative diagnosis of otosclerosis.
These results indicate that CBCT has high sensitivity and specificity in the detection of hypodense lesions due to histologically active otosclerosis. 9. Performance of cone-beam CT using a flat-panel imager Endo, Masahiro; Tsunoo, Takanori; Satoh, Kazumasa; Matsusita, Satoshi; Kusakabe, Masahiro; Fukuda, Yasushi 2001-06-01 An active matrix flat-panel imager (FPI) is a good candidate for the 2-dimensional detector of cone beam CT (CBCT), because it has a wider dynamic range and less geometrical distortion than the video-fluoroscopic systems employed so far. However, the performance of FPI-based CBCT has not yet been sufficiently examined. The aim of this work is to examine the performance of CBCT using a FPI with several phantoms. An X-ray tube, a phantom and a FPI were aligned on an experimental table. The FPI was a PaxScan 2520 provided by Varian Medical Systems. It has an active area of approximately 180 × 240 mm and a pixel size of 127 μm. CsI is used as the scintillator. The phantom was rotated in 1-degree steps while 360 projection frames (1408 × 1888 active pixels each) were collected. 2 × 2 pixels were combined into a single pixel to reduce noise. 512 × 512 × 512 voxels were reconstructed with the Feldkamp method. A comparison was made between images reconstructed with and without a scatter-rejecting grid. The uniformity and linearity of the reconstruction values were drastically improved with the grid. Scatter rejection using a thin-vane collimator was also examined, and it proved more effective than the grid. 10. Iterative reconstruction optimisations for high angle cone-beam micro-CT Recur, B.; Fauconneau, M.; Kingston, A.; Myers, G.; Sheppard, A. 2014-09-01 We address several acquisition questions that have arisen for the high cone-angle helical-scanning micro-CT facility developed at the Australian National University. These challenges are generally known in medical and industrial cone-beam scanners but can be neglected in those systems.
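The 2 × 2 detector-pixel binning mentioned in the flat-panel imager study (entry 9) can be sketched in NumPy; this is an illustrative implementation, not the authors' code:

```python
import numpy as np

def bin2x2(frame):
    """Combine each 2x2 block of detector pixels into one pixel by averaging,
    trading spatial resolution for reduced noise."""
    h, w = frame.shape
    assert h % 2 == 0 and w % 2 == 0, "frame dimensions must be even"
    # Reshape so that each 2x2 block occupies axes 1 and 3, then average them.
    return frame.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Example: a 4x4 frame bins down to 2x2.
frame = np.arange(16, dtype=float).reshape(4, 4)
binned = bin2x2(frame)   # shape (2, 2)
```

Averaging 4 uncorrelated pixels reduces the noise standard deviation by a factor of 2, which is the motivation stated in the abstract.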
For our large datasets, with more than 2048³ voxels, minimising the number of operations (or iterations) is crucial. Large cone-angles enable high signal-to-noise ratio imaging and allow a large helical pitch to be used. This introduces two challenges: (i) non-uniform resolution throughout the reconstruction, and (ii) over-scan beyond the region-of-interest, which significantly increases the required reconstructed volume size. Challenge (i) can be addressed by using a double-helix or lower pitch helix, but both solutions slow down iterations. Challenge (ii) can also be improved by using a lower pitch helix, but this results in more projections, slowing down iterations. This may be overcome by using fewer projections per revolution, but that requires more iterations. Here we assume a given total time for acquisition and a given reconstruction technique (SART) and seek to identify the optimal trajectory and number of projections per revolution in order to produce the best tomogram, minimise the reconstruction time required, and minimise memory requirements. 11. Optical CT scanner for in-air readout of gels for external radiation beam 3D dosimetry. PubMed Ramm, Daniel; Rutten, Thomas P; Shepherd, Justin; Bezak, Eva 2012-06-21 Optical CT scanners for a 3D readout of externally irradiated radiosensitive hydrogels currently require the use of a refractive index (RI) matching liquid bath to obtain suitable optical ray paths through the gel sample to the detector. The requirement for a RI matching liquid bath has been negated by the design of a plastic cylindrical gel container that provides parallel beam geometry through the gel sample for the majority of the projection. The design method can be used for various hydrogels. Preliminary test results for the prototype laser beam scanner with ferrous xylenol-orange gel show a maximum geometric distortion of 0.2 mm, spatial resolution limited by the beam spot size of about 0.4 mm, and 0.8% noise (1 SD) for a uniform irradiation.
Reconstruction of a star pattern irradiated through the cylinder walls demonstrates the suitability for external beam applications. The extremely simple and cost-effective construction of this optical CT scanner, together with the simplicity of scanning gel samples without RI matching fluid increases the feasibility of using 3D gel dosimetry for clinical external beam dose verifications. 12. Reconstruction of implanted marker trajectories from cone-beam CT projection images using interdimensional correlation modeling. PubMed Chung, Hyekyun; Poulsen, Per Rugaard; Keall, Paul J; Cho, Seungryong; Cho, Byungchul 2016-08-01 Cone-beam CT (CBCT) is a widely used imaging modality for image-guided radiotherapy. Most vendors provide CBCT systems that are mounted on a linac gantry. Thus, CBCT can be used to estimate the actual 3-dimensional (3D) position of moving respiratory targets in the thoracic/abdominal region using 2D projection images. The authors have developed a method for estimating the 3D trajectory of respiratory-induced target motion from CBCT projection images using interdimensional correlation modeling. Because the superior-inferior (SI) motion of a target can be easily analyzed on projection images of a gantry-mounted CBCT system, the authors investigated the interdimensional correlation of the SI motion with left-right and anterior-posterior (AP) movements while the gantry is rotating. A simple linear model and a state-augmented model were implemented and applied to the interdimensional correlation analysis, and their performance was compared. The parameters of the interdimensional correlation models were determined by least-square estimation of the 2D error between the actual and estimated projected target position. The method was validated using 160 3D tumor trajectories from 46 thoracic/abdominal cancer patients obtained during CyberKnife treatment. 
The authors' simulations assumed two application scenarios: (1) retrospective estimation for the purpose of moving tumor setup used just after volumetric matching with CBCT; and (2) on-the-fly estimation for the purpose of real-time target position estimation during gating or tracking delivery, either for full-rotation volumetric-modulated arc therapy (VMAT) in 60 s or a stationary six-field intensity-modulated radiation therapy (IMRT) with a beam delivery time of 20 s. For the retrospective CBCT simulations, the mean 3D root-mean-square error (RMSE) for all 4893 trajectory segments was 0.41 mm (simple linear model) and 0.35 mm (state-augmented model). In the on-the-fly simulations, prior projections over more than 60 13. Cone beam CT--anatomic assessment and legal issues: the new standards of care. PubMed Curley, Arthur; Hatcher, David C 2010-01-01 technology. Multidimensional anatomical reconstruction can be performed through software applications. The ultimate reward of technological imaging advancements is the 3-D representations (digital volume) of anatomy as it exists in nature (anatomic truth). Analysis of the accurate digital volume can provide clinically relevant spatial information or data. Visualization and analysis of 3-D information can benefit a dental practice by providing data that will improve diagnosis, risk assessment, treatment outcome and treatment efficiency, and reduce treatment complications. This article discusses the uses and benefits of 3-D imaging (cone beam CT, CBCT) for diagnosis, treatment planning and the legal issues affecting the standard of care, as well as offering risk management tips and use guidance. 14. Dose measurements for dental cone-beam CT: a comparison with MSCT and panoramic imaging Deman, P.; Atwal, P.; Duzenli, C.; Thakur, Y.; Ford, N. L. 2014-06-01 To date there is a lack of published information on appropriate methods to determine patient doses from dental cone-beam computed tomography (CBCT) equipment. 
The goal of this study is to apply and extend the methods recommended in the American Association of Physicists in Medicine (AAPM) Report 111 for CBCT equipment to characterize dose and effective dose for a range of dental imaging equipment. A protocol derived from the one proposed by Dixon et al (2010 Technical Report 111, American Association of Physicists in Medicine, MD, USA) was applied to dose measurements of multi-slice CT, dental CBCT (small and large fields of view (FOV)) and a dental panoramic system. The computed tomography dose index protocol was also performed on the MSCT to compare both methods. The dose distributions in a cylindrical polymethyl methacrylate phantom were characterized using a thimble ionization chamber and Gafchromic™ film (beam profiles). Gafchromic™ films were used to measure the dose distribution in an anthropomorphic phantom. A method was proposed to extend dose estimates to planes superior and inferior to the central plane. The dose normalized to 100 mAs measured in the center of the phantom for the large FOV dental CBCT (11.4 mGy/100 mAs) is about half that of MSCT (20.7 mGy/100 mAs) for the same FOV, but approximately 15 times higher than for a panoramic system (0.6 mGy/100 mAs). The effective doses per scan (in clinical conditions) found for the dental CBCT were 167.60 ± 3.62, 61.30 ± 3.88 and 92.86 ± 7.76 μSv for the Kodak 9000 (fixed scan length of 3.7 cm), and the iCAT Next Generation for 6 cm and 13 cm scan lengths, respectively. The method to extend the dose estimates from the central slice to superior and inferior slices indicates good agreement between theory and measurement. The Gafchromic™ films provided useful beam profile data and 2D distributions of dose in the phantom. 15. Accelerated barrier optimization compressed sensing (ABOCS) reconstruction for cone-beam CT: Phantom studies.
PubMed Niu, Tianye; Zhu, Lei 2012-07-01 Recent advances in compressed sensing (CS) enable accurate CT image reconstruction from highly undersampled and noisy projection measurements, due to the sparsifiable feature of most CT images using total variation (TV). These novel reconstruction methods have demonstrated advantages in clinical applications where radiation dose reduction is critical, such as onboard cone-beam CT (CBCT) imaging in radiation therapy. The image reconstruction using CS is formulated as either a constrained problem to minimize the TV objective within a small and fixed data fidelity error, or an unconstrained problem to minimize the data fidelity error with TV regularization. However, the conventional solutions to the above two formulations are either computationally inefficient or involved with inconsistent regularization parameter tuning, which significantly limit the clinical use of CS-based iterative reconstruction. In this paper, we propose an optimization algorithm for CS reconstruction which overcomes the above two drawbacks. The data fidelity tolerance of CS reconstruction can be well estimated based on the measured data, as most of the projection errors are from Poisson noise after effective data correction for scatter and beam-hardening effects. We therefore adopt the TV optimization framework with a data fidelity constraint. To accelerate the convergence, we first convert such a constrained optimization using a logarithmic barrier method into a form similar to that of the conventional TV regularization based reconstruction but with an automatically adjusted penalty weight. The problem is then solved efficiently by gradient projection with an adaptive Barzilai-Borwein step-size selection scheme. The proposed algorithm is referred to as accelerated barrier optimization for CS (ABOCS), and evaluated using both digital and physical phantom studies. ABOCS directly estimates the data fidelity tolerance from the raw projection data. 
Therefore, as demonstrated in both digital Shepp 17. Accelerated barrier optimization compressed sensing (ABOCS) reconstruction for cone-beam CT: Phantom studies PubMed Central Niu, Tianye; Zhu, Lei 2012-01-01 Purpose: Recent advances in compressed sensing (CS) enable accurate CT image reconstruction from highly undersampled and noisy projection measurements, due to the sparsifiable feature of most CT images using total variation (TV). These novel reconstruction methods have demonstrated advantages in clinical applications where radiation dose reduction is critical, such as onboard cone-beam CT (CBCT) imaging in radiation therapy. The image reconstruction using CS is formulated as either a constrained problem to minimize the TV objective within a small and fixed data fidelity error, or an unconstrained problem to minimize the data fidelity error with TV regularization. However, the conventional solutions to the above two formulations are either computationally inefficient or involved with inconsistent regularization parameter tuning, which significantly limit the clinical use of CS-based iterative reconstruction. In this paper, we propose an optimization algorithm for CS reconstruction which overcomes the above two drawbacks. Methods: The data fidelity tolerance of CS reconstruction can be well estimated based on the measured data, as most of the projection errors are from Poisson noise after effective data correction for scatter and beam-hardening effects. We therefore adopt the TV optimization framework with a data fidelity constraint.
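The adaptive Barzilai-Borwein step-size rule named in the ABOCS abstracts can be sketched on a toy unconstrained quadratic. This is only a sketch of the step-size scheme itself; ABOCS applies it within gradient projection on a TV-constrained CT reconstruction problem, which is not reproduced here:

```python
import numpy as np

# Toy quadratic f(x) = 0.5*x^T Q x - b^T x with exact minimizer Q^{-1} b.
Q = np.diag([1.0, 10.0])
b = np.array([1.0, 1.0])

def grad(x):
    return Q @ x - b

x = np.zeros(2)
g = grad(x)
step = 0.1                          # initial step size
for _ in range(200):
    x_new = x - step * g            # gradient step (projection omitted here)
    g_new = grad(x_new)
    s, y = x_new - x, g_new - g
    if abs(s @ y) > 1e-12:
        step = (s @ s) / (s @ y)    # Barzilai-Borwein (BB1) step size
    x, g = x_new, g_new

solution = np.linalg.solve(Q, b)    # exact minimizer for comparison
```

The BB step approximates second-order curvature from successive gradients, which is why it converges much faster than a fixed-step gradient method on ill-conditioned problems.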
To accelerate the convergence, we first convert such a constrained optimization using a logarithmic barrier method into a form similar to that of the conventional TV regularization based reconstruction but with an automatically adjusted penalty weight. The problem is then solved efficiently by gradient projection with an adaptive Barzilai–Borwein step-size selection scheme. The proposed algorithm is referred to as accelerated barrier optimization for CS (ABOCS), and evaluated using both digital and physical phantom studies. Results: ABOCS directly estimates the data fidelity tolerance from the raw projection data. Therefore, as 18. SU-F-I-06: Evaluation of Imaging Dose for Modulation Layer Based Dual Energy Cone-Beam CT SciTech Connect Ju, Eunbin; Ahn, SoHyun; Cho, Samju; Keum, Ki Chang; Lee, Rena 2016-06-15 Purpose: Dual-energy cone-beam CT systems are finding a variety of promising applications in diagnostic CT, both in imaging of endogenous and exogenous materials across a range of body sites. The dual-energy cone-beam CT system proposed in this study acquires images over a 360-degree rotation with half of the X-ray window covered by a copper modulation layer. In the covered region, the modulation layer absorbs the low-energy X-rays; the relatively high-energy X-rays pass through the layer and contribute to image reconstruction. Dose evaluation must be carried out before such an acquisition technique can be used clinically. Methods: To evaluate the imaging dose of the modulation-layer-based dual-energy cone-beam CT system, a prototype cone-beam CT configured with an X-ray tube (D054SB, Toshiba, Japan) and a detector (PaxScan 2520V, Varian Medical Systems, Palo Alto, CA) was used. Modulation layers 0.5–2.0 mm thick were implemented in Monte Carlo simulations (MCNPX, ver. 2.6.0, Los Alamos National Laboratory, USA) with half of the X-ray window covered.
An in-house phantom consisting of three cylindrical inserts of water, Teflon and air within a PMMA cover, representing a range of materials found in the human body, was implemented in the Monte Carlo simulation. The actual dose, with 2.0 mm of copper covering half of the X-ray window, was measured using Gafchromic EBT3 film under 5.0 mm of bolus for comparison with the simulated dose. Results: The dose in the phantom was reduced by 33% by the 2.0 mm copper modulation layer. The scattered dose arising from Compton scattering in the modulation layer was 0.04% of the overall dose. Conclusion: The modulation layer of the dual-energy cone-beam CT system does not contribute appreciable unwanted scatter dose. This study was supported by the Radiation Safety Research Programs (1305033) through the Nuclear Safety and Security Commission. 19. Pelvic Beam-Hardening Artifacts in Dual-Energy CT Image Reconstructions: Occurrence and Impact on Image Quality. PubMed Winklhofer, Sebastian; Lambert, Jack W; Sun, Yuxin; Wang, Zhen Jane; Sun, Derek S; Yeh, Benjamin M 2017-01-01 The purpose of this study was to describe the frequency and appearance of beam-hardening artifacts on rapid-kilovoltage-switching dual-energy CT (DECT) image reconstructions of the pelvis. Monochromatic (70, 52, and 120 keV) and material decomposition CT images (iodine-water and water-iodine) from consecutive pelvic rapid-kilovoltage-switching DECT scans were retrospectively evaluated. We recorded the presence, type (high versus low attenuation), and severity of beam-hardening artifacts (Likert scale from 0, barely seen, to 4, severe), clarity of anatomic delineation (Likert scale from 0, unimpaired, to 4, severely impaired), and SD of CT numbers, iodine and water concentrations, and gray-scale values for artifact-affected regions and corresponding unaffected reference tissue. A pelvic phantom was scanned and evaluated in a similar manner. Wilcoxon signed rank and paired t tests were used to compare results between the image reconstructions.
Beam-hardening artifacts were seen in all image reconstructions in all 41 patients (22 men, 19 women; mean age, 57 years; range 22-86 years) who met the inclusion criteria. The median artifact severity score was worse for water-iodine and iodine-water images (score of 3 for each) than for 70-keV (score 1), 52-keV (score 2), and 120-keV (score 1) images (all p < 0.001). The anatomic delineation was worse (p < 0.001) for water-iodine and iodine-water images than for monochromatic images. Higher CT number SD values, material concentrations, and gray-scale values were found for areas affected by artifacts than for reference tissues in all datasets (all p < 0.001). Similar results were seen in the phantom study. Beam-hardening artifacts are prevalent in pelvic rapid-kilovoltage-switching DECT and more severe in material decomposition than monochromatic image reconstructions. 20. Comprehensive evaluation of ten deformable image registration algorithms for contour propagation between CT and cone-beam CT images in adaptive head & neck radiotherapy PubMed Central Li, Xin; Zhang, Yuyu; Shi, Yinghua; Wu, Shuyu; Xiao, Yang; Gu, Xuejun; Zhou, Linghong 2017-01-01 Deformable image registration (DIR) is a critical technic in adaptive radiotherapy (ART) for propagating contours between planning computerized tomography (CT) images and treatment CT/cone-beam CT (CBCT) images to account for organ deformation for treatment re-planning. To validate the ability and accuracy of DIR algorithms in organ at risk (OAR) contour mapping, ten intensity-based DIR strategies, which were classified into four categories—optical flow-based, demons-based, level-set-based and spline-based—were tested on planning CT and fractional CBCT images acquired from twenty-one head & neck (H&N) cancer patients who underwent 6~7-week intensity-modulated radiation therapy (IMRT). 
Three similarity metrics, i.e., the Dice similarity coefficient (DSC), the percentage error (PE) and the Hausdorff distance (HD), were employed to measure the agreement between the propagated contours and the physician-delineated ground truths of four OARs, including the vertebra (VTB), the vertebral foramen (VF), the parotid gland (PG) and the submandibular gland (SMG). It was found that the evaluated DIRs in this work did not necessarily outperform rigid registration. DIR performed better for bony structures than soft-tissue organs, and the DIR performance tended to vary for different ROIs with different degrees of deformation as the treatment proceeded. Generally, the optical flow-based DIR performed best, while the demons-based DIR usually ranked last except for a modified demons-based DISC used for CT-CBCT DIR. These experimental results suggest that the choice of a specific DIR algorithm depends on the image modality, anatomic site, magnitude of deformation and application. Therefore, careful examinations and modifications are required before accepting the auto-propagated contours, especially for automatic re-planning ART systems. PMID:28414799 1. Autotransplantation of an immature premolar, with the aid of cone beam CT and computer-aided prototyping: a case report. PubMed Keightley, Alexander J; Cross, David L; McKerlie, Robert A; Brocklebank, Laetitia 2010-04-01 Autotransplantation of immature teeth has good survival rates, and has benefits over osseointegrated implants in the growing child, but is very technique-sensitive. Spiral CT imaging has been previously used in adult patients to enable computer-aided prototyping to produce a surgical template of the donor tooth, further increasing success rates. The case presented describes management of a 9-year-old girl with the combination of hypodontia affecting the upper lateral incisors as well as a severely ectopic maxillary canine.
Cone beam CT was used in combination with computer-aided prototyping to produce a surgical template of an immature mandibular second premolar. The surgical template was used to prepare the transplant site before the donor tooth was extracted, greatly reducing the time from extraction to implantation. By 6 months posttransplant the tooth was clinically sound, and continued root development and laying down of dentine was visible radiographically. This paper demonstrates the use of a novel technique to aid the surgical procedure of autotransplantation of immature premolar teeth. The use of autotransplantation in this case allowed the difficult situation of two missing units in the upper left quadrant to be reduced to one unit, while retaining symmetry in the upper arch. Compared to previous studies, the use of cone beam CT to create a 3D prototype reduced radiation dose compared to spiral CT and drastically reduced the extra-oral time of the donor tooth from extraction to transplantation. 2. Combined MV + kV inverse treatment planning for optimal kV dose incorporation in IGRT Grelewicz, Zachary; Wiersma, Rodney D. 2014-04-01 Despite the existence of real-time kV intra-fractional tumor tracking strategies for many years, clinical adoption has been held back by concern over the excess kV imaging dose cost to the patient when imaging in continuous fluoroscopic mode. This work aims to solve this problem by investigating, for the first time, the use of convex optimization tools to optimally integrate this excess kV imaging dose into the MV therapeutic dose in order to make real-time kV tracking clinically feasible. Phase space files modeling both a 6 MV treatment beam and a kV on-board-imaging beam of a commercial LINAC were generated with BEAMnrc, and used to generate dose influence matrices in DOSXYZnrc for ten previously treated lung cancer patients. 
The dose optimization problem for IMRT, formulated as a quadratic problem, was modified to include additional constraints as required for real-time kV intra-fractional tracking. An interior point optimizer was used to solve the modified optimization problem. It was found that when using large kV imaging apertures during fluoroscopic tracking, combined MV + kV optimization led to a 0.5%-5.17% reduction in the total number of monitor units assigned to the MV beam due to inclusion of the kV dose over the ten patients. This was accompanied by a reduction of up to 42% of the excess kV dose compared to standard MV IMRT with kV tracking. For all kV field sizes considered, combined MV + kV optimization provided prescription-dose coverage of the treatment volume equal to the no-imaging case, yet superior to standard MV IMRT with non-optimized kV fluoroscopic tracking. With combined MV + kV optimization, it is possible to quantify in a patient-specific way the dosimetric effect of real-time imaging on the patient. Such information is necessary when substantial kV imaging is performed. 3. SU-D-206-06: Task-Specific Optimization of Scintillator Thickness for CMOS-Detector Based Cone-Beam Breast CT SciTech Connect Vedantham, S; Shrestha, S; Shi, L; Vijayaraghavan, G; Karellas, A 2016-06-15 Purpose: To optimize the cesium iodide (CsI:Tl) scintillator thickness in a complementary metal-oxide semiconductor (CMOS)-based detector for use in dedicated cone-beam breast CT. Methods: The imaging task considered was the detection of a microcalcification cluster comprising six 220µm diameter calcium carbonate spheres, arranged in the form of a regular pentagon with 2 mm spacing on its sides and a central calcification, similar to that in the ACR-recommended mammography accreditation phantom, at a mean glandular dose of 4.5 mGy.
Generalized parallel-cascades based linear systems analysis was used to determine Fourier-domain image quality metrics in reconstructed object space, from which the detectability index inclusive of anatomical noise was determined for a non-prewhitening numerical observer. For 300 projections over 2π, magnification-associated focal-spot blur, Monte Carlo derived x-ray scatter, K-fluorescent emission and reabsorption within CsI:Tl, CsI:Tl quantum efficiency and optical blur, fiberoptic plate transmission efficiency and blur, CMOS quantum efficiency, pixel aperture function and additive noise, and filtered back-projection to isotropic 105µm voxel pitch with bilinear interpolation were modeled. The imaging geometry of a clinical prototype breast CT system and a 60 kV Cu/Al-filtered x-ray spectrum from a 0.3 mm focal spot incident on a 14 cm diameter semi-ellipsoidal breast were used to determine the detectability index for 300–600 µm thick (75µm increments) CsI:Tl. The CsI:Tl thickness that maximized the detectability index was considered optimal. Results: The limiting resolution (10% modulation transfer function, MTF) progressively decreased with increasing CsI:Tl thickness. The zero-frequency detective quantum efficiency, DQE(0), in projection space increased with increasing CsI:Tl thickness. The maximum detectability index was achieved with 525µm thick CsI:Tl scintillator. Reduced MTF at mid-to-high frequencies for 600µm thick CsI:Tl lowered 4. Three-dimensional anisotropic adaptive filtering of projection data for noise reduction in cone beam CT. PubMed Maier, Andreas; Wigstrom, Lars; Hofmann, Hannes G; Hornegger, Joachim; Zhu, Lei; Strobel, Norbert; Fahrig, Rebecca 2011-11-01 processing (from 1336 to 150 s). Adaptive anisotropic filtering has the potential to substantially improve image quality and/or reduce the radiation dose required for obtaining 3D image data using cone beam CT. 5.
Deformable image registration for cone-beam CT guided transoral robotic base-of-tongue surgery. PubMed Reaungamornrat, S; Liu, W P; Wang, A S; Otake, Y; Nithiananthan, S; Uneri, A; Schafer, S; Tryggestad, E; Richmon, J; Sorger, J M; Siewerdsen, J H; Taylor, R H 2013-07-21 Transoral robotic surgery (TORS) offers a minimally invasive approach to resection of base-of-tongue tumors. However, precise localization of the surgical target and adjacent critical structures can be challenged by the highly deformed intraoperative setup. We propose a deformable registration method using intraoperative cone-beam computed tomography (CBCT) to accurately align preoperative CT or MR images with the intraoperative scene. The registration method combines a Gaussian mixture (GM) model followed by a variation of the Demons algorithm. First, following segmentation of the volume of interest (i.e. volume of the tongue extending to the hyoid), a GM model is applied to surface point clouds for rigid initialization (GM rigid) followed by nonrigid deformation (GM nonrigid). Second, the registration is refined using the Demons algorithm applied to distance map transforms of the (GM-registered) preoperative image and intraoperative CBCT. Performance was evaluated in repeat cadaver studies (25 image pairs) in terms of target registration error (TRE), entropy correlation coefficient (ECC) and normalized pointwise mutual information (NPMI). Retraction of the tongue in the TORS operative setup induced gross deformation >30 mm. The mean TRE following the GM rigid, GM nonrigid and Demons steps was 4.6, 2.1 and 1.7 mm, respectively. The respective ECC was 0.57, 0.70 and 0.73, and NPMI was 0.46, 0.57 and 0.60. Registration accuracy was best across the superior aspect of the tongue and in proximity to the hyoid (by virtue of GM registration of surface points on these structures). The Demons step refined registration primarily in deeper portions of the tongue further from the surface and hyoid bone. 
Since the method does not use image intensities directly, it is suitable to multi-modality registration of preoperative CT or MR with intraoperative CBCT. Extending the 3D image registration to the fusion of image and planning data in stereo-endoscopic video is anticipated 9. Effect of beam hardening on transmural myocardial perfusion quantification in myocardial CT imaging Fahmi, Rachid; Eck, Brendan L.; Levi, Jacob; Fares, Anas; Wu, Hao; Vembar, Mani; Dhanantwari, Amar; Bezerra, Hiram G.; Wilson, David L.
2016-03-01 The detection of subendocardial ischemia exhibiting an abnormal transmural perfusion gradient (TPG) may help identify ischemic conditions due to micro-vascular dysfunction. We evaluated the effect of beam hardening (BH) artifacts on TPG quantification using myocardial CT perfusion (CTP). We used a prototype spectral detector CT scanner (Philips Healthcare) to acquire dynamic myocardial CTP scans in a porcine ischemia model with partial occlusion of the left anterior descending (LAD) coronary artery guided by pressure wire-derived fractional flow reserve (FFR) measurements. Conventional 120 kVp and 70 keV projection-based mono-energetic images were reconstructed from the same projection data and used to compute myocardial blood flow (MBF) using the Johnson-Wilson model. Under moderate LAD occlusion (FFR~0.7), we used three 5 mm short axis slices and divided the myocardium into three LAD segments and three remote segments. For each slice and each segment, we characterized TPG as the mean "endo-to-epi" transmural flow ratio (TFR). BH-induced hypoenhancement on the ischemic anterior wall at 120 kVp resulted in a significantly lower mean TFR value as compared to the 70 keV TFR value (0.29±0.01 vs. 0.55±0.01, p<1e-05). No significant difference was measured between 120 kVp and 70 keV mean TFR values on segments moderately affected or unaffected by BH. In the entire ischemic LAD territory, 120 kVp mean endocardial flow was significantly reduced as compared to mean epicardial flow (15.80±10.98 vs. 40.85±23.44 ml/min/100g; p<1e-04). At 70 keV, BH was effectively minimized resulting in mean endocardial MBF of 40.85±15.3407 ml/min/100g vs. 74.09±5.07 ml/min/100g (p=0.0054) in the epicardium. We also found that BH artifact in the conventional 120 kVp images resulted in falsely reduced MBF measurements even under non-ischemic conditions. 10.
Analytic image reconstruction from partial data for a single-scan cone-beam CT with scatter correction SciTech Connect Min, Jonghwan; Pua, Rizza; Cho, Seungryong; Kim, Insoo; Han, Bumsoo 2015-11-15 Purpose: A beam-blocker composed of multiple strips is a useful gadget for scatter correction and/or for dose reduction in cone-beam CT (CBCT). However, the use of such a beam-blocker would yield cone-beam data that can be challenging for accurate image reconstruction from a single scan in the filtered-backprojection framework. The focus of the work was to develop an analytic image reconstruction method for CBCT that can be directly applied to partially blocked cone-beam data in conjunction with the scatter correction. Methods: The authors developed a rebinned backprojection-filtration (BPF) algorithm for reconstructing images from the partially blocked cone-beam data in a circular scan. The authors also proposed a beam-blocking geometry considering data redundancy such that an efficient scatter estimate can be acquired and sufficient data for BPF image reconstruction can be secured at the same time from a single scan without using any blocker motion. Additionally, a scatter correction method and a noise reduction scheme have been developed. The authors have performed both simulation and experimental studies to validate the rebinned BPF algorithm for image reconstruction from partially blocked cone-beam data. Quantitative evaluations of the reconstructed image quality were performed in the experimental studies. Results: The simulation study revealed that the developed reconstruction algorithm successfully reconstructs the images from the partial cone-beam data. In the experimental study, the proposed method effectively corrected for the scatter in each projection and reconstructed scatter-corrected images from a single scan. Reduction of cupping artifacts and an enhancement of the image contrast have been demonstrated.
The image contrast has increased by a factor of about 2, and the image accuracy in terms of root-mean-square-error with respect to the fan-beam CT image has increased by more than 30%. Conclusions: The authors have successfully demonstrated that the proposed scanning method and image 12. A prototype fan-beam optical CT scanner for 3D dosimetry SciTech Connect Campbell, Warren G.; Rudko, D. A.; Braam, Nicolas A.; Jirasek, Andrew; Wells, Derek M. 2013-06-15 Purpose: The objective of this work is to introduce a prototype fan-beam optical computed tomography scanner for three-dimensional (3D) radiation dosimetry. Methods: Two techniques of fan-beam creation were evaluated: a helium-neon laser (HeNe, λ = 543 nm) with line-generating lens, and a laser diode module (LDM, λ = 635 nm) with line-creating head module. Two physical collimator designs were assessed: a single-slot collimator and a multihole collimator. Optimal collimator depth was determined by observing the signal of a single photodiode with varying collimator depths. A method of extending the dynamic range of the system is presented. Two sample types were used for evaluations: nondosimetric absorbent solutions and irradiated polymer gel dosimeters, each housed in 1 liter cylindrical plastic flasks. Imaging protocol investigations were performed to address ring artefacts and image noise. Two image artefact removal techniques were performed in sinogram space. Collimator efficacy was evaluated by imaging highly opaque samples of scatter-based and absorption-based solutions. A noise-based flask registration technique was developed. Two protocols for gel manufacture were examined.
Results: The LDM proved advantageous over the HeNe laser due to its reduced noise. Also, the LDM uses a wavelength more suitable for the PRESAGE™ dosimeter. Collimator depth of 1.5 cm was found to be an optimal balance between scatter rejection, signal strength, and manufacture ease. The multihole collimator is capable of maintaining accurate scatter-rejection to high levels of opacity with scatter-based solutions (T < 0.015%). Imaging protocol investigations support the need for preirradiation and postirradiation scanning to reduce reflection-based ring artefacts and to accommodate flask imperfections and gel inhomogeneities. Artefact removal techniques in sinogram space eliminate streaking artefacts and reduce ring artefacts of up to ~40% in magnitude.
https://www.physicsforums.com/threads/statistical-geometry.81287/
# Statistical Geometry 1. Jul 5, 2005 ### BicycleTree You have two tools: 1. A straight edge 2. The ability to judge any distance to within 20%, or any angle to within 20 degrees. What can you construct with these tools, given an unlimited number of steps? In particular, is there a method for constructing an approximate circle that tends towards perfect accuracy as the number of steps you use tends towards infinity? For an example of what you can do, here is how to construct an approximate parallel line to a given line m through a point P: First prepare a length x = 100 light years. 1. Draw a line n through P that seems about parallel according to your angle judgment. This line is either parallel to m (statistically impossible) or it intersects m at point Q. 2. Does PQ appear, to your 20% accurate length judgment, to be at least 21% greater than x? If not, return to step 1. Otherwise, let x = PQ, and either return to step 1 or decide you are finished and n is your nearly parallel line. Given enough time, this method will produce an asymptotically accurate parallel line. It will never be exactly accurate. The fact that it may take a huge number of steps to become accurate is not important. Also, please disregard the fact that by some statistical fluke it might take a vast number of steps yet still not be any more accurate than it was to begin with; under reasonable expectations of chance, its accuracy will increase. So, can you find a similar method for creating a circle? Feel free to share any interesting sub-algorithms you come across. One that I am working on is how to, given an angle, reflect it over one of its rays. 2. Jul 6, 2005 ### Jimmy Snyder How do you know this won't result in an infinite loop? 3. Jul 7, 2005 ### Rahmuss Mathematically I'm not so hot; but could I... 1 - Measure out a square as close as possible to exact right angles. 2 - Measure out another square at 45 degrees from the angles of that square; but on top of it.
3 - Keep bisecting the angles of the squares with another square on top of that and so on until you get a fairly roundish circle. In fact, if you use the straight edge and make all the lines exactly that long, then you won't be off on that distance by 20%, making it more accurate. As the number of steps approaches infinity it becomes more of a circle. In fact, the more squares you put, the greater accuracy you'll have because the angle gets smaller and smaller so that 20 degrees is smaller visually. http://img242.imageshack.us/img242/1171/makingcircles9em.png [Broken] Last edited by a moderator: May 2, 2017 4. Jul 8, 2005 ### BicycleTree Jimmy, I said: And here is a more exact description of just what you can do. You may name a point in any region you want, or along any line or line segment you want, or at any intersection you want. Given two points A and B, you may draw line AB. Given an angle ABC, you may name a degree amount x so that the difference between the measure of ABC and x is less than or equal to 20 degrees. Given a line segment AB, you may name a distance in inches x so that the difference between AB and x is less than or equal to 0.2 times the true length of AB. Assume nothing about the randomness or non-randomness of the two guessing methods, unless it turns out you absolutely can't proceed unless you assume they are random. However, since I don't think you can proceed otherwise, assume that when you select points from a region or along a line or line segment, the selection is random. Rahmuss, you have not shown that the steps you are describing are possible. For example, is it even possible to bisect an angle using the tools I have described? Is it possible to create a right angle? Last edited: Jul 8, 2005 5. Jul 8, 2005 ### NateTG It appears to be possible to create an arbitrary angle using your current rules. (Consider the possibility, for example, of starting with a 180-degree angle, and stepping down in increments of 20 degrees.)
That means that the following is possible: Pick the number of steps you want to take (minimum 19 or so). Call that number $n$. In 5 steps you can construct a 90 degree angle. In (at most) another 5 steps you can construct a $\frac{540}{n-10}$ degree angle. Now, construct $\frac{n-10}{2}$ lines that meet at a point so that the smallest angles are all $\frac{540}{n-10}$. Choose an arbitrary point along one of the legs, and construct a perpendicular to both adjacent legs. Then reflect this perpendicular across the clockwise adjacent leg. Proceed clockwise around all ${n-10}$ legs (this takes only $\frac{n-10}{2}$ steps, since the reflection is over every other leg). This constructs a regular $\frac{n-10}{2}$-gon. Clearly, you can use this construction to build a regular polygon with an arbitrary number of sides - so you can approach a circle. Last edited: Jul 8, 2005 6. Jul 9, 2005 ### BicycleTree "Stepping down" is not one of the rules. It is not possible to certainly construct a given angle in any fixed number of steps. It is only possible to approach that construction in a large number of steps. You have not shown that the steps of your method are possible, so I cannot accept this as a solution. You need to show how to construct any given angle, how to construct a perpendicular to a given line through a given point, and how to perform a reflection. 7. Jul 11, 2005 ### BicycleTree Well, the only way I know of to make the circle is by using a kind of "trick," that's not explicitly allowed in the rules (though, one might argue, it is a not unreasonable extrapolation of those rules). Without using that trick, the best I can come up with is an ellipse. Warm-up Question: How do you approximate an ellipse using these tools? 8. Jul 11, 2005 ### NateTG Seems pretty obvious that if I start with a 180 degree angle, then I can name x at 160, and so on.
Do you mean: Given an angle ABC it is possible to construct another angle DEF so that the difference in the measures of ABC and DEF is less than 20 degrees? 9. Jul 11, 2005 ### BicycleTree The rule is, "Given an angle ABC, you may name a degree amount x so that the difference between the measure of ABC and x is less than or equal to 20 degrees." You don't get to choose how much less than or equal to 20 degrees it is. The only thing you know when you name x is that the error of x as an approximation to the measure of ABC is less than or equal to 20 degrees. For example, let's say I have an angle ABC and I use the rule to name the degree amount "fifty degrees." Now I know that the measure of angle ABC lies somewhere between thirty and seventy degrees, inclusive. I suppose the rule is slightly oddly worded. Last edited: Jul 11, 2005 10. Jul 11, 2005 ### NateTG I don't think your rules are sufficiently clear. 11. Jul 11, 2005 ### BicycleTree I'll try again: 1. You may name a point in any region you want, or along any line or line segment you want, or at any intersection you want. If you name a point in a region or along a line or line segment, the point has equal probability of being in any arbitrary subset of the region, line, or line segment. (Relying on this working for infinite regions is, by the way, the sort of "cheat") 2. Given two points A and B, you may draw line (or line segment) AB. 3. Given an angle ABC, you have the function angleguess(ABC) that returns an arbitrary degree measure x so that the difference between x and the measure of ABC is less than or equal to 20 degrees. 4. Given a line segment AB, you have the function lengthguess(AB) that returns an arbitrary length x so that the difference between x and the length of AB is less than or equal to 0.2 times the true length of AB. All points and lines lie in the Euclidean plane. Last edited: Jul 11, 2005 12.
Jul 11, 2005

### NateTG

So there is no way to transfer angles or lengths except for picking at random and seeing whether you're close?

13. Jul 11, 2005

### BicycleTree

Yes; an inefficient method, but you have unlimited (though finite) steps to make your approximate circle in. I got to thinking about this when I was drawing somewhat inaccurate circles by hand and thinking: I can draw a straight line pretty well, and I can approximately guess distances and angles, so if you idealize those procedures as I did here, is there any algorithm to get as close to a circle as you like? I figured out the answer a little while ago: yes, there is, and without using the "cheat."

Hint in white: The key to the method I found is finding a way to use the approximate distance guessing to get nearly exact distance guessing.

Last edited: Jul 11, 2005

14. Jul 11, 2005

### NateTG

It's got to be pretty tricky unless you make assumptions about the distribution - otherwise the potential for malicious randomness will almost certainly make it impossible.

15. Jul 12, 2005

### mustafa

Hey, I am able to construct an ellipse. Using BicycleTree's algorithm for parallel lines we can construct a parallelogram:

Draw a straight line m. Take two points A and B on the line some distance apart. Draw a line n through B at any angle to m. Take a point C on n. Draw a line parallel to m through C. Draw a line parallel to n through A. Suppose these two lines intersect at D. Then ABCD is a parallelogram.

Since opposite sides of a parallelogram are congruent, the length AB is transferred to CD. Drawing another parallelogram, say BDCE, the length AB is further transferred to BE on the same line m. In this way we can get any number of congruent segments on a given line. Also, by drawing parallel lines, we can use n congruent segments on one line to divide a given line segment into n equal parts.

So we have developed two more tools: we can draw parallelograms, and we can divide a given line segment into any number of equal parts.
Therefore an ellipse can be easily constructed using the parallelogram method.

16. Jul 17, 2005

### BicycleTree

Congratulations, Mustafa.

Actually, I was in error about my original scheme for measuring lengths exactly. The method I had in mind was first squaring a given length, then squaring the result, and so on up to a power of 2^n (n an integer), and then measuring the final length and using that measurement to find the original length to much greater accuracy. But the method was flawed because it required an initial estimate of a short length to perform the first squaring procedure, and the error in that estimate is amplified with every squaring, which nullifies the benefit of the power operation. Trying to use some exact technique, like letting the small estimated length be 1/10 of the length to be squared, does not result in an actual squaring.

I was typing a further reply detailing a mistake I had made in my original solution, the "cheating" method for creating a nearly exact circle, and a correct solution for creating a nearly exact circle, but then I realized that the "randomness" guarantee of rule 1 is impossible. Corrected version of rule 1 (changes underlined):

1. You may name a point in any region you want, or along any named line or line segment you want, or at any intersection you want. If you name a point in a region or along a line, ray, or line segment, the point has equal probability of being in any two arbitrary subsets of equal area or length of the region, line, ray, or line segment. If the region, ray, or line has infinite area or length and other points have already been named, then the new named point is named only finitely far away from other named points. (I don't think I really need to state this, except to be clear.)

Now, hopefully, you can do the problem.
The "cheating" method makes extensive use of this rule with infinite regions, and can reasonably be argued not to work, but there is at least one method that doesn't stretch the rules. There are no guarantees on the randomness or nonrandomness of the two estimation functions, though that wasn't what was wrong with my earlier idea. 17. Jul 24, 2005 ### BicycleTree Well, what I had in mind for the solution there was also flawed, and I have not been able to come up with one that is not flawed. I think that it is not possible for an intuitive but incomplete reason: if you imagine that you have constructed a circle, stretch the entire plane by a factor of 15% in one direction, and any length measurement from the original plane could also be chosen to apply to the stretched plane, due to the measurement error. I believe that the angle measurement provision does not change the problem since angles can also be measured by measuring the length of sides of a triangle. Not that thinking about it wasn't interesting. My original idea involved producing a distance proportional to the square of a given distance, and then squaring that derived distance, and so on to produce a distance proportional to the original distance taken to a power of 2^n. Then measure that final distance to within 20%, and you will know the original distance far more accurately. This doesn't work because taking the square of the original distance in the method I could think of necessarily involves introducing a small line segment of "known" length parallel to the distance to be squared. The device of taking the small line segment to be 1/10 of the distance to be squared do not work because they do not result in squaring the length. The inaccuracy in the small "known" line segment becomes amplified with successive squarings so that no advantage results. The method I had in mind later on had to do with measuring densities of randomly plotted points within a triangle to find equivalent lengths. 
There's no need to go into exactly what the idea was, but the problem was similar--no way around having a small initial segment of known length.
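The appeal of the repeated-squaring idea is easy to quantify: if a segment of length $d^{2^n}$ could be constructed exactly, a single 20%-error lengthguess of it would determine $d$ to relative error $1.2^{1/2^n} - 1 \approx 0.2/2^n$. A quick sketch of that arithmetic (illustrative only; the exact squaring step is precisely what the posts conclude is unattainable):

```python
# Why repeated squaring looked promising: measuring d**(2**n) with 20% error
# pins down d itself very tightly -- *if* the squarings were exact, which the
# thread concludes they cannot be.
def recovered_error(d, n, meas_rel_err=0.2):
    big = d ** (2 ** n)                   # hypothetical exact d^(2^n)
    measured = big * (1 + meas_rel_err)   # worst-case 20% overestimate
    d_est = measured ** (1.0 / 2 ** n)    # invert the power to recover d
    return abs(d_est - d) / d             # relative error of the recovery

assert recovered_error(1.7, 5) < 0.01                      # 20% shrinks below 1%
assert recovered_error(1.7, 6) < recovered_error(1.7, 5)   # improves as n grows
```

The recovery error is independent of $d$ (it equals $1.2^{1/2^n}-1$), which is why the scheme would have worked had the squaring construction not needed its own small estimated segment.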
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8429268598556519, "perplexity": 466.7025033243562}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039747665.82/warc/CC-MAIN-20181121092625-20181121114625-00257.warc.gz"}
https://www.csdn.net/tags/MtjaMg4sNTc1NTUtYmxvZwO0O0OO0O0O.html
LaTeX track changes with the changes package

1. Package and commands

\usepackage{changes} % load the package
\definechangesauthor[name={Shuer}, color=orange]{S.} % register an annotator's name

Basic commands:
1) \added{text to add}
2) \deleted{text to delete}
3) \replaced{new text}{original text}

With a comment attached:
\added[id=S., comment={notation}]{new words} % id refers to a registered annotator

2. Possible errors

Compilation may fail with:
LaTeX Error: Command \highlight already defined.
Fix: before loading the conflicting package, release the command:
\let\highlight\relax

3. Other notes

a) Accepting the changes: change \usepackage{changes} to \usepackage[final]{changes}, and the compiled document has all changes accepted.
b) Display math: put \added{} inside the math environment to avoid errors: $$\added{a+b=c}$$ The same applies to tables.

Reference: the tutorial "The changes-package" (Baidu link: https://pan.baidu.com/s/1gliO7oozn4che7rMqvCedA password: sc2g)

http://www.fgcu.edu/support/office2007/word/changes.asp

Track Changes is a great feature of Word that allows you to see what changes have been made to a document. The tools for track changes are found on the Reviewing tab of the Ribbon.

Begin Track Changes
To keep track of the changes you'll be making to a document, you must click on the Track Changes icon. To start tracking changes:
Click the Review tab on the Ribbon
Click Track Changes
Make the changes to your document and you will see any changes you have made.
Document Views
There are four ways to view a document after you have tracked changes:
Final Showing Markup: shows the document with the changes displayed
Final: shows the changed document, without the changes displayed
Original Showing Markup: the original document with the changes displayed
Original: the original document without any changes
To change the view, click the appropriate choice in the Tracking group of the Review tab on the Ribbon. The Show Markup feature allows you to view different items (comments, formatting, etc.) and choose to view different authors' comments.

Accept or Reject Changes
When you view the changes in a document you can choose to accept or reject each change, reviewing the document change by change.

Reposted from: https://www.cnblogs.com/Jessy/archive/2011/05/17/2048841.html

Keeping git from tracking local changes

After a project builds, it usually produces some temporary files, or the local configuration differs from the server's. You don't want to add these files when committing; use the following approach.

Run git status: the modified files fall into two groups, "Changes not staged for commit" (tracked files) and "Untracked files".

For untracked files, the project directory usually has a .gitignore file; if not, create one (touch .gitignore), open it (vim .gitignore), and add the files to ignore. Run git status again and those files are now ignored, though .gitignore itself now appears as a newly tracked file.

For tracked files, adding them to .gitignore has no effect (you can try it yourself). Instead, use

git update-index --assume-unchanged <files>

to mark the files to ignore; after that, git status no longer shows them. Continue adding files to ignore in the same way as needed.
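The two cases can be exercised end to end in a throwaway repository (a sketch assuming git is installed; the file names are invented for the demo):

```shell
# Demo: .gitignore handles untracked files; assume-unchanged hides a tracked one.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo setting=server > config.ini
git add config.ini
git commit -qm "track config.ini"

echo build-output > temp.log        # untracked build artifact
echo temp.log > .gitignore          # .gitignore works for untracked files
echo setting=local >> config.ini    # local edit to a *tracked* file

git update-index --assume-unchanged config.ini
# Only .gitignore itself shows up as new; temp.log and config.ini are hidden:
git status --porcelain
```

Note that --assume-unchanged is a local per-clone flag, not shared through commits, and git may silently drop it on operations that must rewrite the file.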
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.7657976150512695, "perplexity": 7034.177970263671}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304686.15/warc/CC-MAIN-20220124220008-20220125010008-00643.warc.gz"}
https://www.physicsforums.com/threads/problem-regarding-friction.676600/
# Homework Help: Problem regarding friction

1. Mar 6, 2013

### sankalpmittal

1. The problem statement, all variables and given/known data

See the image: http://postimage.org/image/4ukoszng1/

Look at question 28 there. If you feel uncomfortable, you can click on the image to increase its size.

Note: T is the tension in the string, a the acceleration of the big block, and R1 the reaction force of the small block on the big block.

2. Relevant equations

f ≤ fs = µsN
fk = µkN

3. The attempt at a solution

This problem is different from what I have done before. Now, I think that the downward acceleration of the small block must be twice the acceleration of the big block, because the small block is connected by one segment while the big block is connected by two.

Considering the FBD of the big block: T - µ2Mg - R1 = Ma
Considering the FBD of the small block: mg - T - µ1R1 = 2ma

But there are two equations and three unknowns. How can I solve for "a"?

Last edited: Mar 6, 2013

2. Mar 6, 2013

### MostlyHarmless

The acceleration of the two blocks will be the same because they are part of the same system. If one block moves, the other moves with the same direction and acceleration.

EDIT: also, since your problem gives no numbers, I imagine the answer you're looking for isn't going to be numerical, so the number of unknowns isn't relevant.

Last edited: Mar 6, 2013

3. Mar 6, 2013

### sankalpmittal

No, I think you are not correct. The small block is connected by one segment. When the big block, connected by two segments, moves say x/2 units right, each segment shortens by x/2. But the small block is connected by one segment, so that single segment has to lengthen by x units downwards. So the small block will move downward by x units. Differentiating displacement twice, we get

a_small = d²x/dt² = a
a_big = (1/2) d²x/dt² = a/2

Q.E.D. By distance, I meant of course displacement.

Anyone? Any hints?

Edit: The answer given is in terms of µ1, µ2, m and M.

4.
Mar 6, 2013

### SammyS

Staff Emeritus

The vertical component of the acceleration of the small block is equal to (the negative of) twice the horizontal component of the acceleration of the large block. The large block's motion is purely horizontal. The horizontal component of the acceleration of the small block is equal to the horizontal component of the acceleration of the large block.

Also: You're missing some forces exerted on the big block. The normal force on the big block is not Mg. To calculate this, you should have an equation involving the vertical components of the forces on the big block.

I inserted the word "twice" above.

Last edited: Mar 6, 2013

5. Mar 6, 2013

### MostlyHarmless

I'm not sure what you are doing to justify the accelerations of the blocks being different, but if they are connected, the acceleration is going to be the same for both masses. Take Newton's second law: we have a system of masses, Fnet = (M+m)a. Also, just think about holding a stick with one hand on either end of the stick. If you move one hand, the other will move in exactly the same way.

6. Mar 6, 2013

### sankalpmittal

Huh? I do not understand. Forces perpendicular to each other have no effect on each other, so how can the downward motion of the small block be the negative of... Oh!! Sorry!! I understand. So you are taking right as positive and downward as negative. But hold on! I do not agree. The vertical component of the acceleration of the small block should be twice (the negative of) the horizontal component of the acceleration of the large block. This is because the small block is connected by one segment but the large block by two. See the reasoning in my previous post.

Uhhh, yes. So the friction µ1R1 (kinetic) also acts on the big block in the upward direction? Do other forces act as well? I cannot think of one.

Edit: @Jesse H. That is not what I say; that is what my textbook says. Well, suppose one block is connected by a single segment from a pulley.
Another block of the same mass is connected by two segments, the tension in each segment being T. Will the (downward) accelerations of the two blocks be the same? Think about it yourself.

Last edited: Mar 6, 2013

7. Mar 6, 2013

### SammyS

Staff Emeritus

Right. It should be TWICE. After putting in all those other adjectives, I left the word "twice" out. I edited that post.

Last edited: Mar 6, 2013

8. Mar 6, 2013

### Maiq

I'm not 100% sure, but I would think that the tension for the M block would be 2T. Also, since the reaction force between the two blocks is the only force acting on the m block in the horizontal direction, you could say R1 = ma.

9. Mar 6, 2013

### MostlyHarmless

Sorry, taking a second look at the diagram, I see where I was mistaken. Apologies.

10. Mar 6, 2013

### sankalpmittal

Why do you say R1 = ma?

@Jesse H. You are welcome.

Ok, thanks. Now let me write my solution comprehensively so that you can freely look it over and analyze it.

FBD for the small block. Forces acting are:
1. Downward force mg
2. Upward tension T
3. Friction upward: µ1R1
4. Pseudo force Ma on it towards the left
5. Reaction force R1 by the big block on it towards the right, for its horizontal equilibrium

So my equations become:
R1 = Ma ... (i)
mg - T - µ1R1 = 2ma
On using (i): mg - T - µ1Ma = 2ma ... (A)

FBD of the big block. Forces acting on it are:
1. Downward force Mg
2. Tension T to the right
3. Friction from the ground, µ2R, to the left
4. The third-law pair of the friction, exerted by the small block downward: µ1R1
5. Reaction force R by the floor on the big block

Accounting for the vertical equilibrium of the big block:
R = Mg + µ1R1 = Mg + µ1Ma ... (ii)
T - µ2R = Ma
On using (ii): T - µ2Mg - µ2µ1Ma = Ma ... (B)

On adding A and B, the tension cancels out, and I get
a = (mg - µ2Mg)/(µ2µ1M + µ1M + 2m + M)

This is not the correct answer as per the book. Where did I go wrong?

11. Mar 6, 2013

### Maiq

Why do you say R1 = Ma? The net force acting on an object is equal to its mass times its acceleration.
R1 is the only horizontal force acting on block m, so R1 would be equal to the mass of block m times the acceleration a.

12. Mar 6, 2013

### SammyS

Staff Emeritus

How did you get that pseudo force? Looking at the components of the acceleration of the small block, the block travels in a straight line, the slope of which is -2. It looks like you have let a = the acceleration of the big block. I would do that too. I would say that R1 = ma.

For each FBD, you should have one equation for horizontal components and one for vertical components.

13. Mar 6, 2013

### sankalpmittal

Yes. Thank you. I get it now. Now please see my solution and tell me where I went wrong. (This addresses Maiq's post below.)

Dang it!! Again, thanks. I forgot to include R1 in the horizontal motion of the big block. Ok, sorry. With this correction and Maiq's correction, I will continue it tomorrow. For now, I am off to sleep.

Thinking back: why did you say the tension on the big block is 2T? Again, only one segment is connected directly to the big block. Though its acceleration will be half of the smaller block's downward acceleration, the force will still be T.

Last edited: Mar 6, 2013

14. Mar 6, 2013

### Maiq

Also, R1 affects block M as well, because of Newton's third law.

Last edited: Mar 6, 2013

15. Mar 6, 2013

### Maiq

After thinking about it I realized the last part of my last post was incorrect. It would actually equal -R1 if all other forces equaled 0. Sorry, I didn't think of the direction of R1 when I thought of that explanation. Either way, I believe everything else I said is true.

16. Mar 6, 2013

### sankalpmittal

Ok, so after all the corrections and including the other forces:

FBD for the small block. Forces acting are:
1. Downward force mg
2. Upward tension T
3. Friction upward: µ1R1
4. Pseudo force ma on it towards the left
5. Reaction force R1 by the big block on it towards the right, for its horizontal equilibrium

So my equations become:
R1 = ma ... (i)
mg - T - µ1R1 = 2ma
On using (i): mg - T - µ1ma = 2ma ...
(A)

FBD of the big block. Forces acting on it are:
1. Downward force Mg
2. Tension T to the right
3. Friction from the ground, µ2R, to the left
4. The third-law pair of the friction, exerted by the small block downward: µ1R1
5. Reaction force R by the floor on the big block
6. Reaction force R1 on it to the left by the small block, as pointed out by Maiq

Accounting for the vertical equilibrium of the big block:
R = Mg + µ1R1 = Mg + µ1ma ... (ii)
T - µ2R - R1 = Ma
On using (ii): T - µ2Mg - µ2µ1ma - ma = Ma ... (B)

On adding A and B, the tension cancels out, and I get
a = (mg - µ2Mg)/(3m + M + µ1µ2m + µ1m)

Now the answer is still incorrect! Where did I go wrong? What forces did I miss in the FBDs?

Last edited: Mar 6, 2013

17. Mar 6, 2013

### SammyS

Staff Emeritus

The above looks good. The string also passes over the pulley, which is attached to the big block. Therefore, the string has two strands pulling to the right on the big block: that's 2T. AND one strand pulling down on the big block. So the stuff below is incorrect.

18. Mar 6, 2013

### haruspex

I agree down to there, but I don't get the result below. I get (mg - Ma - Mgµ2)/[m(3 + µ1µ2 + µ1)].

19. Mar 6, 2013

### sankalpmittal

Thanks SammyS!! I got the correct answer!! And of course thanks to Maiq!! And thanks Jesse H. for at least trying.

@haruspex: Yes, when you isolate "a", you will get the wrong answer, as I did. But on following SammyS's hint, I got the correct answer. Thanks SammyS, once again. And thanks haruspex for replying.
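The elimination of T in post #16 (which still lacks SammyS's 2T correction, so it is not the book's final answer) can be verified symbolically; a sketch assuming SymPy is available:

```python
# Symbolic check of post #16: eliminate T from equations (A) and (B).
import sympy as sp

a, T = sp.symbols('a T')
m, M, g, mu1, mu2 = sp.symbols('m M g mu1 mu2', positive=True)

eqA = sp.Eq(m*g - T - mu1*m*a, 2*m*a)                # small block, vertical
eqB = sp.Eq(T - mu2*M*g - mu2*mu1*m*a - m*a, M*a)    # big block, horizontal

sol = sp.solve([eqA, eqB], [a, T], dict=True)[0]
expected = (m*g - mu2*M*g) / (3*m + M + mu1*mu2*m + mu1*m)

assert sp.simplify(sol[a] - expected) == 0           # matches post #16's result
```

The same check, rearranged to isolate a differently, reproduces haruspex's form of the expression.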
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8937286734580994, "perplexity": 2198.5909813983653}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676594790.48/warc/CC-MAIN-20180723012644-20180723032644-00098.warc.gz"}
http://latextools.readthedocs.io/en/latest/completions/
# Completions

## Command completion, snippets, etc.

By default, ST provides a number of snippets for LaTeX editing; the LaTeXTools plugin adds a few more. You can see what they are, and experiment, by selecting Tools | Snippets | LaTeX and Tools | Snippets | LaTeXTools from the menu.

In addition, the LaTeXTools plugin provides useful completions for both regular and math text; check out the files LaTeX.sublime-completions and LaTeX math.sublime-completions in the LaTeXTools directory for details. Some of these are semi-intelligent: e.g., bf expands to \textbf{} if you are typing text, and to \mathbf{} if you are in math mode. Others allow you to cycle among different completions: e.g., f in math mode expands to \phi first, but if you hit Tab again you get \varphi; if you hit Tab a third time, you get back \phi.

## LaTeX-cwl support

LaTeXTools provides support for the LaTeX-cwl autocompletion word lists. If the package is installed, support is automatically enabled. In addition, support will be enabled if any custom cwl files are installed in the Packages/User/cwl directory.

By default, as soon as one starts typing a command, e.g., \te, a popup is shown displaying possible completions, e.g. including \textit and the like.

The following settings are provided to control LaTeXTools cwl behavior.

• cwl_list: a list of cwl files to load
• cwl_autoload: controls loading completions based on packages in the current document in addition to those specified in the cwl_list. Defaults to true, so you only need to set this if you want to disable this behavior.
• command_completion: when to show the cwl completion popup.
The possible values are:

• prefixed (default): show completions only if the current word is prefixed with a \
• always: always show cwl completions
• never: never display the popup

• env_auto_trigger: if true, autocomplete environment names upon typing \begin{ or \end{ (default: false)

## User defined completions

LaTeXTools provides support for custom user-defined completions through modification of the LaTeX.sublime-completions file. The user-modified version should be placed in your User directory; otherwise it will be overwritten by future updates.
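For instance, the cwl settings described above could be combined in a LaTeXTools user-settings file like this (an illustrative fragment in Sublime's JSON-with-comments settings format; the file names in cwl_list are examples, not defaults):

```json
{
    // Word lists to load in addition to any detected from \usepackage
    "cwl_list": [
        "latex-document.cwl",
        "tex.cwl"
    ],
    "cwl_autoload": true,
    // Show the popup only after typing a backslash
    "command_completion": "prefixed",
    // Do not auto-complete environment names after \begin{ / \end{
    "env_auto_trigger": false
}
```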
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4607125520706177, "perplexity": 8105.058251733665}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818693866.86/warc/CC-MAIN-20170925235144-20170926015144-00535.warc.gz"}
http://fmoldove.blogspot.com/2016/02/the-quantum-eraser-key-differences.html
## The Quantum Eraser

The key difference between quantum and classical mechanics is superposition. The granddaddy of superposition experiments is the double slit experiment with a single particle source. If we place polarizers in front of the slits such that at detection time we can measure the polarization and determine through which slit the photon went, then the interference pattern is lost. The idea of the quantum eraser is to recover the interference pattern by erasing the "which way" information.

Today I want to talk in depth about the quantum description of the experiment. We will see that the actual mathematical description is quite straightforward. The experimental setup is as follows: a laser produces one photon at a time, and this photon is passed through a beta barium borate (BBO) crystal where, by parametric down conversion, it is converted into two photons entangled in a Bell state:

$$|\psi\rangle = \frac{1}{\sqrt{2}}(|x\rangle_s|y\rangle_i + |y\rangle_s|x\rangle_i)$$

where $$s$$ stands for signal and $$i$$ stands for idler.
Here we need to introduce a few notations:

• $$|x\rangle$$ photon polarized horizontally,
• $$|y\rangle$$ photon polarized vertically,
• $$|+\rangle$$ photon polarized at 45 degrees,
• $$|-\rangle$$ photon polarized at -45 degrees,
• $$|L\rangle$$ photon left circular polarized,
• $$|R\rangle$$ photon right circular polarized,

and the relations between them:

• $$|x\rangle = \frac{1}{\sqrt{2}}(|+\rangle + |-\rangle)$$
• $$|y\rangle = \frac{1}{\sqrt{2}}(|+\rangle - |-\rangle)$$
• $$|R\rangle = \frac{1-i}{2}(|+\rangle + i|-\rangle)$$
• $$|L\rangle = \frac{1-i}{2}(i|+\rangle + |-\rangle)$$

After passing through the double slit, the wavefunction becomes:

$$|\Psi\rangle = \frac{1}{\sqrt{2}}(|\psi\rangle_1 + |\psi\rangle_2)$$

where

$$|\psi\rangle_1 = \frac{1}{\sqrt{2}}(|x\rangle_{s1}|y\rangle_i + |y\rangle_{s1}|x\rangle_i)$$

$$|\psi\rangle_2 = \frac{1}{\sqrt{2}}(|x\rangle_{s2}|y\rangle_i + |y\rangle_{s2}|x\rangle_i)$$

If we add quarter wave plates after the slits to convert the linearly polarized photons into circularly polarized ones, $$|\psi\rangle_1$$ and $$|\psi\rangle_2$$ become:

$$|\psi\rangle_1 = \frac{1}{\sqrt{2}}(|L\rangle_{s1}|y\rangle_i + i |R\rangle_{s1}|x\rangle_i)$$

$$|\psi\rangle_2 = \frac{1}{\sqrt{2}}(|R\rangle_{s2}|y\rangle_i - i|L\rangle_{s2}|x\rangle_i)$$

We see that $$|\psi\rangle_1$$ is orthogonal to $$|\psi\rangle_2$$ and that no interference is possible. However, hidden in the no-interference photon distribution there are two interference patterns, called "fringe" and "anti-fringe". Can we extract them? Indeed we can, and to see how, we need to rewrite $$|\Psi\rangle$$ in a different basis:

$$|\Psi\rangle = \frac{1+i}{\sqrt{2}} \frac{1}{2}[(|+\rangle_{s1} - i|+\rangle_{s2})|+\rangle_{i} + i (|-\rangle_{s1} + i |-\rangle_{s2})|-\rangle_{i}]$$

You can do the simple but tedious algebraic manipulations to convince yourself that the equation above is indeed the same thing as the two equations before it.
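The "simple but tedious" algebra can also be checked numerically. The sketch below models the signal photon as path ⊗ polarization and the idler as polarization alone, and verifies that the ± basis expression equals $$\frac{1}{\sqrt{2}}(|\psi\rangle_1 + |\psi\rangle_2)$$; the explicit column-vector encoding of the states is my own, chosen to be consistent with the relations listed above:

```python
# Numerical check of the basis change in the quantum-eraser derivation.
import numpy as np

rt2 = np.sqrt(2)
x, y = np.array([1, 0]), np.array([0, 1])        # linear polarization basis
plus, minus = (x + y) / rt2, (x - y) / rt2       # +/- 45 degree states
L, R = (x + 1j * y) / rt2, (x - 1j * y) / rt2    # circular polarization
p1, p2 = np.array([1, 0]), np.array([0, 1])      # slit (path) states s1, s2

def ket(path, sig_pol, idl_pol):
    """Signal photon = path (x) polarization; idler = polarization only."""
    return np.kron(path, np.kron(sig_pol, idl_pol))

# States after the quarter-wave plates:
psi1 = (ket(p1, L, y) + 1j * ket(p1, R, x)) / rt2
psi2 = (ket(p2, R, y) - 1j * ket(p2, L, x)) / rt2
lhs = (psi1 + psi2) / rt2

# The same state rewritten in the +/- basis:
rhs = ((1 + 1j) / rt2) * 0.5 * (
    (ket(p1, plus, plus) - 1j * ket(p2, plus, plus))
    + 1j * (ket(p1, minus, minus) + 1j * ket(p2, minus, minus))
)

assert np.allclose(lhs, rhs)                 # the basis change is correct
assert abs(np.vdot(psi1, psi2)) < 1e-12      # psi1 and psi2 are orthogonal
```

The second assertion confirms the no-interference claim: with the quarter-wave plates in place, the two slit terms are orthogonal.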
Now if we place a polarizer in the path of beam $$i$$, oriented at +45 degrees to select $$|+\rangle_i$$ or at -45 degrees to select $$|-\rangle_i$$, then we get the fringe or the anti-fringe interference pattern. The experimental outcomes are not erased; the interference pattern is only extracted by coincidence detection using the idler signal. What this means is that for each detector hit of the signal photon we know whether the idler photon has the +45 degree polarization or not, based on whether it passed through the +45 degree polarizer. Selecting only the signal hits for which the idler photon passed (or did not pass) recovers the fringes (or anti-fringes). By playing with the signal and idler path lengths one can make either photon be detected before the other at will. If the idler is detected after the signal we have what is called "delayed erasure". There are a lot of nonsense explanations of the meaning of the quantum eraser, and a lot of baseless speculation about it like "erasing the past", but if you look at the math the explanation is straightforward. Granted, the name "quantum eraser" is a genius marketing ploy to get people excited about quantum mechanics. On my end I wanted to present the math of this example because I will use it in the next post to look at it through the eyes of the equivalence relationship and see what it can teach us about the Grothendieck approach for solving the measurement problem. We will learn two lessons: contextuality and irreversibility. Please stay tuned.

1. Hello. Apologies, but I will write in Romanian, because of my question: is there a Romanian equivalent of the expression "quantum eraser"? More precisely, is there specialist literature (in Romanian) that treats the subject, and what term does it use? A similar question for the word "entanglement". Thank you and have a good day, Emil Negrea

2.
Hello. Unfortunately I do not know the Romanian equivalents, and I think the terms are best treated as neologisms, that is, taken directly from English. Let me nevertheless try to translate them. For "entanglement" perhaps the most fitting would be "amestec, amestecare" (a mixing). "Quantum eraser" is even harder to translate: "stergerea (cuantica a) istoriei" (the (quantum) erasure of history). All the best, Florin

PS: Last year I gave a physics talk at the Romanian Academy and had major difficulties presenting it in Romanian, precisely because of the difficulty of translating the technical terms.
https://en.wikipedia.org/wiki/Butterfly_diagram
# Butterfly diagram

Signal-flow graph connecting the inputs x (left) to the outputs y that depend on them (right) for a "butterfly" step of a radix-2 Cooley–Tukey FFT. This diagram resembles a butterfly (as in the Morpho butterfly shown for comparison), hence the name.

In the context of fast Fourier transform algorithms, a butterfly is a portion of the computation that combines the results of smaller discrete Fourier transforms (DFTs) into a larger DFT, or vice versa (breaking a larger DFT up into subtransforms). The name "butterfly" comes from the shape of the data-flow diagram in the radix-2 case, as described below.[1] The earliest occurrence in print of the term is thought to be in a 1969 MIT technical report.[2][3] The same structure can also be found in the Viterbi algorithm, used for finding the most likely sequence of hidden states.

Most commonly, the term "butterfly" appears in the context of the Cooley–Tukey FFT algorithm, which recursively breaks down a DFT of composite size n = rm into r smaller transforms of size m, where r is the "radix" of the transform. These smaller DFTs are then combined via size-r butterflies, which are themselves DFTs of size r (performed m times on corresponding outputs of the sub-transforms), pre-multiplied by roots of unity (known as twiddle factors). (This is the "decimation in time" case; one can also perform the steps in reverse, known as "decimation in frequency", where the butterflies come first and are post-multiplied by twiddle factors. See also the Cooley–Tukey FFT article.)

In the case of the radix-2 Cooley–Tukey algorithm, the butterfly is simply a DFT of size 2 that takes two inputs (x0, x1) (corresponding outputs of the two sub-transforms) and gives two outputs (y0, y1) by the formula (not including twiddle factors):

$$y_0 = x_0 + x_1$$

$$y_1 = x_0 - x_1$$

If one draws the data-flow diagram for this pair of operations, the (x0, x1) to (y0, y1) lines cross and resemble the wings of a butterfly, hence the name (see also the illustration at right). A decimation-in-time radix-2 FFT breaks a length-N DFT into two length-N/2 DFTs followed by a combining stage consisting of many butterfly operations. More specifically, a decimation-in-time FFT algorithm on n = 2^p inputs with respect to a primitive n-th root of unity $$\omega^k_n = e^{-\frac{2\pi i k}{n}}$$ relies on O(n log n) butterflies of the form:

$$y_0 = x_0 + x_1 \omega^k_n$$

$$y_1 = x_0 - x_1 \omega^k_n$$

where k is an integer depending on the part of the transform being computed. Whereas the corresponding inverse transform can mathematically be performed by replacing ω with ω−1 (and possibly multiplying by an overall scale factor, depending on the normalization convention), one may also directly invert the butterflies:

$$x_0 = \frac{1}{2} (y_0 + y_1)$$

$$x_1 = \frac{\omega^{-k}_n}{2} (y_0 - y_1)$$

corresponding to a decimation-in-frequency FFT algorithm.

## Other uses

The butterfly can also be used to improve the randomness of large arrays of partially random numbers, by bringing every 32 or 64 bit word into causal contact with every other word through a desired hashing algorithm, so that a change in any one bit has the possibility of changing all the bits in the large array.[4]
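As a concrete sketch (not from the article itself), here is a recursive decimation-in-time FFT in Python whose inner loop is exactly the twiddle-factor butterfly above:

```python
import cmath

def fft(xs):
    """Recursive radix-2 decimation-in-time FFT; len(xs) must be a power of two."""
    n = len(xs)
    if n == 1:
        return list(xs)
    evens, odds = fft(xs[0::2]), fft(xs[1::2])
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(-2j * cmath.pi * k / n)      # twiddle factor omega_n^k
        out[k]          = evens[k] + w * odds[k]   # y0 = x0 + x1 * w
        out[k + n // 2] = evens[k] - w * odds[k]   # y1 = x0 - x1 * w
    return out

# fft([1, 2, 3, 4]) agrees (up to floating point noise) with the direct DFT:
# [10, -2+2j, -2, -2-2j]
print(fft([1, 2, 3, 4]))
```

Each recursion level performs n/2 butterflies, and there are log2(n) levels, giving the O(n log n) count quoted above.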
https://science.sciencemag.org/content/161/3848/1338.abstract
Reports

Potassium-Argon Ages and Spreading Rates on the Mid-Atlantic Ridge at 45° North

Science, 27 Sep 1968: Vol. 161, Issue 3848, pp. 1338-1339. DOI: 10.1126/science.161.3848.1338

Abstract

Potassium-argon dates obtained from extrusives collected on a traverse across the Mid-Atlantic Ridge at 45°N are consistent with the hypothesis of ocean-floor spreading. The dates suggest a spreading rate in the range of 2.6 to 3.2 centimeters per year near the axis of the ridge; the rate agrees with that computed from fission-track dating of basalt glasses. Additional data for a basalt collected 62 kilometers west of the axis gives a spreading rate of 0.8 centimeter per year, which is similar to the rate inferred from magnetic anomaly patterns in the area. Reasons for the difference in calculated spreading rates are discussed.
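The rate arithmetic behind the abstract is simply distance from the ridge axis divided by radiometric age. A sketch (the ~7.75 Myr age below is back-computed from the quoted 62 km and 0.8 cm/yr, not taken from the paper):

```python
def half_spreading_rate_cm_per_yr(distance_km, age_myr):
    """Spreading (half-)rate from distance to the ridge axis and rock age."""
    return (distance_km * 1e5) / (age_myr * 1e6)   # km -> cm, Myr -> yr

# A basalt 62 km west of the axis with an age of about 7.75 Myr gives
# the 0.8 cm/yr quoted in the abstract (the age here is back-computed).
print(round(half_spreading_rate_cm_per_yr(62, 7.75), 2))   # 0.8
```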
https://carlo-hamalainen.net/2015/05/
For my use cases there are two scenarios when running a list of worker threads:

1. If any thread throws an exception, give up on everything.
2. If any thread throws an exception, log it, but let the other workers run to completion.

First, imports that we’ll use:

> {-# LANGUAGE DeriveDataTypeable, ScopedTypeVariables #-}
>
> module Main where
>
> import Data.Conduit
> import Data.Conduit.List
> import Data.Traversable (traverse)
> import Control.Applicative
> import Control.Concurrent
> import Control.Concurrent.Async
> import Control.Concurrent.ParallelIO.Local
> import Control.Monad hiding (mapM, mapM_)
> import Control.Monad.Catch
> import Data.Typeable
> import Prelude hiding (map, mapM, mapM_)
> import System.IO

We will use code from parallel-io and async for running worker threads. For a pipeline we’ll also use conduit. Here’s our exception type, which we throw using throwM from Control.Monad.Catch:

> data MyException = MyException String deriving (Show, Typeable)
>
> instance Exception MyException

Our two tasks. The first task immediately throws an exception; the second waits for 5 seconds and completes happily.
> task1 :: IO String
> task1 = throwM $ MyException "task1 blew up"
>
> task2 :: IO String
> task2 = do
>     threadDelay $ 5 * 10^6
>     return "task2 finished"

Example: parallel_

> main1 :: IO ()
> main1 = do
>     x <- withPool 2 $ \pool -> parallel_ pool [task1, task2]
>     print (x :: ())

Output:

    *Main> main1
    *** Exception: MyException "task1 blew up"

Example: parallelE_

> main2 :: IO ()
> main2 = do
>     x <- withPool 2 $ \pool -> parallelE_ pool [task1, task2]
>     print x

Output:

    *Main> main2
    [Just (MyException "task1 blew up"),Nothing]

Example: parallel

> main3 :: IO ()
> main3 = do
>     x <- withPool 2 $ \pool -> parallel pool [task1, task2]
>     print x

Output:

    *Main> main3
    *** Exception: MyException "task1 blew up"

Example: parallelE

> main4 :: IO ()
> main4 = do
>     x <- withPool 2 $ \pool -> parallelE pool [task1, task2]
>     print x

Output:

    *Main> main4
    [Left (MyException "task1 blew up"),Right "task2 finished"]

Example: async/wait

> main5 :: IO ()
> main5 = do
>     a1 <- async task1
>     a2 <- async task2
>     result1 <- wait a1
>     result2 <- wait a2
>     print [result1, result2]

Output:

    *Main> main5
    *** Exception: MyException "task1 blew up"

Example: async/waitCatch

> main6 :: IO ()
> main6 = do
>     a1 <- async task1
>     a2 <- async task2
>     result1 <- waitCatch a1
>     result2 <- waitCatch a2
>     print [result1, result2]

Output:

    *Main> main6
    [Left (MyException "task1 blew up"),Right "task2 finished"]

Example: concurrently

> main7 :: IO ()
> main7 = do
>     result <- concurrently task1 task2
>     print result

Output:

    *Main> main7
    *** Exception: MyException "task1 blew up"

Example: throwM in a conduit sink

> main8 :: IO ()
> main8 = do
>     sourceList [1..5] $$ throwM (MyException "main8 in conduit exploded")
>     print "this is never printed"

Output:

    *** Exception: MyException "main8 in conduit exploded"

Example: throwM in a conduit sink (on one value)

> main9 :: IO ()
> main9 = do
>     let foo x = if x == 3 then throwM $ MyException "got a 3 in main9"
>                           else print x
>
>     sourceList [1..5] $$ mapM_ foo
>     print "this is never printed"

The conduit processes values 1 and 2, throws an exception on 3, and never sees 4 and 5.
    *Main> main9
    1
    2
    *** Exception: MyException "got a 3 in main9"

Example: throwM/catchC

> main10 :: IO ()
> main10 = do
>     let foo x = if x == 3 then throwM $ MyException "got a 3 in main10"
>                           else print x
>
>     let sink = catchC (mapM_ foo)
>                       (\(e :: SomeException) -> mapM_ $ \x -> putStrLn $ "When processing " ++ show x ++ " caught exception: " ++ show e)
>
>     sourceList [1..5] $$ sink
>     print "main10 finished"

The output is not what I expected. Values 1 and 2 are processed as expected, then the 3 throws an exception, but the effect of catchC is that the rest of the values (4 and 5) are processed using the second argument to catchC. In this situation, a conduit can’t be used to process a stream with independently failing components. You have to catch all exceptions before they bubble up to the conduit code.

    1
    2
    When processing 4 caught exception: MyException "got a 3 in main10"
    When processing 5 caught exception: MyException "got a 3 in main10"
    "main10 finished"

Example: catchAll in conduit

A combinator that runs an IO action and catches any exception:

> catchBlah :: Show a => (a -> IO ()) -> a -> IO ()
> catchBlah action = \x -> catchAll (action x)
>                                   (\(e :: SomeException) -> putStrLn $ "On value " ++ show x ++ " caught exception: " ++ show e)

Using catchBlah in the sink:

> main11 :: IO ()
> main11 = do
>     let foo x = if x == 3 then throwM $ MyException "got a 3 in main11"
>                           else print x
>
>     sourceList [1..5] $$ mapM_ (catchBlah foo)
>
>     print "main11 finished"

Now the conduit processes every value, because the exception is caught and dealt with at a lower level.

    *Main> main11
    1
    2
    On value 3 caught exception: MyException "got a 3 in main11"
    4
    5
    "main11 finished"

Example: catchBlah’ in conduit

Now, suppose we have a few stages in the conduit and the first stage blows up. Use catchAll to catch the exception and return an IO (Maybe b) instead of IO b:

> catchBlah' :: Show a => (a -> IO b) -> a -> IO (Maybe b)
> catchBlah' action = \x ->
>     catchAll (action x >>= (return . Just))
>              (\(e :: SomeException) -> do putStrLn $ "On value " ++ show x ++ " caught exception: " ++ show e
>                                           return Nothing)

> main12 :: IO ()
> main12 = do
>     let src = [1..5] :: [Int]
>
>     let stage1 x = do when (x == 3) $ throwM $ MyException "Got a 3 in stage1"
>                       putStrLn $ "First print: " ++ show x
>                       return x
>
>     sourceList src $$ (mapM $ catchBlah' stage1) =$= (mapM_ print)
>
>     print "main12 finished"

Output:

    First print: 1
    Just 1
    First print: 2
    Just 2
    On value 3 caught exception: MyException "Got a 3 in stage1"
    Nothing
    First print: 4
    Just 4
    First print: 5
    Just 5
    "main12 finished"

Example: catchBlah’ in conduit (tweaked)

Same as the previous example but with nicer printing in the sink:

> main13 :: IO ()
> main13 = do
>     let src = [1..5] :: [Int]
>
>     let stage1 x = do when (x == 3) $ throwM $ MyException "Got a 3 in stage1"
>                       putStrLn $ "First print: " ++ show x
>                       return x
>
>         stage2 x = case x of
>                        Just x' -> do putStrLn $ "Second print: " ++ show (x' + 1)
>                                      putStrLn ""
>                        Nothing -> do putStrLn "Second print got Nothing..."
>                                      putStrLn ""
>
>     sourceList src $$ (mapM $ catchBlah' stage1) =$= (mapM_ stage2)
>
>     print "main13 finished"

Output:

    *Main> main13
    First print: 1
    Second print: 2

    First print: 2
    Second print: 3

    On value 3 caught exception: MyException "Got a 3 in stage1"
    Second print got Nothing...

    First print: 4
    Second print: 5

    First print: 5
    Second print: 6

    "main13 finished"

# Note to self: parallel-io

A short note about using parallel-io to run shell commands in parallel from Haskell. If you want to try out this blog post’s Literate Haskell source then your best bet is to compile in a sandbox which has various package versions fixed using the cabal.config file (via the cabal freeze command).
This is how to build the sandbox:

    git clone https://github.com/carlohamalainen/playground.git
    cd playground/haskell/parallel
    rm -fr .cabal-sandbox cabal.sandbox.config dist # start fresh
    cabal sandbox init
    cabal install --haddock-hyperlink-source --dependencies-only
    cabal install
    cabal repl

Also, note the line ghc-options: -threaded -rtsopts -with-rtsopts=-N in parallel.cabal. Without those rtsopts options you would have to execute the binary using ./P +RTS -N.

Now, onto the actual blog post. First, a few imports to get us going.

> module Main where
> import Control.Concurrent
> import Control.Monad
> import Control.Concurrent.ParallelIO.Local
> import Data.Traversable
> import qualified Pipes.ByteString as B
> import qualified Data.ByteString as BS
> import qualified Data.ByteString.Lazy as BSL
> import Data.ByteString.Internal (w2c)
> import System.Exit
> import System.IO
> import System.Process.Streaming

In one of my work projects I often need to call legacy command line tools to process various imaging formats (DICOM, MINC, Nifti, etc). I used to use a plain call to createProcess and then readRestOfHandle to read the stdout and stderr, but I discovered that it can deadlock and a better approach is to use process-streaming. This is the current snippet that I use:

> -- Copied from https://github.com/carlohamalainen/imagetrove-uploader/blob/master/src/Network/ImageTrove/Utils.hs
> -- Run a shell command, returning Right with stdout if the command exited successfully
> -- and Left with stderr if there was an exit failure.
> runShellCommand :: FilePath -> [String] -> IO (Either String String)
> runShellCommand cmd args = do
>     (exitCode, (stdOut, stdErr)) <- execute (pipeoe (fromFold B.toLazyM) (fromFold B.toLazyM)) (proc cmd args)
>
>     return $ case exitCode of
>         ExitSuccess   -> Right $ map w2c $ BS.unpack $ BSL.toStrict stdOut
>         ExitFailure e -> Left  $ "runShellCommand: exit status " ++ show e ++ " with stdErr: "
>                                  ++ (map w2c $ BS.unpack $ BSL.toStrict stdErr)

Suppose we have a shell command that takes a while, in this case because it’s sleeping.
Pretend that it’s IO bound.

> longShellCommand :: Int -> IO (Either String String)
> longShellCommand n = do
>     putStrLn $ "Running sleep command for " ++ show n ++ " second(s)."
>     runShellCommand "sleep" [show n ++ "s"]

We could run them in order:

> main1 :: IO ()
> main1 = do
>     -- Think of these as arguments to our long-running commands.
>     let sleepTimes = [1, 1, 1, 1]
>
>     forM_ sleepTimes longShellCommand

In Haskell we can think of IO as a data type that describes an IO action, so we can build it up using ‘pure’ code and then execute it later. To make it a bit more explicit, here is a function for running an IO action:

> runIO :: IO a -> IO a
> runIO x = do
>     result <- x
>     return result

We can use it like this:

    *Main> let action = print 3 -- pure code, nothing happens yet
    *Main> runIO action -- runs the action
    3

And we can rewrite main1 like this:

> main2 :: IO ()
> main2 = do
>     let sleepTimes = [1, 1, 1, 1]
>
>     let actions = map longShellCommand sleepTimes :: [IO (Either String String)]
>
>     forM_ actions runIO

As an aside, runIO is equivalent to liftM id (see Control.Monad for info about liftM). Now, imagine that you had a lot of these shell commands to execute and wanted a pool of, say, 4 workers. The parallel-io package provides withPool which can be used like this:

    withPool 4 $ \pool -> parallel_ pool [putStrLn "Echo", putStrLn " in parallel"]

Note that the IO actions (the putStrLn fragments) are provided in a list. A list of IO actions. So we can run our shell commands in parallel like so:

> main3 :: IO ()
> main3 = do
>     let sleepTimes = [1, 1, 1, 1]
>
>     let actions = map longShellCommand sleepTimes :: [IO (Either String String)]
>
>     hSetBuffering stdout LineBuffering -- Otherwise output is garbled.
>
>     withPool 4 $ \pool -> parallel_ pool actions

If we did this a lot we might define our own version of forM_ that uses withPool:

> parForM_ :: Int -> [IO a] -> IO ()
> parForM_ nrWorkers tasks = withPool nrWorkers $ \pool -> parallel_ pool tasks

> main4 :: IO ()
> main4 = do
>     let sleepTimes = [1, 1, 1, 1]
>
>     let actions = map longShellCommand sleepTimes :: [IO (Either String String)]
>
>     hSetBuffering stdout LineBuffering -- Otherwise output is garbled.
>
>     parForM_ 4 actions

Here is another example of building up some IO actions in pure form and then executing them later. Imagine that instead of a list of Ints for the sleep times, we have some actual sleep times and others that represent an error case. An easy way to model this is using Either, which by convention has the erroneous values in the Left and correct values in the Right.

> main5 :: IO ()
> main5 = do
>     let sleepTimes = [Right 1, Left "something went wrong", Right 2, Right 3]
>
>     let actions  = map (traverse longShellCommand) sleepTimes :: [IO (Either [Char] (Either String String))]
>         actions' = map (fmap join) actions                    :: [IO (Either [Char] String)]
>
>     hSetBuffering stdout LineBuffering -- Otherwise output is garbled.
>
>     parForM_ 4 actions'

In main5 we define actions by mapping a function over the sleep times, which are now of type Either String Int. We can’t apply longShellCommand directly because it expects an Int, so we use traverse longShellCommand instead (see Data.Traversable for the definition of traverse). Next, the Either-of-Either is a bit clunky, but we can mash the two layers together using join. Here we have to use fmap because our list elements have type IO (Either [Char] (Either String String)), not Either [Char] (Either String String) as join expects. One topic that I haven’t touched on is dealing with asynchronous exceptions. For this, have a read of Catching all exceptions from Snoyman and also enclosed-exceptions.
Also, Chapter 13 of Parallel and Concurrent Programming in Haskell shows how to use the handy async package.

> -- Run all of the mains.
> main :: IO ()
> main = do
>     print "main1"
>     main1
>
>     print "main2"
>     main2
>
>     print "main3"
>     main3
>
>     print "main4"
>     main4
>
>     print "main5"
>     main5

A side note: runShellCommand does not seem to play well with Unicode, because it does not do any decoding of bytes.
https://link.springer.com/chapter/10.1007%2F978-94-007-6371-5_16
# Drawing Diamond Structures with Eigenvectors

Chapter. Part of the Carbon Materials: Chemistry and Physics book series (CMCP, volume 6).

## Abstract

Very often the basic information about a nanostructure is a topological one. Based on this topological information, we have to determine the Descartes (Cartesian) coordinates of the atoms. For fullerenes, nanotubes and nanotori, the topological coordinate method supplies the necessary information. With the help of the bi-lobal eigenvectors of the Laplacian matrix, the positions of the atoms can be generated easily. This method fails, however, for nanotube junctions and coils and other nanostructures. We recently found a matrix W which can generate the Descartes coordinates for fullerenes, nanotubes and nanotori, and also for nanotube junctions and coils. Solving the eigenvalue problem of this matrix W, its eigenvectors with zero eigenvalue give the Descartes coordinates. There are nanostructures, however, whose W matrices have more eigenvectors with zero eigenvalue than are needed for determining the positions of the atoms in 3D space. In this chapter we have studied this problem in the case of diamond structures. We have found that this extra degeneracy is due to the fact that the first and second neighbour interactions do not determine the geometry of the structure. We found that when the third neighbour interaction is included as well, diamond structures are described properly.

## Keywords

Adjacency Matrix, Null Space, Atomic Arrangement, Zero Eigenvalue, Neighbour Interaction

These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

## Notes

### Acknowledgments

I. László thanks the support of grants TAMOP-4.2.1/B-09/1/KONV-2010-0003 and TAMOP-4.2.1/B-09/1/KMR-2010-0002, and the support obtained in the framework of the bilateral agreement between the Croatian Academy of Science and Art and the Hungarian Academy of Sciences. The research of T. Pisanski has been financed by ARRS project P1-0294 and within the EUROCORES Programme EUROGIGA (project GReGAS N1-0011) of the European Science Foundation.
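As a toy illustration of the eigenvector-drawing idea described in the abstract (my own sketch, assuming numpy; the chapter itself concerns the more general W matrix, not this simple case): for a cycle graph, the two Laplacian eigenvectors just above the constant one place the vertices evenly on a circle, which is the one-dimensional analogue of the topological coordinate method.

```python
import numpy as np

def cycle_laplacian(n):
    """Graph Laplacian L = D - A of the n-cycle."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
    return np.diag(A.sum(axis=1)) - A

n = 12
vals, vecs = np.linalg.eigh(cycle_laplacian(n))   # eigenvalues in ascending order

# Skip the constant eigenvector (eigenvalue 0); the next two span a degenerate
# eigenspace and serve as 2D "topological coordinates" for the vertices.
coords = vecs[:, 1:3]
radii = np.linalg.norm(coords, axis=1)
print(np.allclose(radii, radii[0]))   # True: all vertices sit on one circle
```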
https://www.talkstats.com/tags/matrix-algebra/
# matrix algebra

1. ### Trouble understanding univariate logistic regression using categorical data
   Hello, I have a cancer dataset of 98 observations. Cancer detection rate was determined for 2 detection modalities (C and S). One of the independent variables of interest was a 3-tiered scoring system (possible scores: 3, 4, and 5). On univariate logistic regression, the score was...

2. ### Probability limit of ridge estimator mu hat
   I have been working on the following question for a while but I am stuck at a certain point. I have done parts a and b and I found $\hat{\mu} = (I_T + \lambda D'D)^{-1}y$, where $I_T$ is the identity matrix with dimension $T$, $\lambda$ is the tuning parameter, and $D$ is the second-order difference matrix, which is tridiagonal and...

3. ### Correct Cross Validation. How to calculate the projected R Squared or Residual Sum Sq
   Hi, I have read into the subject of finding good estimators to determine the goodness of fit when the regression on a training set is projected onto a test set (unseen data). I have found a lot of scientific papers, but I get completely lost in terminology and very complex equations I do not...

4. ### Showing Left Side to Right Side.
   Let $\mathbf x$ be a $(p\times 1)$ vector, $\mathbf\mu_1$ a $(p\times 1)$ vector, $\mathbf\mu_2$ a $(p\times 1)$ vector, and $\Sigma$ a $(p\times p)$ matrix. Now I have to show -\frac{1}{2}(\mathbf x-\mathbf\mu_1)'\Sigma^{-1}(\mathbf x-\mathbf\mu_1)+\frac{1}{2}(\mathbf...

5. ### Why does the determinant always equal zero for a matrix of consecutive numbers?
   Hi. Why does the determinant always equal zero for a matrix of consecutive numbers? This applies whether the consecutive numbers are in the matrix starting from smallest to largest, or vice versa. It also applies irrespective of whether they are entered row then column or vice versa, which...
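The determinant question above has a short answer: for an $n \times n$ matrix with $n \ge 3$, consecutive rows differ by the same constant vector, so the rows are linearly dependent and the determinant vanishes. A quick illustrative check (not from the thread):

```python
import numpy as np

def consecutive_matrix(n, start=1):
    """An n x n matrix filled with consecutive integers, row by row."""
    return np.arange(start, start + n * n).reshape(n, n)

# For n >= 3, consecutive rows differ by the same constant vector
# (row[i+1] - row[i] = [n, n, ..., n] for every i), so the rows are
# linearly dependent: the rank is at most 2 and the determinant is 0.
# Note the claim needs n >= 3: for n = 2, [[1, 2], [3, 4]] has det -2.
A = consecutive_matrix(4)
print(np.linalg.matrix_rank(A))      # 2
print(abs(np.linalg.det(A)) < 1e-9)  # True
```

The same rank argument explains why the order of entry (row-wise, column-wise, ascending, or descending) makes no difference: every variant still has rows (or columns) in arithmetic progression.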
https://www.esaral.com/q/the-strength-of-an-aqueous-naoh-solution-is-most-accurately-determined-by-titrating-51549
# The strength of an aqueous NaOH solution is most accurately determined by titrating:

Question: The strength of an aqueous $\mathrm{NaOH}$ solution is most accurately determined by titrating: (Note: consider that an appropriate indicator is used)

1. Aq. $\mathrm{NaOH}$ in a pipette and aqueous oxalic acid in a burette
2. Aq. $\mathrm{NaOH}$ in a burette and aqueous oxalic acid in a conical flask
3. Aq. $\mathrm{NaOH}$ in a burette and concentrated $\mathrm{H}_{2} \mathrm{SO}_{4}$ in a conical flask
4. Aq. $\mathrm{NaOH}$ in a volumetric flask and concentrated $\mathrm{H}_{2} \mathrm{SO}_{4}$ in a conical flask

Correct Option: 2

Solution: Oxalic acid is a primary standard solution, whereas $\mathrm{H}_{2} \mathrm{SO}_{4}$ is a secondary standard solution. So it does not matter whether the oxalic acid is taken in a burette or in a conical flask: the accuracy of the concentration determined by titration depends on the nature of the solution (a primary standard is required), not on which vessel holds it.
http://is.tuebingen.mpg.de/publications?departments%5B%5D=pf&year%5B%5D=2015&year%5B%5D=2000&year%5B%5D=2020
#### 2015

##### Enzymatically active biomimetic micropropellers for the penetration of mucin gels
Walker (Schamel), D., Käsdorf, B. T., Jeong, H. H., Lieleg, O., Fischer, P.
Science Advances, 1(11):e1500501, December 2015 (article)
Abstract: In the body, mucus provides an important defense mechanism by limiting the penetration of pathogens. It is therefore also a major obstacle for the efficient delivery of particle-based drug carriers. The acidic stomach lining in particular is difficult to overcome because mucin glycoproteins form viscoelastic gels under acidic conditions. The bacterium Helicobacter pylori has developed a strategy to overcome the mucus barrier by producing the enzyme urease, which locally raises the pH and consequently liquefies the mucus. This allows the bacteria to swim through mucus and to reach the epithelial surface. We present an artificial system of reactive magnetic micropropellers that mimic this strategy to move through gastric mucin gels by making use of surface-immobilized urease. The results demonstrate the validity of this biomimetic approach to penetrate biological gels, and show that externally propelled microstructures can actively and reversibly manipulate the physical state of their surroundings, suggesting that such particles could potentially penetrate native mucus.

##### The EChemPen: A Guiding Hand To Learn Electrochemical Surface Modifications
Valetaud, M., Loget, G., Roche, J., Hueken, N., Fattah, Z., Badets, V., Fontaine, O., Zigah, D.
J. of Chem. Ed., 92(10):1700-1704, September 2015 (article)
Abstract: The Electrochemical Pen (EChemPen) was developed as an attractive tool for learning electrochemistry. The fabrication, principle, and operation of the EChemPen are simple and can be easily performed by students in practical classes. It is based on a regular fountain pen principle, where the electrolytic solution is dispensed at a tip to locally modify a conductive surface by triggering a localized electrochemical reaction. Three simple model reactions were chosen to demonstrate the versatility of the EChemPen for teaching various electrochemical processes. We describe first the reversible writing/erasing of metal letters, then the electrodeposition of a black conducting polymer "ink", and finally the colorful writings that can be generated by titanium anodization and that can be controlled by the applied potential. These entertaining and didactic experiments are adapted for teaching undergraduate students that start to study electrochemistry by means of surface modification reactions.

##### 3D-printed Soft Microrobot for Swimming in Biological Fluids
In Conf. Proc. IEEE Eng. Med. Biol. Soc., pages 4922-4925, August 2015 (inproceedings)
Abstract: Microscopic artificial swimmers hold the potential to enable novel non-invasive medical procedures. In order to ease their translation towards real biomedical applications, simpler designs as well as cheaper yet more reliable materials and fabrication processes should be adopted, provided that the functionality of the microrobots can be kept. A simple single-hinge design could already enable microswimming in non-Newtonian fluids, which most bodily fluids are. Here, we address the fabrication of such single-hinge microrobots with a 3D-printed soft material. Firstly, a finite element model is developed to investigate the deformability of the 3D-printed microstructure under typical values of the actuating magnetic fields. Then the microstructures are fabricated by direct 3D-printing of a soft material and their swimming performances are evaluated. The speeds achieved with the 3D-printed microrobots are comparable to those obtained in previous work with complex fabrication procedures, thus showing great promise for 3D-printed microrobots to be operated in biological fluids.

##### Optimal Length of Low Reynolds Number Nanopropellers
Walker (Schamel), D., Kuebler, M., Morozov, K. I., Fischer, P., Leshansky, A. M.
Nano Letters, 15(7):4412-4416, June 2015 (article)
Abstract: Locomotion in fluids at the nanoscale is dominated by viscous drag. One efficient propulsion scheme is to use a weak rotating magnetic field that drives a chiral object. From bacterial flagella to artificial drills, the corkscrew is a universally useful chiral shape for propulsion in viscous environments. Externally powered magnetic micro- and nanomotors have been recently developed that allow for precise fuel-free propulsion in complex media. Here, we combine analytical and numerical theory with experiments on nanostructured screw-propellers to show that the optimal length is surprisingly short, only about one helical turn, which is shorter than most of the structures in use to date. The results have important implications for the design of artificial actuated nano- and micropropellers and can dramatically reduce fabrication times, while ensuring optimal performance.

##### A theoretical study of potentially observable chirality-sensitive NMR effects in molecules
Garbacz, P., Cukras, J., Jaszunski, M.
Phys. Chem. Chem. Phys., 17(35):22642-22651, May 2015 (article)
Abstract: Two recently predicted nuclear magnetic resonance effects, the chirality-induced rotating electric polarization and the oscillating magnetization, are examined for several experimentally available chiral molecules. We discuss in detail the requirements for experimental detection of chirality-sensitive NMR effects of the studied molecules. These requirements are related to two parameters: the shielding polarizability and the antisymmetric part of the nuclear magnetic shielding tensor. The dominant second contribution has been computed for small molecules at the coupled cluster and density functional theory levels. It was found that DFT calculations using the KT2 functional and the aug-cc-pCVTZ basis set adequately reproduce the CCSD(T) values obtained with the same basis set. The largest values of parameters, thus most promising from the experimental point of view, were obtained for the fluorine nuclei in 1,3-difluorocyclopropene and 1,3-diphenyl-2-fluoro-3-trifluoromethylcyclopropene.

##### Dynamic Inclusion Complexes of Metal Nanoparticles Inside Nanocups
Angew. Chem. Int. Ed., 54(23):6730-6734, May 2015, Featured cover article. (article)
Abstract: Host-guest inclusion complexes are abundant in molecular systems and of fundamental importance in living organisms. Realizing a colloidal analogue of a molecular dynamic inclusion complex is challenging because inorganic nanoparticles (NPs) with a well-defined cavity and portal are difficult to synthesize in high yield and with good structural fidelity. Herein, a generic strategy towards the fabrication of dynamic 1:1 inclusion complexes of metal nanoparticles inside oxide nanocups with high yield (>70%) and regiospecificity (>90%) by means of a reactive double Janus nanoparticle intermediate is reported. Experimental evidence confirms that the inclusion complexes are formed by a kinetically controlled mechanism involving a delicate interplay between bipolar galvanic corrosion and alloying-dealloying oxidation. Release of the NP guest from the nanocups can be efficiently triggered by an external stimulus.

##### Surface roughness-induced speed increase for active Janus micromotors
Choudhury, U., Soler, L., Gibbs, J. G., Sanchez, S., Fischer, P.
Chem. Comm., 51(41):8660-8663, April 2015 (article)
Abstract: We demonstrate a simple physical fabrication method to control surface roughness of Janus micromotors and fabricate self-propelled active Janus microparticles with rough catalytic platinum surfaces that show a four-fold increase in their propulsion speed compared to conventional Janus particles coated with a smooth Pt layer.

##### Active colloidal microdrills
Chem. Comm., 51(20):4192-4195, February 2015 (article)
Abstract: We demonstrate a chemically driven, autonomous catalytic microdrill. An asymmetric distribution of catalyst causes the helical swimmer to twist while it undergoes directed propulsion. A driving torque and hydrodynamic coupling between translation and rotation at low Reynolds number leads to drill-like swimming behaviour.

##### Selectable Nanopattern Arrays for Nanolithographic Imprint and Etch-Mask Applications
Jeong, H. H., Mark, A. G., Lee, T., Son, K., Chen, W., Alarcon-Correa, M., Kim, I., Schütz, G., Fischer, P.
Adv. Science, 2(7):1500016, 2015, Featured cover article. (article)
Abstract: A parallel nanolithographic patterning method is presented that can be used to obtain arrays of multifunctional nanoparticles. These patterns can simply be converted into a variety of secondary nanopatterns that are useful for nanolithographic imprint, plasmonic, and etch-mask applications.

#### 2000

##### Phenomenological damping in optical response tensors
Buckingham, A., Fischer, P.
Physical Review A, 61(3), 2000 (article)
Abstract: Although perturbation theory applied to the optical response of a molecule or material system is only strictly valid far from resonances, it is often applied to "near-resonance" conditions by means of complex energies incorporating damping. Inconsistent signs of the damping in optical response tensors have appeared in the recent literature, as have errors in the treatment of the perturbation by a static field. The "equal-sign" convention used in a recent publication yields an unphysical material response, and Koroteev's intimation that linear electro-optical circular dichroism may exist in an optically active liquid under resonance conditions is also flawed. We show that the isotropic part of the Pockels tensor vanishes.

##### Ab initio investigation of the sum-frequency hyperpolarizability of small chiral molecules
Champagne, B., Fischer, P., Buckingham, A.
Chemical Physics Letters, 331(1):83-88, 2000 (article)
Abstract: Using a sum-over-states procedure based on configuration interaction singles/6-311++G**, we have computed the sum-frequency hyperpolarizability $\beta_{ijk}(-3\omega; 2\omega, \omega)$ of two small chiral molecules, R-monofluoro-oxirane and R-(+)-propylene oxide. Excitation energies were scaled to fit experimental UV-absorption data and checked with ab initio values from time-dependent density functional theory. The isotropic part of the computed hyperpolarizabilities, $\beta(-3\omega; 2\omega, \omega)$, is much smaller than that reported previously from sum-frequency generation experiments on aqueous solutions of arabinose. Comparison is made with a single-centre chiral model. (C) 2000 Elsevier Science B.V. All rights reserved.

##### Three-wave mixing in chiral liquids
Fischer, P., Wiersma, D., Righini, R., Champagne, B., Buckingham, A.
Physical Review Letters, 85(20):4253-4256, 2000 (article)
Abstract: Second-order nonlinear optical frequency conversion in isotropic systems is only dipole allowed for sum- and difference-frequency generation in chiral media. We develop a single-center chiral model of the three-wave mixing (sum-frequency generation) nonlinearity and estimate its magnitude. We also report results from ab initio calculations and from three- and four-wave mixing experiments in support of the theoretical estimates. We show that the second-order susceptibility in chiral liquids is much smaller than previously thought.
http://crypto.stackexchange.com/tags/rabin-cryptosystem/hot
# Tag Info

## Hot answers tagged rabin-cryptosystem

7 votes: A is acting as a square-root oracle in that protocol. We can use that oracle to factor $n$ and break the scheme. Suppose you are an attacker that wants to impersonate A. You: pick a random $m$; send $m^2$ to A; compute $p = \gcd(m_1 - m, n)$, thus factoring $n$. This works with probability $1/2$ for each attempt.

7 votes: Unless they did something wrong (either accidentally, or deliberately to make it easy), there is no practical way. It's well known that, if you're able to compute the square root of an arbitrary number modulo a composite, you can efficiently factor that composite. And solving $e=4$ is equivalent to solving the RSA problem twice with $e=2$. Now, it's ...

5 votes: Rabin-Williams signature verification with 3072-bit keys is much faster than EdDSA signature verification of comparable security (when done in software). How much depends on care of coding, hardware, and EdDSA parameters. Two data points: in the eBATS benchmarks for a Skylake CPU, ronald3072 signature verification (RSA with $e=3$ as an OpenSSL wrapper, by ...

5 votes: Adding some more information to fkraiem's answer: the encryption in the Rabin cryptosystem is basically textbook RSA with an exponent of $2$. 1) Neither $p$ nor $q$ is equal to 2. This means they are odd, so the product $(p-1)(q-1)$ would be even, i.e. not coprime with 2. Well, yes. That is one of the basic problems in Rabin's cryptosystem. If we want that ...

5 votes: The modulus 77 leads to a short period.

5 votes: Since $n = pq$, when an integer modulo $n$ is a square, it has (in general) four square roots. This can be seen by reasoning modulo $p$ and modulo $q$: a square has two roots modulo $p$, and two roots modulo $q$, which makes for four combinations. More precisely, modulo a prime $p$, if $y$ has a square root $x$, it also has another square root which is $-x$. The same ...
4 votes: Because $r$ is not guaranteed to be a quadratic residue, for random $r$ there wouldn't be an $m_1$ such that $r \equiv m_1^2 \pmod n$, and therefore authentication would be impossible in this case.

4 votes: Nightcracker's method works fine. There are also deterministic solutions to select the correct ciphertext that require very few additional bits. One very useful ingredient is the use of the Jacobi symbol. For example, you might look at "The Rabin cryptosystem revisited" by M. Elia, M. Piva and D. Schipani (http://arxiv.org/pdf/1108.5935.pdf).

4 votes: This is a solution that should work with very high probability, but can possibly fail. As a bonus it also resists tampering with the ciphertext. As encrypter, generate a random key (say a 128-bit key for AES128-CTR) and encrypt the plaintext using that key. Then compute a MAC over the ciphertext (for example using HMAC-SHA1) using the same key. Finally you ...

3 votes: As a first step to compute the four square roots of $c \pmod N$, one can compute the two square roots modulo $p$ and the two square roots modulo $q$, and then, using the Chinese Remainder Theorem, combine them into the four square roots modulo $N$, where $N = p \cdot q$. Let's start computing the square root of ciphertext $c \bmod p$. Usually $p \equiv q \equiv 3 \pmod 4$. ...

3 votes: First I want to cite the book by Katz and Lindell: "A 'plain Rabin' encryption scheme, constructed in a manner analogous to plain RSA encryption, is vulnerable to a chosen-ciphertext attack that enables an adversary to learn the entire private key." Although plain RSA is not CCA-secure either, known chosen-ciphertext attacks on plain RSA are less damaging ...

3 votes: After another 5 minutes of thought, I think I solved my own problem. Choose an arbitrary message $m$, compute $c = m^2 \bmod n$ and submit $c$ and $n$ to the Rabin oracle. If you repeat this enough times (by which I mean probably within 2 iterations) you will choose $m$ in such a way that the oracle gives you $\pm$ the other root, which you can then use to factor $n$.
3 votes: Here's how the attack works:

- Select a random value $y$
- Compute $a = y^2 \bmod n$
- Ask for the signature of $a$, that is, an $x$ with $x^2 = a$
- If $x \ne y$ and $x + y \ne n$, then $\gcd(n, x+y)$ is a proper factor of $n$

The last step will succeed with probability $\approx 0.5$. You can make it probability 1 if you select a $y$ with Jacobi symbol $-1$.

2 votes: In short words: when you compute things modulo $n = pq$, you are really computing things simultaneously modulo $p$ and modulo $q$. That's the gist of the Chinese Remainder Theorem. So to prove that $a = b \pmod n$, you just have to prove that $a = b \pmod p$ and $a = b \pmod q$. Modulo $p$, for any $x$ that is not a multiple of $p$, $x^{p-1} = 1 \pmod p$ (...

2 votes: By following the above advice (taking the equations for $r$ and $s$ given in the article and writing $r-s$) you will notice that $q$ is a divisor, therefore $\gcd(|r-s|,n)$ cannot be 1. There are only two options left since $n$ is only divisible by $q$ and $p$.

2 votes: Both Rabin and RSA rely on padding for security. Proper padding prevents chosen-ciphertext attacks since modified ciphertext has a negligible chance of producing valid padding. If you claim Rabin (or RSA) is vulnerable to CCA attacks, you should limit that to the unpadded/textbook variants. Most deployed implementations use padding, though some paddings are ...

2 votes: RSA with $e = 2$ is Rabin; it works a bit differently and is slightly more mathematically involved, but it is a valid cryptosystem.

2 votes: The equation $a = x^2 \bmod N$ has at most 4 solutions $x$. There are solutions if $a$ is a square modulo both $p$ and $q$. This can be checked by computing the Legendre symbol of $x$ modulo $p$ and modulo $q$. Assuming that the two Legendre symbols are $+1$, when $p \equiv 3 \pmod 4$, a square root of $a$ modulo $p$ is given by $x_p = a^{(p+1)/4}$ ...
1 vote: Consider two numbers $a$ and $b$ that square to the same value modulo $n$ and don't just differ by the sign:
$$a^2 \equiv b^2 \pmod n$$
$$(a-b)(a+b) \equiv 0 \pmod n$$
Neither of the factors on the left is 0 (or equivalently a multiple of $n$), thus each of them must contain one of the prime factors of $n$. Thus you can use $\operatorname{GCD}(a-b, n)$ ...

1 vote: Let $\mathcal A$ be the hypothetical algorithm in the question, with input $(n,q)$, output $r$, such that $r^2\equiv q\pmod n$ for a $1/(\log(n))$ fraction of the quadratic residues $q\pmod n$, running in random polynomial time w.r.t. $\log(n)$, restricting to $n$ a product of two large distinct primes. Let $\mathcal F$ be the following algorithm with input ...

1 vote: That practice of replacing the result of $y=x^d\bmod N$ (or $y=x^e\bmod N$) by $\hat y=\min(y,N-y)$ is also in ISO/IEC 9796-2:2010 (paywalled) and its ancestors; I first met it in [INCITS/ANSI]/ISO/IEC 9796:1991, also given in the Handbook of Applied Cryptography, see in particular note 11.36. ISO/IEC 9796 was a broken and now withdrawn ...

1 vote: An older copy of P1363 Public Key Cryptography was used below. It may (or may not) reflect the current state of affairs. It also uses Bernstein's "RSA signatures and Rabin-Williams signatures: the state of the art". Do tweaked roots violate P1363? What I might really be asking is: does an exponent of 2 run afoul of P1363? But I'm not sure at the moment. ...

1 vote: Blinding is usually applied on the whole modulus, and I see no incentive to do otherwise; randomness is cheap. In RSA, blinding is not always applied as described in the question and article, for efficiency and security reasons: the technique described requires computing $r^d\bmod N$, which is just as costly as the $m^d\bmod N$ operation being protected, and ...

1 vote: Your question is related to the well-known Rabin cryptosystem, which is similar to RSA, except that the public exponent is 2.
As fgrieu mentioned, decipherment can easily be handled by the CRT algorithm, but some precautions must be observed beforehand during key generation. In fact the solution of the equation gives 4 roots, which means that the solution ...

1 vote: The tricky point is that modulo a Blum integer (the product $n = pq$ of two primes $p$ and $q$ that are equal to 3 modulo 4), in general, a quadratic residue (a value that is a square of something) has four square roots, not two. Consider the "normal" Rabin algorithm. Message $m$ is encrypted into $c = m^2 \bmod n$. To decrypt, you work modulo $p$ and ...

Only top voted, non community-wiki answers of a minimum length are eligible.
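Several of the answers above rely on the same reduction: anyone who can obtain square roots modulo $n = pq$ can factor $n$ with a single gcd. A runnable sketch of that attack (the tiny primes and the local `sqrt_oracle` are toy stand-ins for a real Rabin signer or decrypter, not code from any answer):

```python
import math
import random

# Toy Blum primes (p ≡ q ≡ 3 mod 4); real moduli would use ~1536-bit primes.
p, q = 10007, 10039
n = p * q

def sqrt_oracle(a):
    """Stand-in for the signer/decrypter: returns one square root of a mod n.

    Uses x_p = a^((p+1)/4) mod p, valid because p ≡ 3 (mod 4), then CRT.
    Requires Python 3.8+ for pow(x, -1, m) modular inverses.
    """
    xp = pow(a, (p + 1) // 4, p)
    xq = pow(a, (q + 1) // 4, q)
    # CRT combination of (xp mod p, xq mod q) into a root mod n
    return (xp * q * pow(q, -1, p) + xq * p * pow(p, -1, q)) % n

def factor_with_oracle():
    """The attack from the answers above: submit y^2 for random y until the
    oracle returns a root x != ±y; then gcd(x + y, n) is a proper factor."""
    while True:
        y = random.randrange(2, n)
        x = sqrt_oracle((y * y) % n)
        if x not in (y, n - y):           # succeeds with probability ~1/2
            g = math.gcd(x + y, n)
            if 1 < g < n:
                return g

g = factor_with_oracle()
print(sorted((g, n // g)))  # [10007, 10039]
```

Each query succeeds with probability about $1/2$, matching the answers above, because the oracle's deterministic root agrees with $\pm y$ for only two of the four roots of $y^2$.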
https://gmarini.com/showcase/random_projects/
# Miscellaneous

A collection of Javascript/WebGL tests

## [R/Plotly] 3D Waves point distribution and animation

This is a little fun test I tried some time ago. I wanted to test the animations with Plotly and R, and what better way to test them than some nice smooth waves? No crazy math, just a mix of sin() and cos().

## [JS/Plotly] Plotly animation and 3D data visualisation

Similar to the wave test, I wanted to try Plotly with Javascript, so I tested a few animations and 3D graphs.

## [WebGL] Cubes recursion generator

While working at the European Central Bank there was a lot of talk about "cubes" of data, three-dimensional models of datasets. I like to see it as a bunch of stacked Excel spreadsheets. The spreadsheets are 2-dimensional, so stacking them makes the result 3-dimensional, and it can be seen as a cube instead of a rectangle. So I decided to try my hand at the generation of recursive cubes, with smaller and smaller ones inside each cube.

## [WebGL] Globe Data Visualisation

A little fun test on global geographical data visualisation with WebGL.

## [JS] Canvas JSON marker rendering

A pure Javascript implementation of a map with fixed points as markers.
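The "mix of sin() and cos()" idea behind the wave animation is easy to make concrete. A tiny illustrative sketch (the formula and names here are mine, not taken from the project) of the kind of height field such a surface animation samples:

```python
import math

def wave_height(x, y, t=0.0):
    """Height of a smooth 'wave' surface at (x, y) and time t:
    just a mix of sin() and cos(), animated by shifting the phase with t."""
    return math.sin(x + t) * math.cos(y - t) + 0.5 * math.sin(2 * x + y)

# Sample a small grid of heights (e.g. to feed a Plotly surface trace);
# re-sampling with increasing t produces the animation frames.
grid = [[wave_height(x / 5, y / 5) for x in range(10)] for y in range(10)]
print(len(grid), len(grid[0]))  # 10 10
```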
http://insti.physics.sunysb.edu/latex-help/ltx-295.html
# picture

\begin{picture}(width,height)(x offset,y offset)
  . picture commands .
\end{picture}

The picture environment allows you to create just about any kind of picture you want containing text, lines, arrows and circles. You tell LaTeX where to put things in the picture by specifying their coordinates. A coordinate is a number that may have a decimal point and a minus sign - a number like 5, 2.3 or -3.1416. A coordinate specifies a length in multiples of the unit length \unitlength, so if \unitlength has been set to 1cm, then the coordinate 2.54 specifies a length of 2.54 centimeters. You can change the value of \unitlength anywhere you want, using the \setlength command, but strange things will happen if you try changing it inside the picture environment.

A position is a pair of coordinates, such as (2.4,-5), specifying the point with x-coordinate 2.4 and y-coordinate -5. Coordinates are specified in the usual way with respect to an origin, which is normally at the lower-left corner of the picture. Note that when a position appears as an argument, it is not enclosed in braces; the parentheses serve to delimit the argument.

The picture environment has one mandatory argument, which is a position. It specifies the size of the picture. The environment produces a rectangular box with width and height determined by this argument's x- and y-coordinates.

The picture environment also has an optional position argument, following the size argument, that can change the origin. (Unlike ordinary optional arguments, this argument is not contained in square brackets.) The optional argument gives the coordinates of the point at the lower-left corner of the picture (thereby determining the origin). For example, if \unitlength has been set to 1mm, the command \begin{picture}(100,200)(10,20) produces a picture of width 100 millimeters and height 200 millimeters, whose lower-left corner is the point (10,20) and whose upper-right corner is therefore the point (110,220).
When you first draw a picture, you will omit the optional argument, leaving the origin at the lower-left corner. If you then want to modify your picture by shifting everything, you just add the appropriate optional argument. The environment's mandatory argument determines the nominal size of the picture. This need bear no relation to how large the picture really is; LaTeX will happily allow you to put things outside the picture, or even off the page. The picture's nominal size is used by TeX in determining how much room to leave for it. Everything that appears in a picture is drawn by the \put command. The command \put (11.3,-.3){ ... } puts the object specified by "..." in the picture, with its reference point at coordinates (11.3,-.3). The reference points for various objects will be described below. The \put command creates an LR box. You can put anything in the text argument of the \put command that you'd put into the argument of an \mbox and related commands. When you do this, the reference point will be the lower left corner of the box.
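Putting these pieces together, a small illustrative example (the coordinates and contents are chosen arbitrarily): with \unitlength set to 1mm, a 40 x 30 picture containing a label, a framed box, and a line, each placed with \put:

```latex
\setlength{\unitlength}{1mm}   % every coordinate below is in millimetres
\begin{picture}(40,30)(0,0)    % nominal size 40mm x 30mm, origin at (0,0)
  % \put places an LR box with its lower-left corner at the given position
  \put(5,20){\mbox{a label}}
  \put(5,5){\framebox(30,10){framed text}}
  \put(0,0){\line(1,1){30}}    % a line of slope (1,1), horizontal extent 30
\end{picture}
```

Changing the optional argument, say to (-5,-5), would shift everything 5mm up and to the right without moving any \put coordinates.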
https://freakonometrics.hypotheses.org/tag/optimization
# Classification from scratch, linear discrimination 8/8 Eighth post of our series on classification from scratch. The latest one was on the SVM, and today, I want to get back to very old stuff, with here also a linear separation of the space, using Fisher's linear discriminant analysis. ## Bayes (naive) classifier Consider the following naive classification rule$$m^\star(\mathbf{x})=\text{argmax}_y\{\mathbb{P}[Y=y\vert\mathbf{X}=\mathbf{x}]\}$$or$$m^\star(\mathbf{x})=\text{argmax}_y\left\{\frac{\mathbb{P}[\mathbf{X}=\mathbf{x}\vert Y=y]\,\mathbb{P}[Y=y]}{\mathbb{P}[\mathbf{X}=\mathbf{x}]}\right\}$$(where $\mathbb{P}[\mathbf{X}=\mathbf{x}]$ is the density in the continuous case). In the case where $y$ takes two values, that will be standard $\{0,1\}$ here, one can rewrite the latter as$$m^\star(\mathbf{x})=\begin{cases}1\text{ if }\mathbb{E}(Y\vert \mathbf{X}=\mathbf{x})>\displaystyle{\frac{1}{2}}\\0\text{ otherwise}\end{cases}$$and the set$$\mathcal{D}_S =\left\{\mathbf{x},\mathbb{E}(Y\vert \mathbf{X}=\mathbf{x})=\frac{1}{2}\right\}$$is called the decision boundary.
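A minimal sketch of this rule (my own helper, not from the original post): with two known class-conditional densities and a prior, classify as 1 when the posterior probability exceeds 1/2.

```r
# Bayes rule for two known class-conditional densities f0, f1
# and prior probability p1 = P(Y=1): predict 1 when P(Y=1|X=x) > 1/2
bayes_rule = function(x, f0, f1, p1){
  post1 = p1*f1(x) / (p1*f1(x) + (1-p1)*f0(x))
  as.numeric(post1 > 1/2)
}
# e.g. with f0 the N(0,1) density, f1 the N(4,1) density and p1 = 1/2,
# bayes_rule(1, dnorm, function(x) dnorm(x,4,1), 1/2) returns 0
```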
Assume that$$\mathbf{X}\vert Y=0\sim\mathcal{N}(\mathbf{\mu}_0,\mathbf{\Sigma}_0)$$and$$\mathbf{X}\vert Y=1\sim\mathcal{N}(\mathbf{\mu}_1,\mathbf{\Sigma}_1)$$then explicit expressions can be derived.$$m^\star(\mathbf{x})=\begin{cases}1\text{ if }r_1^2< r_0^2+2\displaystyle{\log\frac{\mathbb{P}(Y=1)}{\mathbb{P}(Y=0)}+\log\frac{\vert\mathbf{\Sigma}_0\vert}{\vert\mathbf{\Sigma}_1\vert}}\\0\text{ otherwise}\end{cases}$$where $r_y^2$ is the Mahalanobis distance, $$r_y^2 = [\mathbf{x}-\mathbf{\mu}_y]^{\text{T}}\mathbf{\Sigma}_y^{-1}[\mathbf{x}-\mathbf{\mu}_y]$$ Let $\delta_y$ be defined as$$\delta_y(\mathbf{x})=-\frac{1}{2}\log\vert\mathbf{\Sigma}_y\vert-\frac{1}{2}[{\color{blue}{\mathbf{x}}}-\mathbf{\mu}_y]^{\text{T}}\mathbf{\Sigma}_y^{-1}[{\color{blue}{\mathbf{x}}}-\mathbf{\mu}_y]+\log\mathbb{P}(Y=y)$$The decision boundary of this classifier is $$\{\mathbf{x}\text{ such that }\delta_0(\mathbf{x})=\delta_1(\mathbf{x})\}$$which is quadratic in ${\color{blue}{\mathbf{x}}}$. This is the quadratic discriminant analysis. This can be visualized below. The decision boundary is here But that can't be the linear discriminant analysis, right? I mean, the frontier is not linear… Actually, in Fisher's seminal paper, it was assumed that $\mathbf{\Sigma}_0=\mathbf{\Sigma}_1$. In that case, actually, $$\delta_y(\mathbf{x})={\color{blue}{\mathbf{x}}}^{\text{T}}\mathbf{\Sigma}^{-1}\mathbf{\mu}_y-\frac{1}{2}\mathbf{\mu}_y^{\text{T}}\mathbf{\Sigma}^{-1}\mathbf{\mu}_y+\log\mathbb{P}(Y=y)$$ and the decision frontier is now linear in ${\color{blue}{\mathbf{x}}}$. This is the linear discriminant analysis.
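The discriminant score $\delta_y$ can be coded directly; here is a short sketch (my own helper, not from the original post), taking an estimated mean, covariance matrix and prior probability as inputs:

```r
# Quadratic discriminant score delta_y(x) for one class
delta = function(x, mu, Sigma, prior){
  d = x - mu
  drop(-0.5*log(det(Sigma)) - 0.5*t(d)%*%solve(Sigma)%*%d + log(prior))
}
# classify x in class 1 when
# delta(x, mu1, Sigma1, p1) > delta(x, mu0, Sigma0, p0)
```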
This can be visualized below Here the two samples have the same variance matrix and the frontier is ## Link with the logistic regression Assume as previously that$$\mathbf{X}\vert Y=0\sim\mathcal{N}(\mathbf{\mu}_0,\mathbf{\Sigma})$$and$$\mathbf{X}\vert Y=1\sim\mathcal{N}(\mathbf{\mu}_1,\mathbf{\Sigma})$$then$$\log\frac{\mathbb{P}(Y=1\vert \mathbf{X}=\mathbf{x})}{\mathbb{P}(Y=0\vert \mathbf{X}=\mathbf{x})}$$is equal to $$\mathbf{x}^{\text{T}}\mathbf{\Sigma}^{-1}[\mathbf{\mu}_1-\mathbf{\mu}_0]-\frac{1}{2}[\mathbf{\mu}_1+\mathbf{\mu}_0]^{\text{T}}\mathbf{\Sigma}^{-1}[\mathbf{\mu}_1-\mathbf{\mu}_0]+\log\frac{\mathbb{P}(Y=1)}{\mathbb{P}(Y=0)}$$which is linear in $\mathbf{x}$,$$\log\frac{\mathbb{P}(Y=1\vert \mathbf{X}=\mathbf{x})}{\mathbb{P}(Y=0\vert \mathbf{X}=\mathbf{x})}=\mathbf{x}^{\text{T}}\mathbf{\beta}$$(up to an intercept term). Hence, when the two groups have Gaussian distributions with identical variance matrix, LDA and the logistic regression lead to the same classification rule. Observe furthermore that the slope is proportional to $\mathbf{\Sigma}^{-1}[\mathbf{\mu}_1-\mathbf{\mu}_0]$, as stated in Fisher's article. But to obtain such a relationship, he observed that the ratio of between and within variances (in the two groups) was$$\frac{\text{variance between}}{\text{variance within}}=\frac{[\mathbf{\omega}^{\text{T}}\mathbf{\mu}_1-\mathbf{\omega}^{\text{T}}\mathbf{\mu}_0]^2}{\mathbf{\omega}^{\text{T}}\mathbf{\Sigma}_1\mathbf{\omega}+\mathbf{\omega}^{\text{T}}\mathbf{\Sigma}_0\mathbf{\omega}}$$which is maximal when $\mathbf{\omega}$ is proportional to $\mathbf{\Sigma}^{-1}[\mathbf{\mu}_1-\mathbf{\mu}_0]$, when $\mathbf{\Sigma}_0=\mathbf{\Sigma}_1$.
## Homebrew linear discriminant analysis

To compute vector $\mathbf{\omega}$

m0 = apply(myocarde[myocarde$PRONO=="0",1:7],2,mean)
m1 = apply(myocarde[myocarde$PRONO=="1",1:7],2,mean)
Sigma = var(myocarde[,1:7])
omega = solve(Sigma)%*%(m1-m0)
omega
                [,1]
FRCAR -0.012909708542
INCAR  1.088582058796
INSYS -0.019390084344
PRDIA -0.025817110020
PAPUL  0.020441287970
PVENT -0.038298291091
REPUL -0.001371677757

For the constant b in the equation of the frontier, $\omega^T\mathbf{x}=b$: if we assume equal prior probabilities, use

b = (t(m1)%*%solve(Sigma)%*%m1-t(m0)%*%solve(Sigma)%*%m0)/2

## Application (on the small dataset)

In order to visualize what's going on, consider the small dataset, with only two covariates,

x = c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
y = c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
z = c(1,1,1,1,1,0,0,1,0,0)
df = data.frame(x1=x,x2=y,y=as.factor(z))
m0 = apply(df[df$y=="0",1:2],2,mean)
m1 = apply(df[df$y=="1",1:2],2,mean)
Sigma = var(df[,1:2])
omega = solve(Sigma)%*%(m1-m0)
omega
         [,1]
x1 -2.640613174
x2  4.858705676

Using R's regular function, we get

library(MASS)
fit_lda = lda(y ~x1+x2 , data=df)
fit_lda

Coefficients of linear discriminants:
            LD1
x1 -2.588389554
x2  4.762614663

which is proportional to the coefficients we got with our own code (lda rescales the discriminant vector).
For the constant, use

b = (t(m1)%*%solve(Sigma)%*%m1-t(m0)%*%solve(Sigma)%*%m0)/2

If we plot it, we get the red straight line

plot(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")])
abline(a=b/omega[2],b=-omega[1]/omega[2],col="red")

As we can see (with the blue points), our red line passes through the midpoint of the segment joining the two barycenters

points(m0["x1"],m0["x2"],pch=4)
points(m1["x1"],m1["x2"],pch=4)
segments(m0["x1"],m0["x2"],m1["x1"],m1["x2"],col="blue")
points(.5*m0["x1"]+.5*m1["x1"],.5*m0["x2"]+.5*m1["x2"],col="blue",pch=19)

Of course, we can also use the R function

predlda = function(x,y) predict(fit_lda, data.frame(x1=x,x2=y))$class==1
vu = seq(-.1,1.1,length=251)
vv = outer(vu,vu,predlda)
contour(vu,vu,vv,add=TRUE,lwd=2,levels = .5)

One can also consider the quadratic discriminant analysis, since it might be difficult to argue that $\mathbf{\Sigma}_0=\mathbf{\Sigma}_1$

fit_qda = qda(y ~x1+x2 , data=df)

The separation curve is here

plot(df$x1,df$x2,pch=19, col=c("blue","red")[1+(df$y=="1")])
predqda = function(x,y) predict(fit_qda, data.frame(x1=x,x2=y))$class==1
vv = outer(vu,vu,predqda)
contour(vu,vu,vv,add=TRUE,lwd=2,levels = .5)

# Classification from scratch, SVM 7/8

Seventh post of our series on classification from scratch. The latest one was on the neural nets, and today, we will discuss SVM, support vector machines.

## A formal introduction

Here $y$ takes values in $\{-1,+1\}$. Our model will be $$m(\mathbf{x})=\text{sign}[\mathbf{\omega}^T\mathbf{x}+b]$$ Thus, the space is divided by a (linear) border$$\Delta:\lbrace\mathbf{x}\in\mathbb{R}^p:\mathbf{\omega}^T\mathbf{x}+b=0\rbrace$$ The distance from point $\mathbf{x}_i$ to $\Delta$ is $$d(\mathbf{x}_i,\Delta)=\frac{\vert\mathbf{\omega}^T\mathbf{x}_i+b\vert}{\|\mathbf{\omega}\|}$$If the space is linearly separable, the problem is ill posed (there is an infinite number of solutions).
So consider $$\max_{\mathbf{\omega},b}\left\lbrace\min_{i=1,\cdots,n}\left\lbrace\text{distance}(\mathbf{x}_i,\Delta)\right\rbrace\right\rbrace$$ The strategy is to maximize the margin. One can prove that we want to solve $$\max_{\mathbf{\omega},b,m}\left\lbrace\frac{m}{\|\mathbf{\omega}\|}\right\rbrace$$ subject to $y_i\cdot(\mathbf{\omega}^T\mathbf{x}_i+b)\geq m$, $\forall i=1,\cdots,n$. Again, the problem is ill posed (non identifiable), and we can consider $m=1$: $$\max_{\mathbf{\omega},b}\left\lbrace\frac{1}{\|\mathbf{\omega}\|}\right\rbrace$$ subject to $y_i\cdot(\mathbf{\omega}^T\mathbf{x}_i+b)\geq 1$, $\forall i=1,\cdots,n$. The optimization objective can be written$$\min_{\mathbf{\omega}}\left\lbrace\|\mathbf{\omega}\|^2\right\rbrace$$

## The primal problem

In the separable case, consider the following primal problem,$$\min_{\mathbf{w}\in\mathbb{R}^d,b\in\mathbb{R}}\left\lbrace\frac{1}{2}\|\mathbf{\omega}\|^2\right\rbrace$$subject to $y_i\cdot (\mathbf{\omega}^T\mathbf{x}_i+b)\geq 1$, $\forall i=1,\cdots,n$. In the non-separable case, introduce slack (error) variables $\mathbf{\xi}$: if $y_i\cdot (\mathbf{\omega}^T\mathbf{x}_i+b)\geq 1$, there is no error, $\xi_i=0$. Let $C$ denote the cost of misclassification. The optimization problem becomes$$\min_{\mathbf{w}\in\mathbb{R}^d,b\in\mathbb{R},{\color{red}{\mathbf{\xi}}}\in\mathbb{R}^n}\left\lbrace\frac{1}{2}\|\mathbf{\omega}\|^2 + C\sum_{i=1}^n\xi_i\right\rbrace$$subject to $y_i\cdot (\mathbf{\omega}^T\mathbf{x}_i+b)\geq 1-{\color{red}{\xi_i}}$, with ${\color{red}{\xi_i}}\geq 0$, $\forall i=1,\cdots,n$.

Let us try to code this optimization problem. The dataset is here

n = length(myocarde[,"PRONO"])
myocarde0 = myocarde
myocarde0$PRONO = myocarde$PRONO*2-1
C = .5

and we have to set a value for the cost $C$. In the (linearly) constrained optimization function in R, we need to provide the objective function $f(\mathbf{\theta})$ and the gradient $\nabla f(\mathbf{\theta})$.
f = function(param){
  w  = param[1:7]
  b  = param[8]
  xi = param[8+1:nrow(myocarde)]
  .5*sum(w^2) + C*sum(xi)}
grad_f = function(param){
  w  = param[1:7]
  b  = param[8]
  xi = param[8+1:nrow(myocarde)]
  c(2*w,0,rep(C,length(xi)))}

and (linear) constraints are written as $\mathbf{U}\mathbf{\theta}-\mathbf{c}\geq \mathbf{0}$

Ui = rbind(cbind(myocarde0[,"PRONO"]*as.matrix(myocarde[,1:7]), myocarde0[,"PRONO"], diag(n)),
           cbind(matrix(0,n,7), matrix(0,n,1), diag(n)))
Ci = c(rep(1,n),rep(0,n))

(the columns follow the ordering of param, i.e. $\mathbf{\omega}$, then $b$, then $\mathbf{\xi}$). Then we use

constrOptim(theta=p_init, f, grad_f, ui = Ui, ci = Ci)

Observe that something is missing here: we need a starting point for the algorithm, $\mathbf{\theta}_0$. Unfortunately, I could not think of a simple technique to get a valid starting point (that satisfies those linear constraints). Let us try something else, because those functions are quite simple: either linear or quadratic. Actually, one can recognize in the separable case, but also in the non-separable case, a classic quadratic program$$\min_{\mathbf{z}\in\mathbb{R}^d}\left\lbrace\frac{1}{2}\mathbf{z}^T\mathbf{D}\mathbf{z}-\mathbf{d}^T\mathbf{z}\right\rbrace$$subject to $\mathbf{A}\mathbf{z}\geq\mathbf{b}$.

library(quadprog)
eps = 5e-4
y = myocarde[,"PRONO"]*2-1
X = as.matrix(cbind(1,myocarde[,1:7]))
n = length(y)
D = diag(n+7+1)
diag(D)[8+0:n] = 0
d = matrix(c(rep(0,7),0,rep(C,n)), nrow=n+7+1)
A = Ui
b = Ci
sol = solve.QP(D+eps*diag(n+7+1), d, t(A), b, meq=1, factorized=FALSE)
qpsol = sol$solution
(omega = qpsol[1:7])
[1] -0.106642005446 -0.002026198103 -0.022513312261 -0.018958578746 -0.023105767847 -0.018958578746 -1.080638988521
(b = qpsol[n+7+1])
[1] 997.6289927

Given an observation $\mathbf{x}$, the prediction is $$y=\text{sign}[\mathbf{\omega}^T\mathbf{x}+b]$$

y_pred = 2*((as.matrix(myocarde0[,1:7])%*%omega+b)>0)-1

Observe that here, we do have a classifier, depending if the point lies on the left or on the right (above or below, etc.) of the separating line (or hyperplane).
We do not have a probability, because there is no probabilistic model here. So far.

## The dual problem

The Lagrangian of the separable problem could be written, introducing Lagrange multipliers $\mathbf{\alpha}\in\mathbb{R}^n$, $\mathbf{\alpha}\geq \mathbf{0}$, as$$\mathcal{L}(\mathbf{\omega},b,\mathbf{\alpha})=\frac{1}{2}\|\mathbf{\omega}\|^2-\sum_{i=1}^n \alpha_i\big(y_i(\mathbf{\omega}^T\mathbf{x}_i+b)-1\big)$$Somehow, $\alpha_i$ represents the influence of the observation $(y_i,\mathbf{x}_i)$. Consider the Dual Problem, with $\mathbf{G}=[G_{ij}]$ and $G_{ij}=y_iy_j\mathbf{x}_j^T\mathbf{x}_i$, $$\min_{\mathbf{\alpha}\in\mathbb{R}^n}\left\lbrace\frac{1}{2}\mathbf{\alpha}^T\mathbf{G}\mathbf{\alpha}-\mathbf{1}^T\mathbf{\alpha}\right\rbrace$$ subject to $\mathbf{y}^T\mathbf{\alpha}=\mathbf{0}$ and $\mathbf{\alpha}\geq\mathbf{0}$. The Lagrangian of the non-separable problem could be written introducing Lagrange multipliers $\mathbf{\alpha},{\color{red}{\mathbf{\beta}}}\in\mathbb{R}^n$, $\mathbf{\alpha},{\color{red}{\mathbf{\beta}}}\geq \mathbf{0}$, and define the Lagrangian $\mathcal{L}(\mathbf{\omega},b,{\color{red}{\mathbf{\xi}}},\mathbf{\alpha},{\color{red}{\mathbf{\beta}}})$ as$$\frac{1}{2}\|\mathbf{\omega}\|^2+{\color{blue}{C}}\sum_{i=1}^n{\color{red}{\xi_i}}-\sum_{i=1}^n \alpha_i\big(y_i(\mathbf{\omega}^T\mathbf{x}_i+b)-1+{\color{red}{\xi_i}}\big)-\sum_{i=1}^n{\color{red}{\beta_i}}{\color{red}{\xi_i}}$$ Again, $\alpha_i$ represents the influence of the observation $(y_i,\mathbf{x}_i)$. The Dual Problem becomes, with $\mathbf{G}=[G_{ij}]$ and $G_{ij}=y_iy_j\mathbf{x}_j^T\mathbf{x}_i$,$$\min_{\mathbf{\alpha}\in\mathbb{R}^n}\left\lbrace\frac{1}{2}\mathbf{\alpha}^T\mathbf{G}\mathbf{\alpha}-\mathbf{1}^T\mathbf{\alpha}\right\rbrace$$ subject to $\mathbf{y}^T\mathbf{\alpha}=\mathbf{0}$, $\mathbf{\alpha}\geq\mathbf{0}$ and $\mathbf{\alpha}\leq {\color{blue}{C}}$.
As previously, one can also use quadratic programming

library(quadprog)
eps = 5e-4
y = myocarde[,"PRONO"]*2-1
X = as.matrix(cbind(1,myocarde[,1:7]))
n = length(y)
Q = sapply(1:n, function(i) y[i]*t(X)[,i])
D = t(Q)%*%Q
d = matrix(1, nrow=n)
A = rbind(y,diag(n),-diag(n))
C = .5
b = c(0,rep(0,n),rep(-C,n))
sol = solve.QP(D+eps*diag(n), d, t(A), b, meq=1, factorized=FALSE)
qpsol = sol$solution

The two problems are connected in the sense that for all $\mathbf{x}$,$$\mathbf{\omega}^T\mathbf{x}+b = \sum_{i=1}^n \alpha_i y_i (\mathbf{x}^T\mathbf{x}_i)+b$$ To recover the solution of the primal problem,$$\mathbf{\omega}=\sum_{i=1}^n \alpha_iy_i \mathbf{x}_i$$thus

omega = apply(qpsol*y*X,2,sum)
omega
                            1                         FRCAR                         INCAR                         INSYS
  0.0000000000000002439074265   0.0550138658687635215271960  -0.0920163239049630876653652   0.3609571899422952534486342
                        PRDIA                         PAPUL                         PVENT                         REPUL
 -0.1094017965288692356695677  -0.0485213403643276475207813  -0.0660058643191372279579454   0.0010093656567606212794835

while $b=y_i-\mathbf{\omega}^T\mathbf{x}_i$ for any support vector (but actually, one can add the constant vector in the matrix of explanatory variables). More generally, consider the following function (to make sure that $\mathbf{D}$ is a positive-definite matrix, we use the nearPD function).
svm.fit = function(X, y, C=NULL) {
  n.samples = nrow(X)
  n.features = ncol(X)
  K = matrix(rep(0, n.samples*n.samples), nrow=n.samples)
  for (i in 1:n.samples){
    for (j in 1:n.samples){
      K[i,j] = X[i,] %*% X[j,] }}
  Dmat = outer(y,y) * K
  Dmat = as.matrix(nearPD(Dmat)$mat)
  dvec = rep(1, n.samples)
  Amat = rbind(y, diag(n.samples), -1*diag(n.samples))
  bvec = c(0, rep(0, n.samples), rep(-C, n.samples))
  res = solve.QP(Dmat,dvec,t(Amat),bvec=bvec, meq=1)
  a = res$solution
  bomega = apply(a*y*X,2,sum)
  return(bomega)
}

On our dataset, we obtain

M = as.matrix(myocarde[,1:7])
center = function(z) (z-mean(z))/sd(z)
for(j in 1:7) M[,j] = center(M[,j])
bomega = svm.fit(cbind(1,M),myocarde$PRONO*2-1,C=.5)
y_pred = 2*((cbind(1,M)%*%bomega)>0)-1
table(obs=myocarde0$PRONO,pred=y_pred)
    pred
obs  -1  1
  -1 27  2
  1   9 33

i.e. 11 misclassifications, out of 71 points (which is also what we got with the logistic regression).

## Kernel Based Approach

In some cases, it might be difficult to separate the two sets of points with a linear separator, like below. It might be difficult, here, because we want to find a straight line in the two-dimensional space $(x_1,x_2)$. But maybe, we can distort the space, possibly by adding another dimension. That's heuristically the idea. Because in the case above, in dimension 3, the set of points is now linearly separable. And the trick to do so is to use a kernel. The difficult task is to find the good one (if any). A positive kernel on $\mathcal{X}$ is a function $K:\mathcal{X}\times\mathcal{X}\rightarrow\mathbb{R}$, symmetric, and such that for any $n$, $\forall\alpha_1,\cdots,\alpha_n$ and $\forall\mathbf{x}_1,\cdots,\mathbf{x}_n$,$$\sum_{i=1}^n\sum_{j=1}^n\alpha_i\alpha_j k(\mathbf{x}_i,\mathbf{x}_j)\geq 0.$$ For example, the linear kernel is $k(\mathbf{x}_i,\mathbf{x}_j)=\mathbf{x}_i^T\mathbf{x}_j$. That's what we've been using here, so far.
One can also define the product kernel $k(\mathbf{x}_i,\mathbf{x}_j)=\kappa(\mathbf{x}_i)\cdot\kappa(\mathbf{x}_j)$ where $\kappa$ is some function $\mathcal{X}\rightarrow\mathbb{R}$. Finally, the Gaussian kernel is $k(\mathbf{x}_i,\mathbf{x}_j)=\exp[-\|\mathbf{x}_i-\mathbf{x}_j\|^2]$. Since it is a function of $\|\mathbf{x}_i-\mathbf{x}_j\|$, it is also called a radial kernel.

linear.kernel = function(x1, x2) {
  return (x1%*%x2)
}
svm.fit = function(X, y, FUN=linear.kernel, C=NULL) {
  n.samples = nrow(X)
  n.features = ncol(X)
  K = matrix(rep(0, n.samples*n.samples), nrow=n.samples)
  for (i in 1:n.samples){
    for (j in 1:n.samples){
      K[i,j] = FUN(X[i,], X[j,])
    }
  }
  Dmat = outer(y,y) * K
  Dmat = as.matrix(nearPD(Dmat)$mat)
  dvec = rep(1, n.samples)
  Amat = rbind(y, diag(n.samples), -1*diag(n.samples))
  bvec = c(0, rep(0, n.samples), rep(-C, n.samples))
  res = solve.QP(Dmat,dvec,t(Amat),bvec=bvec, meq=1)
  a = res$solution
  bomega = apply(a*y*X,2,sum)
  return(bomega)
}

## Link to the regression

To relate this duality optimization problem to OLS, recall that $y=\mathbf{x}^T\mathbf{\omega}+\varepsilon$, so that $\widehat{y}=\mathbf{x}^T\widehat{\mathbf{\omega}}$, where $\widehat{\mathbf{\omega}}=[\mathbf{X}^T\mathbf{X}]^{-1}\mathbf{X}^T\mathbf{y}$. But one can also write $$y=\mathbf{x}^T\widehat{\mathbf{\omega}}=\sum_{i=1}^n \widehat{\alpha}_i\cdot \mathbf{x}^T\mathbf{x}_i$$ where $\widehat{\mathbf{\alpha}}=\mathbf{X}[\mathbf{X}^T\mathbf{X}]^{-1}\widehat{\mathbf{\omega}}$, or conversely, $\widehat{\mathbf{\omega}}=\mathbf{X}^T\widehat{\mathbf{\alpha}}$.

## Application (on our small dataset)

One can actually use a dedicated R package to run an SVM.
To get the linear kernel, use

library(kernlab)
df0 = df
df0$y = 2*(df$y=="1")-1
SVM1 = ksvm(y ~ x1 + x2, data = df0, C=.5, kernel = "vanilladot" , type="C-svc")

Since the dataset is not linearly separable, there will be some mistakes here

table(df0$y,predict(SVM1))
     -1 1
  -1  2 2
  1   1 5

The problem with that function is that it cannot be used to get a prediction for other points than those in the sample (and I could neither extract $\omega$ nor $b$ from the 24 slots of that object). But it's possible by adding a small option in the function

SVM2 = ksvm(y ~ x1 + x2, data = df0, C=.5, kernel = "vanilladot" , prob.model=TRUE, type="C-svc")

With that function, we convert the distance into some sort of probability. Someday, I will try to replicate the probabilistic version of SVM, I promise, but today, the goal is just to understand what is done when running the SVM algorithm. To visualize the prediction, use

pred_SVM2 = function(x,y){
  return(predict(SVM2,newdata=data.frame(x1=x,x2=y), type="probabilities")[,2])}
plot(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],
  cex=1.5,xlab="", ylab="",xlim=c(0,1),ylim=c(0,1))
vu = seq(-.1,1.1,length=251)
vv = outer(vu,vu,function(x,y) pred_SVM2(x,y))
contour(vu,vu,vv,add=TRUE,lwd=2,levels = .5,col="red")

Here the cost is $C=.5$, but of course, we can change it

SVM2 = ksvm(y ~ x1 + x2, data = df0, C=2, kernel = "vanilladot" , prob.model=TRUE, type="C-svc")
pred_SVM2 = function(x,y){
  return(predict(SVM2,newdata=data.frame(x1=x,x2=y), type="probabilities")[,2])}
plot(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],
  cex=1.5,xlab="", ylab="",xlim=c(0,1),ylim=c(0,1))
vu = seq(-.1,1.1,length=251)
vv = outer(vu,vu,function(x,y) pred_SVM2(x,y))
contour(vu,vu,vv,add=TRUE,lwd=2,levels = .5,col="red")

As expected, we have a linear separator. But slightly different.
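Before switching to kernlab's built-in radial kernel, note that a Gaussian kernel function can be written in a couple of lines and plugged into the homemade svm.fit above through its FUN argument (my own sketch; the bandwidth parameter sigma is an addition, the version in the text has no bandwidth):

```r
# Gaussian (radial) kernel; with sigma = 1/sqrt(2) this matches
# k(x1,x2) = exp(-||x1-x2||^2) as defined above
gaussian.kernel = function(x1, x2, sigma=1){
  exp(-sum((x1-x2)^2)/(2*sigma^2))
}
```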
Now, let us consider the “Radial Basis Gaussian kernel”

SVM3 = ksvm(y ~ x1 + x2, data = df0, C=2, kernel = "rbfdot" , prob.model=TRUE, type="C-svc")

Observe that here, we've been able to separate the white and the black points

table(df0$y,predict(SVM3))
     -1 1
  -1  4 0
  1   0 6

pred_SVM3 = function(x,y){
  return(predict(SVM3,newdata=data.frame(x1=x,x2=y), type="probabilities")[,2])}
plot(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],
  cex=1.5,xlab="", ylab="",xlim=c(0,1),ylim=c(0,1))
vu = seq(-.1,1.1,length=251)
vv = outer(vu,vu,function(x,y) pred_SVM3(x,y))
contour(vu,vu,vv,add=TRUE,lwd=2,levels = .5,col="red")

Now, to be completely honest, if I understand the theory of the algorithm used to compute $\omega$ and $b$ with the linear kernel (using quadratic programming), I do not feel comfortable with this R function. Especially if you run it several times… you can get (with exactly the same set of parameters) or (to be continued…)

# Traveling Salesman

In the second part of the course on graphs and networks, we will focus on economic applications, and flows. The first series of slides are on the traveling salesman problem. Slides are available online.

# Simple and heuristic optimization

This week, at the Rmetrics conference, there has been an interesting discussion about heuristic optimization. The starting point was simple: in complex optimization problems (here we mean with a lot of local maxima, for instance), we do not necessarily need extremely advanced algorithms that do converge extremely fast, if we cannot ensure that they reach the optimum. Converging extremely fast, with a great numerical precision, to some point (that is not the point we're looking for) is useless. And some algorithms might be much slower, but at least, they are much more likely to converge to the optimum. Wherever we start from. We have experienced that with Mathieu, while we were looking for the maximum likelihood of our MINAR process: genetic algorithms have performed extremely well. The idea is extremely simple, and natural. Let us consider as a starting point the following algorithm,

1. Start from some initial point $x_0$
2.
At step $k$, draw a point $\tilde{x}$ in a neighborhood of $x_{k-1}$,
• either $f(\tilde{x})>f(x_{k-1})$, then set $x_k=\tilde{x}$
• or else set $x_k=x_{k-1}$

This is simple (if you do not enter into details about what such a neighborhood should be). But using that kind of algorithm, you might get trapped and attracted to some local optima if the neighborhood is not large enough. An alternative to this technique is the following: it might be interesting to change a bit more, and instead of changing when we have a maximum, we change if we have almost a maximum. Namely at step $k$,
• either $f(\tilde{x})+\varepsilon>f(x_{k-1})$, then set $x_k=\tilde{x}$
• or else set $x_k=x_{k-1}$
for some $\varepsilon>0$. To illustrate the idea, consider the following function

> f=function(x,y) { r <- sqrt(x^2+y^2);
+ 1.1^(x+y)*10 * sin(r)/r }

(on some bounded support). Here, by picking noise and $\varepsilon$ values arbitrarily, we have obtained the following scenarios

> x0=15
> MX=matrix(NA,501,2)
> MX[1,]=runif(2,-x0,x0)
> k=.5
> for(s in 2:501){
+ bruit=rnorm(2)
+ X=MX[s-1,]+bruit*3
+ if(X[1]>x0){X[1]=x0}
+ if(X[1]<(-x0)){X[1]=-x0}
+ if(X[2]>x0){X[2]=x0}
+ if(X[2]<(-x0)){X[2]=-x0}
+ if(f(X[1],X[2])+k>f(MX[s-1,1],
+ MX[s-1,2])){MX[s,]=X}
+ if(f(X[1],X[2])+k<=f(MX[s-1,1],
+ MX[s-1,2])){MX[s,]=MX[s-1,]}
+ }

It does not always converge towards the optimum, and sometimes, we just missed it after being extremely unlucky. Note that if we run 10,000 scenarios (with different random noises and starting points), in 50% of the scenarios, we reach the maxima. Or at least, we are next to it, on top. What if we compare with a standard optimization routine, like Nelder-Mead, or quasi-gradient? Since we look for the maxima on a restricted domain, we can use the following function,

> g=function(x) f(x[1],x[2])
> optim(X0, g, method="L-BFGS-B",
+ lower=-c(x0,x0), upper=c(x0,x0))$par

In that case, if we run the algorithm with 10,000 random starting points, this is where we end, below on the right (while the heuristic technique is on the left). In only 15% of the scenarios, we have been able to reach the region where the maximum is.
So here, it looks like a heuristic method works extremely well, if we do not need to reach the maximum with great precision. Which is usually the case actually.

# EM and mixture estimation

Following my previous post on optimization and mixtures (here), Nicolas told me that my idea was probably not the most clever one (there). So, we get back to our simple mixture model,$$f(x)=p\,f_1(x)+(1-p)\,f_2(x)$$In order to describe how the EM algorithm works, assume first that both $f_1$ and $f_2$ are perfectly known, and the mixture parameter $p$ is the only one we care about.

• The simple model, with only one parameter that is unknown

Here, the likelihood is$$\mathcal{L}(p)=\prod_{i=1}^n\big[p\,f_1(x_i)+(1-p)\,f_2(x_i)\big]$$so that we write the log-likelihood as$$\log\mathcal{L}(p)=\sum_{i=1}^n\log\big[p\,f_1(x_i)+(1-p)\,f_2(x_i)\big]$$which might not be simple to maximize. Recall that the mixture model can be interpreted through a latent variable $Z_i$ (that cannot be observed), taking value 1 when $X_i$ is drawn from $f_1$, and 0 if it is drawn from $f_2$. More generally (especially in the case we want to extend our model to 3, 4, … mixtures), $\mathbb{P}(Z_i=1)=p$ and $\mathbb{P}(Z_i=0)=1-p$. With that notation, the likelihood becomes$$\prod_{i=1}^n\big[p\,f_1(x_i)\big]^{Z_i}\big[(1-p)\,f_2(x_i)\big]^{1-Z_i}$$and the log-likelihood$$\sum_{i=1}^n\big[Z_i\log p+(1-Z_i)\log(1-p)\big]+\sum_{i=1}^n\big[Z_i\log f_1(x_i)+(1-Z_i)\log f_2(x_i)\big]$$The term on the right is useless since we only care about $p$, here. From here, consider the following iterative procedure. Assume that the mixture probability is known, denoted $p^{(k)}$. Then I can predict the value of $Z_i$ (i.e. whether each observation comes from $f_1$ or $f_2$) for all observations,$$\theta_i=\mathbb{E}\big(Z_i\vert X_i,p^{(k)}\big)=\frac{p^{(k)}f_1(x_i)}{p^{(k)}f_1(x_i)+(1-p^{(k)})f_2(x_i)}$$So I can inject those values into my log-likelihood, i.e. in$$\sum_{i=1}^n\big[\theta_i\log p+(1-\theta_i)\log(1-p)\big]$$which has maximum (no need to run numerical tools here)$$\frac{1}{n}\sum_{i=1}^n\theta_i$$that will be denoted $p^{(k+1)}$. And I can iterate from here. Formally, the first step is the E step, where we calculate an expected value, $\theta_i$ being the best predictor of $Z_i$ given my observations (as well as my belief in $p$). Then comes the maximization (M) step, where using those $\theta_i$'s, I can estimate probability $p$.

• A more general framework, all parameters are now unknown

So far, it was simple, since we assumed that $f_1$ and $f_2$ were perfectly known. Which is not realistic. And there is not much to change to get a complete algorithm, to estimate all the parameters (the mixture probability, the means and the variances). Recall that we had $\theta_i$, which was the expected value of $Z_i$, i.e.
it is a probability that observation i has been drawn from $f_1$. If those quantities, instead of being in the segment $[0,1]$, were in $\{0,1\}$, then we could have considered the mean and standard deviation of the observations such that $Z_i=0$, and similarly on the subset of observations such that $Z_i=1$. But we can't. So what can be done is to consider $\theta_i$ as the weight we should give to observation i when estimating the parameters of $f_1$, and similarly, $1-\theta_i$ would be the weight given to observation i when estimating the parameters of $f_2$. So we set, as before,$$p=\frac{1}{n}\sum_{i=1}^n\theta_i$$and then$$\mu_1=\frac{\sum_i \theta_i x_i}{\sum_i \theta_i},\qquad \mu_2=\frac{\sum_i (1-\theta_i) x_i}{\sum_i (1-\theta_i)}$$and for the variance, well, it is a weighted mean again,$$\sigma_1^2=\frac{\sum_i \theta_i (x_i-\mu_1)^2}{\sum_i \theta_i},\qquad \sigma_2^2=\frac{\sum_i (1-\theta_i) (x_i-\mu_2)^2}{\sum_i (1-\theta_i)}$$and this is it.

• Let us run the code on the same data as before

Here, the code is rather simple: let us start by generating a sample

> n = 200
> X1 = rnorm(n,0,1)
> X20 = rnorm(n,0,1)
> Z = sample(c(1,2,2),size=n,replace=TRUE)
> X2 = 4+X20
> X = c(X1[Z==1],X2[Z==2])

then, given a vector of initial values (the mixture probability, the two means and the two variances),

> s = c(0.5, mean(X)-1, mean(X)+1, var(X), var(X))

I define my function as,

> em = function(X0,s) {
+ Ep = s[1]*dnorm(X0, s[2], sqrt(s[4]))/(s[1]*dnorm(X0, s[2], sqrt(s[4])) +
+ (1-s[1])*dnorm(X0, s[3], sqrt(s[5])))
+ s[1] = mean(Ep)
+ s[2] = sum(Ep*X0) / sum(Ep)
+ s[3] = sum((1-Ep)*X0) / sum(1-Ep)
+ s[4] = sum(Ep*(X0-s[2])^2) / sum(Ep)
+ s[5] = sum((1-Ep)*(X0-s[3])^2) / sum(1-Ep)
+ return(s)
+ }

(the components of s are $p$, $\mu_1$, $\mu_2$, $\sigma_1^2$ and $\sigma_2^2$). Then I get an updated vector s, and I can iterate. So this is it! We just need to iterate (here I stop after 200 iterations), and we can see that, actually, our algorithm converges quite fast,

> for(i in 2:200){
+ s=em(X,s)
+ }

Let us run the same procedure as before, i.e. I generate samples of size 200, where the difference between the means can be small (0) or large (4). Ok, Nicolas, you were right, we're doing much better! Maybe we should also go for a Gibbs sampling procedure?… next time, maybe….

# Optimization and mixture estimation

Recently, one of my students asked me about optimization routines in R.
He told me that R performed well on the estimation of a time series model with different regimes, while he had trouble with a (simple) GARCH process, and he was wondering if R was good at optimization routines. Actually, I always thought that mixtures (and regimes) were something difficult to estimate, so I was a bit surprised… Indeed, it reminded me of some trouble I experienced once, while I was talking about maximum likelihood estimation, for non-standard distributions, i.e. when optimization had to be done on the log-likelihood function. Even when generating nice samples, giving appropriate initial values (actually the true values used in the random generation), each time I tried to optimize my log-likelihood, it failed. So I decided to play a little bit with standard optimization functions, to see which one performed better when trying to estimate the mixture parameter (from a mixture-based sample). Here, I generate a mixture of two Gaussian distributions, and I would like to see how different the means should be to have a high probability of estimating properly the parameters of the mixture. The density is here proportional to$$p\,\varphi(x)+(1-p)\,\varphi(x-m)$$where $\varphi$ is the standard Gaussian density. The true model has $p=1/3$ (given the sampling scheme below), and $m$ is a parameter that will change, from 0 to 4.
The log-likelihood (actually, I add a minus sign since most of the optimization functions actually minimize functions) is

> logvraineg <- function(param, obs) {
+ p <- param[1]
+ m1 <- param[2]
+ sd1 <- param[3]
+ m2 <- param[4]
+ sd2 <- param[5]
+ -sum(log(p * dnorm(x = obs, mean = m1, sd = sd1) + (1 - p) *
+ dnorm(x = obs, mean = m2, sd = sd2)))
+ }

The code to generate my samples is the following,

> X1 = rnorm(n,0,1)
> X20 = rnorm(n,0,1)
> Z = sample(c(1,2,2),size=n,replace=TRUE)
> X2 = m+X20
> X = c(X1[Z==1],X2[Z==2])

Then I use two functions to optimize my log-likelihood, with identical initial values,

> O1 = nlm(f = logvraineg, p = c(.5, mean(X)-sd(X)/5, sd(X), mean(X)+sd(X)/5, sd(X)), obs = X)
> logvrainegX <- function(param) {logvraineg(param,X)}
> O2 = optim( par = c(.5, mean(X)-sd(X)/5, sd(X), mean(X)+sd(X)/5, sd(X)),
+ fn = logvrainegX)

Actually, since I might have identification problems, I take either $\hat{p}$ or $1-\hat{p}$, depending on which of the two estimated means is the smallest. On the graph above, the x-axis is the difference between the means of the mixture (as on the animated graph above). Then, the red point is the median of the estimated parameter (here $\hat{p}$), and I have included something that can be interpreted as a confidence interval, i.e. where I have been in 90% of my scenarios: the black vertical segments. Obviously, when the sample is not heterogeneous enough (i.e. the two means are not different enough), I cannot estimate properly my parameters; I might even have a probability that exceeds 1 (I did not add any constraint). The blue plain horizontal line is the true value of the parameter, while the blue dotted horizontal line is the initial value of the parameter in the optimization algorithm (I started assuming that the mixture probability was around 0.2).
The graph below is based on the second optimization routine (with identical starting values, and of course on the same generated samples); (just to be honest, in many cases, it did not converge, so the loop stopped, and I had to run it again… so finally, my study is based on a bit less than 500 samples (times 15 since I considered several values for the mean of my second underlying distribution), with 200 generated observations from a mixture). The graph below compares the two (empty circles are the first algorithm, while plain circles are the second one). On average, it is not so bad…. but the probability to be far away from the true value is not small at all… except when the difference between the two means exceeds 3… If I change the starting values for the optimization algorithm (previously, I assumed that the mixture probability was 1/5, here I start from 1/2), we have the following graph, which looks like the previous one, except for small differences between the two underlying distributions (just as if initial values had no impact on the optimization, but it might come from the fact that the surface is nice, and we are not trapped in regions of local minima). Thus, I am far from being an expert in optimization routines in R (see here for further information), but so far, it looks like R is not doing so bad… and the two algorithms perform similarly (maybe the first one being a bit closer to the true parameter).
http://mathhelpforum.com/calculus/144153-find-length-curve-over-interval.html
# Math Help - find the length of the curve over the interval

1. ## find the length of the curve over the interval

i kinda got stuck on this problem and im not sure where to go with it??? any suggestions? thx in advance (attachment: IMG_0002.pdf)

2. Originally Posted by slapmaxwell1
i kinda got stuck on this problem and im not sure where to go with it??? any suggestions? thx in advance (attachment: IMG_0002.pdf)

1 + cos(θ) = 2cos^2(θ/2)

Now proceed with integration.
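The attachment is not shown, but the hint strongly suggests the cardioid r = 1 + cos θ over [0, 2π] (an assumption on my part). The half-angle identity turns the arc-length integrand √(r² + (dr/dθ)²) = √(2 + 2cos θ) into 2|cos(θ/2)|, which integrates to exactly 8; a quick numeric check:

```python
import math
from scipy.integrate import quad

# polar arc length: L = ∫ sqrt(r(θ)^2 + r'(θ)^2) dθ over [0, 2π]
r = lambda t: 1 + math.cos(t)
dr = lambda t: -math.sin(t)
integrand = lambda t: math.sqrt(r(t) ** 2 + dr(t) ** 2)

# sqrt(2 + 2cos t) = 2|cos(t/2)| has a corner at t = π, so split there
L, _ = quad(integrand, 0, 2 * math.pi, points=[math.pi])
```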
http://mathhelpforum.com/algebra/41451-ratio.html
# Math Help - Ratio

1. ## Ratio

Hello all, I need some help on this problem:

If $\frac{y+z}{pb+qc} = \frac{z+x}{pc+qa} = \frac{x+y}{pa+qb}$, then show that $\frac{2(x+y+z)}{a+b+c} = \frac{(b+c)x + (c+a)y + (a+b)z}{bc+ ca + ab}$

2. Hi! You can easily get it by using the property: if a/b = c/d, then a/b = c/d = (a+c)/(b+d). Try it. If you can't get it, tell me.

Keep Smiling
Malay

3. I tried... The sum of $\frac{y+z}{pb+qc} = \frac{z+x}{pc+qa} =\frac{x+y}{pa+qb} = \frac{2(x+y+z)}{(p+q)(a+b+c)}$

It is close to the proof except for $(p+q)$ in the denominator. What should I do next? It can disappear if I assume $(p+q) = 1$.

4. $\frac{y+z}{pb+qc}=\frac{ay+az}{pab+qac}$

$\frac{z+x}{pc+qa}=\frac{bz+bx}{pbc+qab}$

$\frac{x+y}{pa+qb}=\frac{cx+cy}{pac+qbc}$

$\frac{y+z}{pb+qc}+\frac{z+x}{pc+qa}+\frac{x+y}{pa+qb}=\frac{ay+az}{pab+qac}+\frac{bz+bx}{pbc+qab}+\frac{cx+cy}{pac+qbc}$
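A numeric sanity check of the identity (my own addition; the values of a, b, c, p, q and the common ratio k are arbitrary): the three equal ratios determine y+z, z+x, x+y directly, from which x, y, z can be recovered, and exact rational arithmetic then confirms both sides agree.

```python
from fractions import Fraction as F

def both_sides(a, b, c, p, q, k):
    # the three equal ratios (common value k) give y+z, z+x, x+y directly
    yz, zx, xy = k * (p*b + q*c), k * (p*c + q*a), k * (p*a + q*b)
    s = (yz + zx + xy) / 2                       # = x + y + z
    x, y, z = s - yz, s - zx, s - xy
    lhs = 2 * s / (a + b + c)
    rhs = ((b + c)*x + (c + a)*y + (a + b)*z) / (b*c + c*a + a*b)
    return lhs, rhs

lhs, rhs = both_sides(F(1), F(2), F(3), F(2), F(3), F(5, 7))
```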
http://mathhelpforum.com/trigonometry/207926-need-help-double-angle-question.html
# Math Help - Need help with a double angle question

1. ## Need help with a double angle question

If cos 2x = 12/13 and 2x is in the first quadrant, evaluate the six trigonometric functions for x, without your calculator.

The answer is:
cos x = 5/√26; sec x = √26/5
sin x = 1/√26; csc x = √26
tan x = 1/5; cot x = 5

How do I get these answers? Any help would be greatly appreciated.

2. ## Re: Need help with a double angle question

Do you know either the "double angle formulas" or the "half angle formulas"?

3. ## Re: Need help with a double angle question

since $0 < 2x < \frac{\pi}{2}$ , then $0 < x < \frac{\pi}{4}$

$\cos{x} = \sqrt{\frac{1 + \cos(2x)}{2}}$

$\sin{x} = \sqrt{\frac{1 - \cos(2x)}{2}}$

sub in the value given for $\cos(2x)$ ...

4. ## Re: Need help with a double angle question

Thanks guys, I got it now.
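The half-angle route can be checked numerically (my own addition): (1 + 12/13)/2 = 25/26 and (1 − 12/13)/2 = 1/26, so cos x = 5/√26 and sin x = 1/√26, matching the posted answers.

```python
import math

cos2x = 12 / 13
x = math.acos(cos2x) / 2             # 2x in quadrant I, so 0 < x < pi/4

cos_x = math.sqrt((1 + cos2x) / 2)   # half-angle formulas from the thread
sin_x = math.sqrt((1 - cos2x) / 2)
tan_x = sin_x / cos_x
```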
https://forum.wilmott.com/viewtopic.php?f=8&p=865852&sid=aa994b585cb46cb5bad971a650ba442d
Serving the Quantitative Community

Cuchulainn

### Re: Silly questions

It is BM, indeed. Maybe it is better to try to describe the problem (already discussed in the "ODE asymptotic..." thread (Aluffi, Geman et al)). Recall

$d \vec{x}_t = -\nabla f(\vec{x}_t)\, dt + \sqrt{2T}\, d \vec{W}_t$ (1)

When there is no random fluctuation I can solve constrained optimisation as a gradient system. Now I want to do hill-climbing (basin hopping) by solving (1). My issues are:

A) Solving (1) as an SDE is not ideal (?); we lose out on the ODE solver.
B) What I want is to use Strang splitting into deterministic and random legs, and which form the latter leg should take is really the trigger for my question.

So, what would be the 'best' approach for the stochastic part? Here is an idea for stochastic Burgers: https://arxiv.org/pdf/1907.12747.pdf

"Compatibility means deliberately repeating other people's mistakes." David Wheeler
http://www.datasimfinancial.com http://www.datasim.nl

Alan

### Re: Silly questions

I may be missing something, but since the time-step $\Delta t$ is small, I would just do the usual simplest possible Monte Carlo for the 'random part': at each step, draw a std-normally distributed n-vector $\vec{Z}$ and take $\vec{x}_{t + \Delta t} = \vec{x}_t + c \vec{Z} \sqrt{\Delta t}$, where $c = \sqrt{2 T}$.

Cuchulainn

### Re: Silly questions

Yes, this would be one acceptable approach, I reckon. Let me explain the full context using the equations here as baseline: https://arxiv.org/abs/1707.06618

The objective is to solve the SDE (1.2) by Algorithm 1 (GLD). No surprises really, as it is essentially just Euler's method. Now, here's the nub:

1. Euler for the deterministic part is shaky/brittle.
2. What I want (heresy: it works in practice, but does it work in theory?) is to split 2.1 into an ODE part and an SDE part.
3. Proposal: use Lie-Trotter/Strang operator splitting on the ODE/SDE in part 2.

Interested in the mathematical justification for this approach. If it works, then we can find a global minimum (and we can send Gradient Descent out to pasture).

Alan

### Re: Silly questions

Can you show "it works in practice" in an example?

Cuchulainn

### Re: Silly questions

Yes! I even programmed the SGLD (Algorithm 2), which is what ML people do. For the ODE part, that works very well, even with Lagrangians and KKT (equality, inequality constraints). So, I need to resurrect and clean up my C++ code from last year (I built it from the ground up). The algorithm is easy:

1. GLD (SDE)
2. Split 1) ODE, 2) SDE

I'll go back to the bunker and get back soon! I think it will be a very useful exercise. A test case is Rastrigin: many locals, 1 global. https://en.wikipedia.org/wiki/Rastrigin_function

Cuchulainn

### Re: Silly questions

The characteristic function of a probability distribution can be computed in closed form if we are lucky (Student t has no closed form). But the CF solves an ODE (to be found), which is easier? For example, the Student t CF $\phi_X$ satisfies:

$t\phi^{''}_X(t) - (\nu - 1)\phi^{'}_X(t) - \nu t\phi_X(t) = 0$ for $0 < t < \infty$

with "BC" $\phi_X(0) = 1, |\phi_X(t)| < \infty$.

Q. Solve analytically or numerically (this part is a piece of cake)? Depending on the requirements(?)

Alan

### Re: Silly questions

wikipedia gives the closed-form cf.

Cuchulainn

### Re: Silly questions

This formula is from Hurst 1995/Joarder 1995, using weird and wonderful A&S style functions (must take ages to work it out??). It's the kind of thing you'd expect to be done in the 1930s. I first read in Cont and Tankov 2005, page 33, that there is no closed solution; hence the ODE solution, which looks elegant, easy and generic. https://arxiv.org/abs/1912.01245 More ways to skin a cat?

// This kind of maths has a charm all of its own.

Alan

### Re: Silly questions

Mathematica can do it:
bessel.png

Cuchulainn

### Re: Silly questions

Is that there an irregular mod cyl Bessel that I see before me, yonder?
https://en.cppreference.com/w/cpp/numer ... _functions
https://en.cppreference.com/w/cpp/numer ... l_bessel_k
Any good test values?

Alan

### Re: Silly questions

Yes, apparently cyl_bessel_k. For a test value, take nu=1, t = 1.2345; see the answer at the cyl_bessel_k link.

Cuchulainn

### Re: Silly questions

For the 2nd question, I meant a test value for the characteristic function.

Alan

### Re: Silly questions

But with $\nu=1$ the two differ only by trivial factors.

Cuchulainn

### Re: Silly questions

Can $\frac{1}{2} \sigma^2 F^{2\beta} \frac{\partial^2 V}{\partial F^2}$ be transformed into something like $\frac{1}{2} \sigma^2 \frac{\partial^2 V}{\partial y^2}$, $y = f(F)$, without using Lamperti on the SDE?

Alan

### Re: Silly questions

Chain rule (no sde) says $y = F^{1-\beta}/(1 - \beta)$ works.
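Alan's ν = 1 remark above is easy to check numerically (my own sketch): for ν = 1 the Student t is the standard Cauchy, whose CF is e^{−|t|}, and the Bessel-K closed form φ(t) = K_{ν/2}(√ν |t|) (√ν |t|)^{ν/2} / (Γ(ν/2) 2^{ν/2−1}) quoted from the Hurst/Wikipedia reference reduces to exactly that, since K_{1/2}(z) = √(π/(2z)) e^{−z}.

```python
import math
from scipy.special import kv   # modified Bessel function of the second kind, K_nu

def student_t_cf(t, nu):
    # Hurst's closed form for the Student t characteristic function (real t != 0)
    z = math.sqrt(nu) * abs(t)
    return kv(nu / 2, z) * z ** (nu / 2) / (math.gamma(nu / 2) * 2 ** (nu / 2 - 1))

phi = student_t_cf(1.2345, 1.0)   # nu = 1 is standard Cauchy: CF = exp(-|t|)
```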
https://3dprinting.stackexchange.com/questions/9994/no-extrusion-but-manual-extrusion-works
No extrusion, but manual extrusion works

I just bought and built my first 3D printer (HE3D K280 with Marlin) and I'm encountering some problems with Cura 4 and Repetier. When I load and slice a part, the printer does not extrude anything during printing. However, when I manually extrude like 100 mm (G1 F100 E100) it does work. Now I suspect the problem lies with the G-code file generated by Cura, since it contains very small values for E:

;Layer height: 0.2
;Generated with Cura_SteamEngine 4.0.0
M140 S60
M105
M190 S60
M104 S200
M105
M109 S200
M82 ;absolute extrusion mode
G28 ;Home
G1 Z15.0 F6000 ;Move the platform down 15mm
;Prime the extruder
G92 E0
G1 F200 E3
G92 E0
G92 E0
G1 F1500 E-6.5
;LAYER_COUNT:250
;LAYER:0
M107
G0 F3600 X-7.753 Y4.378 Z0.3
;TYPE:SKIRT
G1 F1500 E0
G1 F1800 X-8.127 Y3.918 E0.01115
G1 X-8.35 Y3.57 E0.01893
G1 X-9.088 Y2.287 E0.04677
G1 X-9.348 Y1.754 E0.05792
G1 X-9.483 Y1.376 E0.06547
G1 X-11.413 Y-4.956 E0.18999
G1 X-11.547 Y-5.534 E0.20115
G1 X-11.602 Y-6.124 E0.2123
Etc...

Does anyone know how to fix this?

• How many heads is your printer capable of supporting concurrently? Have you tried entering a line with only T0 before your first G1 move? Do you know if you're slicing for linear or volumetric E values, and which your printer requires? – Davo May 21 '19 at 13:29
• It does support 2 heads; however, I'm using just 1. I just added T0 but unfortunately this did not work. I'm slicing for linear, but I tried both and it did not extrude with either option. – Mikelo May 21 '19 at 13:38
• First thought: volumetric flow. But on second thought: "What is the filament diameter in the slicer?" It looks as if the diameter is too large. – 0scar May 21 '19 at 13:53

I think that you have the incorrect diameter specified (e.g. 2.85 mm instead of 1.75 mm) in your slicer; this also appears from a calculation, see below. Note that you can calculate from extruded volume entering the hotend, or from deposited volume.
For the first you could calculate the line width of the deposited line and verify that against the settings; from the second you can verify whether the deposited volume equals the filament volume entering the hotend for an assumed line width. Do note that (certainly for first layers!) modifiers may be in place. This is merely to get a ballpark feeling for the chosen filament diameter.

If you look at the first move from:

G0 F3600 X-7.753 Y4.378 Z0.3

to:

G1 F1800 X-8.127 Y3.918 E0.01115

you can calculate the travelled distance $$s = \sqrt{{\Delta X}^2+{\Delta Y}^2} \approx 0.59\ mm$$. Also, from these moves you can see that $$0.01115\ mm$$ of filament enters the extruder $$(E)$$.

The deposited volume ($$V_{\text{extruded filament}}$$) of the printed line equals the cross-sectional area $$\times$$ the length of the deposited filament path. The area could be defined as in e.g. the Slic3r reference manual: a stadium-shaped cross-section of height $$h$$ (the layer height) and width $$w$$, i.e. $$A_{\text{extruded filament}} = (w-h)\,h + \pi\left(\tfrac{h}{2}\right)^2$$.

Basically (as we apply conservation of mass) the filament volume $$(V_{\text{filament}})$$ entering the hotend needs to be the same as the extruded filament volume $$(V_{\text{extruded filament}})$$ leaving the nozzle; so $$A_{\text{filament}}\times E = A_{\text{extruded filament}}\times s$$. This latter equation can be solved for $$w$$ by filling in the known parameters.

From this calculation it follows that for $$1.75\ mm$$ filament you get a calculated line width of about $$0.22\ mm$$, while for $$2.85\ mm$$ filament you get $$0.46\ mm$$ line widths. As the nozzle diameter has not been specified in the question, the most commonly used nozzle diameter is $$0.4\ mm$$, and modifiers for the first layer are at play to print thicker lines, you most probably have the wrong filament diameter set if you have a $$1.75\ mm$$ extruder setup. Basically, it under-extrudes.
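The arithmetic can be sketched as follows (my own check; the layer height 0.3 mm is taken from the Z0.3 in the G-code, and a stadium-shaped line cross-section A = (w − h)h + π(h/2)² is assumed):

```python
import math

def line_width(d_filament, e=0.01115, dx=-8.127 + 7.753, dy=3.918 - 4.378, h=0.3):
    """Solve A_filament * E = A_line * s for the deposited line width w."""
    s = math.hypot(dx, dy)                     # travelled distance, ~0.59 mm
    v = math.pi * (d_filament / 2) ** 2 * e    # filament volume entering the hotend
    a_line = v / s                             # cross-sectional area of the line
    # stadium cross-section: a_line = (w - h)*h + pi*(h/2)**2  =>  solve for w
    return (a_line - math.pi * (h / 2) ** 2) / h + h

w_175 = line_width(1.75)   # ~0.22 mm: far too thin for a typical 0.4 mm nozzle
w_285 = line_width(2.85)   # ~0.46 mm: a plausible first-layer line width
```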
• I'd have thought that at least a little material would get pushed out in this case -- perhaps the retraction commands (for the wrong diameter again) keep material from ever getting to the nozzle output? – Carl Witthoft May 21 '19 at 15:59
• @CarlWitthoft That would be the case if the G-code were in relative coordinates; this is absolute. The nozzle is not always fully primed with the same length of extrusion after retraction. – 0scar May 21 '19 at 16:27
• This was indeed one of the problems! Another problem was that it didn't save the option that I wanted to slice volumetric (which was needed for my printer after all; the manual of the K280 sucks). Thanks for your help! :) – Mikelo May 22 '19 at 14:50
http://mathhelpforum.com/pre-calculus/67833-area-parallelogram-vertices.html
# Math Help - Area of a Parallelogram with Vertices..

1. ## Area of a Parallelogram with Vertices..

[1,2,3] [1,3,6] [3,8,6] [3,7,3]

I know I have to find 2 (I think) vectors that determine the area and then take the cross product of the 2. That's about all I know; I don't know which 2 determine the area.

2. Originally Posted by sfgiants13
[1,2,3] [1,3,6] [3,8,6] [3,7,3]
I know I have to find 2 (I think) vectors that determine the area and then I take the cross product of the 2. That's about all I know I don't know which 2 determine the area.

The given vectors are the position vectors of the vertices of the parallelogram. Calculate the vectors describing the sides of the parallelogram:

$\overrightarrow{AD} = \vec d - \vec a = [2,5,0]$
$\overrightarrow{BC} = \vec c - \vec b = [2,5,0]$

$\overrightarrow{AB} = \vec b - \vec a = [0,1,3]$
$\overrightarrow{DC} = \vec c - \vec d = [0,1,3]$

You now know the pairs of parallels. As you have written, the area is

$a_{parallelogram} = |\overrightarrow{AD} \times \overrightarrow{AB}| = | [2,5,0] \times [0,1,3] | = | [15,-6,2] | = \sqrt{265} \approx 16.2788$
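The cross-product computation is easy to verify with NumPy (my own check; the vertex labels A, B, C, D follow the order given in the question):

```python
import numpy as np

A, B, C, D = (np.array(v) for v in ([1, 2, 3], [1, 3, 6], [3, 8, 6], [3, 7, 3]))

AD = D - A                        # [2, 5, 0], one pair of parallel sides
AB = B - A                        # [0, 1, 3], the other pair
normal = np.cross(AD, AB)         # [15, -6, 2]
area = np.linalg.norm(normal)     # sqrt(265) ≈ 16.2788
```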
http://mathoverflow.net/questions/97857/partial-backups
# Partial backups

Suppose you have some storage medium of a given size M, and can make some kind of backup on another medium of size B, with M > B. You can choose the scheme to determine the contents of the backup. After you make that partial backup, an adversary (or a random process) will make a number of changes to your original medium. Given the changed medium and your partial backup, your task is to restore the original state of your medium.

How many changes could you undo? What is the theoretical maximum? And how successful are the schemes you can come up with?

I have toyed with this question for a while. Obviously, in general you cannot hope to undo more than B changes. Viewed more mathematically, I am looking for a systematic code that works with huge block sizes.

- This question differs from the ones I've seen discussed in coding theory, because you're given that errors can be introduced only into the original medium, not into the backup. (Note that I wrote "I've seen" --- there's plenty of coding theory that I haven't seen, and questions like this may well have been treated there.) – Andreas Blass May 24 '12 at 18:31

- I would start calculations based on Reed-Solomon codes, but they have the following obvious drawbacks: 1) they will correct data only in chunks that are multiples of the size of the field element (= symbol), 2) the larger the total package M+B, the larger the symbols will have to be: with $r$-bit symbols the maximum size of M+B is $r\cdot 2^r$ bits, 3) the amount of corrupted data is counted in terms of the affected symbols, so if the adversary changes a single bit of a symbol, the entire symbol is corrupted, 4) it won't take advantage of the fact that the errors are all in the original. – Jyrki Lahtonen May 24 '12 at 19:25

- ...(continue, sorry). So I would like to know a little bit more about what kind of errors the adversary will be able to induce. Do we know anything about that?
Will the adversary, say, make a pass with a magnet over your storage medium (in which case we might reasonably assume that contiguous blocks of data will be affected)? A scheme based on an RS-code has the big plus side that with $R=B/r$ check symbols we can correct up to $R/2$ corrupted symbols. You can double this number if (a big if, but again something I need to ask) we know the locations of the changes. – Jyrki Lahtonen May 24 '12 at 19:31

- ...(continue, sorry^2). How large can we expect M+B to be? Are we talking kilobytes, megabytes or gigabytes? At some point the granularity of RS-codes may become an issue. Another idea that comes to mind is to "waste" some of the storage space of the original copy by adding 32-bit CRCs to chunks of data (or some error-detection scheme like that). Then we can encode/decode on a chunk-by-chunk basis, and we shall automatically know which chunks are corrupted (in which case R extra chunks in B allow the recovery of R corrupted chunks in M). But again, a single flipped bit will ruin a chunk. – Jyrki Lahtonen May 24 '12 at 19:41

- Without some restrictions on the backup, it seems to be a red herring. You want to be able to extract some number of bits out of $M$, a standard problem. If you can store $B$ bits reliably in the backup, this lowers the number of bits you need to store in the medium by $B$. – Douglas Zare May 24 '12 at 20:43

Your question is very similar to the extended idea of erasure-resilient codes discussed by Chee, Colbourn, and Ling: Originally, erasure-resilient codes were introduced for RAID (redundant array of independent disks) and similar storage systems. They are systematic codes, and Chee, Colbourn, and Ling's version is good for the type of problem you described. As is often the case with studies on reliability of storage, the focus of erasure-resilient codes is on data corruptions, unreadable bits, disk failure, and the like (which are all "erasures" in math) rather than bit flips.
But if we forget about more practical issues and focus on math, erasures and bit flips can both be treated the same way by the notion of minimum distance, so here are some little things that are known about such codes in the math literature. The idea is basically the same as systematic linear codes. For the sake of simplicity, we only consider the binary case here. Assume that we have a linear $[n,k,d]$ code of length $n$, dimension $k$, and minimum distance $d$. Here, the dimension $k$ and the number $n-k$ will be your $M$ and $B$ respectively. Because it's systematic, we use a parity-check matrix $H$ in standard form: \begin{align*} H &= \left[\begin{array}{cc}I & A\end{array}\right]\\ &= \left[\begin{array}{ccccccc} 1&0&\dots&0 & a_{0,0} & a_{0,1} & \dots &a_{0,k-1}\\ 0&1&\dots&0 & a_{1,0} & a_{1,1} & \dots &a_{1,k-1}\\ \vdots&\vdots&\ddots&\vdots&&&\vdots&\\ 0&0&\dots&1 & a_{n-k-1,0} & a_{n-k-1,1} & \dots &a_{n-k-1,k-1} \end{array}\right] \end{align*}, where $I$ is the $(n-k)\times(n-k)$ identity matrix and $A = (a_{i,j})$ is an $(n-k) \times k$ matrix with $a_{i,j} \in \mathbb{F}_2$. The rows of $H$ are indexed by the $n-k$ bits for "some kind of backup" in your question (or any kind of storage medium of size $B = n-k$ for that matter) and columns of $A$ are indexed by the $k$ data bits we want to protect (i.e., the original data of size $M = k$). The backup scheme is that on the $i$th backup bit, we write the sum of the data bits according to whether $a_{i,j}$ is $0$ ("ignore") or $1$ ("add"), so that the $i$th backup bit $\beta_i$ is $$\beta_i = \sum_{x \in \{j \ \mid\ a_{i,j} = 1\} } \delta_x \pmod{2},$$ where $\delta_j$ is the $j$th unreliable data bit we are going to protect.
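A tiny worked instance of this scheme (my own illustration, using the Hamming(7,4)-style matrix as the $A$ part, so $n-k = 3$ backup bits protect $k = 4$ data bits): since the backups are reliable, the syndrome caused by a single corrupted data bit equals the corresponding column of $A$, which locates and corrects it.

```python
import numpy as np

# A part of a systematic parity-check matrix H = [I | A] (Hamming(7,4) style):
# the columns are distinct nonzero vectors, so every single data-bit error
# produces a unique nonzero syndrome.
A = np.array([[1, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 1, 1, 1]])

def backup(data):
    return A @ data % 2                    # beta_i = sum of selected data bits mod 2

def correct(received, beta):
    syndrome = (beta + A @ received) % 2   # over GF(2), + is the same as -
    if not syndrome.any():
        return received                    # backups agree: no data-bit error
    j = next(c for c in range(A.shape[1]) if (A[:, c] == syndrome).all())
    fixed = received.copy()
    fixed[j] ^= 1                          # flip the located bit
    return fixed

data = np.array([1, 0, 1, 1])
beta = backup(data)                        # reliable backup bits
garbled = data.copy(); garbled[2] ^= 1     # adversary flips one data bit
restored = correct(garbled, beta)
```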
It is straightforward to see that standard syndrome decoding will correct errors on the $\delta_i$ as long as the number of affected data bits is at most $\lfloor\frac{d-1}{2}\rfloor$; we just compare each $\beta_i$ with the sum of the corresponding data bits and see if they add up, which will give us the error syndrome. Now, we have the assumption that the backups $\beta_i$ are more reliable than the original data $\delta_j$. (Chee, Colbourn, and Ling's view is a bit different, but in the situations we consider, both views coincide.) In your case, all $\beta_i$ are assumed to be immune to errors. The question is whether we can correct more than $\lfloor\frac{d-1}{2}\rfloor$ errors if the $\beta_i$ are all reliable. In general, the answer should be yes. So, your question boils down to how much this assumption can increase the maximum number of tolerable errors and how we can construct systematic linear codes that take full advantage of the reliable backups. Unfortunately, these questions appear to be open in general. Basically, we need to understand how the minimum distance changes when the $n-k$ check bits of systematic linear $[n,k,d]$ codes are chopped off to form new non-systematic linear codes of length $k$. But it can be proved that some nice combinatorial structures, when used as the $A$ part, roughly double the minimum distance compared to when the $\beta_i$ can be erroneous, while having a huge block size as you requested. For instance, the following paper proved that the incidence matrices of the Steiner $2$-designs forming the points and lines of affine geometry $\text{AG}(m,q)$ with $q$ odd are of this "almost doubling" kind: M. Müller, M. Jimbo, Erasure-resilient codes from affine spaces, Discrete Appl. Math., 143 (2004) 292–297.
The code parameters in the case of affine geometry $\text{AG}(m,q)$ with $q$ odd and $m \geq 2$ are \begin{align*} n &= q^{m-1}\frac{q^m-1}{q-1}+q^m,\\ k &= q^{m-1}\frac{q^m-1}{q-1},\\ d' &= 2q\ \quad \text{ if backups are reliable},\\ (d &= q+1\ \text{ if backups are as unreliable}). \end{align*} Actually, their assumption is slightly more pessimistic in the sense that the $\beta_i$ are "less prone" to errors, not completely reliable like your case. So they proved something stronger than the almost doubled minimum distance. In fact, their codes are asymptotically optimal under the pessimistic assumption as well as one more assumption that the column weights of $A$ are uniform, say $w$ (which happens to be a reasonable thing to assume for the original, main intended purpose of erasure-resilient codes). Note that such $H = \left[\begin{array}{cc}I & A\end{array}\right]$ can be of minimum distance at most $d = w+1$ because a column of $A$ and a set of $w$ columns from $I$ can form a linearly dependent set. All else being equal, such error patterns are much less likely than those that involve fewer backup bits. (And in the case of perfectly reliable backups, they don't happen.) As in your question, let $B$ be the number of backup bits. Assume that we require our erasure-resilient code to detect all errors on $d'-1$ bits or fewer except the very unlikely ones that involve one data bit and $w=d-1$ backups. Define $\text{ex}(B,d,d')$ to be the maximum number of data bits for such an erasure-resilient code. Chee, Colbourn, and Ling proved that $$\text{ex}(B,d,d') \leq c\cdot B^{d-\lfloor\frac{d'-1}{2}\rfloor}$$ for some constant $c$. Because the codes from $\text{AG}(m,q)$ are of uniform column weight $q$, they asymptotically attain the upper bound on the block size $c\cdot B^2$. A friend of mine proved the same thing for projective geometry, although it's not published yet.
(If you're curious about the exact statement, you can find it in the language of design theory as Theorem 3.16 in our preprint: http://arxiv.org/pdf/1309.5587.pdf) So, while the case of completely reliable backups doesn't seem to have been studied directly, there are similar studies that give infinitely many examples of codes that significantly improve error correction capabilities when backup data are free from errors. And they are designed to support extremely large numbers of data bits, which translates to the huge block sizes you requested. But constructions of codes, and general bounds on the improved minimum distance $d'$, for the case when the backup bits $\beta_i$ are perfectly reliable seem to be wide open.
https://mathoverflow.net/questions/309980/which-zero-diagonal-matrices-contain-the-all-one-vector-in-their-columns-conic
# Which zero-diagonal matrices contain the all-one vector in their columns' conic hull? Let $A$ be a non-negative zero-diagonal invertible matrix. Which $A$ make the following assertions true, which are all equivalent: 1. The all-one vector $j$ is contained in the conic hull of $col(A)$. 2. The row sums of $A^{-1}$ are non-negative. 3. $ADj > 0$, where $D$ is any diagonal matrix with trace $1$. 4. The affine hull of $col(-A)$ does not intersect the non-negative orthant. The equivalency of $(1)$ and $(2)$ follows from the equation $$Ax = j.$$ Assertion $(3)$ is deduced from Farkas' Lemma, as the existence of a positive solution to the above equation implies that there cannot exist a vector $y$ with $y'j = -1$ such that $Ay \geq 0$ (I normalized $y$ without loss of generality). The set of $y$ with sum-of-entries $-1$ is given by $\{y\;|\;y=-Dj: tr(D)=1 \;\text{and}\; D \; \text{diagonal}\}$, the affine combinations of the negative standard basis vectors. This leads to $-ADj <0$. Finally, the matrix of images of the negative standard basis under $A$ is simply $-A$. Hence, requiring the affine hull of these images not to contain any non-negative vector should be equivalent to $(3)$. Two sufficient conditions are that $A$ be positive monomial with zero diagonal (as at least one of the entries of $y$ must be negative), or the adjacency matrix of a regular graph. What can be said in general? • What is a conic hull, please? – Gerry Myerson Sep 6 '18 at 11:49 • The (strict) conic hull of a set of real vectors $V=\{v_1, \dots, v_n\}$ is defined as $\left\{\sum_{i = 1}^n \alpha_i v_i \;|\; \alpha_i > 0 \right\}$. – bodhisat Sep 6 '18 at 11:57 • No. Consider the matrix \begin{pmatrix} 0&1&0 \\ 2&0&2 \\ 1&0&0 \end{pmatrix} Vector $j$ lies in the span of the columns with weights $1,1,-1/2$. But it does not lie in their conic hull. – bodhisat Sep 6 '18 at 17:19 • The set of matrices $M$ with $j$ in their conic hull is a (closed?) convex cone. 
The set of non-negative matrices is a pointed, closed, convex cone, and the set of matrices with zero diagonals is a linear subspace. The set of matrices we are interested in is the intersection of all these, therefore it is also a (closed?) pointed convex cone. My intuition is that this whole set should be closed, but I will have to double-check this. Then we can apply theorem 2.55 from ams.jhu.edu/~abasu9/AMS_550-465/notes-without-frills.pdf to get a reduced version of the problem. – Pushpendre Sep 9 '18 at 19:22 • The cone is not convex. Consider the following counter-example: $$\begin{pmatrix} 0&9&1 \\ 3 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix} = \begin{pmatrix} 0&9&0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix} + \begin{pmatrix} 0&0&1 \\ 3 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}$$ The matrix is a positive linear combination of two monomial matrices which are part of the cone. Yet their sum yields a matrix whose inverse has the row sums $.75, .25, -1.25$. – bodhisat Sep 10 '18 at 17:30 Some further thoughts approaching the solution. I found a connection of the question to potential theory. Let $A$ be an invertible non-negative matrix, and $j$ the all-one vector. The (right) signed potential $\nu$ of $A$ is the solution to $$A\nu = j.$$ Additionally requiring $\nu > 0$ leads to a strict equilibrium potential. Hence, my question can be rephrased as follows: What are the necessary and sufficient conditions on $A$ for the existence of a strict equilibrium potential? Of relevance is the definition of a strict potential matrix. $A$ is a potential matrix iff its inverse is a strictly row-diagonally dominant $M$-matrix. This subsumes, for example, the strictly ultrametric matrices, whose inverses are strictly row-diagonally dominant Stieltjes matrices, and generalized strictly ultrametric matrices for $A$ asymmetric. Yet none of these classes allows for zero diagonals, and all of this is just sufficient. For instance, positive monomial matrices obviously have strict equilibrium potentials as well.
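The counter-example above is quick to check numerically: solving $Ax = j$ yields the row sums of $A^{-1}$, and the negative entry shows that $j$ lies in the span of the columns but not in their strict conic hull. (A sketch assuming NumPy is available.)

```python
import numpy as np

# Counter-example from the comment: a sum of two positive monomial
# matrices, each of which belongs to the cone.
A = np.array([[0.0, 9.0, 1.0],
              [3.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])

j = np.ones(3)
x = np.linalg.solve(A, j)   # x = A^{-1} j, i.e. the row sums of A^{-1}
print(x)                    # [ 0.75  0.25 -1.25]

# A negative coefficient: j is in the span of col(A) but not in the
# strict conic hull, so the set of such matrices is not convex.
assert x.min() < 0
```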
Strikingly, if $A$ has the potential $\nu > 0$, then $SA$, where $S$ is pseudo-stochastic (i.e. row-sums equal one) must have the exact same potential. This can be seen by noting that $SA\nu = Sj = j$. Let $A = DS_A$, where $D_{ii} = \sum_j A_{ij}$. Interpreting $A$ as a graph, $S_A$ is the Markov chain on $A$. We now have $$DS_A\nu = j = S_A^{-1}DS_A\nu.$$ Hence, $A$ has the same potential as its degree matrix, expressed in the basis given by the columns of its Markov chain. Is this useful to think about the existence of equilibrium potentials in general? Nabben, Reinhard, and Richard S. Varga. "Generalized ultrametric matrices—a class of inverse M-matrices." Linear Algebra and its Applications 220 (1995): 365-390. • The zero-diagonal requirement can be dispensed with, by the way. I found a way to get rid of it for my purposes. – bodhisat Sep 18 '18 at 13:44
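The invariance noted above is also easy to sanity-check: if $A\nu = j$ and $S$ has unit row sums, then $(SA)\nu = Sj = j$. A quick sketch (assuming NumPy; the matrix is just the earlier counter-example reused):

```python
import numpy as np

rng = np.random.default_rng(0)

# An invertible non-negative matrix with a signed potential nu (A nu = j).
A = np.array([[0.0, 9.0, 1.0],
              [3.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
j = np.ones(3)
nu = np.linalg.solve(A, j)

# Any pseudo-stochastic S (row sums equal one) preserves the potential:
S = rng.random((3, 3))
S = S / S.sum(axis=1, keepdims=True)   # normalise each row to sum to 1

assert np.allclose(S @ A @ nu, j)      # (SA) nu = S j = j
```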
https://www.eskesthai.com/2007/02/colour-of-gravity.html
Sunday, February 25, 2007 The Colour of Gravity I am not sure how this post is to unfold, yet in my mind different exercises were unfolding as to how I should explain it. Can I come from an artist's perspective, I wondered? Say "by chance" anything that seems relevant here in writing, and any relation to science "is" metaphorical by nature? Yellow, Red, Blue 1925; Oil on canvas, 127x200cm; Centre Georges Pompidou, Paris These free, wild raptures are not the only form abstraction can take, and in his later, sadder years, Kandinsky became much more severely constrained, all trace of his original inspiration lost in magnificent patternings. Accent in Pink (1926; 101 x 81 cm (39 1/2 x 31 3/4 in)) exists solely as an object in its own right: the "pink" and the "accent" are purely visual. The only meaning to be found lies in what the experience of the pictures provides, and that demands prolonged contemplation. What some find hard about abstract art is the very demanding, time-consuming labour that is implicitly required. Yet if we do not look long and with an open heart, we shall see nothing but superior wallpaper. I underlined for emphasis. Does one want to glean only what is coming across in geometrical form as a painting, without understanding the depth of the artist in expression? Some may say: why any association at all? Just leave science to what it knows best, without implicating any theoretical positions with the thought pertaining to gravity here. Yes, that's why I selected the title of this post as I did, and why I am going to give perspective to what I may, "as artist in writing," see with these words, and then you decide whether it is useful to you. The Field as the Plane An ancient thought penetrated my thinking as I thought of "the field" that a society can work in agriculture, and yet, by definition, it was the plane, "length and width," that was also appealing here.
I did not want to lose its "origination" while I moved any thinking to the abstract of the "brane" and the like, without firmly attaching it to the ground. But who was to know that this plane could be moved to any "fifth dimensional understanding" without having studied the relationship to dimensional thinking and the like. The physics elevated. I allow this one-time escapism to "other thinking" to demonstrate what use the colour of gravity implies, while at the same time "theoretical positions" talk about its place in the universe. If one did not accept the moves in science and the way it expressed itself to allow geometrical inclination, then how the heck could non-euclidean thinking ever make its way into how we will discuss "the fields" about us? It meant that a perspective "on height" be adopted? As an observer I was watching from a position. While in that sleeping/dozing state, I wondered how else to express myself as these concepts were amalgamating themselves into a "conceptual frame of reference." The picture of the field (I am referring to the ancient interpretation) continued in my mind, and "by abstract" I thought to introduce a line extending from the centre of this field upward. So here I am, looking at this field before me. Now, I had wandered off previously by bringing "the brane" in here, yet it is not without that sight that I thought: how the heck could any idealization so ancient make sense of what the colour of gravity is to mean? Title page of Opticks .... by Sir Isaac Newton, 1642-1727. Fourth edition corrected by the author's own hand, and left before his death with the bookseller. Published in 1730. Library call number QC353 .N48 1730. So "an idea" came to mind. While correlating Newton's work here and the "extra dimensional thinking," I also wanted to include the work of the "Alchemist Newton": "to expand" the current thinking of our "emotive states" as a "vital expression of the biological being."
Draw into any further discussion of the "philosophical or otherwise" these views of mine, which are a necessary part of what was only held to a "religious and uneducated evolutionary aspect of the human being." A cosmologist may still say that such thoughts of Einstein used in this vein are wrong, but I could never tear myself away from the views of "durations of time." Colour Space and Colour Theory The CIE 1931 colour space chromaticity diagram with wavelengths in nanometers. Note that the colors depicted depend on the color space of the device on which you are viewing the image. So, having defined the "frame of reference," and by introducing the "Colour of Gravity," I thought it important and consistent with the science to reveal how dynamical any point within that reference can become expressive. The history in association is also important. In the arts and of painting, graphic design, and photography, color theory is a body of practical guidance to color mixing and the visual impact of specific color combinations. Although color theory principles first appear in the writings of Alberti (c.1435) and the notebooks of Leonardo da Vinci (c.1490), a tradition of "color theory" begins in the 18th century, initially within a partisan controversy around Isaac Newton's theory of color (Opticks, 1704) and the nature of so-called primary colors. From there it developed as an independent artistic tradition with only sporadic or superficial reference to colorimetry and vision science. So you tend to draw on your reserves for such comparatives while thinking about this. I knew to apply "chemical relations" to this idea, and the consequential evidence of the resulting shadings. I wanted to show "this point" moving within this colour space, and all the time its shading was describing the "nature of the gravity."
Adding a certain mapping function between the color model and a certain reference color space results in a definite "footprint" within the reference color space. By adding that vertical line in the field, the perimeter of my field of vision had somehow to be drawn to an apex, while all kinds of thoughts about symmetry and perfection arose in my pyramidal mind. All these colours, infinite in their ability to express the human emotive state, as a consequence of the philosophical, and expressed as a function of the emotive being? CIE 1976 L*, a*, b* Color Space (CIELAB) CIE L*a*b* (CIELAB) is the most complete color model used conventionally to describe all the colors visible to the human eye. It was developed for this specific purpose by the International Commission on Illumination (Commission Internationale d'Eclairage, hence its CIE initialism). The * after L, a and b are part of the full name, since they represent L*, a* and b*, derived from L, a and b. CIELAB is an Adams Chromatic Value Space. The three parameters in the model represent the lightness of the color (L*, L*=0 yields black and L*=100 indicates white), its position between magenta and green (a*, negative values indicate green while positive values indicate magenta) and its position between yellow and blue (b*, negative values indicate blue and positive values indicate yellow). The Lab color model has been created to serve as a device independent model to be used as a reference. Therefore it is crucial to realize that the visual representations of the full gamut of colors in this model are never accurate. They are there just to help in understanding the concept, but they are inherently inaccurate. Since the Lab model is a three dimensional model, it can only be represented properly in a three dimensional space. Entanglement The quantum entanglement would become so spread out through these interactions with the environment that it would become virtually impossible to detect.
For all intents and purposes, the original entanglement between photons would have been erased. Nevertheless, it is truly amazing that these connections do exist, and that under carefully arranged laboratory conditions they can be observed over significant distances. They show us, fundamentally, that space is not what we once thought it was. What about time? Page 123, The Fabric of the Cosmos, by Brian Greene So many factors to include here, yet it is with the "idea of science" that I am compelled to see how things can get all mixed up, while I say emotive state, or colours of gravity? It gets a little complicated for me here, yet the "fuzzy logic" introduced, or "John Venn's logic," is not without some association here. Or the psychology I had adopted as I learnt to read of models and methods in psychology that could reveal the thinking we have developed, and what it included. Lest I forget the "real entanglement" issues here, I have painted one more aspect with the "Colour of Gravity" to be included in this dimensional perspective, as we look to the models in science as well. Working from basic principles and the history of "spooky" has made this subject tenable in today's world. A scientist may not like all the comparisons I have made based on it, but I could never see how the emotive and mental statements of the expressive human being could not have been included in the making of the reality. That I may have thought the "perfection of the human being" as some quality of the God in us all would have granted sanction to some developing view of "religious virtuosity," against the goals of the scientist. So, as ancient as the views painted are, there was something that may have been missed of the "Sensorium," and it goes toward the basis of the philosophy shared currently by Lee Smolin. This entanglement, to me, is a vital addition to our exploration of the universe. Our place and observation within it?
It did not mean to discount our inclusion within it, within a larger "oscillatory perspective."
https://www.rocketryforum.com/threads/has-anyone-used-spacecad.2596/
#### RocketsNorth ##### Well-Known Member Joined Apr 16, 2009 Messages 180 Reaction score 0 I've been testing evaluation versions of SpaceCAD and RocketSIM. Both programs have pluses and minuses, and I find the discussions on RocketSIM very valuable. About a year ago I got back into model rockets with my son, after a long absence (30 years), and we're having a great time. However, as a relative noobie to rocket design, the cost of SpaceCAD is extremely attractive; I'm just concerned that we will "outgrow" this application and end up having to reinvest in RocketSIM anyway. I'd like to hear advice from anyone who has had or is having this experience. Thanks #### jj94 ##### Well-Known Member Joined Jan 18, 2009 Messages 4,023 Reaction score 0 I've had Rocksim for a while now, and I love it. I got it when Rocksim v8 came out. It's really easy to use and it's really accurate. There are so many different things you can do with Rocksim that you can't do with SpaceCAD, and now with v9, there's even more. The simulations are also very accurate when done right. #### lkal32 Joined Jan 18, 2009 Messages 349 Reaction score 2 You get what you pay for... In this case, RockSim should be $500. It's great, you won't regret getting it. I've tried the SpaceCAD trial (after I had Rocksim) and I have to say I was completely satisfied with my purchase. SpaceCAD has some simplicity to it which is nice, but once you get used to RockSim, there's no going back. #### Gillard ##### Well-Known Member Joined Jan 18, 2009 Messages 1,973 Reaction score 2 I'm a SpaceCAD user and I really like it. It's very simple to use, and I especially like the fin template tool, where you can put the fin on the computer screen and move the points around to get the exact fin for the program. Rocksim does do more and has a lot more functions, but it costs more.
#### CharlaineC ##### Well-Known Member Joined Jan 19, 2009 Messages 1,099 Reaction score 4 I myself have Rocksim 7 and love it; I have not found a reason to go for the newer ones yet. I do own VCP and SpaceCAD as well, but Rocksim is my love. Though it can be tricky at times for me, I still love it. #### RocketsNorth ##### Well-Known Member Joined Apr 16, 2009 Messages 180 Reaction score 0 Folks, thanks very much for your replies. I would have to agree that you definitely get what you pay for. There were a couple of things I did notice about SpaceCAD that I found lacking in RocketSim; in particular, as Gillard points out, the fin design seems to be more powerful and interactive. As I said, I downloaded both eval versions and created a design using SpaceCAD: a cool little 2-stage "C" motor design that my boy and I will start working on after school lets out. It took me about 3 hours to put the basics together and get it to a launchable state. When I tried to recreate the design in RocketSim, it took me 2 nights, and I was using the SpaceCAD design as a template. Much of that time was spent playing with the fins. At this point, I think we're still probably too new to get deep into our own designs, so we'll play with our test #1 over the summer and then look at RocketSim in the fall. Again, thanks for everyone's input. #### ben_ullman ##### Well-Known Member Joined Jan 18, 2009 Messages 1,523 Reaction score 0 You get what you pay for... In this case, RockSim should be $500. Its great, you wont regret getting it. Ive tried SpaceCad trial (after I had Rocksim) and I have to say I was completely satisfied with my purchase. SpaceCad has some simplicity to it which is nice, but once you get used to RockSim, theres no going back uhhh not. It's $100 and I am not sure I would pay more than $150 for it.
Ben #### DexterLB ##### Well-Known Member Joined Jan 18, 2009 Messages 571 Reaction score 1 Rocksim is waaaaaaaaaay better than spacecad, but they are both expensive (well, they are cheap for the features, but not everyone can afford them), so I'm stuck with VCP. #### Micromeister ##### Micro Craftman/ClusterNut TRF Supporter Joined Jan 19, 2009 Messages 15,074 Reaction score 47 Location Washington DC I use both, and I like both for different reasons. However, until Rocsim-9 it really wasn't worth the price. SpaceCAD has some very nice features and limitations, but it's worth the money being asked of it. As someone already mentioned, there are also several freeware programs that are extremely helpful before you start throwing around $.
http://mathhelpforum.com/pre-calculus/218891-radicals-5-a.html
1. ## Radicals #5 imgur: the simple image sharer d) The dividing-by-4 part is confusing me; I'm not sure what to do: imgur: the simple image sharer I was pretty close, I just got 7 instead of 13/4... 3. ## Re: Radicals #5 Are you sure you did question d)? I do not follow. 4. ## Re: Radicals #5 You are right, I did question (d) of the upper question. Q 9 (d) is attached herewith; thank you!
http://www.math.hku.hk/imrwww/activities/by-years-2011-2020/activities-2018/8314/
# On the Semisimplicity of Geometric Monodromy Action in Fℓ Coefficients Professor Chun Yin Hui (Mathematical Sciences Center, Tsinghua U)
https://www.physicsforums.com/threads/calculating-molarity.428474/
# Homework Help: Calculating Molarity 1. Sep 12, 2010 ### Jim4592 1. The problem statement, all variables and given/known data Eighty-six proof whiskey is 43 percent ethyl alcohol, CH3CH2OH, by volume. If the density of ethyl alcohol is 0.79 kg/L, what is the molarity of ethyl alcohol in whiskey? 2. Relevant equations Molar mass of CH3CH2OH = 46.07 g/mol 3. The attempt at a solution 0.79 kg/L * 1000 g/kg * 1 mol / 46.07 g = 17.15 M I was just looking for a check on this particular problem since I haven't taken a chemistry course since my freshman year, ha! 2. Sep 13, 2010 ### Staff: Mentor Not bad - you are on the right track - but wrong. You have not used the 43% in your calculations, and this is important information. Last edited by a moderator: Aug 13, 2013 3. Sep 13, 2010 ### Jim4592 I thought you would have to use that 43% in there somewhere, but I'm not sure how to use it. It would be nice if I still owned my chem book. UPDATE: Ok, I tried re-working the problem again; here's what I came up with: 0.79 kg/L * 0.43 L ethanol / 1 L whiskey * 1000 g / 1 kg * 1 mol / 46.07 g = 7.37 M How does that look? Last edited: Sep 13, 2010 4. Sep 14, 2010 Much better.
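The corrected unit chain from the thread can be checked in a couple of lines (a quick sketch, not part of the original posts):

```python
density_ethanol = 0.79   # kg of ethanol per L of pure ethanol
volume_fraction = 0.43   # L of ethanol per L of whiskey (86 proof)
molar_mass = 46.07       # g per mol of CH3CH2OH

# grams of ethanol in one litre of whiskey, then moles per litre
grams_per_litre = density_ethanol * 1000 * volume_fraction
molarity = grams_per_litre / molar_mass
print(round(molarity, 2))   # 7.37
```

This reproduces the 7.37 M answer accepted in the thread; the first attempt (17.15 M) is just this value divided by the missing factor of 0.43.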
http://www.webdeveloper.com/forum/showthread.php?275311-Image-and-text-on-same-line&p=1257687&mode=threaded
## Image and text on same line I'm trying to have an image of fixed height/width on the left, and text on the right, on the same line of course. The overall container has a dynamic width of 90% of the viewport, meaning that the text on the right will also have a dynamic width (90% - image width) since the image on the left is fixed. The text needs to be aligned left, so "float:right" won't work. I've tried countless combinations of floats, aligns, table cells, etc., and nothing works... the closest I got was having them on the same line, but with the text force-aligned to the right. Image of what I mean: http://i.imgur.com/QRDhLro.png Code: ```#container { overflow:hidden; position:relative; width:90%; min-width:800px; margin-bottom:20px; margin-top:20px; margin-left:auto; margin-right:auto; } .leftimage { width:600px; height:100px; } .righttext { float:right; }``` Code: ```<div id="container"> <div class="righttext">lorem ipsum lorem ipsum <br> lorem ipsum lorem ipsum </div> <div class="leftimage"><img src="../pictures/test.png"></div> </div>``` Thanks.
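For reference, one common approach to this layout (a sketch, not from the original thread): put the image div first in the markup, float it left, and give the text div `overflow:hidden` so it establishes a new block formatting context that fills the remaining width while its text stays left-aligned:

```css
.leftimage { float: left; width: 600px; height: 100px; }

/* overflow:hidden creates a new block formatting context: the div
   sits beside the float, takes up the remaining container width,
   and its text is still laid out left-aligned. */
.righttext { overflow: hidden; }
```

This assumes the `.leftimage` div is placed before the `.righttext` div inside `#container`; the container rules from the question can stay as they are.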
https://root.cern.ch/doc/v608/classROOT_1_1Math_1_1UnaryOp.html
ROOT 6.08/07 Reference Guide

ROOT::Math::UnaryOp< Operator, RHS, T > Class Template Reference

### template<class Operator, class RHS, class T> class ROOT::Math::UnaryOp< Operator, RHS, T >

UnaryOperation class. A class representing unary operators in the parse tree. The objects are stored by reference.

Definition at line 361 of file Expression.h.

## Public Member Functions

UnaryOp (Operator, const RHS &rhs)
~UnaryOp ()
T apply (unsigned int i) const
bool IsInUse (const T *p) const
T operator() (unsigned int i, unsigned int j) const

## Protected Attributes

const RHS & rhs_

#include <Math/Expression.h>

## ◆ UnaryOp()

template<class Operator, class RHS, class T> ROOT::Math::UnaryOp< Operator, RHS, T >::UnaryOp (Operator, const RHS & rhs) [inline]

Definition at line 364 of file Expression.h.

## ◆ ~UnaryOp()

template<class Operator, class RHS, class T> ROOT::Math::UnaryOp< Operator, RHS, T >::~UnaryOp () [inline]

Definition at line 368 of file Expression.h.

## ◆ apply()

template<class Operator, class RHS, class T> T ROOT::Math::UnaryOp< Operator, RHS, T >::apply (unsigned int i) const [inline]

Definition at line 371 of file Expression.h.

## ◆ IsInUse()

template<class Operator, class RHS, class T> bool ROOT::Math::UnaryOp< Operator, RHS, T >::IsInUse (const T * p) const [inline]

Definition at line 378 of file Expression.h.

## ◆ operator()()

template<class Operator, class RHS, class T> T ROOT::Math::UnaryOp< Operator, RHS, T >::operator() (unsigned int i, unsigned int j) const [inline]

Definition at line 374 of file Expression.h.

## ◆ rhs_

template<class Operator, class RHS, class T> const RHS& ROOT::Math::UnaryOp< Operator, RHS, T >::rhs_ [protected]

Definition at line 384 of file Expression.h.

The documentation for this class was generated from the following file: Expression.h
https://mathematica.stackexchange.com/questions/41520/is-mathematica-compatible-with-primusrun
Is Mathematica compatible with primusrun? I use Mathematica 9 on Ubuntu 12.04 LTS x86 (64-bit). My computer has an Nvidia graphics card, for which I installed primus and bumblebee. If anyone is in a similar situation, I would like to know if they have solutions to the following two related problems: 1. Mathematica does not need OpenGL when it's not rendering any 3D graphics. For this reason I find it convenient to run the program without optirun mathematica or primusrun mathematica most of the time. However, I noticed that if I do make a Plot3D or related command, Mathematica will crash. Is it possible to set the program up in a way so that it doesn't crash, but rather shows an error message, or just doesn't show output, if no 3D rendering is possible? 2. primusrun extends optirun functionality by saving power when the graphics card is not in use. I suppose that for this reason, if I'm not actively rotating a 3D graphic, it goes blank in a session of primusrun mathematica rather than optirun mathematica. When using the less efficient optirun, all functionality is available and runs as expected. Did anyone figure out a workaround for this? I'm asking on this site rather than AskUbuntu because the solution, if it exists, probably lies in changing Mathematica preferences rather than primus'. • There's a misconception about the power saving. To clarify, primusrun avoids powering on the discrete GPU as long as the application does not load the OpenGL library; after that, power saving compared to optirun/virtualgl may come from obeying vsync, but that probably wouldn't matter for Mathematica. As noted below, consider filing a bug with reproduction steps for the blank 3D figure in primus. – amonakov Feb 1 '14 at 10:54 • @amonakov - yes, like I said, this was only my guess as to why the blank graphics occur. I agree it's due to some more complicated bug. I already filed a report to Wolfram support but I seriously doubt they would be able to help such a niche use case (i.e. 
Linux users with Optimus). – VF1 Feb 1 '14 at 15:50

You must adjust the Antialiasing Quality to solve that issue. Go to the menu Edit -> Preferences -> Appearance -> Graphics and adjust it there. That worked for me. I have Ubuntu 14.04 and Mathematica V9. Another solution is to open a terminal and run mathematica -mesa when starting the program.

• Hi! Can you edit your post to clarify it? As it is currently written it is hard to tell exactly which issue/s you are trying to solve. – Sektor Jul 13 '14 at 6:58
• Yep. This solution works on Linux Mint 17.1 as well. +1 – shivams Jun 18 '15 at 14:16

Problem 1 should be a distribution-specific bug. I cannot reproduce this problem on Archlinux. I can reproduce problem 2. I think primusrun is still experimental. You may report a bug to them (but the developers may not have Mathematica, though).

• If you can provide reproduction steps for the trial version, or capture the problem with apitrace, I will look into fixing it. Consider filing a bug at github.com/amonakov/primus/issues – amonakov Feb 1 '14 at 10:42
• Thanks! I filed a bug at github. – Yi Wang Feb 1 '14 at 13:56
• What happens for Problem 1 for you? – VF1 Feb 1 '14 at 15:52
• On my computer, Mathematica works well, no crash, and displays Plot3D figures correctly when started directly using integrated graphics (i.e. without optirun or primusrun) – Yi Wang Feb 1 '14 at 16:29
• Problem 2 is a limitation in primus, it doesn't support GLX pixmaps and front buffer rendering: github.com/amonakov/primus/issues/131 Regarding problem 1, you may be seeing a segfault in graphics drivers: 12.04 is quite old. – amonakov Feb 1 '14 at 20:34

After talking with Wolfram tech support, it seems problem 1 is really just this issue, resolved by running Mathematica with the mathematica -mesa command. Problem 2 is unresolved. I suppose I should keep this question open, though I don't know if a fix will come soon.
https://arxiv.org/list/hep-lat/9509
# High Energy Physics - Lattice ## Authors and titles for Sep 1995 [ total of 107 entries: 1-25 | 26-50 | 51-75 | 76-100 | 101-107 ] [ showing 25 entries per page: fewer | more | all ] [1] Title: Properties of the Z(3) Interface in (2+1)-D SU(3) Gauge Theory Comments: 4 pages with 4 figures as one uuencoded, gzipped postscript file; presented at Lattice '95 Journal-ref: Nucl.Phys.Proc.Suppl. 47 (1996) 535-538 Subjects: High Energy Physics - Lattice (hep-lat) [2] Title: High density QCD with static quarks Comments: LaTeX, 18 pages, uses epsf.sty, postscript figures included Journal-ref: Phys.Rev.Lett. 76 (1996) 1019-1022 Subjects: High Energy Physics - Lattice (hep-lat) [3] Title: Performance of the Cray T3D and Emerging Architectures on Canopy QCD Applications Authors: Mark Fischler, Mike Uchima (Fermilab) Comments: 4 pages, to be published in Proceedings of Lattice '95. LaTeX using Elsevere's espcrc2.sty style file (espcrc2.sty and .tex appended at end of submission) Journal-ref: Nucl.Phys.Proc.Suppl. 47 (1996) 808-811 Subjects: High Energy Physics - Lattice (hep-lat) [4] Title: Baby Universes in 4d Dynamical Triangulation Comments: 8 pages, 4 figures Journal-ref: Phys.Lett. B366 (1996) 72-76 Subjects: High Energy Physics - Lattice (hep-lat); High Energy Physics - Theory (hep-th) [5] Title: Four-point renormalized coupling constant in O(N) models Comments: 4 pages, to be published in the Proceedings of Lattice 95, Postscript file Journal-ref: Nucl.Phys.Proc.Suppl. 47 (1996) 751-754 Subjects: High Energy Physics - Lattice (hep-lat) [6] Title: RTNN: The new parallel machine in Zaragoza Authors: A.J. van der Sijs (University of Zaragoza, Spain) Comments: 10 pages PostScript, including 5 figures. Write-up (June 1995) of talk at the International Workshop QCD on Massively Parallel Computers'', Yamagata, Japan, 16-18 March 1995. To appear in the Proceedings, Suppl. Progr. Theor. Phys. 
(Kyoto) Journal-ref: Prog.Theor.Phys.Suppl.122:31-40,1996 Subjects: High Energy Physics - Lattice (hep-lat) [7] Title: The SU(2) Confining Vacuum as a Dual Superconductor Comments: 4 pages, uuencoded compressed (using GNU's gzip) tar file containing 1 LaTeX2e file, 5 encapsulated Postscript figures, and espcrc2.sty. Contribution to Lattice 95 Journal-ref: Nucl.Phys.Proc.Suppl. 47 (1996) 318-321 Subjects: High Energy Physics - Lattice (hep-lat) [8] Title: On the phase structure of QCD with Wilson fermions Authors: S. Aoki Comments: 14 pages (6 figures), latex (epsf style-file needed), talk presented at ''QCD on massively parallel computers'' (Yamagata, Japan, March 16-18 1995) Journal-ref: Prog.Theor.Phys.Suppl.122:179-186,1996 Subjects: High Energy Physics - Lattice (hep-lat) [9] Title: Spin and Gauge Systems on Spherical Lattices Comments: 4 pages, LaTeX, 3 POSTSCRIPT figures (uuencoded) Journal-ref: Nucl.Phys.Proc.Suppl. 47 (1996) 815-818 Subjects: High Energy Physics - Lattice (hep-lat) [10] Title: More on SU(3) Lattice Gauge Theory in the Fundamental--Adjoint Plane Authors: Urs M. Heller Comments: 4 pages, uuencoded, gziped postscript file. To appear in the Proceedings of LATTICE'95, Melbourne, Australia, 11-15 July, 1995 Journal-ref: Nucl.Phys.Proc.Suppl. 47 (1996) 262-265 Subjects: High Energy Physics - Lattice (hep-lat) [11] Title: Generalized Hyper-Systolic Algorithm Authors: A. Galli (MPI, München) Comments: Latex Subjects: High Energy Physics - Lattice (hep-lat) [12] Title: Chronological Inversion Method for the Dirac Matrix in Hybrid Monte Carlo Authors: R.C.Brower (Boston Un.), T.Ivanenko (MIT), A.R.Levi (Boston Un.), K.N.Orginos (Brown Un.) Comments: 35 pages, 18 EPS figures A new "preconditioning" method, derived from the Chronological Inversion, is described. Some new figures are appended. Some reorganization of the material has taken place Journal-ref: Nucl.Phys. 
B484 (1997) 353-374 Subjects: High Energy Physics - Lattice (hep-lat) [13] Title: Study of gauge (in)dependence of monopole dynamics Comments: 4pages (7 figures), Latex, Contribution to Lattice 95 Journal-ref: Nucl.Phys.Proc.Suppl. 47 (1996) 322-325 Subjects: High Energy Physics - Lattice (hep-lat) [14] Title: Monopoles in High Temperature Phase of SU(2) QCD Authors: Shinji Ejiri Comments: 4pages (3 figures), Latex, Contribution to Lattice 95 Journal-ref: Nucl.Phys.Proc.Suppl. 47 (1996) 539-542 Subjects: High Energy Physics - Lattice (hep-lat) [15] Title: Block spin transformation on the dual lattice and monopole action Comments: 4pages (4 figures), Latex, Contribution to Lattice 95 Journal-ref: Nucl.Phys.Proc.Suppl. 47 (1996) 270-273 Subjects: High Energy Physics - Lattice (hep-lat) [16] Title: Monopoles and hadron spectrum in quenched QCD Comments: 4pages, uuencoded PostScript, To appear in the Proceeding of LATTICE '95, Melbourne, Australia, 11-15 July, 1995 Journal-ref: Nucl.Phys.Proc.Suppl. 47 (1996) 374-377 Subjects: High Energy Physics - Lattice (hep-lat) [17] Title: B Physics with NRQCD: A Quenched Study Comments: 4 pages, uuencoded compressed postscript file, contribution to LATTICE 95 Journal-ref: Nucl.Phys.Proc.Suppl.47:425-428,1996 Subjects: High Energy Physics - Lattice (hep-lat) [18] Title: Status of the Finite Temperature Electroweak Phase Transition on the Lattice Authors: Karl Jansen Comments: Plenary talk given at the International Symposium on Lattice Field Theory, 11-15 July 1995, Melbourne, Australia, 16 pages Journal-ref: Nucl.Phys.Proc.Suppl. 47 (1996) 196-211 Subjects: High Energy Physics - Lattice (hep-lat); High Energy Physics - Phenomenology (hep-ph) [19] Title: Glueball Spectroscopy on S^3 Comments: 8p. latex, 3 uuencoded PostScript figures appended Journal-ref: Phys.Lett. 
B368 (1996) 124-130 Subjects: High Energy Physics - Lattice (hep-lat); High Energy Physics - Theory (hep-th) [20] Title: 2-dimensional Regge gravity in the conformal gauge Comments: 7 pages, latex file. To be published in the Proceedings of Lattice 95, Melbourne (Australia) 11-15 July 1995 Journal-ref: Nucl.Phys.Proc.Suppl. 47 (1996) 633-636 Subjects: High Energy Physics - Lattice (hep-lat); General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th) [21] Title: Continuum Limits and Exact Finite-Size-Scaling Functions for One-Dimensional $O(N)$-Invariant Spin Models Comments: 541038 bytes uuencoded gzip'ed (expands to 1301207 bytes Postscript); 88 pages including all figures Journal-ref: J.Statist.Phys. 86 (1997) 581-673 Subjects: High Energy Physics - Lattice (hep-lat) [22] Title: Scattering in the quenched approximation Comments: 4 pages, uuencoded compressed tar-file, contribution to Lattice'95 Journal-ref: Nucl.Phys.Proc.Suppl. 47 (1996) 553-556 Subjects: High Energy Physics - Lattice (hep-lat); High Energy Physics - Phenomenology (hep-ph) [23] Title: Lattice Chiral Fermions Authors: Yigal Shamir Comments: Plenary talk at Lattice'95, Melbourne, Australia. LaTeX, 16 pages Journal-ref: Nucl.Phys.Proc.Suppl. 47 (1996) 212-227 Subjects: High Energy Physics - Lattice (hep-lat); High Energy Physics - Phenomenology (hep-ph); High Energy Physics - Theory (hep-th) [24] Title: Characterization of phases and boundary effects in U(1) gauge theory Comments: 5 pages, 5 figures included, uuencoded postscript file. Contribution to LATTICE 95 Journal-ref: Nucl.Phys.Proc.Suppl. 47 (1996) 667-670 Subjects: High Energy Physics - Lattice (hep-lat) [25] Title: Strong-coupling expansion of lattice O(N) sigma models Comments: 4 pages, compressed uuencoded PostScript, Contribution to Lattice 95 Journal-ref: Nucl.Phys.Proc.Suppl. 
47 (1996) 755-758 Subjects: High Energy Physics - Lattice (hep-lat)
https://www.rdocumentation.org/packages/Rcmdr/versions/0.9-17/topics/Recode
# Recode

##### Rcmdr Recode Dialog

The recode dialog is normally used to recode numeric variables and factors into factors, for example by combining values of numeric variables or levels of factors. It may also be used to produce new numeric variables. The Rcmdr recode dialog is based on the recode function in the car package.

Keywords: manip

##### Details

The name of the new variable must be a valid R object name (consisting only of upper- and lower-case letters, numerals, and periods, and not starting with a numeral). Enter recode directives in the box at the right. Directives are normally entered one per line, but may also be separated by semicolons. Each directive is of the form input = output (see the examples below). If an input value satisfies more than one specification, then the first (from top to bottom, and left to right) applies. If no specification is satisfied, then the input value is carried over to the result. NA is allowed on input and output. Factor levels are enclosed in double quotes on both input and output. Several recode specifications are supported: a single value; a set of values, such as c(1, 2, 5); a range of values, such as 3:5, where the keywords lo and hi may stand for the smallest and largest values; and else, which covers any input value not otherwise matched. If all of the output values are numeric, and the "Make new variable a factor" check box is unchecked, then a numeric result is returned.

See also: the recode function in the car package.
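As an illustration (my example, not taken from the original help page), a set of recode directives in this format might look like the following, recoding a numeric score into a factor:

```
lo:10 = "low"
11:20 = "medium"
21:hi = "high"
else = NA
```

Here lo and hi stand for the smallest and largest observed values, and else catches anything not matched by an earlier directive.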
http://tex.stackexchange.com/questions/25215/combining-tikz-foreach-and-let-operation?answertab=votes
# Combining TikZ foreach and let operation

I'm not having any luck using the TikZ let operation inside a foreach loop. Is there anything I'm missing? Sample code (that doesn't work):

\begin{tikzpicture}
\foreach \y in {1,2,3}
{\draw (0,0) -- (3,\y);
\draw let \p1 = (3,\y), \n1 = {atan2(\x1,\y1)} in (\y,0) arc [start angle = 0, end angle = \n1, radius=\y];
}
\end{tikzpicture}

The let syntax is perfectly valid inside a \foreach. You do, however, have a clash of variable names: the \y from the loop conflicts with the \y⟨n⟩ from let. Simply renaming the loop counter solves the problem:

\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{calc}
\begin{document}
\begin{tikzpicture}
\foreach \a in {1,2,3}
{\draw (0,0) -- (3,\a);
\draw let \p1 = (3,\a), \n1 = {atan2(\x1,\y1)} in (\a,0) arc [start angle = 0, end angle = \n1, radius=\a];
}
\end{tikzpicture}
\end{document}

The underlying problem is that TeX macro names cannot contain numbers. So let has to define a macro called \y that reads the 1 (or other number) as a parameter and then redirects to the correct value. This of course overrides the \y coming from the loop. So you (presumably) get an error on (\y,0), because the new \y (inside the let) expects to be followed by a number, not a ,.

- Brilliant, thanks! – Simon Byrne Aug 8 '11 at 17:05
- This also made me suffer! This incompatibility should be mentioned in the TikZ manual. – Gonzalo Jan 8 '13 at 2:36
https://socratic.org/questions/how-do-you-calculate-the-ka-for-the-weak-acid-with-pka-of-0-21
# How do you calculate the Ka for the weak acid with pKa of 0.21?

Feb 13, 2016

$K_a = 10^{-0.21} = ??$

#### Explanation:

The $p$ function means to take the negative of the logarithm to the base 10. The same holds for $pH$; here, if $pH = 1$, then $[H^{+}] = 10^{-1}\ \text{mol} \cdot L^{-1}$. I recommend that you have a good look at your texts, and get this concept straight. For instance, when we write $\log_{a} b = c$, we are saying that $a^{c} = b$. For practice, can you tell me what $\log_{10} 100$ and $\log_{10} 1000$ are? Years ago (20-30 years), before the advent of cheap electronic calculators, students used to be issued with log tables for multiplication and division.
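To make the arithmetic concrete, here is a quick numerical check (my addition, not part of the original answer) of the quantities discussed:

```python
import math

# Ka from pKa: Ka = 10**(-pKa); for pKa = 0.21 this is about 0.62.
Ka = 10 ** -0.21
print(round(Ka, 4))

# pH = 1 corresponds to [H+] = 10**-1 = 0.1 mol/L.
print(10 ** -1)

# The practice questions: log10(100) = 2 and log10(1000) = 3.
print(math.log10(100), math.log10(1000))
```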
https://demo7.dspace.org/items/5625edff-5ff1-46f2-a702-01c413748082
## On certain permutation representations of the braid group

##### Authors

Iliev, Valentin Vankov

##### Description

This paper is devoted to the proof of a structural theorem, concerning certain homomorphic images of the Artin braid group on $n$ strands in finite symmetric groups. It is shown that any one of these permutation groups is an extension of the symmetric group on $n$ letters by an appropriate abelian group, and in "half" of the cases this extension splits. Comment: 10 pages, modified theorem, corrected typos

##### Keywords

Mathematics - Group Theory, Mathematical Physics, 20F36, 20E22
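To unpack the group-theoretic language (my gloss, not the paper's wording): saying a group $G$ is an extension of the symmetric group $S_n$ by an abelian group $A$ means there is a short exact sequence, and "splits" means the quotient map has a section:

```latex
% G is an extension of S_n by the abelian group A:
\[
  1 \longrightarrow A \longrightarrow G \longrightarrow S_n \longrightarrow 1 .
\]
% The extension splits when there is a homomorphism S_n -> G whose
% composite with the quotient map is the identity, i.e. G \cong A \rtimes S_n.
```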
http://perfdynamics.blogspot.com/2012/11/
## Thursday, November 29, 2012

### PDQ 6.0 from a Developer Standpoint

This is a guest post by Paul Puglia, who contributed significant development effort for the PDQ 6.0 release, especially as it relates to interfacing with R. Here, Paul provides more details about the motivation described in my earlier announcement.

PDQ was designed and implemented around a couple of basic assumptions. First, the library would be a C-language API running on some variant of the Unix operating system, where we could reasonably assume that we'd be able to link it against a standard C library. Second, programs built using this API would be "stand-alone" executables in the sense that they'd run in their own dedicated memory address spaces, could route their I/O through the standard streams (stdout or stderr), and had complete control over how error conditions would be handled. Not surprisingly, the above assumptions drove a set of implementation decisions for the library, namely:

• All I/O would be pushed through the standard stream library functions like printf and fprintf
• Memory for internal data structures would be allocated and released through calls to the standard library functions calloc and free
• Error conditions would result in PDQ causing the model execution to stop with an explicit call to exit().

These aren't usual decisions for a C API. With the arrival of PDQ 2.0, we introduced foreign interfaces to programming environments (PERL, Python and R) that allowed PDQ to be called from those environments. All these new foreign interfaces were built and released using the SWIG interface building tool, which allows us to build these interfaces with absolutely no modification to the underlying PDQ code, a major benefit when you've got a mature, debugged API that you really want to remain that way.
For the most part this arrangement worked pretty well, at least for those environments where it was natural to write and execute PDQ models like standalone C programs (you can also read this as Perl and Python). When it came to R, however, our early implementation decisions weren't such a great fit for how R is commonly used, which is as an interactive environment, similar to programs like Mathematica, Maple, and Matlab. Like these other environments, R users do most of their interaction with a REPL (Read-Execute-Print Loop), usually wrapped in either a full-fledged GUI interface or a terminal-like interface called the console. It turns out that most of PDQ's implementation decisions could (and do) interfere with using R interactively. In particular:

• Calling the exit() function results in the entire R environment exiting – not a good feature for an interactive environment.
• Writing directly to stdout and stderr using fprintf bypasses R's own internal I/O mechanisms and prevents internal I/O functions (like the sink() command) from working properly.
• Using the calloc() and free() functions interferes with R's own internal memory management mechanisms and would prove to be a major impediment for any Windows version of the interface.

Not only do these severely degrade the interactive experience for R users, their use also gets flagged by R's extension-building mechanism when it does a consistency check. And not passing that check would prove a major impediment to getting PDQ's R interface accepted on CRAN (Comprehensive R Archive Network). Luckily, none of the fixes for these issues are particularly hard to implement. Most are either fairly simple substitutions of R API calls for C library routines and/or localized changes to the PDQ library. And, while all of this does potentially create a risk of introducing bugs in the PDQ library, the reward for taking that risk is a stable R interface that can eventually be submitted to CRAN.
A version of the PDQ library can be easily built under Windows™ using the Rtools utilities.

## Monday, November 12, 2012

### PDQ 6.0 is On Its Way

PDQ (Pretty Damn Quick) version 6.0.β is in the QA pipeline. Although this is a major release, cosmetically, things won't look any different when it comes to writing PDQ models. All the big changes have taken place under the hood in order to make PDQ more consistent with the R statistical environment.

R version 2.15.2 (2012-10-26) -- "Trick or Treat"
Copyright (C) 2012 The R Foundation for Statistical Computing
ISBN 3-900051-07-0
Platform: i386-apple-darwin9.8.0/i386 (32-bit)

> library(pdq)
> source("/Users/njg/PDQ/Test Suites/R-Test/mm1.r")
***************************************
****** Pretty Damn Quick REPORT *******
***************************************
*** of : Thu Nov 8 17:42:48 2012 ***
*** for: M/M/1 Test ***
*** Ver: PDQ Analyzer 6.0b 041112 ***
***************************************
***************************************
...

The main trick is that the Perl and Python versions of PDQ will remain entirely unchanged while at the same time invisibly incorporating significant changes to accommodate R.

## Friday, November 9, 2012

### Guerrilla Training in 2013

The preliminary Guerrilla CaP training schedule for 2013 has been posted. Book early, book often.

## Tuesday, November 6, 2012

### Hotsos 2013: Superlinear Scalability

As readers of this blog know, the Universal Scalability Law (USL) is a framework for quantifying performance measurements and extrapolating load-test data. When the USL is applied as a statistical regression model, its two parameters, contention (α) and coherency (β), numerically indicate the degree of sublinear scalability in the data, i.e., how much linear scaling you're losing due to sharing and consistency overheads.
Some examples of USL scalability analysis applied to databases include:

More recently, it was brought to my attention that the USL fails when it comes to modeling superlinear performance (e.g., see this Comments section). Superlinear scalability means you get more throughput than the available capacity would be expected to support. It's even discussed on the Wikipedia (so it must be true, right?). Nice stuff, if you can get it. But it also smacks of an effect like perpetual motion.

Every so often, you see a news report about someone discovering (again) how to beat the law of conservation of energy. They will swear up and down that it works, and it will be accompanied by a contraption that proves it works. Seeing is believing, after all. The hard part is not whether to believe their claim, it's debugging their contraption to find the mistake that has led them to the wrong conclusion.

Similarly with superlinearity. Some data are just plain spurious. In other cases, however, certain superlinear measurements do appear to be correct, in that they are repeatable and not easily explained away. In that case, it was assumed that the USL needed to be corrected to accommodate superlinearity by introducing a third modeling parameter. This is bad news for many reasons, but primarily because it would weaken the universality of the universal scalability law.

To my great surprise, however, I eventually discovered that the USL can accommodate superlinear data without any modification to the equation. As an unexpected benefit, the USL also warns you that you're modeling an unphysical effect: like a perpetual-motion detector. A corollary of this new analysis is the existence of a payback penalty for incurring superlinear scalability. You can think of this as a mathematical statement of the old adage: if it looks too good to be true, it probably is. I'll demonstrate this remarkable result with examples in my Hotsos presentation.
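For concreteness, the USL referred to throughout is commonly written as C(N) = N / (1 + α(N - 1) + βN(N - 1)). The sketch below (mine, with purely illustrative parameter values, not fitted to any dataset) shows the sublinear rollover that positive α and β produce:

```python
def usl_capacity(N, alpha, beta):
    """Relative capacity C(N) under the Universal Scalability Law.

    alpha: contention (serialization) coefficient
    beta:  coherency (crosstalk) coefficient
    """
    return N / (1 + alpha * (N - 1) + beta * N * (N - 1))

# With alpha = beta = 0 the scaling is perfectly linear.
print(usl_capacity(16, 0.0, 0.0))  # 16.0

# Positive alpha and beta give the characteristic sublinear rollover:
# capacity peaks and then retrogrades at high N.
for N in (1, 8, 32, 128):
    print(N, round(usl_capacity(N, 0.05, 0.001), 2))
```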
http://www.thespectrumofriemannium.com/tag/classical-statistical-mechanics/
## LOG#156. Superstatistics (I).

This post is the first of three dedicated to some of my followers: those readers from Mexico (a nice country, despite the issues and particularities it has, like the one I live in…) ;). Why? Well… Firstly, they have proved … Continue reading

## LOG#146. Path integral (I).

My next thematic thread will cover the Feynman path integral approach to Quantum Mechanics! The standard formulation of Quantum Mechanics is well known. It was built and created by Schrödinger, Heisenberg and Dirac, plus many others, between 1925-1931. Later, it … Continue reading
https://newproxylists.com/tag/zeta/
## Less basic applications of zeta regularization

As we all know, zeta regularization is used in quantum field theory and in computations regarding the Casimir effect. Are there less fundamental applications of this regularization? By "less fundamental" I mean: it appears "naturally" in more than one artificially or purely mathematically constructed scenario. Thank you!

## nt.number theory – Does this series, linked to the Hasse/Ser series for $\zeta(s)$, converge for all $s \in \mathbb{C}$?

I asked this question on Math Stack Exchange, but it didn't get traction. Always curious to know the answer. Numerical evidence suggests that:

$$\lim_{N \to +\infty} \sum_{n=1}^{N} \frac{1}{n} \sum_{k=0}^{n} (-1)^k \binom{n}{k} \frac{1}{(k+1)^{s}} = s$$

or, equivalently,

$$\lim_{N \to +\infty} H(N) + \sum_{n=1}^{N} \left( \frac{1}{n} \sum_{k=1}^{n} (-1)^k \binom{n}{k} \frac{1}{(k+1)^{s}} \right) = s$$

with $H(N)$ the $N$-th harmonic number. Convergence is quite slow, but clearly faster for negative $s$. In addition, calculations for non-integer values of $s$ require high-precision settings (I used Maple, Pari/GP and ARB). However, according to Mathematica the series diverges by the "harmonic series test", although for integer $s$ it agrees on convergence. Does this series converge for all $s \in \mathbb{C}$?
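For the special case $s = 1$, the inner sum is the Euler forward difference $\sum_{k=0}^{n} (-1)^k \binom{n}{k} \frac{1}{k+1} = \frac{1}{n+1}$, so the claimed limit can be checked exactly with rational arithmetic. A quick sketch (my addition, not part of the original question):

```python
from fractions import Fraction
from math import comb

def partial_sum(N, s=1):
    """Exact partial sum  sum_{n=1}^N (1/n) * sum_{k=0}^n (-1)^k C(n,k) / (k+1)^s
    for integer s, using rational arithmetic (no floating-point cancellation)."""
    total = Fraction(0)
    for n in range(1, N + 1):
        inner = sum(Fraction((-1) ** k * comb(n, k), (k + 1) ** s)
                    for k in range(n + 1))
        total += Fraction(1, n) * inner
    return total

# For s = 1 the inner sum equals 1/(n+1), so the outer sum telescopes
# to N/(N+1), which indeed tends to 1 = s.
print(partial_sum(50))  # 50/51
```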
Some numerical results below:

```
s=0.5
0.497702121, N = 100
0.499804053, N = 1000
0.499905919, N = 2000

s=-3.1415926535897932385
-3.14160222, N = 100
-3.14159284, N = 1000
-3.14159272, N = 2000

s=2.3-2.1i
2.45310498 - 1.94063637i, N = 100
2.33501943 - 2.09308517i, N = 1000
2.31996958 - 2.09923503i, N = 2000
```

## reference request – Double sums for $\zeta(3)$ and $\zeta(5)$

I found the following double-sum representations for $\zeta(3)$ and $\zeta(5)$:

$$\zeta(3) = \frac{1}{2} \sum_{i,j \geq 1} \frac{\beta(i,j)}{ij}$$

$$\zeta(5) = \frac{1}{4} \sum_{i,j \geq 1} \frac{H_i H_j \, \beta(i,j)}{ij}$$

where $\beta(\cdot,\cdot)$ denotes the beta function and $H_i$ denotes the $i$-th harmonic number. Are these results known in the literature? If yes, please provide some references/proofs for the same.

## Asymptotics of the Hurwitz zeta function

Can anyone please help me with a reference for the asymptotics (or just upper bounds) of the Hurwitz zeta function $\zeta(s,z)$ as $|t| \rightarrow \infty$, with $\operatorname{Re}(z) > 0$, $s = \sigma + it$ and $\sigma < 0$? I only found bounds for real $z$.

## nt.number theory – Doubt about the proof of the irrationality of $\zeta(3)$

I am reading this article by Henri Cohen on Apéry's proof of the irrationality of $\zeta(3)$, but I don't really follow the details of "THEOREM 1". My first doubt concerns the relation $a_n \sim A \alpha^n n^{-3/2}$. I know that if $a_n$ satisfied the relation $a_n - 34 a_{n-1} + a_{n-2} = 0$, then, since its characteristic polynomial is $x^2 - 34x + 1$ and $\alpha$ is one of its roots, denoting by $\bar{\alpha}$ the second root we would have $a_n = A_1 \alpha^n + A_2 \bar{\alpha}^n$. Then, since $0 < \bar{\alpha} < 1$, we get $a_n / \alpha^n \longrightarrow A_1$. However, the actual relation for $a_n$ is

$$a_n - (34 - 51 n^{-1} + 27 n^{-2} - 5 n^{-3}) \, a_{n-1} + (n-1)^3 n^{-3} \, a_{n-2} = 0$$

and I don't know how to deal with the additional terms.
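As a numerical sanity check on the recurrence just quoted (a sketch I added; the initial values $a_0 = 0$, $a_1 = 6$, $b_0 = 1$, $b_1 = 5$ are the standard Apéry ones and are not stated in the question), the ratio $a_n/b_n$ converges very rapidly to $\zeta(3) \approx 1.2020569$:

```python
from fractions import Fraction

def apery_ratio(n_max):
    """Run the recurrence n^3 u_n = (34n^3 - 51n^2 + 27n - 5) u_{n-1} - (n-1)^3 u_{n-2}
    for the two standard solutions and return a_n / b_n exactly."""
    a_prev, a = Fraction(0), Fraction(6)   # a_0, a_1
    b_prev, b = Fraction(1), Fraction(5)   # b_0, b_1 (the Apery numbers 1, 5, 73, ...)
    for n in range(2, n_max + 1):
        c = 34 * n**3 - 51 * n**2 + 27 * n - 5
        a_prev, a = a, (c * a - (n - 1) ** 3 * a_prev) / n**3
        b_prev, b = b, (c * b - (n - 1) ** 3 * b_prev) / n**3
    return a / b

print(float(apery_ratio(10)))  # ~1.2020569031595942..., i.e. zeta(3) to double precision
```

The error shrinks like $\alpha^{-2n}$ with $\alpha = (1+\sqrt{2})^4 \approx 33.97$, which is why so few terms suffice.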
Also, how does one get the additional $n^{-3/2}$ term? Second, why does this relation imply that $\zeta(3) - a_n/b_n = O(\alpha^{-2n})$? After that, it remains to show, using the prime number theorem, that $\log d_n \sim n$, where $d_n = \operatorname{lcm}(1, 2, \cdots, n)$. I managed to prove that

$$\frac{\log d_n}{n} \leq \pi(n) \, \frac{\log n}{n}$$

but I am unable to prove that $\log d_n / n$ converges to $1$. Finally, I don't see how the last result holds, namely that for every $\varepsilon > 0$ we have

$$\zeta(3) - \frac{p_n}{q_n} = O(q_n^{-r-\varepsilon}).$$

I'm not really good at asymptotic behavior and big-O notation, so I would really appreciate it if someone could respond with rigorous and detailed explanations. Thank you so much.

## limits and convergence – Is this iterated exponential $(1-\frac{1}{2^s})^{(1-\frac{1}{3^s})^{\cdots^{(1-\frac{1}{p^s})}}}$ also linked to the Riemann zeta function for $\operatorname{Re}(s) > 0$?

I asked this question a month ago on SE, but got no answer.
I would like help from MO researchers. I am interested in iterated exponentials of the form $(z_1)^{(z_2)^{\cdots^{(z_k)}}}$, where $z_1, z_2, \ldots, z_k$ are distinct real exponents; this type of expression has been studied by many authors, such as Barrow ("Infinite exponentials", Monthly 28 (1921), pp. 141-143). For the Euler product, which is linked to the Riemann zeta function, we have the well-known identity

$$\prod_{p \in \mathbb{P}} (1 - p^{-s}) = \frac{1}{\zeta(s)};$$

for $s = 2$ this product gives $\frac{6}{\pi^2}$. Now I am thinking of turning this product into an iterated exponential as follows:

$$S_p = \left(1-\frac{1}{2^s}\right)^{\left(1-\frac{1}{3^s}\right)^{\cdots^{\left(1-\frac{1}{p^s}\right)}}}.$$

I want to know whether this iterated exponential is linked to the Riemann zeta function, as the Euler product is, for a sufficiently large prime $p$. Taking logarithms, I have the following identity:

$$S_p = \exp\left( (1 - 2^{-s}) \sum_{p \geq 3} (1 - p^{-s}) \right);$$

from this identity, I don't know the relationship between the sum over $p \geq 3$ in the exponent and the Riemann zeta function. One observation I have is that this quantity, for $s = 1$ and $s = 2$, is bounded by $\frac{1}{\zeta(s)}$-type values as $p \to \infty$: for $s = 1$ we have $S_p \leq \frac{1}{\zeta(2)}$, and for $s = 2$ we have $S_p \leq \frac{1}{\zeta^2(2)}$.

## nt.number theory – Can this quantity be expressed as $x \cdot \zeta(k) + y$, with $x, y \in \mathbb{Q}$?

For each natural number $a$, consider the sequence $l(a) := \left( \frac{\gcd(a,b)}{a+b} \right)_{b \in \mathbb{N}}$.
Then I calculated, for $k \geq 2$, $k \in \mathbb{R}$, and $p$ prime:

$$|l(1)|_k^k = \zeta(k) - 1$$

$$|l(p)|_k^k = \frac{2 p^k - 1}{p^k} \, \zeta(k) - \left( 1 + \sum_{j=1}^{p-1} \frac{1}{j^k} \right)$$

I also calculated, for $n = 4$:

$$|l(4)|_k^k = \zeta(k) \left( 3 - \frac{1}{4^k} - \frac{2}{2^k} + \frac{1}{2^{2k}} \right) - 3 - \frac{1}{3^k}$$

My question is whether $|l(a)|_k^k = x \, \zeta(k) + y$ with $x, y \in \mathbb{Q}$ in general. Also,

$$\langle l(1), l(2) \rangle = \sum_{k=1}^{\infty} \frac{3k+1}{2k(k+1)(2k+1)}$$

Is this last quantity equal to $\log(2)$?

## riemann zeta function – A counterexample to the conjecture below?

This question is a clarification of my recent closed question; an example satisfying this conjecture for $n = 180$ is mentioned here, and the smallest integer $k$ satisfying this conjecture is for $n = 3$, namely $60{,}480$, as seen in that example.

Conjecture: Let $k$ and $\alpha$ be coprime positive integers with $\alpha < k$ such that $4^{-n} \zeta(2n) = \frac{\pi^{2n} \alpha}{k}$. Then $k$ is always divisible by each of the integers from $1$ to $9$, for every $n > 2$.

Now my question: is there a counterexample to this conjecture? And if it is true, how can I prove it?

## analytic number theory – A contour integral involving the zeta function

I am trying to calculate the contour integral

$$\frac{1}{2 \pi i} \int_{c - i\infty}^{c + i\infty} \zeta^2(\omega) \, \frac{8^{\omega}}{\omega} \, d\omega$$

where $c > 1$ and $\zeta(s)$ is the Riemann zeta function. Using Perron's formula and defining $D(x) = \sum_{n \leq x} \sigma_0(n)$, where $\sigma_0$ is the usual divisor-counting function, we can show that

$$D(x) = \frac{1}{2 \pi i} \int_{c - i\infty}^{c + i\infty} \zeta^2(\omega) \, \frac{x^{\omega}}{\omega} \, d\omega.$$

So for this purpose, we could just calculate $D(8)$ and call it a day. However, for my own purposes, I want to evaluate $D(x)$ via the integral above instead.
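For orientation, the elementary side of this identity is easy to tabulate directly (my sketch; note that when $x$ is an integer, the Perron integral conventionally converges to the midpoint of the jump, $D(x) - \sigma_0(x)/2$, here $18$ rather than $20$):

```python
def sigma0(n):
    """sigma_0(n): the number of divisors of n (brute force, fine for small n)."""
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def D(x):
    """Divisor summatory function D(x) = sum_{n <= x} sigma_0(n)."""
    return sum(sigma0(n) for n in range(1, x + 1))

print(D(8))  # 20  (= 1+2+2+3+2+4+2+4)
```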
That is why I state the problem for a specific case, $x = 8$, for example. Considering a modified Bromwich contour which avoids the branch cut and $z = 0$ — call it $\mathcal{B}$ — we can apply the Cauchy residue theorem:

$$\oint_{\mathcal{B}} \zeta^2(\omega) \, \frac{8^{\omega}}{\omega} \, d\omega = 2 \pi i \operatorname{Res}\left( \zeta^2(\omega) \, \frac{8^{\omega}}{\omega};\ 1 \right) = 8(-1 + 2\gamma + \ln 8)$$

where $\gamma$ is the Euler-Mascheroni constant. I obtained this by expanding $\zeta^2(\omega) \frac{8^{\omega}}{\omega}$ in its Laurent series. To obtain the desired integral, one would then need to take the parts of the contour which are not the vertical line from $c - iR$ to $c + iR$, subtract them from the residue value obtained, and then take the limit as $R \to \infty$ and $r \to 0$, where $C_r$ is the circle of radius $r$ on which $\mathcal{B}$ dodges the origin. Feel free to modify this contour in any shape or form, or to consider a positive integer value of $x$ other than $8$.

## Bounds for $\zeta(0.5 + it)$

When I tried to obtain bounds for $\zeta(0.5 + it)$, using certain transformations of the Gamma function with the function $f(x) = \exp(-nx)$ over the whole range $(0, +\infty)$, for $\operatorname{Re}(s) = \frac{1}{2}$ and $t > 0$ I arrived at the following bound for $\zeta(0.5 + it)$. For $t \geq 1.22$:

$$|\zeta(0.5 + it)| \leq 0.5 \, \frac{|\Gamma(0.5 + it)|}{|\Gamma(-0.5 + it)|} \tag{1}$$

Regarding bounds for $\Gamma(s)$: $|\Gamma(s)|$ is monotonically increasing for $|t| \geq 5/4$ with respect to the real part of $s$, and this fails for $|t| \leq 1$, as shown in the article "On the horizontal monotonicity of $|\Gamma(s)|$" by Gopala Krishna Srinivasan and P.
Zvengrowski. $|\Gamma(s)|$ is given in the introduction of that paper, for $s = \sigma + it$, by the formula

$$|\Gamma(\sigma + it)| = \lambda \, \frac{\Gamma(1 + \sigma)}{\sqrt{\sigma^2 + t^2}} \sqrt{\frac{2 \pi t}{\exp(\pi t) - \exp(-\pi t)}} \tag{2}$$

It seems that the right-hand side of this formula is linked to the hyperbolic sine function (since $\exp(\pi t) - \exp(-\pi t) = 2\sinh(\pi t)$). Now, when I tried to plug this formula into the RHS of my bound, it gave me a complicated form with no simple simplification. My question is: how can I simplify the RHS of $(1)$, if it is true?
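One possible simplification (my suggestion, not from the thread): the recurrence $\Gamma(s+1) = s\,\Gamma(s)$ gives $\Gamma(0.5 + it) = (-0.5 + it)\,\Gamma(-0.5 + it)$, so the ratio in $(1)$ collapses to $|{-0.5 + it}| = \sqrt{t^2 + 1/4}$, making the bound simply $|\zeta(0.5 + it)| \leq 0.5\sqrt{t^2 + 1/4}$. This can be checked numerically using the known closed form $|\Gamma(1/2 + it)|^2 = \pi / \cosh(\pi t)$:

```python
import math

def gamma_ratio(t):
    """|Gamma(0.5 + it)| / |Gamma(-0.5 + it)|, computed from the closed form
    |Gamma(1/2 + it)|^2 = pi / cosh(pi*t) plus the recurrence Gamma(s+1) = s*Gamma(s)."""
    g_half = math.sqrt(math.pi / math.cosh(math.pi * t))   # |Gamma(1/2 + it)|
    g_minus_half = g_half / math.sqrt(0.25 + t * t)        # |Gamma(-1/2 + it)|
    return g_half / g_minus_half

# The ratio agrees with sqrt(t^2 + 1/4) for every t:
for t in (1.22, 5.0, 20.0):
    assert math.isclose(gamma_ratio(t), math.sqrt(t * t + 0.25))
```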
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=151&t=44640
## calculating A in Arrhenius equation

Arrhenius Equation: $\ln k = - \frac{E_{a}}{RT} + \ln A$

hanna_maillard3B
Posts: 61
Joined: Fri Sep 28, 2018 12:23 am

### calculating A in Arrhenius equation

How do we calculate A in the Arrhenius equation? I understand that it's the frequency of collision, but how do we know what that number is?

Joonsoo Kim 4L
Posts: 61
Joined: Fri Sep 28, 2018 12:29 am

### Re: calculating A in Arrhenius equation

To my understanding, A depends on the steric factor (fraction of collisions with the correct orientation) and the collision frequency, but I am pretty sure that we won't have to calculate A and it will be given to us.

taywebb
Posts: 60
Joined: Fri Sep 28, 2018 12:15 am

### Re: calculating A in Arrhenius equation

From the practice problems and textbook, I think it is safe to assume they will give us A in a chart or just in the question of the problem.
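If A is given, the equation is a simple plug-in; and if k, Ea, and T are known instead, A can be backed out by rearranging. A small sketch (the numeric values below are hypothetical, chosen only to show the round trip):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def rate_constant(A, Ea, T):
    """Arrhenius equation: k = A * exp(-Ea/(R*T)), i.e. ln k = -Ea/(R*T) + ln A."""
    return A * math.exp(-Ea / (R * T))

def pre_exponential(k, Ea, T):
    """Back out A from a measured rate constant: A = k * exp(+Ea/(R*T))."""
    return k * math.exp(Ea / (R * T))

# Hypothetical numbers, purely illustrative:
A, Ea, T = 1.0e13, 75_000.0, 298.15   # 1/s, J/mol, K
k = rate_constant(A, Ea, T)
assert math.isclose(pre_exponential(k, Ea, T), A)
```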
https://www.tutorialspoint.com/selection-sort-in-python-program
Selection Sort in Python Program

In this article, we will learn about selection sort and its implementation in Python 3.x or earlier.

In the selection sort algorithm, an array is sorted by repeatedly finding the minimum element in the unsorted part and moving it to the end of the sorted part. Two subarrays are maintained during the execution of selection sort on a given array:

• The subarray which is already sorted.
• The subarray which is unsorted.

During every iteration of selection sort, the minimum element of the unsorted subarray is picked and moved to the end of the sorted subarray.

Now let's see the implementation of the algorithm −

Example

```python
A = ['t', 'u', 't', 'o', 'r', 'i', 'a', 'l']

for i in range(len(A)):
    min_ = i
    for j in range(i + 1, len(A)):
        if A[min_] > A[j]:
            min_ = j
    # swap the found minimum into position i
    A[i], A[min_] = A[min_], A[i]

# main
for i in range(len(A)):
    print(A[i])
```

Output

```
a
i
l
o
r
t
t
u
```

Here we receive the output of the algorithm in ascending order. min_ is the index of the current minimum candidate, which is compared with all remaining values. The analysis parameters of the algorithm are listed below −

Time Complexity − O(n^2)

Auxiliary Space − O(1)

Here all the variables are declared in the global frame.

Conclusion − Selection sort sorts the array in place, using O(n^2) comparisons and O(1) auxiliary space.
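To make the O(n^2) claim concrete, here is a small sketch (my addition, not part of the original tutorial) that wraps the same algorithm in a function and counts comparisons; selection sort performs exactly n(n-1)/2 comparisons regardless of the input order:

```python
def selection_sort(A):
    """In-place selection sort; returns the number of comparisons performed."""
    comparisons = 0
    for i in range(len(A)):
        min_ = i
        for j in range(i + 1, len(A)):
            comparisons += 1
            if A[min_] > A[j]:
                min_ = j
        A[i], A[min_] = A[min_], A[i]
    return comparisons

data = [5, 2, 9, 1, 7]
print(selection_sort(data), data)  # 10 [1, 2, 5, 7, 9]  (10 = 5*4/2)
```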
https://lw2.issarice.com/posts/zJZvoiwydJ5zvzTHK/the-allais-paradox
# The Allais Paradox

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-01-19T03:05:32.000Z · score: 27 (28 votes) · LW · GW · Legacy · 135 comments

Choose between the following two options:

1A. $24,000, with certainty.

1B. 33/34 chance of winning $27,000, and 1/34 chance of winning nothing.

Which seems more intuitively appealing? And which one would you choose in real life?

Now which of these two options would you intuitively prefer, and which would you choose in real life?

2A. 34% chance of winning $24,000, and 66% chance of winning nothing.

2B. 33% chance of winning $27,000, and 67% chance of winning nothing.

The Allais Paradox - as Allais called it, though it's not really a paradox - was one of the first conflicts between decision theory and human reasoning to be experimentally exposed, in 1953. I've modified it slightly for ease of math, but the essential problem is the same: most people prefer 1A > 1B, and most people prefer 2B > 2A. Indeed, in within-subject comparisons, a majority of subjects express both preferences simultaneously.

This is a problem because the 2s are equal to a one-third chance of playing the 1s. That is, 2A is equivalent to playing gamble 1A with 34% probability, and 2B is equivalent to playing 1B with 34% probability.

Among the axioms used to prove that "consistent" decisionmakers can be viewed as maximizing expected utility is the Axiom of Independence: if X is strictly preferred to Y, then a probability P of X and (1 - P) of Z should be strictly preferred to P chance of Y and (1 - P) chance of Z.

All the axioms are consequences, as well as antecedents, of a consistent utility function. So it must be possible to prove that the experimental subjects above can't have a consistent utility function over outcomes.
And indeed, you can't simultaneously have:

• U($24,000) > 33/34 U($27,000) + 1/34 U($0)
• 0.34 U($24,000) + 0.66 U($0) < 0.33 U($27,000) + 0.67 U($0)

These two equations are algebraically inconsistent, regardless of U, so the Allais Paradox has nothing to do with the diminishing marginal utility of money.

Maurice Allais initially defended the revealed preferences of the experimental subjects - he saw the experiment as exposing a flaw in the conventional ideas of utility, rather than exposing a flaw in human psychology. This was 1953, after all, and the heuristics-and-biases movement wouldn't really get started for another two decades. Allais thought his experiment just showed that the Axiom of Independence clearly wasn't a good idea in real life. (How naive, how foolish, how simplistic is Bayesian decision theory...) Surely, the certainty of having $24,000 should count for something. You can feel the difference, right? The solid reassurance?

(I'm starting to think of this as "naive philosophical realism" - supposing that our intuitions directly expose truths about which strategies are wiser, as though it was a directly perceived fact that "1A is superior to 1B". Intuitions directly expose truths about human cognitive functions, and only indirectly expose (after we reflect on the cognitive functions themselves) truths about rationality.)

"But come now," you say, "is it really such a terrible thing, to depart from Bayesian beauty?" Okay, so the subjects didn't follow the neat little "independence axiom" espoused by the likes of von Neumann and Morgenstern. Yet who says that things must be neat and tidy? Why fret about elegance, if it makes us take risks we don't want? Expected utility tells us that we ought to assign some kind of number to an outcome, and then multiply that value by the outcome's probability, add them up, etc. Okay, but why do we have to do that? Why not make up more palatable rules instead?
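The algebraic inconsistency of the two inequalities can also be confirmed by brute force (a sketch of mine, not from the post): dividing the second inequality by 0.34 gives exactly the negation of the first, so no assignment of utilities satisfies both.

```python
import random

def both_preferences_hold(u24, u27, u0):
    """Check whether a utility assignment satisfies both revealed preferences:
    1A > 1B and 2B > 2A under expected utility."""
    prefers_1A = u24 > (33 / 34) * u27 + (1 / 34) * u0
    prefers_2B = 0.34 * u24 + 0.66 * u0 < 0.33 * u27 + 0.67 * u0
    return prefers_1A and prefers_2B

random.seed(0)
hits = sum(
    both_preferences_hold(random.uniform(-100, 100),
                          random.uniform(-100, 100),
                          random.uniform(-100, 100))
    for _ in range(100_000)
)
print(hits)  # 0: no sampled utility assignment satisfies both
```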
There is always a price for leaving the Bayesian Way. That's what coherence and uniqueness theorems are all about. In this case, if an agent prefers 1A > 1B, and 2B > 2A, it introduces a form of preference reversal - a dynamic inconsistency in the agent's planning. You become a money pump.

Suppose that at 12:00PM I roll a hundred-sided die. If the die shows a number greater than 34, the game terminates. Otherwise, at 12:05PM I consult a switch with two settings, A and B. If the setting is A, I pay you $24,000. If the setting is B, I roll a 34-sided die and pay you $27,000 unless the die shows "34", in which case I pay you nothing.

Let's say you prefer 1A over 1B, and 2B over 2A, and you would pay a single penny to indulge each preference. The switch starts in state A. Before 12:00PM, you pay me a penny to throw the switch to B. The die comes up 12. After 12:00PM and before 12:05PM, you pay me a penny to throw the switch to A.

I have taken your two cents on the subject.

If you indulge your intuitions, and dismiss mere elegance as a pointless obsession with neatness, then don't be surprised when your pennies get taken from you...

(I think the same failure to proportionally devalue the emotional impact of small probabilities is responsible for the lottery.)

Allais, M. (1953). Le comportement de l'homme rationnel devant le risque: Critique des postulats et axiomes de l'école américaine. Econometrica, 21, 503-46.

Kahneman, D. and Tversky, A. (1979). Prospect Theory: An Analysis of Decision Under Risk. Econometrica, 47, 263-92.

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Doug_S. · 2008-01-19T03:25:27.000Z · score: 26 (30 votes) · LW · GW

For $24,000, you can have my two cents.
;)

comment by RobinHanson · 2008-01-19T03:37:26.000Z · score: 8 (10 votes) · LW · GW

Yes, philosophers, and others, do often too easily accept the advice of strong intuitions, forgetting that strong intuitions often conflict in non-obvious ways.

comment by Pablo_Stafforini · 2012-02-27T05:17:40.568Z · score: 2 (2 votes) · LW · GW

Yes, exactly. For instance, many philosophers invoke Parfit's "repugnant conclusion" as a decisive objection to certain forms of consequentialism, overlooking the fact that all moral theories, when applied to scenarios involving different numbers of people, have implications that are arguably similarly repugnant.

comment by Joe_Petviashvili · 2008-01-19T03:39:10.000Z · score: 10 (12 votes) · LW · GW

The idea is that the $ amount equals your utility, while in reality the history of how you got this amount also matters (regret, emotions, etc.). There's no paradox here, as your utility expressed in $ just doesn't match the utility of the subjects. As for the money pump: you just have a win-win situation - you earn money, and the subjects earn good feelings.

comment by Nick_Tarleton · 2008-01-19T03:45:46.000Z · score: 14 (15 votes) · LW · GW

If I knew the offer wouldn't be repeated, I might take 1A because I'd really rather not have to explain to people how I lost $24,000 on a gamble.

comment by faul_sname · 2011-12-10T07:32:20.112Z · score: 2 (2 votes) · LW · GW

This was my thought exactly. If I was given the option to keep the rest private if I lost, 1A would be a distinctly preferable choice. If I had a 1/34 chance of having to explain how I "lost" $24,000 vs. an average loss of $2,200, I might well take choice 1B (at a later time in my life, when I could afford to lose $2,200, and had significant financial risk from being perceived as a risk-taker with money).

comment by Gunnar_Zarncke · 2014-12-16T08:46:12.108Z · score: 1 (1 votes) · LW · GW

I think these kinds of 'side channel' losses are what make your intuition value 1A > 1B.
In a way, the implicit assumptions in the offer are what cause the trouble. Naive subjects are naive only to pure math, not to real life.

comment by Nick_Tarleton · 2008-01-19T03:48:34.000Z · score: 21 (21 votes) · LW · GW

Actually, that makes me think of another explanation besides overreaction to small probabilities: if a person takes 1B and loses, they know they would have won if they'd chosen differently. If they take 2B and lose, they can tell themselves (and others) they probably would have lost anyway.

comment by ThisDan · 2012-12-17T01:15:10.013Z · score: 4 (4 votes) · LW · GW

Ok, that is exactly my line of thinking, and why I can't understand the broader point of this argument. Yes, I can see the statistical similarity that makes it "the same" - but the situation is totally different in that one offers "certain win or risk" and the other is "risk vs. risk" with a barely noticeable difference between them. So my decision on both questions goes like this:

1a > 1b, because even if I was offered MUCH less, I'd still likely take it, deciding that I'm not greedy: free money always feels good, but giving away free money (by trying to get a bit more) always feels foolish and greedy.

2b > 2a, because if the statistic played out over 100 times, the average person would think the two were of equal value - unless they logged the statistics to find the slight difference. Therefore, if it takes that much attention to feel the difference, it's easy to pretend they are the same risk, and one is 11.12% more money - which is a lot easier to notice without logging statistics.

I don't see how these decisions conflict with each other.

comment by [deleted] · 2015-03-16T17:26:38.909Z · score: 0 (0 votes) · LW · GW

I seem to agree with you, but I think how you arrived at 11.12% is wrong. Did you divide 3000/27000? You can't do that, since you won't have 27000 unless you get those 3000 dollars extra. Shouldn't you do 3000/24000 = 12.5%?
comment by Caledonian2 · 2008-01-19T03:53:17.000Z · score: 4 (6 votes) · LW · GW

A bird in the hand... Certainty is a form of utility, too.

comment by buybuydandavis · 2011-10-28T00:33:48.921Z · score: 1 (1 votes) · LW · GW

That goes hand in hand with his comments about complexity. The straightforward expected utility analysis doesn't include the cost of the analysis itself, nor the increased cost to all subsequent analyses from the uncertainty. We have limited computational power for executive functions. No doubt we have utility built into us to conserve those limited resources. Most people hate uncertainty and thinking, and they hate it much more than we do. I doubt I'm the only one here who has noticed that.

comment by Bugmaster · 2011-10-28T01:23:06.325Z · score: -1 (1 votes) · LW · GW

For me, the choice between 1A and 1B would depend on how badly I needed the money, which is why I disagree with Eliezer when he says that "marginal utility of the money doesn't count". For example, let's say I needed $20,000 in order to keep a roof over my head, food on my plate, and to generally survive. In this case, my penalty for failure is quite high, and IMO it would be more rational for me to take 1A. Sure, I could win more money if I picked 1B, but I could also die in that case. Thus, my utility in case of 1B would be something like 33/34 U($27,000, alive) + 1/34 U($0, dead), and U($anything, dead) is a very negative number. On the other hand, if I was a billionaire who makes $20,000 per second just by existing, then I would either pick 1B, or refuse to play the game altogether, because my time could be better spent on other things.

comment by Vaniver · 2011-10-28T01:57:07.847Z · score: 7 (7 votes) · LW · GW

Reread the post; that's not the paradox. The paradox is that, if you need the 20k to survive, then you should prefer 2A to 2B, because the extra 3k 33% of the time doesn't outweigh an additional 1% chance of dying.
If someone prefers A in both cases, or B in both cases, they can have a consistent utility function. When someone prefers A in one case, and B in the other, then they cannot have a consistent utility function.

comment by Bugmaster · 2011-10-28T02:17:54.892Z · score: 0 (0 votes) · LW · GW

Reread the post; that's not the paradox.

Right, I didn't mean to imply that it was. But Eliezer seemed to be saying that picking 1A is irrational in general, in addition to the paradox, which is the notion that I was disputing. It's possible that I misinterpreted him, however.

comment by Vaniver · 2011-10-28T04:26:49.872Z · score: 3 (3 votes) · LW · GW

He makes it clearer in comments. What Caledonian is discussing is the certainty effect - essentially, having a term in your utility function for not having to multiply probabilities to get an expected value. That's different from risk aversion, which is just a statement that the utility function is concave.

comment by Nainodelac_and_Tarleton_Nick · 2008-01-19T04:13:09.000Z · score: 5 (9 votes) · LW · GW

Risk and cost of capital introduce very strange twists on expected utility. Assume that living has a greater expected utility to me than any monetary value. If I need a $20,000 operation within the next 3 hours to live, I have no other funding, and you make me offer 1, it is completely rational and unbiased to take option 1A. It is the difference between a 100% chance of living and a 97% chance of living. If I have $1,000,000,000 in the bank and command of legal or otherwise armed forces, I may just have you killed - for I would not tolerate such frivolous philosophizing.

comment by Z._M._Davis · 2008-01-19T04:29:50.000Z · score: 1 (3 votes) · LW · GW

I think defenses of the subjects' choices by recourse to nonmonetary values are missing the point. Anything can be rational with a sufficiently weird utility function. The question is, if subjects understood the decision theory behind the problem, would they still make the same choice?
After seeing a valid argument that your preferences make you a money pump, you certainly could persist in your original judgment, by insisting that your feelings make your first judgment the right one. But seriously: why?

comment by peco · 2008-01-19T04:47:48.000Z · score: 0 (0 votes) · LW · GW

Since people only make a finite number of decisions in their lifetime, couldn't their utility function specify every decision independently? (You could have a utility function that is normal except that it says that everything you hear being called 1A is preferable to 1B, and anything you hear being called 2B is preferable to 2A. If this contradicts your normal utility function, this rule is always more important. Even if 2B leads to death, you still choose 2B.) The utility function would be impossible to come up with in advance, but it exists.

comment by DonGeddis · 2008-01-19T05:00:39.000Z · score: 4 (6 votes) · LW · GW

My intuitions match the stated naive intuitions, but I reject your assertion that the pair of preferences is inconsistent with Bayesian probability theory. You really underestimate the utility of certainty. "Nainodelac and Tarleton Nick"'s example in these comments about the operation is a perfect counter. With a 33% vs. 34% chance, the impact on your life is about the same, so you just do the straightforward probability calculation for expected value and take the maximum. But when offered 100% of some positive outcome, vs. a probability of nothing, it seems perfectly rational to prefer the guarantee. Maximizing expected dollar winnings is not necessarily the same as maximizing utility. And you're right, the issue isn't decreasing returns. But the issue is the cost of risk. Your money pump doesn't convince me either. I'd be happy to pay the two cents, both times, and not regret the cost at the end, just as I don't regret paying for insurance even if I happen not to get sick.
comment by Roland5 · 2008-01-19T05:25:39.000Z · score: 2 (2 votes) · LW · GW Let's say you prefer 1A over 1B, and 2B over 2A, and you would pay a single penny to indulge each preference. The switch starts in state A. Before 12:00PM, you pay me a penny to throw the switch to B. I don't understand why I would pay you a penny to throw the switch before 12:00? comment by Constant2 · 2008-01-19T05:30:25.000Z · score: 1 (1 votes) · LW · GW Since I know myself, I know what I will do after midnight (pay to switch it to A), and so I resign myself to doing it immediately (i.e., leaving the switch at A) so as to save either one cent or two, depending on what happens. I will do this even if I share Don's intuition about certainty. Why pay before midnight to switch it to B if I know that after midnight I will pay to switch it back to A? *[if the first die comes up 1 to 34] comment by Thomas_B. · 2008-01-19T06:00:24.000Z · score: 0 (0 votes) · LW · GW I think I missed something on the algebraic inconsistency part... If there is some rational independent utility to certainty, the algebraic claims should be more like this:
• U($24,000) + U(Certainty) > 33/34 U($27,000) + 1/34 U($0)
• 0.34 U($24,000) + 0.66 U($0) < 0.33 U($27,000) + 0.67 U($0)
This seems consistent so long as U(Certainty) > 1/34 U($27,000). I'm not committed to the notion there is a rational independent value to certainty, I'm just not seeing how it can be dismissed with quick algebra. Maybe that wasn't your goal. Forgive me if this is my oversight. comment by anon9 · 2008-01-19T06:21:40.000Z · score: 0 (0 votes) · LW · GW This reminds me of the foolish decisions on "deal or no deal". People would fail to follow their own announced utility. comment by Z._M._Davis · 2008-01-19T06:32:50.000Z · score: 4 (4 votes) · LW · GW When we speak of an inherent utility of certainty, what do we mean by certainty?
An actual probability of unity, or, more reasonably, something which is merely very much certain, like probability .999? If the latter, then there should exist a function expressing the "utility bonus for certainty" as a function of how certain we are. It's not immediately obvious to me how such a function should behave. If probability 0.9999 is very much more preferable to probability 0.8999 than probability 0.5 is preferable to probability 0.4, then is 0.5 very much more preferable to 0.4 than 0.2 is to 0.1? comment by Dr._Science · 2008-01-19T06:52:41.000Z · score: 1 (1 votes) · LW · GW It's rational to take the certain outcome if gambling causes psychological stress. Notwithstanding that stress is intrinsically unpleasant, it increases your risk of peptic ulcers and stroke, which could easily cancel out the expected gain. comment by ricketson · 2012-01-15T19:37:24.686Z · score: 1 (1 votes) · LW · GW But such psychological stress arises from your perception of reality. If it is caused by an erroneous perception of reality, then the rational thing to do is correct your perception, not take the error for granted. If you are certain that you made the right decision, then you shouldn't feel stressed when you "lose". comment by John · 2008-01-19T07:08:52.000Z · score: -2 (4 votes) · LW · GW If you crunch the numbers differently, you can come to different conclusions. For example, if I choose 1B over 1A, I have a 1 in 34 chance of getting burned. If I choose 2B over 2A, my chance of getting burned is only 1 in 100. comment by TGGP4 · 2008-01-19T07:15:15.000Z · score: 2 (2 votes) · LW · GW James D. Miller has a proposal for Lottery Tickets that Usually Pay Off. Robin, were you thinking of a certain colleague of yours when you mentioned accepting intuition too readily?
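John's "getting burned" arithmetic above checks out if the two gambles in each choice are coupled to a single draw (a quick sketch; the coupling to one 100-ticket draw is my framing, not John's):

```python
# "Getting burned" = the case where picking B over A is what loses you
# the prize.
# Choice 1: 1B loses outright with probability 1/34.
p_burn_1 = 1 / 34
# Choice 2: imagine one 100-ticket draw where tickets 1-34 pay under 2A
# and tickets 1-33 pay under 2B; only ticket 34 punishes the 2B picker.
p_burn_2 = 0.34 - 0.33

print(round(1 / p_burn_1))  # 34  -> burned 1 time in 34
print(round(1 / p_burn_2))  # 100 -> burned 1 time in 100
```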
comment by tcpkac · 2008-01-19T08:36:22.000Z · score: 0 (2 votes) · LW · GW Risk aversion, and the degree to which it is felt, is a personality trait with high variance between individuals and over the lifespan. To ignore it in a utility calculation would be absurd. Maurice Allais should have listened to his homonym Alphonse Allais (no apparent relation), humorist and theoretician of the absurd, who famously remarked "La logique mène à tout à condition d'en sortir". Logic leads to everything, on condition it don't box you in. comment by Paul_Gowder · 2008-01-19T09:20:38.000Z · score: 2 (4 votes) · LW · GW I confess, the money pump thing sometimes strikes me as ... well... contrived. Yes, in theory, if one's preferences violate various rules of rationality (acyclicity being the easiest), one could conceivably be money-pumped. But, uh, it never actually happens in the real world. Our preferences, once they violate idealized axioms, lead to messes in highly unrealistic situations. Big deal. comment by GreedyAlgorithm · 2008-01-19T11:02:34.000Z · score: 1 (1 votes) · LW · GW I am intuitively certain that I'm being money-pumped all the time. And I'm very, very certain that transaction costs of many forms money-pump people left and right. comment by Ben_Jones · 2008-01-19T11:09:51.000Z · score: 0 (2 votes) · LW · GW As long as it was only one occasion, I wouldn't make the effort to cross the room for two pennies. If I'm playing the game just once, and I feel a one-off payment of 2p tends to zero, I'll play with you, sure. £1 for a lottery ticket crosses the threshold of palpability, even playing once. I can get a newspaper for a pound. Is this irrational? I hope not. comment by JulianMorrison · 2008-01-19T11:30:54.000Z · score: 7 (7 votes) · LW · GW When I made the (predictable, wrong) choice, I wasn't using probability at all. 
I was using intuitive rules of thumb like: "don't gamble", "treat small differences in probability as unimportant", and "if you have to gamble against similar odds, go for the larger win". How do you find time to use authentic probability math for all your chance-taking decisions? comment by ThisDan · 2012-12-17T01:48:30.271Z · score: 5 (5 votes) · LW · GW That's exactly how I felt too. "Don't gamble" is the key. 1a allowed me to indulge that even if I was boxed into being in the game. So in question 2 I want to follow "don't gamble" but both are gambling. Additionally, both gambles would feel like the same risk to most humans who didn't record statistics (other than subconscious and normal memory-affected observations) so could be cheaply rounded off to say they are the same. If they are "the same" but 1 pays more money... Oh, one more point: "easy come easy go". If you can lose 2 either way you won't feel like you ever had anything. However even before you pick 1a and they physically hand you the money, it's already yours (by virtue of the ability to choose 1a) until you choose 1b and introduce the probability that you won't be paid. I say already yours because if you are guaranteed the choice of 1a forever and unconditionally unless and until you choose 1b - that's no less "having money" than when you "have money" but it's in your pocket or in your wallet in the other room. It might not be your money anymore if you fling your wallet out the window hoping it will boomerang back (1b) but it was until you introduced that gamble rather than just choosing to clutch the wallet (1a). I feel like I must be missing the point or something because they seem so obviously right... comment by Paul_Crowley2 · 2008-01-19T12:06:31.000Z · score: 7 (8 votes) · LW · GW The large sums of money make a big difference here.
If it were for dollars, rather than thousands of dollars, I'd do what utility theory told me to do, and if that meant I missed out on $27 due to a very unlucky chance then so be it. But I don't think I could bring myself to do the same for life-changing amounts like those set out above; I would kick myself so hard if I took the very slightly riskier bet and didn't get the money. comment by Colin_Reid · 2008-01-19T12:14:32.000Z · score: 1 (1 votes) · LW · GW My experience of watching game shows such as 'Deal or No Deal' suggests that people do not ascribe a low positive utility to winning nothing or close to nothing - they actively fear it, as if it would make their life worse than before they were selected to appear on the show. It seems this fear is in some sense inversely proportional to the 'socially expected' probability of the bad event - so if the player is aware that very few players win less than £1 on the show, they start getting very uncomfortable if there is a high chance of this happening to them, because winning less than £1 is somehow embarrassing, and winning 1p is somehow significantly worse than winning say 50p. In contrast, on game shows where there's a 'double or nothing' option at the end, it is socially accepted that there's a high chance of winning nothing, so players seem to be much more sanguine about the gamble. I think the psychology of 'face' has a lot to answer for when it comes to such decisions. comment by Gray_Area · 2008-01-19T12:50:08.000Z · score: 14 (14 votes) · LW · GW People don't maximize expectations. Expectation-maximizing organisms -- if they ever existed -- died out long before rigid spines made of vertebrae came on the scene. The reason is simple, expectation maximization is not robust (outliers in the environment can cause large behavioral changes). This is as true now as it was before evolution invented intelligence and introspection. 
If people's behavior doesn't agree with the axiom system, the fault may not be with them; perhaps they know something the mathematician doesn't. Finally, the 'money pump' argument fails because you are changing the rules of the game. The original question was, I assume, asking whether you would play the game once, whereas you would presumably iterate the money pump until the pennies turn into millions. The problem, though, is if you asked people to make the original choices a million times, they would, correctly, maximize expectations. Because when you are talking about a million tries, expectations are the appropriate framework. When you are talking about 1 try, they are not. comment by ThisDan · 2012-12-17T02:13:29.972Z · score: 2 (2 votes) · LW · GW I was really confused about what point EY made that went over my head but I think I get it now. It totally changes the game to play it an infinite number of times rather than 1 go to win or lose. I made my choices based on 1 game and not a hybrid between the two of them played multiple times. If I play once, choosing 1a is just taking money that's already mine. If I play an infinite number of times, 1b earns money faster because failing can be evened out. comment by steven · 2008-01-19T14:28:10.000Z · score: 2 (4 votes) · LW · GW tcpkac: no one is assuming away risk aversion. Choosing 1A and 2B is irrational regardless of your level of risk aversion. comment by Unknown · 2008-01-19T15:52:31.000Z · score: 0 (0 votes) · LW · GW Constant's response implies that if someone prefers 1A to 1B and 2B to 2A, when confronted with the money pump situation, the person will decide that after all, 1A is preferable to 1B and 2A is preferable to 2B. This is very strange but at least consistent. comment by Nick_Tarleton · 2008-01-19T16:15:17.000Z · score: 2 (2 votes) · LW · GW "Nainodelac and Tarleton Nick", why are you using my (reversed) name? steven: not if you're nonlinearly risk averse.
As many have suggested, what if you take a large one-time utility hit for taking any risk, but you're not averse beyond that? comment by Caledonian2 · 2008-01-19T16:15:41.000Z · score: 2 (4 votes) · LW · GW Choosing 1A and 2B is irrational regardless of your level of risk aversion. No, only if the utility of avoiding risk is worth less than the money at risk. Duh. comment by billswift · 2008-01-19T16:22:49.000Z · score: 6 (6 votes) · LW · GW Your description is not a money pump. A money pump occurs when you prefer A > B and B > C and C > A. Then someone can trade you in a round robin taking a little out for themselves each cycle. I don't feel like typing in an illustration, so see Robyn Dawes, Rational Choice in an Uncertain World. There is a significant difference between single and iterative situations. For a single play I would prefer 1A to 1B and 2B to 2A. If it were repeated, especially open-endedly, I would prefer 1B to 1A for its slightly greater expected payoff. This is analogous, I think, to the iterated versus one-time prisoner's dilemma; see Axelrod's Evolution of Cooperation for an interesting discussion of how they differ. comment by Dagon · 2008-01-19T17:10:05.000Z · score: 6 (6 votes) · LW · GW How trustworthy is the randomizer? I'd pick B in both situations if it seemed likely that the offer were trustworthy. But in many cases, I'd give some chance of foul play, and it's FAR easier for an opponent to weasel out of paying if there's an apparently-random part of the wager. Someone says "I'll pay you $24k", it's reasonably clear. They say "I'll pay you $27k unless these dice roll snake eyes" and I'm going to expect much worse odds than 35/36 that I'll actually get paid. So for 1A > 1B, this may be based on expectation of cheating. For 2A < 2B, both choices are roughly equally amenable to cheating, so you may as well maximize your expectation.
It seems likely that this kind of thinking is unconscious in most people, and therefore gets applied in situations where it's not relevant (like where you CAN actually trust the probabilities). But it's not automatically irrational. comment by George_Weinberg2 · 2008-01-19T18:08:36.000Z · score: 0 (0 votes) · LW · GW It seems to me that your argument relies on the utility of having a probability p of gaining x being equal to p times the utility of gaining x. It's not clear to me that this should be true. The trouble with the "money pump" argument is that the choice one makes may well depend on how one got into the situation of having the choice in the first place. For example, let's assume someone prefers 2B over 2A. It could be that if he were offered choice 1 "out of the blue" he would prefer 1A over 1B, yet if it were announced in advance that he would have a 2/3 chance of getting nothing and a 1/3 chance of being offered choice 1, he would decide beforehand that B is the better choice, and he would stick with that choice even if allowed to switch. This may seem odd, but I don't see why it's logically inconsistent. comment by Richard_Hollerith2 · 2008-01-19T18:16:54.000Z · score: -1 (1 votes) · LW · GW No, only if the utility of avoiding risk is worth less than the money at risk. Duh. Someone did not read the OP carefully enough. Hint: re-read the definition of the Axiom of Independence. comment by Caledonian2 · 2008-01-19T18:41:06.000Z · score: -2 (2 votes) · LW · GW Someone isn't thinking carefully enough. Hint: I did not assert that X is strictly preferred to Y. comment by steven · 2008-01-19T19:23:01.000Z · score: 1 (1 votes) · LW · GW Caledonian, Nick T: "Risk aversion" in the standard meaning is when an agent maximizes the expectation value of utility, and utility is a function of money that increases slower than linearly. When an agent doesn't maximize expected utility at all, that's something different.
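steven's point can be checked directly: taking u(0) = 0, preferring 1A forces u(24000) > (33/34)·u(27000), while preferring 2B forces the exact opposite inequality, so no utility-of-money function, however concave, yields the Allais pattern. A sketch:

```python
import math

# Preferring 1A:  u(24000)        > (33/34) * u(27000)
# Preferring 2B:  0.34 * u(24000) <  0.33   * u(27000)
# Since 0.33/0.34 = 33/34, the second is equivalent to
# u(24000) < (33/34) * u(27000): the negation of the first (up to ties),
# whatever the shape of u. Spot-check some utility functions:
for u in (math.sqrt, math.log1p, lambda x: x):
    prefers_1A = u(24000) > (33 / 34) * u(27000)
    prefers_2B = 0.34 * u(24000) < 0.33 * u(27000)
    assert prefers_1A != prefers_2B  # never both, for any u with u(0) = 0
```

Concavity only decides *which* consistent pattern you get (A-A for sqrt, B-B for near-linear u), never the mixed 1A/2B one.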
comment by steven · 2008-01-19T19:29:32.000Z · score: 1 (1 votes) · LW · GW Do you really want to say that it can be rational to accept a 1/3 chance of participating in a lottery, already knowing that if you got to participate you would change your mind? Risk aversion is (or at least, can be) a matter of taste; this is just a matter of not being stupid. comment by burger_flipper2 · 2008-01-19T21:44:33.000Z · score: 0 (0 votes) · LW · GW Dawes gives a very similar 2-gamble example of a money pump on pg 105 of Rational Choice. comment by Caledonian2 · 2008-01-19T21:50:29.000Z · score: 0 (2 votes) · LW · GW Caledonian, Nick T: "Risk aversion" in the standard meaning is when an agent maximizes the expectation value of utility Oh, I agree. I just measure utility differently than you do. comment by steven · 2008-01-20T00:29:28.000Z · score: 0 (0 votes) · LW · GW Caledonian, if utility is any function defined on amounts of money, then if you are maximizing expected utility, you cannot fall prey to the Allais paradox. You can define a utility function on gambles that is not the expected value of a utility function on amounts of money, but then that function is not expected utility, and you're outside of normal models of risk aversion, and you're violating rationality axioms like the one Eliezer gave in the OP. comment by Caledonian2 · 2008-01-20T01:40:58.000Z · score: 0 (0 votes) · LW · GW you're violating rationality axioms like the one Eliezer gave in the OP No. Those axioms are "if => then" statements. I'm violating the "if" part. comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-01-20T02:32:51.000Z · score: 6 (6 votes) · LW · GW Nainodelac, if you prefer 1A to 1B and 2A to 2B, as you should if you need exactly $24,000 to save your life, that is a perfectly consistent preference pattern.
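Eliezer's life-saving example can be made concrete with a step utility function (a sketch; the all-or-nothing utility is the assumption here):

```python
# "I need exactly $24,000 to save my life": utility is 1 if you live, 0 if not.
def u(x):
    return 1.0 if x >= 24000 else 0.0

def eu(lottery):
    """Expected utility of a list of (probability, payoff) pairs."""
    return sum(p * u(x) for p, x in lottery)

# A is preferred in BOTH choices -- a consistent pattern, not Allais.
assert eu([(1.0, 24000)]) > eu([(33/34, 27000), (1/34, 0)])             # 1A > 1B
assert eu([(0.34, 24000), (0.66, 0)]) > eu([(0.33, 27000), (0.67, 0)])  # 2A > 2B
```

The step function is extreme risk aversion, yet it still ranks A above B in both choices; what no utility function can do is flip between A and B across the two choices.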
comment by Nick_Tarleton · 2008-01-20T03:32:28.000Z · score: 2 (2 votes) · LW · GW You can define a utility function on gambles that is not the expected value of a utility function on amounts of money, but then that function is not expected utility, and you're outside of normal models of risk aversion, and you're violating rationality axioms like the one Eliezer gave in the OP. Having a utility function determined by anything other than amounts of money is irrational? WTF? comment by Caledonian2 · 2008-01-20T03:42:33.000Z · score: 2 (2 votes) · LW · GW Upon rereading the thread and all of its comments, I suspect the person I originally quoted meant something along the lines of "preferring 1A to 1B but 2B to 2A is irrational", which seems more defensible. There is nothing irrational about preferring 1A and 2B by themselves, it's choosing the first option in the first scenario and the second in the second that's dodgy. comment by Richard_Hollerith2 · 2008-01-20T03:42:35.000Z · score: 0 (0 votes) · LW · GW Nick is right to object, but removing the phrase "on amounts of money" makes the statement unobjectionable -- and relevant and true. comment by Doug_S. · 2008-01-20T04:59:09.000Z · score: 1 (3 votes) · LW · GW Is Pascal's Mugging the reductio ad absurdum of expected value? comment by Joseph_Hertzlinger · 2008-01-20T05:29:10.000Z · score: 2 (2 votes) · LW · GW This may be related to the phenomenon of overconfident probability estimates. I would not be surprised to find that people who claim a 97% certainty have a real 90% probability of being right. Maybe someone who hears there's 1 chance in 34 of winning nothing interprets that as coming from an overconfident estimator whereas the 34% and 33% probabilities are taken at face value. On the other hand, the overconfidence detector seems to stop working when faced with asserted certainty. 
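Joseph's overconfidence story above can be turned into a toy listener model (a sketch; the 0.8 shrink factor is an arbitrary illustrative number, not from the comment):

```python
# Stated certainty is taken at face value, but any merely-probable claim
# is shrunk toward 50-50, as if the speaker were overconfident.
def perceived(p, shrink=0.8):
    return p if p in (0.0, 1.0) else 0.5 + shrink * (p - 0.5)

def perceived_ev(p, prize):
    return perceived(p) * prize

# Such a listener exhibits exactly the Allais pattern:
assert perceived_ev(1.0, 24000) > perceived_ev(33/34, 27000)   # picks 1A
assert perceived_ev(0.33, 27000) > perceived_ev(0.34, 24000)   # picks 2B
```

The discontinuity at p = 1 does the work: shrinking 33/34 hurts 1B, while shrinking 0.33 and 0.34 leaves their ordering by payoff intact.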
comment by Ian_Maxwell · 2008-01-20T05:34:48.000Z · score: 1 (1 votes) · LW · GW "Nainodelac and Tarleton Nick": This is not about risk aversion. I agree that if it is vital to gain at least $20,000, 1A is a superior choice to 1B. However, in that case, 2A is also a superior choice to 2B. The error is not in preferring 1A, but in simultaneously preferring 1A and 2B. comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-01-20T06:05:43.000Z · score: 2 (6 votes) · LW · GW Is Pascal's Mugging the reductio ad absurdum of expected value? No. I thought it might be! But Robin gave an excellent reason why we should genuinely penalize the probability by a proportional amount, dragging the expected value back down to negligibility. (This may be the first time that I have presented an FAI question that stumped me, and it was solved by an economist. Which is actually a very encouraging sign.) comment by Unknown · 2008-01-20T06:23:03.000Z · score: 1 (1 votes) · LW · GW This discussion reminded me of the Torture vs. Dust Specks discussion; i.e. in that discussion, many comments, perhaps a majority, amounted to "I feel like choosing Dust Specks, so that's what I choose, and I don't care about anything else." In the same way, there is a perfectly consistent utility function that can prefer 1A to 1B and 2B to 2A, namely one that sets utility on "feeling that I have made the right choice", and which does not set utility on money or anything else. Both in this case and in the case of the Torture and Dust Specks, many comments indicate a utility function which places value on the feeling of having made a right choice, without regard for anything else, especially for whether or not the choice was actually right, or for the consequences of the choice.
comment by denis_bider · 2008-01-20T09:17:00.000Z · score: 1 (1 votes) · LW · GW Not sure if anyone pointed this out, but in a situation where you don't trust the organizer, the proper execution of 1A is a lot easier to verify than the proper execution of 1B, 2A and 2B. 1A minimizes your risk of being fooled by some hidden cleverness or violation of the contract. In 1B, 2A and 2B, if you lose, you have to verify that the random number generator is truly random. This can be extremely costly. In option 1A, verification consists of checking your bank account and seeing that you gained $24,000. Straightforward and simple. Hardly any risk of being deceived. comment by Nick_Tarleton · 2008-01-20T17:30:00.000Z · score: 2 (2 votes) · LW · GW I hate to discuss this again, but... Is Michael Vassar's variant Pascal's Mugging (with the pigs), bypassing as it does Robin's objection, the reductio of expected value? If you don't care about pigs, substitute something else really really bad that doesn't require creating 3^^^3 humans. comment by steven · 2008-01-20T18:10:00.000Z · score: 0 (0 votes) · LW · GW It's simple to show that no rational person would actually give money to a Pascal mugger, as the next mugger might threaten 4^^^4 people. I'm not sure whether this solves the problem or just sweeps it under the rug, though. comment by Doug_S. · 2008-01-20T22:27:00.000Z · score: 0 (0 votes) · LW · GW Well, if Pascal's Mugging doesn't do it, how about the St. Petersburg paradox? ;) Oh wait... infinite set atheist... never mind. comment by Wendy_Collings · 2008-01-20T22:37:00.000Z · score: 0 (0 votes) · LW · GW I'm afraid I don't follow the maths involved, but I'd like to know whether the equations work out differently if you take this premise: - Since 1A offers a certainty of $24,000, it is deemed to be immediately in your possession. 1B then becomes a 33/34 chance of winning $3,000 and 1/34 chance of losing $24,000.
Can someone tell me how this works out mathematically, and how it then compares to 2B? comment by Bayesian · 2008-01-21T13:53:00.000Z · score: 0 (0 votes) · LW · GW The Allais Paradox is indeed quite puzzling. Here are my thoughts: 0. Some commenters simply dismiss Bayesian reasoning. This doesn't solve the problem, it just strips us of any mathematical way to analyze the problem. On the other hand, the fact that the inconsistent choice seems ok does mean that the Bayesian way is missing something. Simply dismissing the inconsistent choice doesn't solve the problem either. 1. If I understand correctly, you argue that situation 1 can be turned into situation 2 by randomization. In other words, if you sell me situation 1, I can sell somebody else (named X) situation 2 by throwing some dice and using your offer. More specifically, I throw a 100-sided die. If it's > 34, X loses. Otherwise, I play X's option with you. However, this can't be reversed. Given only situation 2, I can't sell situation 1, assuming I have only $0 initial capital. Hence, it seems that assuming invertibility of situations (I can both buy and sell them) and unlimited money buffers for that purpose are important for the demanded consistency. comment by CarlShulman · 2008-01-21T16:08:00.000Z · score: 0 (0 votes) · LW · GW Nick, "Is Michael Vassar's variant Pascal's Mugging (with the pigs), bypassing as it does Robin's objection, the reductio of expected value? If you don't care about pigs, substitute something else really really bad that doesn't require creating 3^^^3 humans." The Porcine Mugging doesn't bypass the objection. Your estimates of the frequency of simulated people and pigs should be commensurably vast, and it is vastly unlikely that your simulation (out of many with intelligent beings) will be selected for an actual Porcine Mugging that will consume vast resources (enough to simulate vast numbers of humans). These things offset to get you workable calculations.
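To answer Wendy Collings's question a few comments up: reframing 1B against the certain $24,000 does not change its plain expectation, but it does open the door to loss aversion (a sketch; the loss weight of 5 is an illustrative assumption, not from the thread):

```python
# Wendy's reframing: treat 1A's certain $24,000 as already in hand, so 1B
# becomes +$3,000 with p = 33/34 and -$24,000 with p = 1/34.
delta_1B = (33/34) * 3000 + (1/34) * (-24000)

# Shifting every outcome by the same $24,000 leaves expectation unchanged:
# this is exactly EV(1B) - EV(1A), about +$2,206 in 1B's favor.
assert abs(delta_1B - ((33/34) * 27000 - 24000)) < 1e-6

# But if losses loom larger than gains, the reframed loss can dominate:
loss_weight = 5  # must exceed 99000/24000 = 4.125 to flip the choice
felt_1B = (33/34) * 3000 - loss_weight * (1/34) * 24000
assert felt_1B < 0  # the reframed 1B now feels worse than keeping 1A
```

Choice 2 offers no certain baseline to reframe against, so the same story leaves 2B's higher expectation standing, reproducing the 1A/2B pattern.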
comment by mitchell_porter2 · 2008-01-24T14:21:00.000Z · score: 1 (1 votes) · LW · GW I would have chosen 1A and 2B, for the following reasons: Any sum of the order of $20,000 would revolutionize my personal circumstances. The likely payoff is enormous. Therefore, I'd pick 1A because I'd get such a sum guaranteed, rather than run the 3% risk (1B) of getting nothing at all. Whereas choice 2 is a gamble either way, so I am led to treat both options as qualitatively the same. But that's a mistake: if the value of getting either nonzero payoff at all is so great, then I should have favored the 34% chance of winning something over the 33% chance, just as I favored the 100% chance over the ~97% chance in choice 1. Interesting. comment by Phirand_Ice · 2008-01-24T16:13:00.000Z · score: 0 (0 votes) · LW · GW Surely the answer is dependent on the goal criterion. If the goal is to get 'some' money then the 100% option and the 34% options are better. If your goal is to get 'the most' money then the 97% and the 33% options are better. However the goal might be socially constructed. This reminded me of John Nash who offered one of his secretaries $15 if she shared it equally with a co-worker but $10 if she kept it for herself. She took the $15 and split it with her co-worker. She chose an option that maximised her social capital but was a weaker one economically. comment by Michael_Osborne · 2008-09-07T18:49:00.000Z · score: 0 (0 votes) · LW · GW I agree with Dagon. This experiment assumes that the subjective probabilities of participants were identical to the stated probabilities. In reality, I feel like people are probably wary of stated probabilities due to experiences with or fears of shysters and conmen. That is, if asked to choose between 1A and 1B, 1B offers the possibility that the 'randomising mechanism' that the experimenter is offering is in fact rigged.
Even if the experimenter is completely honest in their statement of their own subjective probabilities, they may simply disagree with those of the participants. Whatever 'randomising mechanism' is suggested is, of course, almost certainly completely predictable given sufficient information - a die roll, or similar, predictable using Newtonian mechanics. That is, the experimenter's stated probability is purely a reflection of their own information concerning that mechanism, which may be completely at odds with the participant's knowledge. comment by Wei_Dai2 · 2009-02-01T22:52:00.000Z · score: 8 (8 votes) · LW · GW Eliezer, I see from this example that the Axiom of Independence is related to the notion of dynamic consistency. But, the logical implication goes only one way. That is, the Axiom of Independence implies dynamic consistency, but not vice versa. If we were to replace the Axiom of Independence with some sort of Axiom of Dynamic Consistency, we would no longer be able to derive expected utility theory. (Similarly with dutch book/money pump arguments, there are many ways to avoid them besides being an expected utility maximizer.) I'm afraid that the Axiom of Independence cannot really be justified as a basic principle of rationality. Von Neumann and Morgenstern probably came up with it because it was mathematically necessary to derive Expected Utility Theory, then they and others tried to justify it afterward because Expected Utility turned out to be such an elegant and useful idea. Has anyone seen Independence proposed as a principle of rationality prior to the invention of Expected Utility Theory? comment by torekp · 2011-03-05T22:16:46.518Z · score: 1 (1 votes) · LW · GW I'm equally afraid ;). The Axiom of Independence is intuitively appealing to me, but I don't posit it to be a basic principle of rationality, because that smells like a mind projection fallacy. I suspect you're right, also, about dutch book/money pump arguments.
I tentatively conclude that a rational agent need not evince preferences that can be represented as an attempt to maximize such a utility function. That doesn't mean Expected Utility Theory can't be useful in many circumstances or for many agents, but this still seems like important news, which merits more discussion on Less Wrong. comment by Wei_Dai · 2011-03-06T10:33:36.785Z · score: 3 (3 votes) · LW · GW which merits more discussion on Less Wrong. Have you read these posts? comment by Tim_Tyler · 2009-05-04T11:18:00.000Z · score: 2 (2 votes) · LW · GW Agree with Denis. It seems rather objectionable to describe such behaviour as irrational. Humans may well not trust the experimenter to present the facts of the situation to them accurately. If the experimenter's dice are loaded, choosing 1A and 2B could well be perfectly rational. comment by CarlShulman · 2009-05-04T12:03:00.000Z · score: 0 (0 votes) · LW · GW "That is, the Axiom of Independence implies dynamic consistency, but not vice versa." Really? A hyperbolic discounter can conform to the Axiom of Independence at any particular time and be dynamically inconsistent. comment by JohnDavidBustard · 2010-08-25T16:59:43.525Z · score: 1 (1 votes) · LW · GW I would love to know if the results are different if you repeatedly expose people to the situation rather than communicate it in a formal way. They are likely to observe the outcomes of their strategy and adapt. Perhaps what is being measured is simply the numeracy of the subjects and not their practical inability to determine optimal strategies. The lottery is another interesting example: what is being bought is the probability of a big win, not a statistically optimal investment. Playing the lottery genuinely increases the chance of you suddenly gaining a life changing amount of money. This is a perfectly rational choice. comment by AlephNeil · 2010-08-25T18:13:39.238Z · score: 1 (1 votes) · LW · GW This is a perfectly rational choice.
What about the Allais paradox? Imagine someone who is happy to play the lottery but would refuse to play an alternative version where the ticket merely confers a slight increase on a significant pre-existing probability of winning 'life changing money'. (As I understand it, most/all lottery players would in fact refuse the 'alternative' gamble.) Do you want to say that such a person is 'perfectly rational'? Would you call them perfectly rational if they accepted both gambles (despite both of them having negative EV)? To be fair, it is possible to tell a consistent story about a person for whom either gamble would be rational: Perhaps the Earth is going to be destroyed soon and the cost of entry into the new self-sustaining Mars colony equals the lottery jackpot. But needless to say, most people aren't in situations remotely resembling this one. comment by JohnDavidBustard · 2010-08-26T10:00:00.562Z · score: 0 (0 votes) · LW · GW I think the Allais paradox is fascinating, however, although it is very revealing about our likely motives for playing the lottery it doesn't change the potential rationality of actually playing it. I.e. that money and value don't necessarily have a linear relationship, and so optimising for EV is not necessarily rational. Although, I feel that the likely answer is that the brain is optimised for rapid responses to survival problems and these solutions may well be an optimal response given constraints on both processing and expected outcome. Another perspective is that in general specifications are not accurate but instead a communication of experience. If the problem specification is viewed instead as a measurement of a system where the placing of bets is an input and the output is not random but the outcome of an unknown set of interactions.
Systems encountered in the past will form a probability distribution over their behaviour, the frequency of observed consequences then act as a measurement of the likelihood that the system in question is equivalent to one of these types. This would explain the feeling of switching between the two examples (they constitute the likely outcomes of two types of system) and thus represent situations where distinct behaviours were appropriate. I.e. as one starts to understand an existing system one gets diminishing returns for optimising interaction with it (a good example is AI programming itself), however systems may be unknown to the user. These unknown systems may demonstrate rare, but highly beneficial or unexpected events, like noticing an anomaly in a physics experiment. In this case it is rational to play/interact as doing so provides more information which may be used to identify the system and thus lead to understanding and thus an expected benefit in the future. comment by Sniffnoy · 2010-08-26T10:44:27.163Z · score: 1 (1 votes) · LW · GW I think the Allais paradox is fascinating, however, although it is very revealing about our likely motives for playing the lottery it doesn't change the potential rationality of actual playing it. I.e. that money and value don't necessarily have a linear relationship, and so optimising for EV is not rational. Of course, that just means you maximise expected utility rather than expected money. (I was almost going to write "expected value" instead of "expected utility" as you used the word "value", but obviously that would be confusing in this context...) comment by JohnDavidBustard · 2010-08-26T12:55:20.300Z · score: 0 (0 votes) · LW · GW Yes, absolutely, apologies for my unfamiliarity with the terms. The point I'm trying to make is that lottery playing optimises utility (assuming utility means what is considered valuable to the person). 
Saying that lottery playing is irrational says more about what is valuable than about what is reasonable. comment by Kingreaper · 2010-11-14T12:33:52.386Z · score: 0 (0 votes) · LW · GW Imagine someone who is happy to play the lottery but would refuse to play an alternative version where the ticket merely confers a slight increase on a significant pre-existing probability of winning 'life changing money'. (As I understand it, most/all lottery players would in fact refuse the 'alternative' gamble.) This is likely because playing the lottery gives you "hope" of a life-changing event. It means that you KNOW there is a possible life-changing event available. If you already have that knowledge, then paying for the lottery becomes just about the money, which isn't worthwhile. If you don't, paying for the lottery is buying that knowledge, and the knowledge has value to you. comment by Kingreaper · 2010-11-14T12:26:57.539Z · score: 1 (1 votes) · LW · GW Ummm, no. The money pump fails because of the REASON for the preference difference. The reason is, as some have already stated, that in scenario 1B if you lose you know it's your fault you got nothing. In scenario 2B if you lose, you can rationalise it easily as "Would have lost anyway". In your money pump scenario, we have a 1/3rd chance of playing 1. If we get to play 1, we know we're playing 1. So your money pump fails, because a standard player would prefer that the switch be on A at all times. comment by David_Gerard · 2010-12-07T17:59:55.882Z · score: 2 (2 votes) · LW · GW How do I alleviate feeling pleased at myself for having read the statement of the paradox - that people preferred 1A>1B but 2B>2A - and immediately going "WHAT?" and boggling at the screen and pulling confused faces for about thirty seconds, so flabbergasted I had to reread that this choice pattern was common?
(Personally I'm really strongly biased these days toward a bird in the hand and would have chosen 1A and 2A every time. I occasionally do bits of sysadmin for dodgy dot-coms that friends are working for. There are people who offer equity; I take an hourly fee. "No, no, that's fine, I am but humble roadie." This may not always be the best life strategy, but it seems to work for me at present.) comment by shokwave · 2010-12-07T18:20:16.464Z · score: 1 (1 votes) · LW · GW There are people who offer equity; I take an hourly fee. Penalise expected value of equity because probability is lower than I have been led to believe - an incredibly useful heuristic. How do I alleviate feeling pleased at myself In 33/34ths of the worlds where you make choice A in 1, you are mercilessly teased and mocked by your inferiors, a la this, thirty seconds in, for not picking B. Assuming counterfactual outcomes are revealed. comment by David_Gerard · 2010-12-07T18:23:28.475Z · score: 1 (1 votes) · LW · GW I'll just have to cry myself to sleep on a big bed made of $24,000! comment by handoflixue · 2011-04-12T17:41:19.823Z · score: 10 (10 votes) · LW · GW It took me 30 minutes of sitting down and doing math before I could finally accept that 1A+2B was an irrational preference. I finally realized that a lot of it came down to: with a 66% vs 67% chance of losing, I could take the riskier option and not feel as bad, because I could sweep it under the rug with "oh, I probably would have lost anyways." Once I ran a scenario where I'd KNOW whether it was that 1% that I controlled, or the 66% that I didn't control, that comfort evaporated. I learned a lot about myself by working through this exercise, so thank you very much :) comment by mendel · 2011-05-26T15:15:27.980Z · score: 0 (0 votes) · LW · GW The problem as stated is hypothetical: there is next to no context, and it is assumed that the utility scales with the monetary reward. 
Once you confront real people with this offer, the context expands, and the analysis of the hypothetical situation falls short of being an adequate representation of reality, not necessarily because of a fault of the real people. Many real people use a strategy of "don't gamble with money you cannot afford to lose"; this is overall a pretty successful strategy (and if I was looking to make some money, my mark would be the person who likes to take risks - just make him subsequently better offers until he eventually loses, and if he doesn't, hit him over the head, take the now substantial amount of money and run). To abandon this strategy just because in this one case it looks as if it is somewhat less profitable might not be effective in the long run. (In other circumstances, people on this site talk about self-modification to counter some expected situations as one-boxing vs. two-boxing; can we consider this strategy such a self-modification?) Another useful real-life strategy is "stay away from stuff you don't understand" - $24,000 free and clear is easier to grasp than the other offer, so that strategy favors 1A as well, and doesn't apply to 2A vs. 2B because they're equally hard to understand. The framing of offer two also suggests that the two offers might be compared by multiplying percentage and values, while offer 1 has no such suggestion in branch 1A. We're looking at a hypothetical situation, analysed for an ideal agent with no past and no future - I'm not surprised the real world is more complex than that. comment by wedrifid · 2011-05-26T16:11:10.544Z · score: 0 (0 votes) · LW · GW The problem is not with the hypothetical. It is with the intuition. Intuitions which really do prompt bad decisions in the real life circumstances along these lines. comment by mendel · 2011-05-27T02:11:40.881Z · score: 0 (0 votes) · LW · GW You seem to have examples in mind?
comment by Pavitra · 2011-05-27T02:22:24.903Z · score: 1 (1 votes) · LW · GW The lottery comes immediately to mind. You can't be absolutely sure that you'll lose. comment by [deleted] · 2011-05-26T16:17:53.728Z · score: 4 (4 votes) · LW · GW it is assumed that the utility scales with the monetary reward. Not necessarily. It is assumed that receiving $24000 is equally good in either situation. Your utility function can ignore money entirely (in which case any strict preference is irrational because you should be indifferent in both cases). You can use the utility function which prefers not to receive monetary rewards divisible by 9: in this case, 1A>1B and 2A>2B is your best bet, giving you 100% and 34% chances to avoid 9s, rather than 0% chances. In general, your utility function can have arbitrary preferences on A and B separately; but no matter what, it will prefer 1A to 1B if and only if it prefers 2A to 2B. As for the rest of your reply -- yes, it is true that real people use strategies ("heuristic" is the word used in the original post) that lead them to choose 1A and 2B. That's sort of why it's a paradox, after all. However, these strategies, which work well in most cases, aren't necessarily the best in all cases. The math shows that. What the math doesn't tell us is which case is wrong. My own judgment, for this particular sum of money (which is high relative to my current income), is that choice 1A is correctly better than choice 1B, in order to avoid risk. However, choice 2A is also better than choice 2B, upon reflection, even though my intuitions tell me to go with 2B. This is because my intuitions aren't distinguishing 33% and 34% correctly. In reality, faced with the opportunity to earn amounts on the order of $20K, I should maximize my chances to walk away with something. In the first case, I can maximize them fully, to 100%, which triggers my "success!" instinct or whatever: I know I've done everything I can because I'm certain to get lots of money.
In the second case, I don't get any satisfaction from the correct decision, because all I've done is improve my chances by 1%. In general, the heuristic that 1% chances are nearly worthless is correct, no matter what's at stake: I can usually do better by working on something that will give me a 10% or 25% chance. In this case, this heuristic should be ignored, because there is no effort spent making the improvement, and furthermore, there isn't really anything else I can do. On the other hand, suppose that the amount of money at stake is $2.40 or$2.70. Suddenly, our risk-aversion heuristic is no longer being triggered at all (unless you're really strapped for cash), and we have no problem doing the utility calculation. Here, 1A<1B and 2A<2B is the correct choice. comment by mendel · 2011-05-27T02:10:10.206Z · score: 0 (0 votes) · LW · GW The utility function has as its input only the monetary reward in this particular instance. Your idea that risk-avoidance can have utility (or that 1% chances are useless) cannot be modelled with the set of equations given to analyse the situation (the percentage is no input to the U() function) - the model falls short because the utility attaches only to the money and nothing else. (Another example of a group of individuals for whom the risk might out-utilize the reward are gambling addicts.) Security is, all other things being equal, preferred over insecurity, and we could probably devise some experimental setup to translate this into a utility money equivalent (i.e. how much is the test subject prepared to pay for security and predictability? that is the margin of insurance companies, btw). :-P I wanted to suggest that a real-life utility function ought to consider even more: not just to the single case, but the strategies used in this case - do these strategies or heuristics have better utility in my life than trying to figure out the best possible action for each problem? 
In that case, an optimal strategy may well be suboptimal in some cases, but work well re: a realistic lifetime filled with probable events, even if you don't contrive a $24000 life-or-death operation. (Should I spend two years of my life studying more statistics, or work on my father's farm? The farm might profit me more in the long run, even if I would miss out if somebody made me the 1A/1B offer, which is very unlikely, making that strategy the rational one in the larger context, though it appears irrational in the smaller one.) comment by [deleted] · 2011-05-27T18:34:08.585Z · score: 1 (1 votes) · LW · GW Risk-avoidance is captured in the assignment of U($X). If the risk of not getting any money worries you disproportionately, that means that the difference U($24K) - U($0) is higher than 33 times the difference U($27K) - U($24K). comment by mendel · 2011-05-27T21:30:56.608Z · score: 0 (0 votes) · LW · GW That's a neat trick; however, I am not sure I understand you correctly. You seem to be saying that risk-avoidance does not explain the 1A/2B preference, because you say your assignment captures risk-avoidance, and it doesn't lead to that. (It does lead to your take on the term - your preference isn't 1A/2B, though.) Your assignment looks like "diminishing utility", i.e. a utility function where the utility scales up subproportionally with money (e.g. twice the money must have less than twice the utility). Do you think diminishing utility is equivalent to risk-avoidance? And if yes, can you explain why? comment by [deleted] · 2011-05-27T22:31:04.046Z · score: 0 (0 votes) · LW · GW I think so, but your question forces me to think about it harder. When I thought about it initially, I did come to that conclusion -- for myself, at least. [I realized that the math I wrote here was wrong. I'm going to try to revise it. In the meantime, another question.
Do you think that risk avoidance can be modeled by assigning an additional utility to certainty, and if so, what would that utility depend on?] Also, thinking about the paradox more, I've realized that my intuition about probabilities relies significantly on my experience playing the board game Settlers of Catan. Are you familiar with it? comment by mendel · 2011-05-28T11:05:59.144Z · score: 0 (0 votes) · LW · GW One way to get to the desired outcome is to replace U(x) with U(x,p) (with x being the money reward and p the probability to get it), and define U(x,p)=2x if p=1 and U(x,p)=x otherwise. I doubt that this is a useful model of reality, but mathematically, it would do the trick. My stated opinion is that this special case should be looked at in the light of more general strategies/heuristics applied over a variety of situations, and this approach would still fall short of that. I know Settlers of Catan, and own it. It's been a while since I last played it, though. Your point about games made me aware of a crucial difference between real life and games, or other abstract problems of chance: in the latter, chances are always known without error, because we set the game (or problem) up to have certain chances. In real life, we predict events either via causality (100% chance, no guesswork involved, unless things come into play we forgot to consider), or via experience / statistics, and that involves guesswork and margins of error. If there's a prediction with a 100% chance, there is usually a causal relationship at the bottom of it; with a chance less than 100%, there is no such causal chain; there must be some factor that can thwart the favorable outcome; and there is a chance that this factor has been assessed wrong, and that there may be other factors that were overlooked. Worst case, a 33/34 chance might actually only be 30/34 or less, and then I'd be worse off taking the chance.
Comparing a .33 with a .34 chance makes me think that there's gotta be a lot of guesswork involved, and that, with error margins and confidence intervals and such, there's usually a sizeable chance that the underlying probabilities might be equal or reversed, so going for the higher reward makes sense. comment by [deleted] · 2011-05-29T14:02:43.376Z · score: 1 (1 votes) · LW · GW One way to get to the desired outcome is to replace U(x) with U(x,p) (with x being the money reward and p the probability to get it), and define U(x,p)=2x if p=1 and U(x,p)=x otherwise. The problem with this is that dealing with p=1 is iffy. Ideally, our certainty response would be triggered, if not as strongly, when dealing with 99.99% certainty -- for one thing, because we can only ever be, say, 99.99% certain that we read p=1 correctly and it wasn't actually p=.1 or something! Ideally, we'd have a decaying factor of some sort that depends on the probabilities being close to 1 or 0. The reason I asked is that it's very possible that a correct model of "attaching a utility to certainty" would be equivalent to a model with diminishing utility of money. If that were the case, we would be arguing over nothing. If not, we'd at least stand a chance of formulating gambles clarifying our intuitions if we knew what the alternatives are. Comparing a .33 with a .34 chance makes me think that there's gotta be a lot of guesswork involved, and that, with error margins and confidence intervals and such, there's usually a sizeable chance that the underlying probabilities might be equal or reversed, so going for the higher reward makes sense. If the 33% and 34% chances are in the middle of their error margins, which they should be, our uncertainty about the chances cancels out and the expected utility is still the same. Going for the higher expected value makes sense.
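mendel's certainty-bonus patch can be made concrete with a few lines of code. This is only an illustrative sketch; the doubling factor for p=1 is mendel's own ad-hoc choice, not a claim about real preferences:

```python
def U(x, p):
    # mendel's proposal: a reward obtained with certainty (p == 1) is worth double
    return 2 * x if p == 1 else x

def expected_utility(gamble):
    # gamble: list of (probability, reward) pairs
    return sum(p * U(x, p) for p, x in gamble)

game_1A = [(1.0, 24000)]
game_1B = [(33/34, 27000), (1/34, 0)]
game_2A = [(0.34, 24000), (0.66, 0)]
game_2B = [(0.33, 27000), (0.67, 0)]

# The certainty bonus reproduces the Allais pattern: 1A over 1B, but 2B over 2A.
assert expected_utility(game_1A) > expected_utility(game_1B)
assert expected_utility(game_2B) > expected_utility(game_2A)
```

As the reply above points out, the discontinuity at p=1 is doing all the work: nudge the certain option down to p=0.9999 and the bonus vanishes entirely.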
I brought up Settlers of Catan because, if I imagine a tile on the board with $24K and 34 dots under it, and another tile with $27K and 33 dots, suddenly I feel a lot better about comparing the probabilities. :) Does this help you, or am I atypical in this way? Imagine you are a mathematical advisor to a king who asks you to advise him of a course of action and to predict the outcome. Obviously with the advisor situation, you have to take your advisee's biases into account. The one most relevant to risk avoidance is, I think, the status quo bias: rather than taking into account the utility of the outcomes in general, the king might be angry at you if the utility becomes worse, and not as picky if the utility becomes better (than it is now). You have to take your own utility into account, which depends not on the outcome but on your king's satisfaction with it. comment by Luke_A_Somers · 2011-08-26T11:39:18.591Z · score: 4 (4 votes) · LW · GW I wonder how the results would change if the experiment changes so that the outcomes of 2B are, "You have a 33% chance of receiving $27k, a 66% chance of not getting anything, and a 1% chance of having someone laugh in your face for not picking 2A" comment by Surunveri · 2011-12-13T10:01:45.126Z · score: 0 (4 votes) · LW · GW If you'd ask any person capable of doing the math whether they would want to play 1A or 1B a thousand times you'd probably get a different answer, but not an answer that's more correct. Also the utility value of money is not directly proportional to the amount of money. Imagine that you would need $1000 to save your dying relative with certainty by paying for his/her treatment. Good enough for explaining 1A > 1B, but doesn't resolve the contradiction with 2B > 2A. But an even more revealing edit is based exactly on the certainty.
Suppose you were presented with these two questions in such a fashion that you would get the money and learn the result one month after being presented with them. By selecting 1A you would have 0% chance that the plans you make would fail, and with 1B you would have a 1/34 chance that they would fail. Meanwhile regardless of whether you select 2A or 2B you will have to face uncertainty. So you would be frustrated while trying to make plans that are conditional on your getting the money. As these conditions are not present in the presentation it's possible to dismiss these kinds of instinctive judgments as flawed, but as it turns out, they're not foolish, on a general level. You could even make a claim that it's costly to perform the calculation that tells you whether the assurance is worth it - but of course instead of saying that you should just figure out how much value this assurance has in each given situation. comment by Vaniver · 2011-12-13T12:37:56.989Z · score: 3 (3 votes) · LW · GW You're right that certainty helps out with planning, and so certainty can be valuable sometimes. It's still a bias to unconsciously add in a value for certainty if you don't need it in this case, even if it sometimes pays off, and so it's worth thinking through the 'paradox.' comment by Surunveri · 2011-12-13T18:30:38.149Z · score: 0 (0 votes) · LW · GW I wanted to point out that this flaw is not a foolish flaw. That's how we create plans: we project and create expectations, and the anticipated feeling of loss is frustrating to plan for. In a theoretical example you might make a bad decision, but isn't it also that this flaw causes you to make good decisions in actual real-world situations? Since they don't tend to occur in such theoretical forms where you have all the required information available and which lack context.
If you'd actually encounter this problem in a real-world situation, you might end up making a bad decision because of handling it with an overly theoretical approach - what if I told you that you get to play both games and actually get to choose between both, when you come to visit me? But you didn't have money to pay for the ticket to fly over? What if you took a loan? And without the certainty of 1A you might end up in a bad situation where you'll lack the means to pay back your loan - in other words a decision making agent with this flaw handles the situation well. But of course you can take all that into account. And as it's a problem dealing with rationality, I think it's pretty important to note these things. Anyway I agree with you, Vaniver =) comment by William_Kasper · 2011-12-29T15:46:04.682Z · score: 2 (2 votes) · LW · GW Please correct me if any of my assumptions are inaccurate, and I apologize if this comment comes off as completely tautological. Expected utility is explicitly defined as the statistic Σ_{x ∈ X} p(x)·U(x), where X is the set of all possible outcomes associated with a particular gamble, p(x) is the proportion of times that outcome x occurs within the gamble, and U(x) is the utility of outcome x, a function that must be strictly increasing with respect to the monetary value of outcome x. To reduce ambiguity: • 1A, 1B, 2A, and 2B are instances of gambles. • For 1B, the possible outcomes are $27000 and $0. • For 1B, the expected utility is p($27000) * U($27000) + p($0) * U($0) = 33/34 * U($27000) + 1/34 * U($0). If you choose 1A over 1B and 2B over 2A, what can we conclude? • that you are not using the rule "maximize expected utility" to make your decisions. Thus you do not fit the definition, as given by the Axiom of Independence, of consistent decision making. If you choose 1A over 1B and 2B over 2A, what can we not conclude? • that your decision rule changes arbitrarily.
You could, for example, always follow the rule, "Maximize minimum net utility. In the case of a tie, maximize expected utility." In this case, you would choose 1A and 2B. • that you would be wrong or stupid for using a different decision rule when you only get to play one time, than the rule you would use when you get to play 100 times. comment by thomblake · 2011-12-29T20:08:32.811Z · score: 0 (0 votes) · LW · GW That all seems pretty uncontroversial. comment by ricketson · 2012-01-15T19:28:45.682Z · score: 0 (0 votes) · LW · GW I initially chose 1A and 2B, but after reading the analysis of those decisions, I agree that they are inconsistent in a way that implies that one choice was irrational (in the context of this silly little game). So I did some introspection to figure out where I went wrong. Here's what I found: 1) I may have misjudged how small 1/34 is, and this only became apparent when the question was phrased as it is in example 2. 2) I think I assumed implicit costs in these gambles. The first cost is a delay in learning the outcome of these gambles; the second is the implicit need to work to earn this money. I think that these assumptions are reasonable because there is essentially no realistic condition in which I would instantly see the results of a decision that might earn me $27,000; there would probably be a delay of several months (if working) or years (if investing) between making the decision and learning whether I got the money or not. This prolonged uncertainty has a negative utility, since I am unable to make firm plans for the money during that interval. This negative utility would apply to all options except 1A. Furthermore, earning $24,000 would realistically require several months of work on my part. However, a project that had a 1/3 chance of paying out $24,000 might only take a month.
The implicit difference in opportunity cost between scenario 1 and scenario 2 has implications for the marginal utility of money in each scenario (making me more risk-averse in scenario 1, which implicitly has a higher opportunity cost). These implicit costs are not specified in this game, so it is technically "irrational" to incorporate them into my decision-making. However, in any realistic scenario, such costs will exist (regardless of what the salesman says), so it is good that I/we intuitively include them in my/our decision-making. comment by jsalvata · 2012-04-18T00:40:42.741Z · score: 1 (1 votes) · LW · GW While Eliezer's argument is still correct (that you should multiply to make decisions based on probabilistic knowledge), I see a perfectly rational and utilitarian explanation for choosing 1A and 2B in the stated problem. The clue lies in Colin Reid's comment: "people do not ascribe a low positive utility to winning nothing or close to nothing - they actively fear it". This fear is explained by Kingreaper: "in scenario 1B if you lose you know it's your fault you got nothing". That makes the two cases, stated as they are, different. In game 1 the utility U1($0) has negative value: a sense of guilt (or shame) over having made the bad choice, which doesn't seem possible in game 2 (because game 2 is stated in terms of abstract probabilities, see below). This makes the inequalities compatible:

U($24,000) > 33/34 U($27,000) + 1/34 U1($0), e.g. 24 > 33/34 · 27 + 1/34 · (-1000)

0.34 U($24,000) + 0.66 U2($0) < 0.33 U($27,000) + 0.67 U2($0), e.g. 0.34 · 24 + 0.66 · 0 < 0.33 · 27 + 0.67 · 0

Note that stating the game with the "switch" rule turns game 2 into one (let's call it 3) in which the guilt/shame reappears, making U3=U1 -- so a rational player with the described negative U1 would choose A in game 3 and there would be no money pump. This solution to the paradox is less valid if it is made clear that the subject will be allowed to play the game many times.
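The two inequalities above can be checked with the comment's own example numbers (utilities in thousands; the -1000 guilt utility U1($0) is the commenter's illustrative assumption, not a measured value):

```python
U24, U27, U0 = 24, 27, 0   # utilities of $24,000, $27,000, and a guilt-free $0
U0_guilt = -1000           # U1($0): losing after turning down a sure $24,000

# Game 1: the guilt attached to the losing branch of 1B makes 1A the better pick
assert U24 > (33/34) * U27 + (1/34) * U0_guilt

# Game 2: no guilt term on either side, so 2B has the higher expected utility
assert 0.34 * U24 + 0.66 * U0 < 0.33 * U27 + 0.67 * U0
```

With the guilt term, 1B's expected utility drops to roughly (891 - 1000)/34 ≈ -3.2, well below the sure 24, while game 2's comparison stays 8.16 vs. 8.91.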
Another interesting way to remove this as a possible solution would be to restate case 2 in more concrete terms, to make it clear that you won't get away not knowing that "it was your fault" if you lose: 4A. If a 100-face die falls on <=34, win $24,000, otherwise win nothing. 4B. If a 100-face die falls on <=33, win $27,000, otherwise win nothing. Just to prevent the subject from pattern-matching and not thinking, we should add the phrase "note that if the die falls on a 34 and you've chosen A, you win 24k, but if you've chosen B, you get nothing". I believe game 4 is pretty equivalent to game 3 (the one with the switch). I've checked Allais' document and it suffers the same flaw: it's not an actual experiment in which people are asked to choose A or B and actually allowed to play the game, but a questionnaire asking subjects what they would choose. This is not the same, among other reasons because it doesn't force the experimenter or subject to detail the mechanics of the game (and hence it is not stated whether the subject will be given that sense of shame or even allowed to "chase the rabbit"). It would be interesting to know the result of an actual experiment with this design, possibly with smaller figures to reduce the non-linearity of the utility functions -- since that's not what's being discussed here --, and with subjects filtered against innumeracy (since those are hopeless anyway). comment by Vaniver · 2012-04-18T01:28:34.964Z · score: 2 (2 votes) · LW · GW That makes the two cases, stated as they are, different. In game 1 the utility U1($0) has negative value: a sense of guilt (or shame) over having made the bad choice, which doesn't seem possible in game 2 (because game 2 is stated in terms of abstract probabilities, see below). If you could choose whether or not to have this guilt, would you choose to have it? Does it make you better off?
comment by avichapman · 2012-05-01T05:53:49.121Z · score: 3 (3 votes) · LW · GW I know this was posted 4 years ago, but I had a thought. If I was offered a certainty of $24,000 vs a 33/34 chance of $27,000, my preference would depend on whether this was a once-off. If this was a once-off, my primary concern would be securing the money and being able to put food on the table tonight. Option 1 will put food on the table with 100% certainty, while Option 2 will not. If, however, the option was to be offered many times, I would optimise for greatest return - Option 2. If I miss out this month, I'll just scrape for food until next month, when chances are I'll get the money. I think I just answered my own question. If my goal can be reached with $24,000, then Option 1 is the best one because it reaches the goal in one guaranteed fell swoop. However, if my goal is to make lots of money, then Option 2 is the way to go, because it makes the most over time. That make sense to anyone? comment by Paul Crowley (ciphergoth) · 2012-05-01T06:22:16.849Z · score: 4 (4 votes) · LW · GW It absolutely can make sense to prefer option 1A over option 1B (which I think is what you mean). What does not make sense is to prefer option 1A over 1B, AND prefer 2B over 2A. It's worth reading the two followup articles before you get into this further: Zut Allais and Allais Malaise. Welcome to Less Wrong! comment by drnickbone · 2012-05-01T07:29:10.917Z · score: 1 (1 votes) · LW · GW This is an old post, but I guess one resolution is that:

U($24,000) > 33/34 U($27,000) + 1/34 U($0 & Regret that I didn't take the $24000)

Which is consistent with:

0.34 U($24,000) + 0.66 U($0) < 0.33 U($27,000) + 0.67 U($0)

It's an interesting psychological fact that the regret is triggered in one case, but not the other. comment by [deleted] · 2012-08-29T22:52:20.272Z · score: 0 (0 votes) · LW · GW I wonder if this bias is somehow trying to compensate for some other bias.
Suppose you think the experimenter is overconfident, i.e., their log-odds are twice as much as they should be; so, when they say 100% they do mean 100%, but when they say 97.1% they actually mean 85.2% (and when they say 34% they mean 41.8%, and when they say 33% they mean 41.2%). Now, Option 1B suddenly looks much uglier, doesn't it? (I'm not claiming this happens consciously.) comment by Flipnash · 2012-10-15T18:58:56.732Z · score: 0 (0 votes) · LW · GW If flipping the switch before 12:00 pm has no effect on the amount of money one acquires, why would one pay anything to do it? Why not just flip the switch only once after 12:00 pm and before 12:05 pm? comment by Elithrion · 2013-03-02T22:25:33.902Z · score: 0 (0 votes) · LW · GW Question: do the rest of you actually find the choice of 1A clearly intuitive? I think my intuition for examples like this has been safely killed off, so my replacement intuition instead says: "hm, clearly 34*(27-24) > 27, so 1B!" (without actually evaluating 27-24, just noting it's ≥1). Which mainly suggests that I've grown accustomed to calculating expectations out explicitly where they're obvious, not that I'm necessarily good at avoiding real life analogues of the problem. comment by Martin-2 · 2013-03-07T00:32:33.701Z · score: 2 (2 votes) · LW · GW do the rest of you actually find the choice of 1A clearly intuitive? I chose 1B. I seem to be an outlier in that I chose 1B and 2B and did no arithmetic. comment by [deleted] · 2015-06-26T18:10:41.512Z · score: 1 (1 votes) · LW · GW Me too! We're just two greedy people! :) comment by christopherj · 2013-11-28T15:11:54.889Z · score: 0 (0 votes) · LW · GW 1A. $24,000, with certainty. 1B. 33/34 chance of winning $27,000, and 1/34 chance of winning nothing. 2A. 34% chance of winning $24,000, and 66% chance of winning nothing. 2B. 33% chance of winning $27,000, and 67% chance of winning nothing.
I would choose 1A over 1B, and 2B over 2A, despite the 9.2% better expected payout of 1B and the small increased risk in 2B. If the option was repeatable several times, I'd choose 1B over 1A as well (but switch back to 1A if I lost too many times). This does not make me susceptible to a money pump or a Dutch book (you're welcome to try, but note that I don't accept trades with negative expected utility). I simply think that my utility function at this time is such that Utility($24,000) > Utility(97% chance $27,000 + 3% chance $0), yet also Utility(34% chance $24,000 + 66% chance $0) < Utility(33% chance $27,000 + 67% chance $0). I acknowledge that in one case, I trade expected payout for certainty, and in the other, I trade increased risk (not certainty) for expected payout. I'm not sure I see anything wrong with this, unless you're offended that I am willing to pay for certainty. Certainty is valuable in this world of overconfident people, accidents, and cheaters. comment by Vaniver · 2013-11-29T01:20:29.956Z · score: 1 (1 votes) · LW · GW This does not make me susceptible to a money pump or a Dutch book (you're welcome to try, but note that I don't accept trades with negative expected utility). I simply think that my utility function at this time is such that Utility($24,000) > Utility(97% chance $27,000 + 3% chance $0), yet also Utility(34% chance $24,000 + 66% chance $0) < Utility(33% chance $27,000 + 67% chance $0) This... means you're vulnerable to the Dutch Book described in the post. Why do you think otherwise? I'm not sure I see anything wrong with this, unless you're offended that I am willing to pay for certainty. Basically, this. The point of utility is that it's linear in probability, which disallows a premium for certainty. If I know your utility for $27,000, and your utility for $24,000, and $0, then I can calculate your preferences over any gamble containing those three outcomes.
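Vaniver's linearity point can be made concrete. Normalizing Utility($0) = 0 and Utility($24,000) = 1, the two stated preferences pin Utility($27,000) into an empty interval (a sketch in exact rational arithmetic):

```python
from fractions import Fraction

# Normalize u($0) = 0, u($24,000) = 1, and write u27 for u($27,000).
# Preferring 1A over 1B means  1 > (33/34) * u27,  i.e.  u27 < 34/33.
# Preferring 2B over 2A means  (33/100) * u27 > 34/100,  i.e.  u27 > 34/33.
upper_bound = Fraction(34, 33)  # implied by 1A > 1B
lower_bound = Fraction(34, 33)  # implied by 2B > 2A

# The bounds coincide, so no u27 satisfies both strict inequalities:
# the preference pair is inconsistent with ANY utility function that
# is linear in probability.
inconsistent = not (lower_bound < upper_bound)
```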
If your decision procedure is not equivalent to a utility function, then there are cases where you can be made worse off even though it looks to you like you're being made better off. Certainty is valuable in this world of overconfident people, accidents, and cheaters. Isn't certainty impossible in a world of overconfident people, accidents, and cheaters? comment by christopherj · 2013-11-29T15:32:08.947Z · score: 0 (0 votes) · LW · GW This... means you're vulnerable to the Dutch Book described in the post. Why do you think otherwise? I'm really not. You mean, "This means that according to my theory you're vulnerable to the Dutch Book described in the post." Like I said though, I'm not accepting trades with negative utility, and being money pumped and Dutch Booked both have negative utility. As for the "money pump" described in the post, I gain $23,999.98 if it happens as described. Also, there would have been no need to pay the first penny, as the state of the switch was not relevant at that time. Also, the game was switched from "34% for 24,000 and 33% for 27,000" to "34% chance to play game 1, at which time you may choose". Basically, this. The point of utility is that it's linear in probability, which disallows a premium for certainty. If I know your utility for $27,000, and your utility for $24,000, and $0, then I can calculate your preferences over any gamble containing those three outcomes.
This aspect would disappear if probabilities were removed from my utility function. comment by Vaniver · 2013-11-29T19:55:29.205Z · score: 1 (1 votes) · LW · GW Like I said though, I'm not accepting trades with negative utility, and being money pumped and Dutch Booked both have negative utility. You've expressed that 1A>1B, and 2B>2A. The first deal is "Instead of 2A, I'll give you 2B for a penny." By your stated preference, you agree. The second deal is "Instead of 1B, I'll give you 1A." By your stated preference, you agree. You are now two pennies poorer. So either you do not actually hold those stated preferences, or you are vulnerable to Dutch booking. (What does it mean to actually prefer one gamble to another? That you're willing to pay to trade gambles. Suppose you hate selling things; then your preferences depend on the order you received things, which makes you vulnerable to the order in which other people present you options!) Also the game was switched from "34% for 24,000 and 33% for 27,000" to "34% chance to play game 1, at which time you may choose" What is the difference between those two games? The outcome probabilities are the same (multiply them out and check!). Or are you willing to pay hundreds of dollars (in expectation) to have him roll two dice instead of one? Even so, there is in reality at least one difference: if someone is cheating or made a miscalculation, option 1A is cheat-proof and error-proof but none of the other options are. But, don't you have some numerical preference for this? If it were a certain 24,000 against a 33/34ths chance of 27 million, I hope you'd pick the latter, even if there's some chance of the die being loaded in the second option. What this suggests, then, is that you need to adjust your probabilities- but if the probabilities are presented to you as your estimate after cheating is taken into account, then it doesn't make sense to double-count the risk of cheating! 
(One useful heuristic that people often have when evaluating gambles is imagining the person on the other side of the gamble. If something looks really good on your end and really bad on their end, then this is suspicious- why would they offer you something so bad for them? Keep in mind, though, that gambles are done both against other people and against the environment. If there's gold sitting in the ground underneath you, and you have a 97% chance of successfully extracting it and becoming a millionaire, you shouldn't say "hmm, what's in it for the ground? Why would it offer me this deal?") comment by christopherj · 2013-11-30T06:20:50.359Z · score: 0 (2 votes) · LW · GW You've expressed that 1A>1B, and 2B>2A. The first deal is "Instead of 2A, I'll give you 2B for a penny." By your stated preference, you agree. The second deal is "Instead of 1B, I'll give you 1A." By your stated preference, you agree. Note that it becomes a different problem this way than my stated preferences (and note again that my stated choices (not preferences) were context-dependent) -- there is the additional information that the dealmaker had a good chance to cheat and didn't take it. This information will reduce my disutility calculation for the uncertainty in the offer, as it increases my odds of winning 1B from [33/34 - good chance of cheating] to [33/34 - small chance of cheating] You are now two pennies poorer. Or 23,999.98 dollars richer. So either you do not actually hold those stated preferences, or you are vulnerable to Dutch booking If I did hold those preferences, I would not be vulnerable to Dutch booking, nor money pumping. Money pumping is infinite, whereas by giving me two pairs of different choices you can make me choose twice (and it's not a preference reversal, though it would be exactly a preference reversal if you multiply the first choice's odds by 0.34 and pretend that changes nothing). 
For me to be vulnerable to Dutch booking, you'd have to somehow get money out of me as well. But how? I can't buy game 1 for less than 24,000 minus the cost of various witnesses if I intend to choose 1A, and you can't sell game 1 for less than 26,200. You'd have an even worse time convincing me to buy game 2. You can't convince me to bid against either of the theoretically superior choices 1B and 2B. If you change my situation I might change my choice, as I already stated several conditions that would cause me to abandon 1A. What is the difference between those two games? Option 1A has a 0% chance of undetected cheating. Options 1B, 2A, and 2B all have a 100% chance of undetected cheating. In Game 3, you can pay to change your default choice twice, and the dealmaker shows a willingness to eliminate his ability to cheat before your second choice. But, don't you have some numerical preference for this? Not currently. There would be a lot of factors determining how likely I think a miscalculation or cheating might be, and there is no way to determine this in the abstract. comment by Jiro · 2013-11-30T04:33:58.682Z · score: 2 (4 votes) · LW · GW I don't like many of the standard arguments against capital punishment. In particular, I'm tired of the argument "if you just put an innocent person in jail, they might be exonerated later. If you execute an innocent person, and they are exonerated later, it's too late." Of course, I then point out that people can be exonerated in the time between being convicted and being executed (which can be quite long sometimes), and the response is generally that in the life sentence there's always some chance of being freed due to exoneration while in the capital punishment case, there's a segment of time where there's no chance of being freed. 
My response is that a chance X of being freed due to exoneration when sentenced to life in prison is, for some Y, equivalent to having a chance Y of being freed due to exoneration before your execution and zero chance of being freed after being executed. Since there are values of X that are considered acceptable, there are values of Y that must be acceptable too and therefore this argument cannot be used as a basis for an absolutist anti-capital-punishment stance. I have yet to have anyone understand my response (the few times I've tried it, anyway). But it seems to me that I've stumbled onto something equivalent to the Allais problem. People don't think of "chance X of being freed" and "chance Y of being freed before execution and no chance of being freed after execution" as statements that can ever be equivalent, because they really don't like the certain failure in the last example, even though the two may be mathematically equivalent. comment by hyporational · 2013-11-30T13:06:59.857Z · score: 0 (0 votes) · LW · GW Since there are values of X that are considered acceptable, there are values of Y that must be acceptable too and therefore this argument cannot be used as a basis for an absolutist anti-capital-punishment stance. I agree. Have you considered that life in prison has more value than being dead? Also, why compare capital punishment to life sentences? What if there were no life sentences? Of course you can still die in prison for whatever that's worth, but the chance is significantly smaller. comment by Jiro · 2013-11-30T16:48:38.758Z · score: 1 (1 votes) · LW · GW Have you considered that life in prison has more value than being dead? I didn't post that because it was about capital punishment, I posted it because I thought this particular anti-capital punishment argument was relevant to the Allais problem. I don't see how life in prison being more valuable than being dead is relevant to the Allais problem. What if there were no life sentences? 
Of course you can still die in prison for whatever that's worth, but the chance is significantly smaller. Insofar as that's relevant, it just changes the values of X and Y; the absolutist "we can't do it because an innocent may be exonerated only after he is killed" position still has the same flaw. comment by hyporational · 2013-11-30T17:16:46.304Z · score: 0 (0 votes) · LW · GW Ok, good to know you weren't trying to sneak in politics. I agree it's not relevant. Insofar as that's relevant, it just changes the values of X and Y; the absolutist "we can't do it because an innocent may be exonerated only after he is killed" position still has the same flaw. Yes, if we're strictly logical this is true. comment by Quill_McGee · 2014-04-14T01:20:46.060Z · score: 0 (0 votes) · LW · GW My resolution to this, without changing my intuitions to pick things that I currently perceive as 'simply wrong', would be that I value certainty. A 9/10 chance of winning x dollars is worth much less to me than a 10/10 chance of winning 9x/10 dollars. However, a 2/10 chance of winning x dollars is worth only barely less than a 4/10 chance of winning x/2 dollars, because as far as I can tell the added utility of the lack of worrying increases massively as the more certain option approaches 100%. Now, this becomes less powerful the closer the odds are, but more slowly than the dollar difference between the two changes. So a 99% chance of x is barely affected by this compared to a 100% chance of .99x, but still by a greater value than .01x, and the more likely option still dominates. I might take a 99% chance of x over a 100% chance of .9x, however, and I would definitely prefer a 99% chance of x over a 100% chance of 0.8x. EDIT: Upon further consideration, this is wrong. If presented with the actual choice, I would still prefer 1A to 1B, but to maintain consistency I will now choose 2A > 2B.
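Jiro's X/Y equivalence above can be illustrated with a toy model (entirely illustrative: the exoneration rate and time horizons below are made-up numbers, and exoneration is treated as a constant-rate Poisson process):

```python
import math

def p_freed(rate_per_year, years):
    """P(at least one exoneration within `years`) under a Poisson model."""
    return 1 - math.exp(-rate_per_year * years)

rate = 0.01              # assumed exonerations per inmate-year (made up)
life_term = 40           # years at risk under a life sentence
time_to_execution = 15   # years between sentencing and execution

X = p_freed(rate, life_term)          # life sentence: chance of being freed
Y = p_freed(rate, time_to_execution)  # capital case: freed before execution

# The capital case is just the life case truncated at t years, so any
# acceptable X corresponds to some t whose Y is judged by the same standard.
```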
comment by [deleted] · 2015-03-16T17:22:39.235Z · score: 0 (0 votes) · LW · GW I don't really see how me choosing 1A > 1B and 2B > 2A is a flaw of mine. First of all, my utility function, which I have inherited from millions of years of evolution, tells me to SOMETIMES take risks IF I CAN AFFORD IT, especially when the increasing stake outweighs the increasing risk. This is how I see it: If it was my life at stake, I would of course try to raise the odds. But this is extra money. I don't even starve if I don't get the money. If I am not certain I can get the money in case 2, I think that lowering my win-chance by 1/100 is worth it to raise the stake by 3000 dollars, which is 3000/24000 = 1/8 of the original stake. When I lower my odds by 1%, I raise the stake by 12.5%. Since the outcome is random anyhow, AND not in my favor, and the risk increase is only 1/100, I take my chances. comment by [deleted] · 2015-06-26T12:24:28.044Z · score: 4 (4 votes) · LW · GW The Allais "Paradox" and Scam Vulnerability by Karl Hammer is a much-needed update for anyone who reads the OP. comment by Epictetus · 2015-08-18T14:30:34.037Z · score: 2 (2 votes) · LW · GW Would I pay $24k to play a game where I had a 33/34 probability of winning an extra $3k? Let's consult our good friend the Kelly Criterion. We have a bet that pays 1/8:1 with a 33/34 probability of winning, so Kelly suggests staking ~73.5% of my bankroll on the bet. This means I'd have to have an extra ~$8.7k I'm willing to gamble with in order to choose 1b. If I'm risk-averse and prefer a fractional Kelly scheme, I'd need to start with ~$20k for a three-fourths Kelly bet and ~$41k for a one-half Kelly bet. Since I don't have that kind of money lying around, I choose 1a. In case 2, we come across the interesting question of how to analyze the costs and benefits of trading 2a for 2b. In other words, if I had a voucher to play 2a, when would I be willing to trade it for a voucher to play 2b?
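Epictetus's Kelly figures above can be reproduced as follows (a sketch: it models choosing 1B over 1A as staking $24,000 to win a net $3,000 at probability 33/34, which is one reasonable reading but not the only one):

```python
from fractions import Fraction

p = Fraction(33, 34)         # win probability
b = Fraction(3_000, 24_000)  # net odds: risk $24k to win $3k, so b = 1/8

kelly = p - (1 - p) / b      # Kelly fraction f* = p - q/b = 25/34 ≈ 0.735

def extra_bankroll(fraction):
    """Bankroll needed beyond the $24,000 stake so that the stake is
    at most the given fraction of total bankroll."""
    return 24_000 / float(fraction) - 24_000

full_kelly = extra_bankroll(kelly)                            # ≈ $8.6k
three_quarter_kelly = extra_bankroll(kelly * Fraction(3, 4))  # ≈ $19.5k
half_kelly = extra_bankroll(kelly / 2)                        # ≈ $41.3k
```

These come out near the ~$8.7k, ~$20k, and ~$41k quoted in the comment.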
Unfortunately, I'm not experienced with such analyses. Qualitatively, it appears that if money is tight then one would prefer 2a for the greater chance of winning, while someone with a bigger bankroll would want the better returns on 2b. So, there's some amount of wealth where you begin to prefer 2b over 2a. I don't find it obvious that this should be the same as the boundary between 1a and 1b. This is a problem because the 2s are equal to a one-third chance of playing the 1s. That is, 2A is equivalent to playing gamble 1A with 34% probability, and 2B is equivalent to playing 1B with 34% probability. Equivalence is tricky business. If we look at the winnings distribution over several trials, the 1s look very different from the 2s and it's not just a matter of scale. The distributions corresponding to the 2s are much more diffuse. Surely, the certainty of having $24,000 should count for something. You can feel the difference, right? The solid reassurance? A certain bet has zero volatility. Since much of the theory of gambling has to do with managing volatility, I'd say certainty counts for a lot. comment by Starglow · 2015-10-22T16:59:53.391Z · score: 1 (1 votes) · LW · GW Forgive me if I'm misunderstanding something, but the way I see it, if I choose 1A, it means that I am willing to forgo (i.e. pay) 3000$ for an additional 1/34 ~ 3% chance of getting money. Then if I choose 2B, it means I am unwilling to forgo an additional 3000$ in exchange for an additional 1% chance of getting money. So what I learn from this is that the value I assign an extra percentage chance of getting money is somewhere between 1000$ and 3000$. comment by lolbifrons · 2016-08-18T00:23:34.396Z · score: 4 (2 votes) · LW · GW So here's why I prefer 1A and 2B after doing the math, and what that math is. 1A = 24000 1B = 26206 (rounded) 2A = 8160 2B = 8910 Now, if you take (iB-iA)/iA, which represents the percent increase in the expected value of iB over iA, you get the same number, as you stated.
(iB-iA)/iA = .0919 (rounded) This number's reciprocal represents the number of times greater the expected value of iA is than the marginal expected value of iB: iA/(iB-iA) = 10.88 (not rounded) Now, take this number and divide it by the quantity p(iA wins)-p(iB wins). This represents how much you have to value the first $24000 you receive over the next $3000 to pick iA over iB. Keep in mind that 24/3 = 8, so if $1 = 1 utilon in all cases, you should pick iA only when this quotient is less than 8. 1A/(1B-1A)/[p(1A wins)-p(1B wins)] = 369.92 2A/(2B-2A)/[p(2A wins)-p(2B wins)] = 1088 I have liabilities in excess of my assets of around $15000. That first $15000 is very important to me in a very quantized, thresholdy way, but it is not absolute. I can make the money some other way, but not needing to - having it available to me right now because of this game - represents more utility than a linear mapping of dollars to utility suggests, by a large factor. The next threshold like this in my life that I can think of is "enough money to buy a house in Los Angeles without taking out a mortgage," of which $3000 is a negligible portion. I'd say that the utility I assign the first $24000 because of this lies between 370 and 1080 times the utility I assign the next $3000. This is why I take 1A and 2B given that this entire thing is performed only once. Once my debts are paid, all bets (on 1A) are off. If we're dealing with utilons rather than dollars, or I have repeated opportunity to play (which is necessary for you to "money pump" me), iB is the obvious choice in both cases. comment by xSciFix · 2019-07-08T17:59:59.646Z · score: 1 (1 votes) · LW · GW Assuming this is a one-off and not a repeated iteration; I'd take 1A because I'd be *really* upset if I lost out on $27k due to being greedy and not taking the sure $24k. That 1/34 is a small risk but to me it isn't worth taking - the $24k is too important for me to lose out on.
I'd take 2B instead of 2A because the difference in odds is basically negligible, so why not go for the extra $3k? I have ~2/3rds chance to walk away with nothing either way. I don't really see the paradox there. The point is to win, yes? If I play game 1 and pick B and hit that 1/34 chance of loss and walk away with nothing I'll be feeling pretty stupid. Let's say you prefer 1A over 1B, and 2B over 2A, and you would pay a single penny to indulge each preference. The switch starts in state A. Before 12:00PM, you pay me a penny to throw the switch to B. The die comes up 12. After 12:00PM and before 12:05PM, you pay me a penny to throw the switch to A. But why would I pay to switch it back to A when I've already won given the conditions of B? And as Doug_S. mentions, you can take my pennies if I'm getting paid out tens of thousands of dollars. I do see the point in it being difficult to program this type of decision making, though. comment by Дмитрий Зеленский (dmitrii-zelenskii) · 2019-08-19T00:18:36.720Z · score: 1 (1 votes) · LW · GW Oh, here I come again; I've already commented in similar fashion elsewhere, and several people said the same here: nothing vs. non-nothing as a binary switch may work better if the situation is not repeated to "add up to normality" but only played once. One can argue that each repeat may seem like being played once, but, being creatures gifted with memory, we can notice the catch of encountering such situations often and modify our behaviour.
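As a closing check, the ratios lolbifrons quotes earlier in the thread do come out as stated (a quick sketch):

```python
ev = {
    "1A": 24_000.0,
    "1B": 33 / 34 * 27_000,  # ≈ 26205.88
    "2A": 0.34 * 24_000,     # 8160
    "2B": 0.33 * 27_000,     # 8910
}

# Relative EV gain of the B option is the same ~9.19% in both games.
rel_gain_1 = (ev["1B"] - ev["1A"]) / ev["1A"]
rel_gain_2 = (ev["2B"] - ev["2A"]) / ev["2A"]

# "How much you must value the first $24k over the next $3k" thresholds:
threshold_1 = ev["1A"] / (ev["1B"] - ev["1A"]) / (1 - 33 / 34)  # ≈ 369.92
threshold_2 = ev["2A"] / (ev["2B"] - ev["2A"]) / (0.34 - 0.33)  # ≈ 1088
```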
http://physics.aps.org/synopsis-for/10.1103/PhysRevLett.108.103002
# Synopsis: Molecular Speed Bump Experimenters slow molecules by shining lasers at them. Physicists have explored many interesting phenomena, including Bose-Einstein condensation, by trapping atoms with temperatures a tiny fraction of a degree above absolute zero. Doing the same for molecules is harder, but in Physical Review Letters researchers report using lasers to significantly slow molecules. When combined with previously demonstrated molecular cooling schemes, this should allow new types of molecules to be trapped, an important prerequisite to studying chemical reactions and quantum phenomena at ultralow temperatures. To prepare ultracold diatomic molecules, one approach is to bind together pairs of ultracold atoms. John Barry and colleagues at Yale University, Connecticut, are exploring a different approach: laser slowing and cooling of existing molecules. The net momentum imparted by many photons, absorbed from a laser beam and then re-emitted in a random direction, can slow molecules enough so that they can then be trapped and cooled. A big challenge is that the photon energy can easily be diverted into heating of the internal vibrations or rotations of the molecules. The researchers chose to study strontium monofluoride, which can be manipulated to nearly eliminate this heating. They directed a beam of these molecules head-on into a laser tuned to excite a specific electronic transition of the molecule. They also added other lasers in order to return any vibrationally excited molecules back to the ground state. The researchers slowed as much as 6% of the molecules in their original beam from 140 meters per second to less than 50, which required each molecule to absorb and re-emit some 10,000 photons. Further slowing, needed to capture the molecules in a trap, will require reduction of the sideways velocity of the molecules, which is increased by the random photon emissions during the slowing process.
– Don Monroe
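The photon count in the synopsis can be sanity-checked with a back-of-envelope recoil estimate (a sketch: the ~663 nm SrF cooling wavelength and the 107 u molecular mass are my assumptions, not stated in the synopsis):

```python
h = 6.626e-34        # Planck constant, J*s
amu = 1.661e-27      # atomic mass unit, kg
m = 107 * amu        # SrF: Sr (~88 u) + F (~19 u)
wavelength = 663e-9  # assumed SrF cooling transition, m

v_recoil = h / (m * wavelength)    # velocity kick per scattered photon
n_photons = (140 - 50) / v_recoil  # scatterings to slow 140 -> 50 m/s

# v_recoil ≈ 5.6 mm/s and n_photons ≈ 1.6e4, the same order of magnitude
# as the ~10,000 photons quoted in the synopsis.
```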
https://math.iitm.ac.in/event/view/226
Department of Mathematics Indian Institute Of Technology Madras, Chennai OPEN BOOKS FOR CLOSED NON-ORIENTABLE 3-MANIFOLDS Abstract: In this talk, I am going to give a proof of the existence of an open book decomposition for a closed non-orientable 3-manifold. This open book decomposition is analogous to a planar open book decomposition for a closed orientable 3-manifold. More precisely, we will give an open book decomposition of a given closed non-orientable 3-manifold whose pages are punctured Möbius bands. If time permits, I will also give an algorithm to determine the monodromy of this open book. This is a joint work with Suhas Pandit and Abhijeet Ghanwat. Key Speaker Mr. Selvam A. Place NAC 522 Start Time 3:00 PM Finish Time 4:00 PM
http://www.zora.uzh.ch/34365/
The orbital evolution induced by baryonic condensation in triaxial haloes Valluri, M; Debattista, V P; Quinn, T; Moore, B (2010). The orbital evolution induced by baryonic condensation in triaxial haloes. Monthly Notices of the Royal Astronomical Society, 403(1):525-544. Abstract Using spectral methods, we analyse the orbital structure of prolate/triaxial dark matter (DM) haloes in N-body simulations in an effort to understand the physical processes that drive the evolution of shapes of DM haloes and elliptical galaxies in which central masses are grown. A longstanding issue is whether the change in the shapes of DM haloes is the result of chaotic scattering of the major family of box orbits that serves as the backbone of a triaxial system, or whether they change shape adiabatically in response to the evolving galactic potential. We use the characteristic orbital frequencies to classify orbits into major orbital families, to quantify orbital shapes and to identify resonant orbits and chaotic orbits. The use of a frequency-based method for distinguishing between regular and chaotic N-body orbits overcomes the limitations of Lyapunov exponents, which are sensitive to numerical discreteness effects. We show that regardless of the distribution of the baryonic component, the shape of a DM halo changes primarily due to changes in the shapes of individual orbits within a given family. Orbits with small pericentric radii are more likely to change both their orbital type and shape than orbits with large pericentric radii. Whether the evolution is regular (and reversible) or chaotic (and irreversible) depends primarily on the radial distribution of the baryonic component. The growth of an extended baryonic component of any shape results in a regular and reversible change in orbital populations and shapes, features that are not expected for chaotic evolution.
In contrast, the growth of a massive and compact central component results in chaotic scattering of a significant fraction of both box and long-axis tube orbits, even those with pericentre distances much larger than the size of the central component. Frequency maps show that the growth of a disc causes a significant fraction of halo particles to become trapped by major global orbital resonances. We find that despite the fact that the shape of a DM halo is always quite oblate following the growth of a central baryonic component, a significant fraction of its orbit population has the characteristics of its triaxial or prolate progenitor.
Journal Article, refereed, original work. Wiley-Blackwell, ISSN 0035-8711. The definitive version is available at www.blackwell-synergy.com. DOI: 10.1111/j.1365-2966.2009.16192.x. arXiv: http://arxiv.org/abs/0906.4784. Permanent URL: http://doi.org/10.5167/uzh-34365
http://math.stackexchange.com/questions/187/euclidean-tilings-that-are-uniform-but-not-vertex-transitive
# Euclidean Tilings that are Uniform but not Vertex-Transitive Basic definitions: a tiling of d-dimensional Euclidean space is a decomposition of that space into polyhedra such that there is no overlap between their interiors, and every point in the space is contained in some one of the polyhedra. A vertex-uniform tiling is a tiling such that each vertex figure is the same: each vertex is contained in the same number of k-faces, etc.; the view of the tiling is the same from every vertex. A vertex-transitive tiling is one such that for every two vertices in the tiling, there exists an element of the symmetry group taking one to the other. Clearly all vertex-transitive tilings are vertex-uniform. For n=2, these notions coincide. However, Grunbaum, in his book on tilings, mentions but does not explain that for n >= 3, there exist vertex-uniform tilings that are not vertex-transitive. Can someone provide an example of such a tiling, or a reference that explains this? - Could you clarify vertex-transitive a bit more? –  Casebash Jul 21 '10 at 6:15 I'm not sure I understand your definition of vertex-uniform. Could you clarify? –  Qiaochu Yuan Jul 28 '10 at 7:45 ok, sorry guys; closed this question because I was being stupid and, as asked, it's not a real question. –  Jamie Banks Aug 5 '10 at 23:08 Transitive action on the vertices is usually the definition of "looking the same from each vertex". So maybe you have two different groups in mind: 1. The automorphism group of the combinatorics of the tiling; the abstract structure of vertices, faces, edges, etc. 2. The group of Euclidean motions that leave the tiling invariant. Motions meaning isometries of space, where one should also specify whether orientation-reversal is allowed or not. Any instance of this is also an automorphism of the tiling combinatorics. 
If tilings that are vertex-transitive under group #1 are "vertex-uniform" and those that are vertex-transitive in the stricter sense of group #2 are "vertex-transitive", then it is easy to give examples where some of the polyhedra are deformations of others (so not isometrically transitive in sense #2). For example, a one-dimensional periodic tiling with several different sizes of interval will have all vertices equivalent combinatorially but a finite number of distinct vertex types under geometric equivalence. This is maybe too simple to be what Grunbaum had in mind; can you quote the book? - Let me discuss an analogous situation with regard to convex polyhedra. Archimedes, generalizing what are often called the Platonic Solids or the convex regular polyhedra (there are five of them), seems to have discovered 13 polyhedra with the property that the pattern of faces around each vertex was the same for every vertex of the polyhedron, and all of the faces of the polyhedron were regular polygons. I say seems to have done this because the manuscript that describes what he did is lost. Pappus, who lived many years after Archimedes, describes these polyhedra explicitly and mentions 13 convex solids. Years later many artists and mathematicians talked about these polyhedra. Kepler explicitly mentions two infinite families of polyhedra which have this property: the prisms (consisting of two regular n-gons and n squares) and the anti-prisms (consisting of two regular n-gons and 2n equilateral triangles). Kepler purports to give a proof that there are 13 such solids (with at least two types of faces). I say purports because there are in fact 14 convex polyhedra which meet the local symmetry condition described above. The one Archimedes, Pappus, Kepler and others missed is often called the pseudo-rhombicuboctahedron. 
(Kepler created confusion in referring to 14 solids in a place other than where he gave his "proof.") http://en.wikipedia.org/wiki/Elongated_square_gyrobicupola Many books to this day continue to talk about 13 Archimedean convex solids (other than the prisms and antiprisms). With the definition that Archimedes almost certainly had in mind this is wrong - there are 14 such solids. However, if one considers convex polyhedra with at least two regular polygons as faces, and where the symmetry group of the solid is transitive on the vertices (that is, the symmetry group can take any vertex to any other vertex) then there are only 13 such solids. However, almost certainly Archimedes had no knowledge of the idea of a symmetry group in the modern sense. Depending on whether one uses a local symmetry notion or a global symmetry notion there are either 14 or 13 such convex solids (aside from the prisms and antiprisms).
https://export.arxiv.org/abs/1611.09334
# Title: Twisted gauge theories in 3D Walker-Wang models Abstract: Three dimensional gauge theories with a discrete gauge group can emerge from spin models as a gapped topological phase with fractional point excitations (gauge charge) and loop excitations (gauge flux). It is known that 3D gauge theories can be "twisted", in the sense that the gauge flux loops can have nontrivial braiding statistics among themselves, and such twisted gauge theories are realized in models discovered by Dijkgraaf and Witten. A different framework to systematically construct three dimensional topological phases was proposed by Walker and Wang and a series of examples have been studied. Can the Walker-Wang construction be used to realize the topological order in twisted gauge theories? This is not immediately clear because the Walker-Wang construction is based on a loop condensation picture while the Dijkgraaf-Witten theory is based on a membrane condensation picture. In this paper, we show that the answer to this question is Yes, by presenting an explicit construction of the Walker-Wang models which realize both the twisted and untwisted gauge theories with gauge group $\mathbb{Z}_2 \times \mathbb{Z}_2$. We identify the topological order of the models by performing modular transformations on the ground state wave functions and show that the modular matrices exactly match those for the $\mathbb{Z}_2 \times \mathbb{Z}_2$ gauge theories. By relating the Walker-Wang construction to the Dijkgraaf-Witten construction, our result opens up a way to study twisted gauge theories with fermionic charges, and correspondingly strongly interacting fermionic symmetry protected topological phases and their surface states, through exactly solvable models. Comments: 23 pages Subjects: Strongly Correlated Electrons (cond-mat.str-el); High Energy Physics - Theory (hep-th) Journal reference: Phys. Rev. 
B 95, 115142 (2017) DOI: 10.1103/PhysRevB.95.115142 Cite as: arXiv:1611.09334 [cond-mat.str-el] (or arXiv:1611.09334v1 [cond-mat.str-el] for this version) ## Submission history From: Zitao Wang [view email] [v1] Mon, 28 Nov 2016 20:43:33 GMT (392kb,D)
https://thethong.wordpress.com/category/write-up/
## Archive for the ‘Write up’ Category ### Information Criterion This post will be my summary of the Akaike Information Criterion (AIC) and the Takeuchi Information Criterion (TIC). In particular, a derivation of AIC and TIC is shown. And if I can understand more about the Generalized Information Criterion, I will cover it too. ### Formulating the PCA Today I have thought about how one can formulate the Principal Component Analysis (PCA) method. In particular I want to reformulate PCA as the solution of a regression problem. The idea of reformulating PCA as the solution of some regression problem is useful in Sparse PCA, in which a $L_1$ regularization term is inserted into a ridge regression formula to enforce sparseness of the coefficients (i.e. elastic net). There are at least two equivalent ways to motivate PCA. In this post I will first give a formulation of PCA based on orthogonal projection, and then discuss a regression-type reformulation of PCA. ### Nonparametric Bayesian Seminar 1 : Notes (written down mainly so I don't forget, so it will be messy; no figures) Paper: Introduction to Nonparametric Bayesian Models (Naonori Ueda, Takeshi Yamada) ### Q (These are some thoughts I got while reading the inspiring book by James D. Watson "Avoid Boring People and Other Lessons from a Life in Science." Maybe I will give a full summary of the lessons from the book later.) ### Mathematics and the Unexpected (Just some thoughts while I read Mathematics and the Unexpected and Innumeracy: Mathematical Illiteracy and Its Consequences. Maybe some more detailed post later.)
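The orthogonal-projection formulation of PCA mentioned in the "Formulating the PCA" post above can be illustrated with a minimal numerical sketch (NumPy; the function name and all details here are illustrative, not from the original post):

```python
import numpy as np

def pca(X, k):
    """Top-k principal directions of data matrix X (rows = samples),
    via eigendecomposition of the sample covariance; a minimal sketch
    of the orthogonal-projection view of PCA."""
    Xc = X - X.mean(axis=0)                 # center the data
    cov = Xc.T @ Xc / (len(X) - 1)          # sample covariance matrix
    vals, vecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:k]      # indices of the k largest
    return vecs[:, order]                   # columns = principal axes
```

Projecting the centered data onto these columns gives the principal component scores; the regression-type reformulation used in Sparse PCA recovers the same subspace as the minimizer of a (regularized) reconstruction error.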
http://math.stackexchange.com/questions/36483/monotonicity-of-fn-frac1n-sum-i-1n-1-frac1i/36489
# Monotonicity of $f(n)= \frac{1}{n} \sum_{i=1}^{n-1} \frac{1}{i}$ Define $f: \mathbb{N} \rightarrow \mathbb{R}$ as $f(n)= \frac{1}{n} \sum_{i=1}^{n-1} \frac{1}{i}$. I was wondering how to tell if $f$ is an increasing or decreasing function? Thanks and regards! - You are taking the average of things that keep getting smaller so... –  Jonas Teuwen May 2 '11 at 19:06 Maybe you take the difference $f(n)-f(n-1)$ and try to figure out whether it is positive or negative... –  Fabian May 2 '11 at 19:10 @Tim: Do you really want the upper limit on the sum to be $n-1$? –  Chris Leary May 2 '11 at 19:10 @Chris: yes, I do. –  Tim May 2 '11 at 19:14 @Tim: Then you take the last term $0$, that doesn't change much about the argument. –  Jonas Teuwen May 2 '11 at 19:24 A formal proof would be \begin{align} f(n+1) - f(n) &= \frac{1}{n+1} \sum_{i=1}^{n} \frac{1}{i} - \frac{1}{n} \sum_{i=1}^{n-1} \frac{1}{i} \\ &= \frac{n}{n+1}\frac{1}{n} \left(\sum_{i=1}^{n-1} \frac{1}{i} + \frac{1}{n}\right) - \frac{1}{n} \sum_{i=1}^{n-1} \frac{1}{i} \\ &= (\frac{n}{n+1} - 1)\frac{1}{n} \sum_{i=1}^{n-1} \frac{1}{i} + \frac{1}{n(n+1)} \\ & = \frac{1}{n(n+1)} \left(1 - \sum_{i=1}^{n-1} \frac{1}{i}\right) < 0 \end{align} for all $n \geq 3$ - Actually for $n=2$ the expression is equal to 0. –  Fabian May 2 '11 at 19:24 Thanks Fabian, corrected the mistake –  Shuhao Cao May 2 '11 at 19:26 you're welcome. +1 nice answer –  Fabian May 2 '11 at 19:31 A simpler proof would be to notice that $\displaystyle f(n) \gt \frac{1}{n}$ (for $n \gt 2$) Thus $\displaystyle (n+1)f(n+1) - nf(n) = \frac{1}{n} \lt f(n)$ and so $f(n) \gt f(n+1)$. - Nice, simple argument. –  Jonas Teuwen May 2 '11 at 22:25 Thanks! Nice indeed. I wish I were able to accept multiple answers. Why isn't it possible on SE sites? –  Tim May 3 '11 at 1:13 @Tim/Jonas: Thanks! @Tim: I guess the intent is to have one answer which will make it easier for people who come across this later. 
We can always edit the accepted answer to have multiple proofs, I guess. –  Aryabhata May 3 '11 at 2:13 The following 3 conditions are equivalent: $$f(n)>f(n+1)$$ $$\frac{1+\frac12+\dots+\frac1{n-1}}n>\frac{1+\frac12+\dots+\frac1{n}}{n+1}$$ $$n+\frac{n+1}2+\dots+\frac{n+1}{n-1}+1>n+\frac n2+\dots+\frac n{n-1}+1$$ In the last inequality, the corresponding terms on the LHS are greater than (or equal to) the corresponding terms on the RHS. (They both have the same number of terms.) At least one of these inequalities is strict. EDIT: (From the comments I see that this was not clear enough.) There is the same number of terms, since I divided $n+1$ (obtained by multiplying the first term in the second inequality) between $n$ and $1$ (the first and the last term in the LHS). - Consider: \begin{align*} n(n+1)(f(n)-f(n+1))&=n(n+1)\left(\frac{1}{n}\sum_{i=1}^{n-1}\frac{1}{i}-\frac{1}{n+1}\sum_{i=1}^{n}\frac{1}{i}\right)\\ &=\sum_{i=1}^{n-1}\left(\frac{n+1}{i}-\frac{n}{i}\right) - 1\\ &=\sum_{i=1}^{n-1}\frac{1}{i} - 1\\ &\geq 0\end{align*} for $n\geq2$. In particular $f(n)-f(n+1)\geq0$ in general. Edit: Little writing error. - Your calculation doesn't seem to be correct... –  Fabian May 2 '11 at 19:25 There is a term missing in the second line, since the last sum includes the $n$th term but the first one does not. That is, the second line should be $$\left(\sum_{i=1}^{n-1}\left(\frac{n+1}{i}-\frac{n}{i}\right)\right) - 1.$$ –  Arturo Magidin May 2 '11 at 19:27 What do you do with the term $i=n$ in the second sum? –  Fabian May 2 '11 at 19:28 I do apologize, I forgot the term "-1". Now I edited the answer and it should be correct. –  Giovanni De Gaetano May 3 '11 at 9:03
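The sign analysis in the answers above is easy to confirm numerically; here is a short sketch (Python, not from the original thread) that computes $f(n)$ exactly with rational arithmetic:

```python
from fractions import Fraction

def f(n):
    """f(n) = (1/n) * sum_{i=1}^{n-1} 1/i, computed exactly with rationals."""
    return Fraction(1, n) * sum(Fraction(1, i) for i in range(1, n))

# f(2) == f(3) (the n = 2 difference is zero, as noted in the comments),
# and f is strictly decreasing from n = 3 on.
assert f(2) == f(3) == Fraction(1, 2)
assert all(f(n + 1) < f(n) for n in range(3, 50))
```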
https://fuqna.com/2018/03/
## gumuscles?(gum muscle gh release) – meta-saturation #### Human growth hormone response to repeated bouts of aerobic exercise. Content Source – www.physiology.org In summary, results of the present study indicate that the GH response to repeated bouts of aerobic exercise corresponding to 70% V̇O2max is not attenuated but, rather, may be augmented, at least under semifasting conditions. The increase in GH secretion observed with repeated bouts of exercise was related to an increase in GH pulse amplitude and the mass of GH secreted per pulse. Three 30-min bouts of aerobic exercise (independent of recovery periods) significantly increased daytime integrated GH concentrations without a significant change in nocturnal GH concentrations compared with control conditions. We conclude that high-intensity aerobic exercise is a potent stimulus of GH secretion that is able to overcome GH auto-negative feedback. Thus repeated bouts of exercise on the same day are able to consistently stimulate GH secretion without attenuation of the GH response.
http://math.stackexchange.com/questions/131043/is-there-a-shortcut-for-calculating-summations-such-as-this
# Is there a shortcut for calculating summations such as this? [duplicate] Possible Duplicate: Computing $\sum_{i=1}^{n}i^{k}(n+1-i)$ I'm curious in knowing if there's an easier way for calculating a summation such as $\sum_{r=1}^nr(n+1-r)$ I know the summation $\sum_{r=1}^xr(n+1-r)$ is going to be a cubic equation, which I should be able to calculate by taking the first few values to calculate all the coefficients. Then I can plug in $n$ for $x$ to get the value I'm looking for. But it seems like there must be an easier way. In the meantime, I'll be calculating this the hard way. - ## marked as duplicate by Ross Millikan, Pedro Tamaroff♦, Kannappan Sampath, Guess who it is., Zev ChonolesApr 24 '12 at 13:19 If you already know the summations for consecutive integers and consecutive squares, you can do it like this: \begin{align*} \sum_{r=1}^n r(n+1-r)&=\sum_{r=1}^nr(n+1)-\sum_{r=1}^nr^2\\ &=(n+1)\sum_{r=1}^nr-\sum_{r=1}^nr^2\\ &=(n+1)\frac{n(n+1)}2-\frac{n(n+1)(2n+1)}6\\ &=\frac16n(n+1)\Big(3(n+1)-(2n+1)\Big)\\ &=\frac16n(n+1)(n+2)\;. \end{align*} Added: Which is $\dbinom{n+2}3$, an observation that suggests another way of arriving at the result. First, $r$ is the number of ways to pick one number from the set $\{0,\dots,r-1\}$, and $n+1-r$ is the number of ways to pick one number from the set $\{r+1,r+2,\dots,n+1\}$. Suppose that I pick three numbers from the set $\{0,\dots,n+1\}$; the middle number of the three cannot be $0$ or $n+1$, so it must be one of the numbers $1,\dots,n$. Call it $r$. The smallest number must be from the set $\{0,\dots,r-1\}$, and the largest must be from the set $\{r+1,r+2,\dots,n+1\}$, so there are $r(n+1-r)$ three-element subsets of $\{0,\dots,n+1\}$ having $r$ as middle number. 
Thus, the total number of three-element subsets of $\{0,\dots,n+1\}$ is $$\sum_{r=1}^nr(n+1-r)\;.$$ But $\{0,\dots,n+1\}$ has $n+2$ elements, so it has $\dbinom{n+2}3$ three-element subsets, and it follows that $$\sum_{r=1}^nr(n+1-r)=\binom{n+2}3=\frac{n(n+1)(n+2)}6\;.$$ - @anon: Yep, I just caught that. Thanks. –  Brian M. Scott Apr 12 '12 at 21:02 I know the formulas for the summation of consecutive integers and consecutive cubes, just not consecutive squares. And it seems like the long way got me the wrong answer. Guess I have a mistake to find... –  Mike Apr 12 '12 at 21:18 Ugh... Just saw the added part. That I should have figured out sooner and avoided the summation in the first place. –  Mike Apr 12 '12 at 21:31 $\binom{n+2}3$ is also the number of ways to choose 3 not necessarily unique elements from the set $\{1,...,n\}$, which turned out to be more or less what I was doing. –  Mike Apr 12 '12 at 22:36 Yes, linearity and a few memorized formulas: $$\begin{array}{c l} \sum_{r=1}^n r(n+1-r) &=(n+1)\left(\sum_{r=1}^n r\right)-\sum_{r=1}^n r^2 \\ & = (n+1)\frac{n(n+1)}{2}-\frac{n(n+1)(2n+1)}{6} \\ & = \frac{n(n+1)(n+2)}{6}.\end{array}$$ - Note that $$r(n+1-r)=n\binom{r}{1}-2\binom{r}{2}\tag{1}$$ Using the identity $$\sum_r\binom{n-r}{a}\binom{r}{b}=\binom{n+1}{a+b+1}\tag{2}$$ with $a=0$, we can sum $(2)$ for $r$ from $1$ to $n$ and get $$n\binom{n+1}{2}-2\binom{n+1}{3}\tag{3}$$ Formula $(3)$ can be manipulated into more convenient forms, e.g. $$\left((n+2)\binom{n+1}{2}-2\binom{n+1}{2}\right)-2\binom{n+1}{3}\\[18pt] 3\binom{n+2}{3}-\left(2\binom{n+1}{2}+2\binom{n+1}{3}\right)\\[18pt] 3\binom{n+2}{3}-2\binom{n+2}{3}$$ $$\binom{n+2}{3}\tag{4}$$ - Guessing that k in (2) should be an r? –  Mike Apr 13 '12 at 0:48 @Mike: indeed. thanks –  robjohn Apr 13 '12 at 2:11
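The closed form $\binom{n+2}{3}=\frac{n(n+1)(n+2)}{6}$ derived in the answers above is easy to check by brute force; a short sketch (Python, not part of the original thread):

```python
from math import comb

def lhs(n):
    """Direct evaluation of sum_{r=1}^{n} r*(n+1-r)."""
    return sum(r * (n + 1 - r) for r in range(1, n + 1))

# Matches both forms of the closed-form answer.
assert all(lhs(n) == comb(n + 2, 3) == n * (n + 1) * (n + 2) // 6
           for n in range(1, 200))
```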
https://pacific.com.vn/archive/square-root-of-numbers-186e77
The particular case of the square root of 2 is assumed to date back to the Pythagoreans, and is traditionally attributed to Hippasus. The Yale Babylonian Collection clay tablet YBC 7289, created between 1800 BC and 1600 BC, shows the square root of 2 in base 60 as (1;24,51,10), which corresponds to 1.41421296, a value correct to 5 decimal places (1.41421356...). It was known to the ancient Greeks that square roots of positive integers that are not perfect squares are always irrational numbers: numbers not expressible as a ratio of two integers (that is, they cannot be written exactly as m/n, where m and n are integers).

When talking of the square root of a positive integer, it is usually the positive square root that is meant. The square roots of the perfect squares (e.g., 0, 1, 4, 9, 16) are integers; for other positive integers you can approximate the value of the square root by hand, and sometimes you can rewrite the square root in a somewhat simpler form. When we square a negative number we get a positive result, just the same as squaring a positive number; therefore, no negative number can have a real square root.

What if there is no calculator or smartphone handy? One hand method is digit-by-digit calculation: separate the number's digits into pairs moving from right to left, and at each step choose the largest possible digit that keeps the product less than or equal to the number on the left, writing it next to the value already subtracted. Another approach is iterative averaging: the inequality of arithmetic and geometric means shows that the average of an overestimate and the number divided by that overestimate is again an overestimate of the square root, so it can serve as a new overestimate with which to repeat the process; the iteration converges because the successive overestimates and underestimates get closer to each other after each iteration. A polynomial or piecewise-linear approximation can be used to find a start value close to the square root. This iteration uses the same scheme that the Newton–Raphson method yields when applied to the function y = f(x) = x² − a, using the fact that its slope at any point is dy/dx = f′(x) = 2x, but it predates Newton by many centuries. Another useful method for calculating the square root is the shifting nth root algorithm, applied for n = 2. When computing square roots with logarithm tables or slide rules, one can exploit the identities relating logarithms, powers and roots.

Square roots can also be considered beyond the nonnegative reals. Negative numbers acquire square roots by introducing a new number, denoted by i (sometimes j, especially in the context of electricity, where "i" traditionally represents electric current) and called the imaginary unit, which is defined such that i² = −1. To find a definition for the square root that allows us to consistently choose a single value, called the principal value, we start by observing that any complex number x + iy can be viewed as a point in the plane, (x, y), expressed using Cartesian coordinates, or equally in polar coordinates (r, θ), where r ≥ 0 is the distance of the point from the origin. A similar branch-cut problem appears with other complex functions, e.g. the complex logarithm and the relations log z + log w = log(zw) or log(z*) = log(z)*, which are not true in general.

In other algebraic settings: in rings where zero divisors do not exist, the square root of 0 is uniquely 0. In a finite field, the nonzero squares are called quadratic residues and form a group under multiplication; an element that is not a square is a quadratic non-residue. If the field is finite of characteristic 2, then every element has a unique square root. The square root of a nonnegative number is also used in the definition of the Euclidean norm (and distance), as well as in generalizations such as Hilbert spaces.
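The averaging iteration described above (often called Heron's or the Babylonian method) can be sketched in a few lines; this is an illustrative sketch, with the function name and stopping tolerance chosen arbitrarily:

```python
def heron_sqrt(a, tol=1e-12):
    """Approximate sqrt(a) by repeatedly replacing an overestimate x
    with the average of x and a/x (an underestimate), as described above."""
    if a < 0:
        raise ValueError("negative input has no real square root")
    if a == 0:
        return 0.0
    x = a if a >= 1 else 1.0          # any positive start value works
    while abs(x * x - a) > tol * a:   # stop when x*x is close enough to a
        x = (x + a / x) / 2
    return x
```

Because this is Newton's iteration for x² − a, convergence is quadratic: the number of correct digits roughly doubles on every pass.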
https://math.meta.stackexchange.com/questions/29252/misuse-of-tag-divergence-how-to-deal-with-questions-with-the-wrong-tag/29255
# Misuse of tag “divergence”: How to deal with questions with the wrong tag?

According to the tag info of (divergence), it is used when questions are about vector calculus. However, it is often attached to questions about convergence/divergence of sequences, series, or improper integrals. Here are such examples: If such questions were few, then I would edit them directly. However, I don't know what effect editing many questions all together will have. What can I do to solve the problematic situation?

• There is a related post in the Tag management 2017. This is also related: Do we need the “divergence” tag? – Martin Sleziak Oct 13 '18 at 14:45
• It is recommended not to bump many questions at the same time - see How much bumping is too much? and other related discussions. So in situations like this, a solution might be to "bookmark" the question in some way and do a few of them each time - and return to the list and continue editing later. – Martin Sleziak Oct 13 '18 at 15:25
• This might be easier if there is a simple way to find such questions and you do not have to compile a list manually - for example, in this specific instance the questions tagged divergence+sequences-and-series are very likely among the questions that need retagging. – Martin Sleziak Oct 13 '18 at 15:25
• The Stack Exchange staff can remove a tag without bumping any questions. However, this can only be done when a tag is completely removed - not only for a subset of questions. This process is called burnination. – Martin Sleziak Oct 13 '18 at 15:28
• Seems like it may be hard to prevent this from happening in the future. It's a reasonable mistake. – Alexander Gruber Oct 13 '18 at 16:33
• @AlexanderGruber Perhaps changing the name of the tag to something more descriptive could help users to notice what the tag is supposed to be used for. (In the thread linked above, the name (divergence-vector-field) was suggested.) – Martin Sleziak Oct 13 '18 at 17:18
• That sounds good to me. 
– Alexander Gruber Oct 13 '18 at 18:04 • Well, it'll be spammy, but I suppose we can just edit in the new divergence tag to the appropriate questions and then synonym the other one with convergence. Might make the front page messy for a bit but this is a common enough tag that I'd say it'd be worth the noise. – Alexander Gruber Oct 13 '18 at 18:07 • @AlexanderGruber I have posted the suggestion to change the name - to see what other users think about this. I am not sure what is (logistically) the best way to do this. I'll leave a few comments in the Tagging chatroom. – Martin Sleziak Oct 14 '18 at 0:45 • @AlexanderGruber I'd prefer one tag that is called convergence-divergence rather than a synonym. We might end up with the opposite situation of divergence (in the vector calculs sense) being remapped to convergence. – quid Oct 14 '18 at 1:26 • With only 500 questions with this tag out of our almost million on site, and a perfectly good vector calculus tag, do we even need this tag? – Alfred Yerger Oct 16 '18 at 21:00 • @AlfredYerger Depending on how seriously you mean the suggestion to remove the tag completely, it might be useful to post it as an answer rather than as a comment. (In that way more people will notice it and they can upvode/downvote and discuss in the comments under the answer.) As already mentioned, removal of the tag was suggested here: Do we need the “divergence” tag? The situation at the time was slightly different - the tag-info clarifying the intended use of the tag did not exist back then. – Martin Sleziak Oct 17 '18 at 10:29 • That proves it, mathematicians are boring. If this had been asked on meta.StackOverflow.com, there's no way the title wouldn't have been “divergence of the divergence tag”. – leftaroundabout Oct 19 '18 at 13:34 • @leftaroundabout You mean something like: Optimizing the tags for maxima and mimima to an extremum? 
– Martin Sleziak Oct 20 '18 at 8:48 • It's probably not a big deal, but since a new answer was posted, perhaps my answer should be unaccepted while things are sting being discussed. – Martin Sleziak Oct 22 '18 at 18:03 Proposal: Changing the tag name to something more descriptive might make it clearer to the user what the tag is intended for and thus incorrect use will be less likely. (And thus reducing the frequency of this problem in the future.) I would suggest or or something similar. This is the same suggestion as previously made in the Tag management 2017 thread. (It was suggested to remove the tag completely in the post Do we need the “divergence” tag? - this was before the tag-info was created.) I will point out that moderators can change the name of a tag without bumping any questions, for details on this see: Can you change the name of a tag? (It is also possible to remove all instances of a tag, this is called burnination.) However, even if the tag-name is changed, the questions which are tagged incorrectly have to be retagged manually. However, I don't know what an effect occurs when I edit many questions all together. What can I do to solve the problematic situation? It is recommended not to bump many questions at the same time - see How much bumping is too much? and other related discussions. So in situations like this, a solution might be to "bookmark" the question in some way and do a few of them each time - and return to the list and continue editing later. This might be easier if there is a simple way to find such questions and you do not have to compile a list manually - for example, in this specific instance the questions tagged divergence+sequences-and-series are very likely among the questions that need retagging. I will add a link to this question (which was asked partly as a reaction to this post): What do you use to mark questions that need editing when you want to avoid excessive bumping? Alternative proposal: Remove this tag. 
There are only approximately 500 questions with this tag, which refers to just one very specific operation in elementary vector calculus (or calculus on manifolds), for which we also have appropriate tags that are much larger. In my view, a good tag should carve out an area of mathematics or a topic relevant to the community (such as limits-without-l'hopital, a common question type we've discussed on meta in the past), while remaining sufficiently broad. Many questions with this tag already contain other relevant tags, and the only ones I can find that do not are questions referring exclusively to calculations in vector calculus. This suggests to me that divergence is often simply something that arises within another problem, perhaps in probability or PDE, where using this tag feels somewhat analogous to tagging with algebra any problem involving formal manipulations. This would also eliminate the confusion discussed in the other answer, distinguishing vector calculus divergence from divergence in the sense of sequences or series.

• Agreed. The questions should use vector-calculus or exterior-derivative instead. – user7530 Oct 17 '18 at 16:39
• I will just point out (as a reaction to the 500-questions remark) that the number of questions should not be a very important criterion when considering whether a tag is useful or not. There are several tags on this site which I consider useful and which have far fewer questions. (Although typically from more advanced topics than this.) For this specific issue - whether we should have a separate tag for divergence (in the sense of vector calculus) - I do not have a strong opinion either way. Probably it would be good to hear from users who are active in questions about multivariable calculus. – Martin Sleziak Oct 19 '18 at 11:44
• "Only" 500 questions? That sounds like a LOT of questions. – Santropedro Oct 20 '18 at 22:56
• @Santropedro Considering most of them are just fine with 'probability' or 'multivariable calculus' as tags, which each have tens of thousands, I'd say not really. – Alfred Yerger Oct 21 '18 at 3:59

Non-solution. Nobody seems to have proposed editing the tag description as a solution, which is good, because it doesn't work. Over on [cs.se], we have a bunch of tags for programming languages which basically say, "If your question is about how to write programs in this language, your question is off-topic". We still get "How do I write this program?" questions using these tags. Of course there's a heavy selection bias here, since people who read and understand the tag description won't post their question. How to vote on this answer is left as an exercise for the reader. Do you vote up because you agree that editing the tag description won't work? Down for the same reason? Who knows?
http://mathhelpforum.com/algebra/65350-write-th-folowing-logarithmic-form.html
Thread: Write the following in Logarithmic Form

1. Write the following in Logarithmic Form

I am really stuck with this problem; any help would be greatly appreciated:

d^5/f^2 = h^0.25 (c/(e-g))^3

Thanks, JP

2. Hello, JP!

The instructions are rather vague. I assume we are to take logs ... then "expand" the expression.

$\frac{d^5}{f^2} \:=\:h^{\frac{1}{4}}\left(\frac{c}{e-g}\right)^3$

We have:

$\log\left(\frac{d^5}{f^2}\right) \;=\;\log\left[h^{\frac{1}{4}}\left(\frac{c}{e-g}\right)^3\right]$

$\log(d^5) - \log(f^2) \;=\;\log\left(h^{\frac{1}{4}}\right) + \log\left(\frac{c}{e-g}\right)^3$

$5\log(d) - 2\log(f) \;=\;\frac{1}{4}\log(h) + 3\log\left(\frac{c}{e-g}\right)$

$5\log(d) - 2\log(f) \;=\;\tfrac{1}{4}\log(h) + 3\bigg[\log(c) - \log(e-g)\bigg]$

$5\log(d) - 2\log(f) \;=\;\tfrac{1}{4}\log(h) + 3\log(c) - 3\log(e-g)$

3. Thanks very much on that one
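As a quick numeric sanity check of the expansion above (not part of the original thread), each side of the identity can be evaluated both directly and in expanded form, using arbitrary positive values chosen so that e - g > 0:

```python
import math

# Hypothetical positive test values, chosen so that e - g > 0.
c, d, e, f, g, h = 2.0, 3.0, 7.0, 1.5, 4.0, 5.0

# Left side: direct evaluation vs. the expanded log form.
lhs_direct   = math.log(d**5 / f**2)
lhs_expanded = 5 * math.log(d) - 2 * math.log(f)

# Right side: direct evaluation vs. the expanded log form.
rhs_direct   = math.log(h**0.25 * (c / (e - g))**3)
rhs_expanded = 0.25 * math.log(h) + 3 * math.log(c) - 3 * math.log(e - g)

# The log rules guarantee both pairs agree (up to floating-point error).
assert math.isclose(lhs_direct, lhs_expanded)
assert math.isclose(rhs_direct, rhs_expanded)
```

Note the check validates only the log-rule expansions, not the original equation itself, which relates otherwise-unconstrained variables.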
https://control.com/technical-articles/what-is-model-predictive-control-mpc/
Technical Article

# What is Model Predictive Control (MPC)?

August 10, 2020 by Mitesh Agrawal

## This article discusses the applications and uses of model predictive control.

Over the years, control systems for robots have changed drastically to allow for more complex actions and nonlinear dynamics. Model predictive control is one strategy that enables these more complex behaviors. Many of its applications involve either dynamic environments or dangerous, inaccessible environments that do not allow for human intervention. In addition, most of these robot models are highly nonlinear, which makes control more difficult. Typical industrial control strategies such as PD and PID control can fail to guarantee many desired properties. Though there is a lot of research on different optimal control strategies for requirements such as adaptive control and imitation control, one control strategy clearly stands out in state-of-the-art research in the domain: Model Predictive Control (MPC), which researchers have spent years curating for different applications. This article establishes the basic fundamentals of MPC.

### What Is MPC?

Imagine walking in a dark room: you sense the surroundings, predict the best path toward a goal, but take only one step at a time and repeat the cycle. The MPC process works the same way. The essence of MPC is to optimize the manipulatable inputs against forecasts of process behavior. MPC is an iterative process of optimizing predictions of robot states over a limited future horizon while manipulating the inputs over that horizon. The forecasting is achieved using a process model, so a dynamic model is essential when implementing MPC. These process models are generally nonlinear, but over short periods of time they can be linearized by methods such as Taylor expansion.
These approximations in linearization, or unpredictable changes in the dynamic model, can cause forecasting errors. Thus, MPC acts only on the first computed control input and then recomputes the optimized forecasts based on feedback. This makes MPC an iterative, model-based, predictive, optimal, feedback-based control strategy.

### How Does MPC Work?

MPC has three basic requirements. The first is a cost function J, which describes the expected behavior of the robot. This generally involves criteria for comparing different possible actions, such as minimization of error from the reference trajectory, minimization of jerk, obstacle avoidance, etc.

$J=\sum_{t=k}^{k+p} W_{a}\,(x_{t}-r_{t})^{2} + W_{b}\,\Delta u_{t}^{2}$

##### In general, the cost function can be written as shown above.

where:

- J: cost function
- x_t: robot states at time t
- r_t: reference states at time t
- u_t: predicted control input at time t
- W_a, W_b: weights chosen according to the requirement

The above cost function minimizes error from the reference trajectory as well as jerk caused by drastic changes in the inputs to the robot. The second requirement is a dynamic model of the robot, which enables MPC to simulate the robot's states over a given horizon for different possible inputs. The third is the optimization algorithm used to minimize the cost function J. Along with these requirements, MPC offers the flexibility to specify constraints to be respected during optimization, such as minimum and maximum values of the robot's states and inputs.

##### A basic working principle of MPC. Image courtesy of Martin Behrendt.

To understand how MPC works, consider a robot at current time k with a reference trajectory to follow over a given horizon p.
MPC takes the current states of the robot as input and simulates possible control input sequences for times k to k+p. From these possibilities, MPC selects the series of inputs that minimizes the cost function. From this series of predicted control inputs, MPC then implements only the first one and repeats the cycle at time k+1. Because of these iterative cycles over the horizon, taking one step at a time, MPC is also called receding horizon control. This receding control can be observed in the given simulation, where black markers represent desired trajectories and red markers represent trajectories forecasted by MPC.

MPC's biggest advantage is that it exploits the plant dynamics, as it explores all or most available control input options, depending on the optimization algorithm. Its second biggest advantage is flexibility in achieving complex goals and enforcing robust robot constraints: depending on the requirement, there is a lot of room to curate a task-specific objective function and to apply design-limit constraints on the inputs as well as the predicted outputs of a robot. MPC thus gives a very simple control strategy for complex control systems.

Along with its advantages, MPC has disadvantages too. Depending on the optimization algorithm and the dynamic model of the robot, MPC has a major drawback of computational complexity because of the iterative calculations at each time step. There are several methods to reduce this computational cost, such as the following:

1. Warm start, i.e., using the previous calculation as the base for optimization at the next step.
2. Primal log barrier, which folds inequality constraints into the objective function.
3. Use of efficient optimization methods such as Newton's method.

##### A diagram showing some of the advantages and disadvantages of MPC.

Secondly, a dynamic model of the robot is required for MPC, which may not be easy to derive for complex robots.
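The receding-horizon loop just described can be sketched in a few lines of code. The sketch below is my own illustration, not the article's: it uses a toy double-integrator plant, a quadratic cost of the form given earlier, and a random-shooting optimizer standing in for a real solver. At each step only the first input of the best sampled sequence is applied before re-planning.

```python
import random

def mpc_step(pos, vel, ref, horizon=10, n_candidates=200, dt=0.1,
             w_err=1.0, w_du=0.1, rng=random.Random(0)):
    """One receding-horizon step: sample candidate input sequences over
    the horizon, score them with a quadratic cost, and return only the
    first input of the best sequence (random-shooting optimization)."""
    best_u, best_cost = 0.0, float("inf")
    for _ in range(n_candidates):
        u_seq = [rng.uniform(-1.0, 1.0) for _ in range(horizon)]
        p, v, cost, u_prev = pos, vel, 0.0, 0.0
        for t, u in enumerate(u_seq):
            v += u * dt                # toy double-integrator dynamics
            p += v * dt
            cost += w_err * (p - ref[t]) ** 2 + w_du * (u - u_prev) ** 2
            u_prev = u
        if cost < best_cost:
            best_cost, best_u = cost, u_seq[0]
    return best_u

# Receding-horizon loop: apply only the first input, then re-plan.
pos, vel = 0.0, 0.0
ref = [1.0] * 10                       # constant reference position
for _ in range(50):
    u = mpc_step(pos, vel, ref)
    vel += u * 0.1                     # advance the "real" plant one step
    pos += vel * 0.1
```

Real MPC implementations replace the random search with a structured solver (for example, quadratic programming for linear models), but the apply-the-first-input-and-re-plan loop is the same.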
Also, MPC has the drawback of a large number of control variables to tune.

### Applications of MPC

MPC has been used efficiently in several industrial processes, such as chemical plants and oil refineries. However, its application to robotic systems in a true industrial environment, in which unavoidable modeling uncertainties and external disturbances affect the system, is still limited. ABB is using MPC to help clients from different sectors such as mining, minerals, cement, pulp and paper, oil and gas, and marine.

In the world of robotics, MPC is most commonly used for the planning and control of autonomous vehicles. Robots with high levels of autonomy and nonlinearities in their dynamic models, such as space robots and airplanes, use MPC. Another example of such a complex robot is a mobile manipulator: these robots must maintain their stability in dynamic environments, which calls for the use of MPC. Various warehouse-management manipulators with dynamic loading use MPC because of instability issues that arise with PID or adaptive control strategies.

MPC is also used in plants with precise machine-tooling operations where there is no human intervention. In such cases the cutting-tool material is very costly, and without human intervention errors might lead to huge losses; such precise movements are achieved better using MPC. Robot manipulators with an eye-in-hand camera are becoming more popular in these industries. As there is a high level of uncertainty associated with these robots, MPC is one of the most commonly used control strategies.

1 Comment

• pnachtwey August 13, 2020: Processing power isn't much of an issue these days. It takes a little skill to do a proper system identification to get an accurate model. Something that wasn't mentioned is that the longer the dead time, the further one must calculate into the future. This increases the need for CPU power.
CPU power can be reduced by updating at intervals that are twice as long if the application doesn't require high-frequency changes in the set point or target trajectory.
http://www.r-bloggers.com/modeling-trick-masked-variables/
July 1, 2012 By (This article was first published on Win-Vector Blog » R, and kindly contributed to R-bloggers)

library(ggplot2)
d <- read.table('http://www.win-vector.com/dfiles/maskVars/FRB_CHGDEL.csv',
   sep=',', header=T)
model1 <- lm(Charge.off.rate.on.single.family.residential.mortgages ~
   Charge.off.rate.on.credit.card.loans, data=d)
d$model1 <- predict(model1, newdata=d)
summary(model1)
plot1 <- ggplot(d) +
   geom_point(aes(x=model1,
      y=Charge.off.rate.on.single.family.residential.mortgages)) +
   xlim(-1,3) + ylim(-1,3)
#ggsave('plot1.png', plot1)
cor(d$model1, d$Charge.off.rate.on.single.family.residential.mortgages,
   use='complete.obs')
# 0.7706394

The plot below shows the performance of this trivial model (which ignores auto-correlation, inventory, dates, regional factors, macro-economic factors, and regulations). What we see is that the model incorrectly predicts continuous variation between zero and one percent, when actual mortgage charge-offs are more of a step function (the rate stays near zero until it jumps above one percent). Even so, the correlation of this model to actuals is 0.77, which is fair.

Any one-variable linear model is really just a shift and rescaling (an affine transform) of the single input variable, so we get the exact same shape and correlation if we skip the linear modeling step and directly plot the relation between the two variables. We show this in the R code and graph below.

plotXY <- ggplot(d) +
   geom_point(aes(x=Charge.off.rate.on.credit.card.loans,
      y=Charge.off.rate.on.single.family.residential.mortgages))
ggsave('plotXY.png', plotXY)
cor(d$Charge.off.rate.on.credit.card.loans,
   d$Charge.off.rate.on.single.family.residential.mortgages,
   use='complete.obs')
# 0.7706394

Now we get to the meat of the masked variable technique. We want to build a step-wise function that better fits the relation.
To do this the analyst, either by hand or through automation, could note in our last graph that residential mortgage charge-off rates do not seem to be very sensitive to credit card charge-off rates until the credit card charge-off rate exceeds 5%. To encode this domain knowledge we build three new synthetic variables. The first is an indicator that tells us whether the credit card charge-off rate is over 5% or not; we call this variable HL (the high/low indicator). We then multiply this new variable by our original variable to get a variable that only varies when the charge-off rate is above 5% (we call this variable H; it is an interaction between the new indicator variable and the original variable). Finally we create a third variable that varies only when the credit card charge-off rate is no more than 5%; this variable is equal to (1-HL) times the original variable, and we call it L. We call HL the mask and H and L masked variables. The R code to form these three new synthetic variables is given below:

d$Charge.off.rate.on.credit.card.loans.HL <-
   ifelse(d$Charge.off.rate.on.credit.card.loans > 5, 1, 0)
d$Charge.off.rate.on.credit.card.loans.H <-
   with(d, Charge.off.rate.on.credit.card.loans.HL*Charge.off.rate.on.credit.card.loans)
d$Charge.off.rate.on.credit.card.loans.L <-
   with(d, (1-Charge.off.rate.on.credit.card.loans.HL)*Charge.off.rate.on.credit.card.loans)

We can now use these new variables to build a slightly better model. We do this by exposing all three synthetic variables to the fitter. Thus the fitter now has available in its concept space all step-wise linear functions with a change at 5% (including discontinuous functions). This is related to kernel tricks: make the unknown function you want a linear combination of functions you have, and a standard linear fitter can find it for you.
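For readers working outside R, the mask construction is just elementwise products with an indicator. A small Python sketch (with made-up rates; the blog's actual code is the R above):

```python
# Hypothetical credit-card charge-off rates (percent); illustrative only.
cc = [2.0, 4.5, 5.5, 8.0, 3.0]

hl = [1.0 if r > 5 else 0.0 for r in cc]    # mask: 1 when rate exceeds 5%
h  = [m * r for m, r in zip(hl, cc)]        # varies only above 5%
l  = [(1 - m) * r for m, r in zip(hl, cc)]  # varies only at or below 5%

# Each observation lands in exactly one masked variable, so H + L
# reconstructs the original column.
assert [a + b for a, b in zip(h, l)] == cc
```

Exposing hl, h, and l together to any linear fitter then spans all step-wise linear functions with a change at 5%, exactly as described above.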
The R code and graph are given below:

modelSplit <- lm(Charge.off.rate.on.single.family.residential.mortgages ~
   Charge.off.rate.on.credit.card.loans.HL +
   Charge.off.rate.on.credit.card.loans.H +
   Charge.off.rate.on.credit.card.loans.L, data=d)
d$modelSplit <- predict(modelSplit, newdata=d)
summary(modelSplit)
plotSplit <- ggplot(d) +
   geom_point(aes(x=modelSplit,
      y=Charge.off.rate.on.single.family.residential.mortgages)) +
   xlim(-1,3) + ylim(-1,3)
#ggsave('plotSplit.png', plotSplit)
cor(d$modelSplit, d$Charge.off.rate.on.single.family.residential.mortgages,
   use='complete.obs')
# 0.8133998

Notice we now get a better correlation of 0.81, and the graph shows that the model is more accurate in the sense that its predictions are also clustered near zero (without the horizontal stripe that represented mis-predicted variation).

Now we could call this modeling technique a “poor man’s GAM.” What a GAM does is try to learn the optimal re-shaping of a variable for a given modeling problem. That is, instead of the analyst picking a cut-point and asking the modeling system to find slopes (which is what we did when we introduced separate masked variables), we ask the modeling system to learn the best re-shaping. The R code and graph for a GAM fit are given below. Notice the s() wrapper, which tells the GAM to think about reshaping a given variable.

library(gam)
modelGAM <- gam(Charge.off.rate.on.single.family.residential.mortgages ~
   s(Charge.off.rate.on.credit.card.loans), data=d)
summary(modelGAM)
d$modelGAM <- predict(modelGAM, newdata=d)
plotGAM <- ggplot(d) +
   geom_point(aes(x=modelGAM,
      y=Charge.off.rate.on.single.family.residential.mortgages)) +
   xlim(-1,3) + ylim(-1,3)
#ggsave('plotGAM.png', plotGAM)
#png(filename='gamShape.png')
plot(modelGAM)
#dev.off()
cor(d$modelGAM, d$Charge.off.rate.on.single.family.residential.mortgages,
   use='complete.obs')
# 0.8160738

The GAM correlation of 0.82 is slightly better than our masked model. And we can ask the GAM to show us how it reshaped the input variable.
Notice the shape the GAM splines picked is a hockey stick (a piecewise-linear continuous curve) with the bend near 5%.

For completeness we include a neural net fit, but we haven't tuned its controls or hyper-parameters, so it is not a fully fair comparison. We just want to emphasize that properly using a neural net takes some work (it isn't completely free), and we feel that if you are going to work on variables you are better off using techniques like variable transforms, treatments, or masks.

library(nnet)
modelNN <- nnet(Charge.off.rate.on.single.family.residential.mortgages ~
   Charge.off.rate.on.credit.card.loans, data=d, size=3)
d$modelNN <- predict(modelNN, newdata=d)
plotNN <- ggplot(d) +
   geom_point(aes(x=modelNN,
      y=Charge.off.rate.on.single.family.residential.mortgages)) +
   xlim(-1,3) + ylim(-1,3)
#ggsave('plotNN.png', plotNN)
cor(d$modelNN, d$Charge.off.rate.on.single.family.residential.mortgages,
   use='complete.obs')
# 0.7961966

The point of the masked variable technique is that it represents a good compromise between analyst/data-scientist reasoning and sophisticated packages. The masking cuts can be generated once by an analyst and supported by documenting graphs, as we have shown here. Then an already in-place standard fitting system can pick the coefficients for the new synthetic variables (causing the fitter itself to compute the shape of the optimal piecewise curve, saving the analyst this chore). This technique can be used in any data-analysis environment that supports graphing, user-defined transformations, and regression fitting (linear or otherwise). The technique doesn't require the analyst to pick the actual transform or slopes (again, the fitter does this). Also, this methodology is good for supporting audit and maintenance.
The construction of synthetic variables can be documented and validated, and standard explainable methods can be used for the remainder of the fitting process. We feel the masked variable trick represents a good practical compromise in terms of power, rigor, and clarity.
http://dergipark.gov.tr/aubtda/issue/31308/346121
Year 2017, Volume 18, Issue 5, Pages 897-907 | 2017-12-31

## REAL-TIME CONTROL OF MOBILE ROBOT USING HMM-BASED SPEECH RECOGNITION SYSTEM

#### Hayrettin TOYLAN [1] , Erol TÜRKEŞ [2] , Evren ÇAĞLARER [3]

Human-robot interaction (HRI) is a significant area of interest in robotics which has attracted a wide variety of studies in recent years. In order to provide natural human-robot interaction, robots will have to acquire the skills to detect and meaningfully integrate information from multiple modalities. In this paper, a practical speech-controlled mobile robot car system is presented and discussed. In this study, an isolated-word recognition system based on a Hidden Markov Model (HMM) was developed and used for real-time control of a mobile robot. Mel-Frequency Cepstral Coefficients (MFCC) were applied as features for the control design of the mobile robot. In the study, 270 speech commands (İLERİ = forward, GERİ = backward, DUR = stop, SAĞA = right, SOLA = left), collected from 54 different people, were put through a series of mathematical operations and 12 cepstral coefficients were derived from each. A database was generated from these 12 cepstral coefficients, and the HMM model was trained and tested against this database. The speech data were split into two groups: 90% training data and 10% test data. The recognition success rate on the test commands was measured as 94%.

Keywords: Hidden Markov model, MFCC, Speech recognition, Mobile robot, Robot
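The classification scheme the abstract describes (one HMM per command; pick the command whose model assigns the observed feature sequence the highest likelihood) rests on the forward algorithm. Below is a minimal discrete-observation sketch with toy numbers of my own; the paper's actual models score continuous MFCC features:

```python
import math

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm to avoid underflow."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]  # initial step
    loglik = 0.0
    for t, o in enumerate(obs):
        if t > 0:  # propagate through transitions, weight by emissions
            alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                     for j in range(n)]
        s = sum(alpha)
        loglik += math.log(s)
        alpha = [a / s for a in alpha]  # rescale at every step
    return loglik

# Two toy 2-state command models over a 3-symbol codebook.
pi = [1.0, 0.0]
A = [[0.7, 0.3], [0.0, 1.0]]                    # left-to-right topology
B_forward = [[0.8, 0.1, 0.1], [0.1, 0.8, 0.1]]  # "İLERİ" (forward) model
B_stop    = [[0.1, 0.1, 0.8], [0.1, 0.8, 0.1]]  # "DUR" (stop) model

seq = [0, 0, 1, 1]                              # quantized feature sequence
scores = {"forward": forward_loglik(seq, pi, A, B_forward),
          "stop":    forward_loglik(seq, pi, A, B_stop)}
best = max(scores, key=scores.get)              # classify by max likelihood
```

Here `best` comes out as "forward", since that model's emission probabilities match the observed symbols far better than the stop model's.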
Subjects: Engineering | Research Article

Authors: Hayrettin TOYLAN, Erol TÜRKEŞ, Evren ÇAĞLARER

BibTeX:

@article{aubtda346121,
  journal = {ANADOLU UNIVERSITY JOURNAL OF SCIENCE AND TECHNOLOGY A - Applied Sciences and Engineering},
  issn = {1302-3160},
  eissn = {2146-0205},
  address = {Eskişehir Teknik Üniversitesi},
  year = {2017},
  volume = {18},
  pages = {897--907},
  doi = {10.18038/aubtda.346121},
  title = {REAL-TIME CONTROL OF MOBILE ROBOT USING HMM-BASED SPEECH RECOGNITION SYSTEM},
  author = {TOYLAN, Hayrettin and TÜRKEŞ, Erol and ÇAĞLARER, Evren}
}
https://www.semanticscholar.org/paper/Multiplicative-Number-Theory-Davenport/85307f488d85a730edacb2f6e27b7778ceb8890e
# Multiplicative Number Theory

@inproceedings{Davenport1967MultiplicativeNT, title={Multiplicative Number Theory}, author={Harold Davenport}, year={1967} }

From the contents: Primes in Arithmetic Progression.- Gauss' Sum.- Cyclotomy.- Primes in Arithmetic Progression: The General Modulus.- Primitive Characters.- Dirichlet's Class Number Formula.- The Distribution of the Primes.- Riemann's Memoir.- The Functional Equation of the L Function.- Properties of the Gamma Function.- Integral Functions of Order 1.- The Infinite Products for xi(s) and xi(s, chi).- A Zero-Free Region for zeta(s).- Zero-Free Regions for L(s, chi).- The Number N(T).- The Number N(T, chi…

2,272 Citations

### The simple zeros of the Riemann zeta-function

The Simple Zeros of the Riemann Zeta-Function by Melissa Miller. There have been many tables of primes produced since antiquity. In 348 BC Plato studied the divisors of the number 5040. In 1202

### LIMITING DISTRIBUTIONS AND ZEROS OF ARTIN L-FUNCTIONS

This thesis is concerned with the behaviour of some famous arithmetic functions. The first part of the thesis deals with prime number races. Rubinstein-Sarnak [62] developed a technique to study primes

### Small zeros of Dirichlet L-functions of quadratic characters of prime modulus

• Mathematics • 2018

In this paper, we investigate the distribution of the imaginary parts of zeros near the real axis of Dirichlet $L$-functions associated to the quadratic characters $\chi_{p}(\cdot)=(\cdot |p)$ with

### PRIME POLYNOMIALS IN SHORT INTERVALS AND IN ARITHMETIC PROGRESSIONS

• Mathematics • 2015

In this paper we establish function field versions of two classical conjectures on prime numbers.
The first says that the number of primes in intervals $(x, x + x^{\varepsilon}]$ is about $x^{\varepsilon}$…

### Generalized divisor functions in arithmetic progressions: I

### The k-fold divisor function in arithmetic progressions to large moduli

We prove some distribution results for the k-fold divisor function in arithmetic progressions to moduli that exceed the square root of the length X of the sum, with appropriate constraints and averaging

### On Elementary Proofs of the Prime Number Theorem for Arithmetic Progressions, without Characters

We consider what one can prove about the distribution of prime numbers in arithmetic progressions, using only Selberg's formula. In particular, for any given positive integer q, we prove that either

### AN EXTENSION OF THE PAIR-CORRELATION CONJECTURE AND APPLICATIONS

• Mathematics • 2016

Abstract. We study an extension of Montgomery's pair-correlation conjecture and its relevance in some problems on the distribution of prime numbers. Keywords. Riemann zeta function, pair correlation

### ON THE IDENTITIES BETWEEN THE ARITHMETIC FUNCTIONS

Abstract. A Dirichlet series is a Riemann zeta function attached with an arithmetic function. Here, we studied the properties of Dirichlet series and found some identities between arithmetic functions.

### Discrete Mean Values of Dirichlet L-functions

• Mathematics • 2015

In 1911 Landau proved an asymptotic formula for sums of the form $\sum_{\gamma \le T} x^{\rho}$ over the imaginary parts of the nontrivial zeros of the Riemann zeta function. The formula provided yet another deep

### A Bombieri-Vinogradov theorem for all number fields

• Mathematics • 2012

The classical theorem of Bombieri and Vinogradov is generalized to a non-abelian, non-Galois setting. This leads to a prime number theorem of "mixed-type" for arithmetic progressions "twisted" by
https://www.physicsforums.com/threads/hi-all.838509/
# Hi All

1. Oct 18, 2015

### Tech05

I am currently going into the electrical field, just beginning, and I've noticed it's highly physics and math intensive. Thanks in advance for any assistance that I can get while in this field; I'll also help anyone if I can.

2. Oct 18, 2015

### Staff: Mentor

Welcome to PF!
https://www.schullv.de/englisch/pruefungswissen/mecklenburg_vorpommern/abitur_ea/2016/teil_b/block_1
SchulLV - English, Abitur eA, Mecklenburg-Vorpommern (Gymnasium)
Task Block 1

Tasks

$\blacktriangleright\;$ Topic: $\;$ Alice Munro: "Deep Holes" (New Yorker, Monday, 30th June, 2008).
$\blacktriangleright\;$ Tasks:
1. Summarize the information the excerpt gives about Kent. $(30\;\%)$ #summary
2. Examine the relationships within Kent's family and the way the members interact with each other. $(30\;\%)$ #examination
3. Comment on the cartoon and its caption and give your interpretation of its message about contemporary society. $(40\;\%)$ #cartoon#comment

Deep holes

Sally and Alex went for a picnic with their sons Kent and Peter and baby Savanna. While playing, Kent fell into a hole and broke his legs. […] It was necessary for Kent to spend the next six months out of school, strung up for the first few weeks in a rented hospital bed. Sally picked up and turned in his school 5 assignments, which he completed in no time. Then he was encouraged to go ahead with Extra Projects. One of these was "Travels and Explorations - Choose Your Country." "I want to pick somewhere nobody else would pick," he said. The accident and the convalescence seemed to have changed him. He acted older than his age now, less antic1, more serene. And Sally told him something that she had not told to another soul. 10 She told him how she was attracted to remote islands. Not to the Hawaiian Islands or the Canaries or the Hebrides or the Isles of Greece, where everybody wanted to go, but to small or obscure islands that nobody talked about and that were seldom, if ever, visited. Ascension, Tristan da Cunha, Chatham Island and Christmas Island and Desolation Island and the Faeroes.
She and Kent began to collect every scrap of information they could find about 15 these places, not allowing themselves to make anything up. And never telling Alex what they were doing. "He would think we were off our heads," Sally said. Desolation Island's main boast was of a vegetable, of great antiquity, a unique cabbage. They imagined worship ceremonies for it, costumes, and cabbage parades in its honor. 20 Sally told her son that, before he was born, she had seen footage on television of the inhabitants of Tristan da Cunha disembarking at Heathrow Airport, having all been evacuated, owing to a great volcanic eruption on their island. How strange they had looked, docile and dignified, like creatures from another century. They must have adjusted to England, more or less, but when the volcano quieted down, a couple of years later, they almost all wanted to go 25 home. When Kent went back to school, things changed, of course, but he still seemed mature for his age, patient with Savanna, who had grown venturesome and stubborn, and with Peter, who always burst into the house as if on a gale of calamity. And he was especially courteous to his father, bringing him the paper that he had rescued from Savanna and carefully refolded, 30 pulling out his chair at dinnertime. "Honor to the man who saved my life," he might say, or "Home is the hero." He said this rather dramatically, though, not at all sarcastically, yet it got on Alex's nerves. Kent got on his nerves, had done so even before the deep-hole drama. "Cut that out," Alex said, and complained privately to Sally. 35 "He's saying you must have loved him, because you rescued him." "Christ, I'd have rescued anybody." "Don't say that in front of him. Please." When Kent got to high school, things with his father improved. He chose to focus on science. He picked the hard sciences, not the soft earth sciences, and even this roused no opposition in Alex2. The harder the better.
But after six months at college Kent disappeared. People who knew him a little - there did not 40 seem to be anyone who claimed to be a friend - said that he had talked of going to the West Coast. A letter came, just as his parents were deciding to go to the police. He was working at a Canadian Tire3 in a suburb just north of Toronto. Alex went to see him there, to order him back to college. But Kent refused, said that he was very happy with his job, and was making good money, or soon would be, when he got promoted. Then Sally went to see him, without 45 telling Alex, and found him jolly and ten pounds heavier. He said it was the beer. He had friends now. "It's a phase," she said to Alex when she confessed the visit. "He wants to get a taste of independence." "He can get a bellyful of it, as far as I'm concerned." 50 Kent had not said where he was living, and when she made her next visit to Canadian Tire she was told that he had quit. She was embarrassed - she thought she caught a smirk on the face of the employee who told her - and she did not ask where Kent had gone. She assumed he would get in touch, anyway, as soon as he had settled again. He did write, three years later. His letter was mailed in Needles, California, but he told them not to bother trying to trace him 55 there - he was only passing through. Like Blanche4, he said, and Alex said, "Who the hell is Blanche?" "Just a joke," Sally said. "It doesn't matter." Kent did not say where he had been or whether he was working or had formed any connections. He did not apologize for leaving his parents without any information for so long, or ask how they were, or how his brother and sister were. Instead, he wrote pages about his own life. Not the practical side of his life but 60 what he believed he should be doing - what he was doing - with it.
[ … ]

Annotations:
1 antic - silly
2 Alex - works in geomorphology, the scientific study of the origin and evolution of topographic features
3 Canadian Tire - Canadian retail company which sells a wide range of automotive, sports, leisure and home products
4 Blanche - a character in the play A Streetcar Named Desire by Tennessee Williams

From: Alice Munro, "Deep Holes" (New Yorker, Monday, 30th June, 2008). http://www.newyorker.com/magazine/2008/06/30/deep-holes

Tips

$\blacktriangleright\;$ Topic: $\;$ Alice Munro: "Deep Holes" (New Yorker, Monday, 30th June, 2008).
$\blacktriangleright\;$ Tasks:

1. Summarize the information the excerpt gives about Kent.
Your task is to summarize the excerpt at hand with regard to Kent. In a summary like this you should keep things short - do not lose yourself in details. Ask yourself: what are the most important pieces of information about Kent in this excerpt? Read the text closely several times; only once you have understood it well can you write a good summary. Underline important keywords and try to divide the text into sections.

2. Examine the relationships within Kent's family and the way the members interact with each other.
In this task you are to examine the interpersonal relationships in Kent's family. Focus on Sally, Alex and Kent; Kent's siblings play only a minor role in the story. Structure your text before you begin writing, and decide on an order in which to examine the relationships - this makes the transitions between the passages easier to write.

3. Comment on the cartoon and its caption and give your interpretation of its message about contemporary society.
Here you should first comment on the cartoon.
This task involves four steps. First, form a general impression in order to recognize the background: topic, place of publication, target audience? Next comes the description of the cartoon: pay attention to the composition, how the scene is depicted, and whether any symbols appear. Then comes the explanation: what is the cartoon meant to convey, and which social phenomenon is depicted? Finally, it is up to you to formulate an opinion: were you convinced, and what effect does the cartoon have on you?

Solutions

$\blacktriangleright\;$ Topic: $\;$ Alice Munro: "Deep Holes" (New Yorker, Monday, 30th June, 2008).
$\blacktriangleright\;$ Tasks:

1. Summarize the information the excerpt gives about Kent.

Tip: Your task is to summarize the excerpt at hand with regard to Kent. Keep a summary like this short - do not lose yourself in details. Ask yourself: what are the most important pieces of information about Kent in this excerpt? Read the text closely several times; only once you have understood it well can you write a good summary. Underline important keywords and try to divide the text into sections.

The excerpt at hand is from the short story Deep Holes written by Alice Munro. The extract focuses on Kent, the son of Sally and Alex. Kent is a young boy at the beginning of the story.
[Introduction] On a picnic with his parents and his two siblings "Peter and baby Savanna" (Text, line 1), he broke his legs. This event transforms the young boy. He starts to act "older than his age" (Text, line 8) and his admiration for his father increases. [Accident] This is displayed through certain behaviors: for example, he brings his Dad the paper, pulls out his chair at the dinner table or bursts out phrases such as "Home is the hero." (Text, line 31). [Admiration for the father] During this phase Kent feels a deep love for his father, "because [he] […] rescued him" (Text, line 35) after his accident. Still, Kent does not have a good relationship with his father. You can tell this from the sentence "When Kent got to high school, things with his father improved" (Text, line 36 f.). [Rejection by the father] In addition to his troubled relationship with his Dad, Kent does not find a connection to his fellow classmates in college. His parents find out about this the hard way: Kent had dropped out and left college without a forwarding address. As his parents inquire about his whereabouts, "there did not seem to be anyone who claimed to be a friend" (Text, line 39 f.). [Dropping out of college] [No friends] Subsequently, Kent sends his parents a letter, telling them that he is working at a retail store in Canada. There "he was very happy with his job" (Text, line 43) and does not want to go back to college. Kent's mother visits him there and "found him jolly and ten pounds heavier" (Text, line 45). The weight gain is supposedly due to the beer he drinks with his new-found friends. [Happy, with new contacts] Later in the story, he moves to California. In order to fend off visitors, he writes his parents that "he was only passing through" (Text, line 55).
The same letter, the first after three years of silence between Kent and his family, includes nothing particular about his life but a general rundown on "what he believed he should be doing [with his life]" (Text, line 60). [Avoids contact] The short story traces Kent's transition from a young boy to adulthood. He is presented as a smart young kid with an unrequited love for his father. Subsequently, Kent leaves the path that was laid out for him: he quits college and works in a retail store. He evades contact with his family and is in search of his own path. [Conclusion] #summary

2. Examine the relationships within Kent's family and the way the members interact with each other.

Tip: In this task you are to examine the interpersonal relationships in Kent's family. Focus on Sally, Alex and Kent; Kent's siblings play only a minor role in the story. Structure your text before you begin writing, and decide on an order in which to examine the relationships - this makes the transitions between the passages easier to write.

Kent's parents Alex and Sally seem to have some distance in their relationship. This becomes explicit when Sally shares a secret with Kent that "she had not told to another soul" (Text, line 9). [The parents' relationship] The secret involves Sally's fascination with remote islands.
During Kent's recovery, after he had broken his legs, she lets Kent take part in this fantasy and they "began to collect every scrap of information they could find about these places" (Text, line 14). All this happened without Alex's knowledge. [Secrecy] This theme of keeping ideas or actions secret from one's partner is continued when Sally travels to Canada in order to see Kent. She does it behind Alex's back and only later "confessed the visit" (Text, line 47). We can tell from this scene that Alex was not supportive of Sally seeking contact with Kent and that Sally deliberately deceived her husband. [Confession after a lie] [Clash between Alex and Kent] Here one already gets a glimpse of the difficult relationship between father and son. Alex disliked the affection Kent directed at him during his childhood; "it [even] got on his nerves" (Text, line 32). The dramatic way Kent used to express his feelings for his father actually drove Alex away from him. [Father-son relationship] When Kent dropped out of college, Alex drove to Canada "to order him back to college" (Text, line 42). By going to Canada, Alex wanted to enforce his parental authority and make Kent accept his will. Kent refused to do so, and another rift emerged between the two men. After Sally's secret visit to Kent, Alex reaffirmed his discontent with his son. He does not seek to reconcile with Kent but expresses his frustration with the sentence "He can get a bellyful of […] [experience], as far as I'm concerned." (Text, line 49). [Kent's striving for independence] Even though Kent broke off nearly all ties with his family, his mother still cares for him. In his letter from California, he included a literary reference as a joke, and only his mother understood it. From the outset of the short story, Sally and Kent have a close relationship. She told him her secret about the islands and was overall very close to him during the six months after the accident.
[Mother-son relationship] As Alex becomes agitated because of Kent's constant admiration, Sally steps in and defends Kent by saying "Don't say that in front of him" (Text, line 36). In this scene we see how Sally is concerned with her son's well-being, as she does not allow Alex to make any snide remarks about Kent's display of affection. [Defends her son] She also wanted to see Kent again after her first trip to Canada, showing how much she cared for him. But when she tried to look him up at his workplace, "she was told that he had quit" (Text, line 51). [Sally seeks Kent's closeness] From these episodes we can tell that many things go on in Kent's family without anyone talking about them beforehand. Kent drops out of college and moves around the country without letting his parents know, Sally has trivial secrets she hides from her husband, and Alex constantly holds back his disapproval of his son. [Little communication within the family] #examination

3. Comment on the cartoon and its caption and give your interpretation of its message about contemporary society.

Tip: Here you should first comment on the cartoon. This task involves four steps. First, form a general impression in order to recognize the background: topic, place of publication, target audience? Next comes the description of the cartoon: pay attention to the composition, how the scene is depicted, and whether any symbols appear. Then comes the explanation: what is the cartoon meant to convey, and which social phenomenon is depicted? Finally, it is up to you to formulate an opinion: were you convinced, and what effect does the cartoon have on you?
The topic of the cartoon is the increasing social isolation in our society; it criticizes the fact that people do not meet up in person as often as they did before the start of the web era. The material at hand is a general newspaper cartoon on this topic, not directed at a specific interest group. [First impression] In the middle of the cartoon, two older people are looking at a computer screen. A woman is sitting in front of the computer at a desk, and a man is standing next to her. The scene is held in black and white. The caption reads: "Our son went off to college, fell in love, got married and had twins last summer. We really ought to check his blog more often!" [Description] The cartoon tells us the story of how two parents have lost touch with their son and how they find out about his life online. In my opinion, the cartoon mainly criticizes two developments in our society: first, how people are becoming more and more estranged from each other; second, how private information is increasingly passed on online, so that people who do not access these sites are left out of the picture. [Explanation] The parents in this cartoon have obviously had no contact with their son in a while. This is a trend which has sadly grown over the last decades. More and more people are living by themselves and are changing their location more often than before. Therefore, they often become estranged from their roots and lack the time to take care of friendships or to forge them in the first place. [Increasing social isolation] In the cartoon, the parents only find out about the happenings in their son's life through his blog.
If they had not visited it, they would not have known anything about his latest doings. This is how people share information about their lives: they publish it, and you have the chance to look at it. Before the dawn of the web era, people would personally approach others in order to show and share their pictures. People are now basically required to actively follow someone in order to get insights, and personal contact is on the retreat. [How sharing private matters has changed] Another issue which the cartoon criticizes is the habit of sharing private information online. Some people doubt that pictures from your everyday life are meaningful to others. Other people think that pictures of young kids should not be shared at all by their parents; recently there have even been lawsuits about the lawfulness of this. People tend to forget that they give the entire web community access to their pictures and that there are individuals out there who might abuse this. [Social media debate] In general, I have to say that the cartoon convinces me. I like the stunned looks of its characters and the concise humor. The general topic of how we live together, how we share information online and how many bonds in our society are dissolving caught my attention. I think we should all re-evaluate how we want to interact with each other, and this cartoon illustrates the topic well. [My opinion] #cartoon#comment
http://bbr.nefu.edu.cn/CN/Y2016/V36/I2/291
• Research Article •

Characteristics of Molecular Evolution of Notopterygium incisum Based on nrDNA ITS and cpDNA rpl20-rps12 Sequence Analysis

YANG Lu-Cun1,2; LIU He-Chun1,3; ZHOU Xue-Li4; XU Wen-Hua1,2; ZHOU Guo-Ying1,2*

1. Northwest Institute of Plateau Biology, Chinese Academy of Sciences, Xining 810008; 2. Key Laboratory of Tibetan Medicine Research, Chinese Academy of Sciences, Xining 810008; 3. University of Chinese Academy of Sciences, Beijing 100049; 4. Tiebujia Grassland Improvement Experiment Station, Xining 810008

• Online: 2016-03-15 • Published: 2016-05-20

Abstract: We analyzed the differences between the nrDNA ITS and cpDNA rpl20-rps12 sequences in Notopterygium incisum by direct PCR sequencing. Total DNA was extracted from silica-dried leaves of N. incisum using a modified CTAB method. With the extracted DNA as template, the nrDNA ITS and cpDNA rpl20-rps12 regions were amplified, then purified and sequenced. The nrDNA ITS sequence of N. incisum was 635 bp long, of which 17 sites were variable (2.68%); the (G+C) content was 57.83%. The cpDNA rpl20-rps12 sequence of N. incisum was 767 bp long, of which 35 sites were variable (4.56%); the (G+C) content was 33.06%. The nrDNA ITS region of N. incisum was more conserved and evolved more slowly than the cpDNA rpl20-rps12 sequence. Haplotype analysis indicated that the present distribution range of N. incisum has experienced range expansion, consistent with the conclusion drawn from the cpDNA genome. Therefore, the cpDNA rpl20-rps12 sequence is suitable for phylogeographic study of this species.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.254018634557724, "perplexity": 24650.257269836613}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945218.30/warc/CC-MAIN-20230323225049-20230324015049-00064.warc.gz"}
https://cran.wustl.edu/web/packages/fmcmc/vignettes/workflow-with-fmcmc.html
# Motivating example

We start by loading the dataset that the mcmc package includes. We will use the logit data set to obtain a posterior distribution of the model parameters using the MCMC function.

```r
library(fmcmc)
data(logit, package = "mcmc")
out <- glm(y ~ x1 + x2 + x3 + x4, data = logit, family = binomial, x = TRUE)
beta.init <- as.numeric(coefficients(out))
```

To use the Metropolis-Hastings MCMC algorithm, the function should be (in principle) the log unnormalized posterior. The following block of code, extracted from the mcmc package vignette "MCMC Package Example," creates the function that we will be using:

```r
lupost_factory <- function(x, y) function(beta) {
  eta  <- as.numeric(x %*% beta)
  logp <- ifelse(eta < 0, eta - log1p(exp(eta)), - log1p(exp(- eta)))
  logq <- ifelse(eta < 0, - log1p(exp(eta)), - eta - log1p(exp(- eta)))
  logl <- sum(logp[y == 1]) + sum(logq[y == 0])
  return(logl - sum(beta^2) / 8)
}

lupost <- lupost_factory(out$x, out$y)
```

Let's give it a first try. In this case, we will use the beta estimates from the estimated GLM model as a starting point for the algorithm, and we will ask it to sample 1e3 points from the posterior distribution (nsteps).

```r
# to get reproducible results
set.seed(42)
out <- MCMC(
  initial = beta.init,
  fun     = lupost,
  nsteps  = 1e3
)
```

Since the resulting object is of class mcmc (from the coda R package), we can use all the functions included in coda for model diagnostics:

```r
library(coda)
plot(out[, 1:3])
```

This chain has very poor mixing, so let's try again using a smaller scale for the normal kernel proposal, moving it from 1 (the default value) to .2:

```r
# to get reproducible results
set.seed(42)
out <- MCMC(
  initial = beta.init,
  fun     = lupost,
  nsteps  = 1e3,
  kernel  = kernel_normal(scale = .2)
)
```

The kernel_normal function (the default kernel in the MCMC function) returns an object of class fmcmc_kernel.
In principle, it consists of a list of two functions that are used by the MCMC routine: proposal, the proposal kernel function, and logratio, the function that returns the log of the Metropolis-Hastings ratio. We will talk more about fmcmc_kernel objects later. Now, let's look at the first three variables of our model:

```r
plot(out[, 1:3])
```

Better. Now, ideally we should only be using observations from the stationary distribution. Let's give it another try, this time checking for convergence every 200 steps using convergence_geweke:

```r
# to get reproducible results
set.seed(42)
out <- MCMC(
  initial      = beta.init,
  fun          = lupost,
  nsteps       = 1e4,
  kernel       = kernel_normal(scale = .2),
  conv_checker = convergence_geweke(200)
)
## Convergence has been reached with 200 steps. avg Geweke's Z: 1.1854. (200 final count of samples).
```

A bit better. As announced by MCMC, the convergence_geweke checker suggests that the chain reached a stationary state. With this in hand, we can now rerun the algorithm starting from the last couple of steps of the chain, this time without convergence monitoring, as it is no longer necessary. We will increase the number of steps (sample size), run two chains using parallel computing, and add some thinning to reduce autocorrelation:

```r
# Now, we change the seed, so we get a different stream of
# pseudo random numbers
set.seed(112)
out_final <- MCMC(
  initial   = out,    # Automagically takes the last 2 points
  fun       = lupost,
  nsteps    = 5e4,    # Increasing the sample size
  kernel    = kernel_normal(scale = .2),
  thin      = 10,
  nchains   = 2L,     # Running parallel chains
  multicore = TRUE    # in parallel.
)
```

Notice that, instead of specifying the starting points for each chain, we passed the out object to initial. By default, if initial is of class mcmc, MCMC will take the last nchains points from the chain as starting points for the new sequence. If initial is of class mcmc.list, the number of chains in initial must match the nchains parameter.
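As a reference for what is being sampled here (this reading of the model is an assumption based on the standard mcmc package example, not stated explicitly above), the lupost function is the log unnormalized posterior of a logistic regression with independent N(0, 4) priors (standard deviation 2) on the coefficients:

$$
\log f(\beta) = \sum_{i=1}^{n}\Bigl[y_i \log p_i + (1-y_i)\log(1-p_i)\Bigr] - \frac{\beta^\top\beta}{8},
\qquad p_i = \frac{1}{1+e^{-x_i^\top \beta}},
$$

since $-\beta^\top\beta/8 = -\beta^\top\beta/(2\cdot 2^2)$. For a symmetric proposal such as kernel_normal, the log Metropolis-Hastings ratio computed by logratio reduces to $\log f(\beta') - \log f(\beta)$.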
We now see that the output posterior distribution appears to be stationary:

```r
plot(out_final[, 1:3])
summary(out_final[, 1:3])
##
## Iterations = 10:50000
## Thinning interval = 10
## Number of chains = 2
## Sample size per chain = 5000
##
## 1. Empirical mean and standard deviation for each variable,
##    plus standard error of the mean:
##
##        Mean     SD Naive SE Time-series SE
## par1 0.6575 0.3005 0.003005       0.004844
## par2 0.7998 0.3696 0.003696       0.006680
## par3 1.1719 0.3629 0.003629       0.006884
##
## 2. Quantiles for each variable:
##
##        2.5%    25%    50%    75% 97.5%
## par1 0.0924 0.4509 0.6445 0.8535 1.275
## par2 0.1084 0.5457 0.7846 1.0470 1.560
## par3 0.5181 0.9193 1.1533 1.4079 1.939
```

# Reusing fmcmc_kernel objects

fmcmc_kernel objects are environments that are passed to the MCMC function. While the MCMC function only returns the mcmc class object (as defined in the coda package), users can exploit the fact that the kernel objects are environments to reuse them or inspect them once the MCMC function returns. For example, fmcmc_kernel objects can be useful with adaptive kernels, as users can review the covariance structure or other components. To illustrate this, let's redo the MCMC chain of the previous example using an adaptive kernel instead, in particular Haario's (2001) adaptive metropolis.

```r
khaario <- kernel_adapt(freq = 1, warmup = 500)
```

The MCMC function will update the kernel at every step (freq = 1), and the adaptation will start from step 500 (warmup = 500). We can see that some of its components haven't been initialized or have a default starting value before the call of the MCMC function:

```r
# Number of iterations (absolute count, starts in 0)
khaario$abs_iter
## [1] 0

# Variance covariance matrix (is empty... for now)
khaario$Sigma
## NULL
```

Let's see how it works:

```r
set.seed(12)
out_haario_1 <- MCMC(
  initial   = out,
  fun       = lupost,
  nsteps    = 1000,    # We will only run the chain for 1000 steps
  kernel    = khaario, # We passed the predefined kernel
  thin      = 1,       # No thinning here
  nchains   = 1L,      # A single chain
  multicore = FALSE    # Running in serial
)
```

Let's inspect the output and mark when the adaptation starts:

```r
traceplot(out_haario_1[, 1], main = "Traceplot of the first parameter")
abline(v = 500, col = "red", lwd = 2, lty = 2)
```

If we look at khaario, the fmcmc_kernel object, we can see that things have changed since the first time we ran it:

```r
# Number of iterations (absolute count; the counts equal the number of steps)
khaario$abs_iter
## [1] 999

# Variance covariance matrix (now is not empty)
(Sigma1 <- khaario$Sigma)
##              [,1]          [,2]          [,3]         [,4]          [,5]
## [1,]  0.031525360 -0.0039791603 -0.0087910932 -0.044644538  0.0118269806
## [2,] -0.003979160  0.0102330721 -0.0025791610 -0.004415724 -0.0000456931
## [3,] -0.008791093 -0.0025791610  0.0056680815  0.014101391 -0.0001920188
## [4,] -0.044644538 -0.0044157240  0.0141013912  0.125724101 -0.0283035173
## [5,]  0.011826981 -0.0000456931 -0.0001920188 -0.028303517  0.0179061836
```

If we rerun the chain, using as the starting point the last step of the first run, we can also continue using the same kernel object:

```r
out_haario_2 <- MCMC(
  initial   = out_haario_1,
  fun       = lupost,
  nsteps    = 2000,    # We will run the chain for 2000 steps now
  kernel    = khaario, # Same as before, same kernel.
  thin      = 1,
  nchains   = 1L,
  multicore = FALSE,
  seed      = 555      # We can also specify the seed in the MCMC function
)
```

Let's see again how everything looks:

```r
traceplot(out_haario_2[, 1], main = "Traceplot of the first parameter")
abline(v = 500, col = "red", lwd = 2, lty = 2)
```

As shown in the plot, since the warmup period has already passed for the kernel object, the adaptation process happens at every step, so we don't see a big break at step 500 as before.
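For context on what the Sigma slot holds, the following is a sketch of the standard adaptive Metropolis recursion of Haario and coauthors, not necessarily fmcmc's exact internals: the proposal covariance at step $t$ is taken proportional to the empirical covariance of the chain so far, regularized so it stays positive definite,

$$
\Sigma_t = s_d\left(\operatorname{Cov}(\theta_0,\ldots,\theta_{t-1}) + \varepsilon I_d\right),
\qquad s_d = \frac{2.38^2}{d},
$$

with $d = 5$ parameters in this example and a small $\varepsilon > 0$ (compare the eps and Ik entries that get_kernel() prints at the end of this vignette).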
Let’s see the counts and the covariance matrix and compare it with the previous one: # Number of iterations (absolute count, the counts equal the number of steps) # This will have 1000 (first run) + 2000 (second run) steps khaario$abs_iter ## [1] 2998 # Variance covariance matrix (now is not empty) (Sigma2 <- khaario$Sigma) ## [,1] [,2] [,3] [,4] [,5] ## [1,] 0.061228176 0.005644895 0.012027075 -0.026564597 0.016088493 ## [2,] 0.005644895 0.095314914 -0.007193182 -0.031470264 -0.017436287 ## [3,] 0.012027075 -0.007193182 0.092205702 0.019310926 0.006204917 ## [4,] -0.026564597 -0.031470264 0.019310926 0.196759374 -0.009262867 ## [5,] 0.016088493 -0.017436287 0.006204917 -0.009262867 0.093177961 # How different are these? Sigma1 - Sigma2 ## [,1] [,2] [,3] [,4] [,5] ## [1,] -0.029702816 -0.009624055 -0.020818168 -0.018079942 -0.004261512 ## [2,] -0.009624055 -0.085081842 0.004614021 0.027054540 0.017390594 ## [3,] -0.020818168 0.004614021 -0.086537621 -0.005209535 -0.006396936 ## [4,] -0.018079942 0.027054540 -0.005209535 -0.071035273 -0.019040650 ## [5,] -0.004261512 0.017390594 -0.006396936 -0.019040650 -0.075271777 Things have changed since the last time we used the kernel, as expected. Kernel objects in the fmcmc package can also be used with multiple chains and in parallel. The MCMC function is smart enough to create independent copies of fmcmc_kernel objects when running multiple chains, and keep the original kernel objects up-to-date even when using multiple cores to run MCMC. For more technical details on how fmcmc_kernel objects work see the manual ?fmcmc_kernel or the vignette “User-defined kernels” included in the package vignette("user-defined-kernels", package = "fmcmc"). # Accessing other elements of the chain In some situations, you may want to access the computed unnormalized log-posterior probabilities, the states proposed by the kernel, or other process components. In those cases, the functions with the prefix get_* can help you. 
Starting with version 0.5-0, we replaced the family of functions last_* with get_*: a redesign of this "memory" component that gives users access to data generated during the Markov process. After each run of the MCMC function, information regarding the last execution is stored in the environment MCMC_OUTPUT. If you want to look at the log-posterior of the last call and the proposed states, you can do the following:

```r
plot(get_logpost(), type = "l")

# Pretty figure showing proposed and accepted
plot(
  get_draws()[, 1:2],
  pch = 20, col = "gray",
  main = "Haario's second run"
)
points(out_haario_2[, 1:2], col = "red", pch = 20)
legend(
  "topleft",
  legend = c("proposed", "accepted"),
  col    = c("gray", "red"),
  pch    = 20, bty = "n"
)
```

The MCMC_OUTPUT environment also contains the arguments passed to MCMC():

```r
get_initial()
## Markov Chain Monte Carlo (MCMC) output:
## Start = 1000
## End = 1000
## Thinning interval = 1
##           par1      par2     par3      par4      par5
## 1000 0.5582865 0.1135279 1.279888 0.8476546 0.8872229

get_fun()
## function(beta) {
##   eta  <- as.numeric(x %*% beta)
##   logp <- ifelse(eta < 0, eta - log1p(exp(eta)), - log1p(exp(- eta)))
##   logq <- ifelse(eta < 0, - log1p(exp(eta)), - eta - log1p(exp(- eta)))
##   logl <- sum(logp[y == 1]) + sum(logq[y == 0])
##   return(logl - sum(beta^2) / 8)
## }
## <bytecode: 0x560b1eb151d8>
## <environment: 0x560b226f32d0>

get_kernel()
##
## An environment of class fmcmc_kernel:
##
##    Ik          : num [1:5, 1:5] 1e-04 0e+00 0e+00 0e+00 0e+00 0e+00 1e-04 0e+00 0e+00 0e+00 ...
##    Mean_t_prev : num [1, 1:5] 0.6 0.8 1.079 0.291 0.812
##    Sd          : num 1.15
##    Sigma       : num [1:5, 1:5] 0.06123 0.00564 0.01203 -0.02656 0.01609 ...
##    abs_iter    : int 2998
##    bw          : int 0
##    eps         : num 1e-04
##    fixed       : logi [1:5] FALSE FALSE FALSE FALSE FALSE
##    freq        : num 1
##    k           : int 5
##    lb          : num [1:5] -1.8e+308 -1.8e+308 -1.8e+308 -1.8e+308 -1.8e+308
##    logratio    : function (env)
##    mu          : num [1:5] 0 0 0 0 0
##    proposal    : function (env)
##    ub          : num [1:5] 1.8e+308 1.8e+308 1.8e+308 1.8e+308 1.8e+308
##    until       : num Inf
##    warmup      : num 500
##    which.      : int [1:5] 1 2 3 4 5
```
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3767995834350586, "perplexity": 4559.956719597034}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500356.92/warc/CC-MAIN-20230206145603-20230206175603-00206.warc.gz"}
https://puzzling.stackexchange.com/questions/53277/english-word-with-most-valid-substrings
# English word with most valid substrings

What English word has the most valid word substrings, no repeats?

Examples (I may have missed some words):

AIRSPACES: a, i, air, spa, space, spaces, pace, paces, ace, aces
DEMONETIZED: demo, demon, net, monetized, i, zed
INCANDESCENT: i, in, inca, can, a, an, and, descent, scent, ent

I realize long words may have an unfair advantage, such as PNEUMONOULTRAMICROSCOPICSILICOVOLCANOCONIOSIS: mono, ultra, a, tram, am, micro, i, cop, volcano, can, an, no, con, on, is

so an alternate question would be: what word has the highest score of [sum of number of letters in all non-repeating substrings]/[length of word]

Thus, I am interested in these 3 categories:

1. Word with most valid substrings
2. Word with greatest character count of valid substrings
3. Word with greatest [character count of valid substrings]/[word length]

Computer generated results are accepted.

Current Top Results:

(Mieliestronk)
1. INTERRELATIONSHIPS ; 23 substrings ; Jaap Scherphuis
2. INTERRELATIONSHIPS ; 136 characters ; Jaap Scherphuis
3. INTERRELATIONSHIPS ; 7.55 score ; Jaap Scherphuis

(SOWPODS)
1. DEFORESTATIONS, DISFORESTATIONS ; 34 substrings ; Peter Taylor
2. BIPARTISANSHIPS ; 176 characters ; Peter Taylor
3. PREPOSSESSIONS ; ~12.29 score ; Peter Taylor

• I think the open-ended tag fits well here. – Mordechai Jul 11 '17 at 4:16
• Running at +5 at time of writing; I'm glad your question found its audience :) – AndyT Jul 11 '17 at 8:17
• @AndyT Thanks again for helping me find the correct place to post this – Cheeezburger Jul 11 '17 at 9:30

I confess I wrote a small computer program to search for these words with the most subwords. I used a word list containing about 58000 relatively normal words, with "A" and "I" as the only 1-letter words. No doubt results will be very different if you include more obscure short words.
7 letters, 8 subwords totalling 20 letters, score 20/7 = 2.86:
ABANDON: A BAN BAND AN AND DO DON ON

9 letters, 12 subwords totalling 35 letters, score 35/9 = 3.89:
ABANDONED: A ABANDON BAN BAND AN AND DO DON DONE ON ONE NE

11 letters, 14 subwords totalling 52 letters, score 52/11 = 4.73:
ABERRATIONS: A ABE ABERRATION BE ERR RAT RATIO RATION RATIONS AT I ION IONS ON

12 letters, 18 subwords totalling 63 letters, score 63/12 = 5.25:
CHAMPIONSHIP: CHAMP CHAMPION CHAMPIONS HA HAM A AM AMP PI PION PIONS I ION IONS ON SHIP HI HIP

13 letters, 21 subwords totalling 84 letters, score 84/13 = 6.46:
CHAMPIONSHIPS: CHAMP CHAMPION CHAMPIONS CHAMPIONSHIP HA HAM A AM AMP PI PION PIONS I ION IONS ON SHIP SHIPS HI HIP HIPS

17 letters, 22 subwords totalling 109 letters, score 109/17 = 6.41:
GREATGRANDMOTHERS: GREAT GREATGRANDMOTHER RE EAT A AT GRAND GRANDMOTHER GRANDMOTHERS RAN RAND AN AND MOTH MOTHER MOTHERS OTHER OTHERS THE HE HER HERS

18 letters, 23 subwords totalling 136 letters, score 136/18 = 7.55:
INTERRELATIONSHIPS: I IN INTER INTERRELATION INTERRELATIONS INTERRELATIONSHIP ERR RE RELATION RELATIONS RELATIONSHIP RELATIONSHIPS ELATION A AT ION IONS ON SHIP SHIPS HI HIP HIPS

• Interesting results. Admittedly, I expected there to be more results with substrings consisting of words formed from separate word fragments, such as DEMON in DEMONETIZED, rather than simply splitting long words into their compound components. – Cheeezburger Jul 11 '17 at 9:37
• @Cheeezburger: That may be partly because I used a word list with few obscure words, but more likely it's just that using the semantic components is more likely to build a meaningful word, and if you have several components the number of potential subwords increases exponentially. Fragments of those components are unlikely to be as productive. Still, in the last answer, the first S from -SHIPS was very productive, though RELATION → ELATION was not. – Jaap Scherphuis Jul 11 '17 at 11:05
• What dictionary did you use? Does it have a name? – Cheeezburger Jul 12 '17 at 3:04
• I'm not sure since I downloaded it ages ago. I think it was likely the corncob list, to which I added the two 1-letter words I mentioned. – Jaap Scherphuis Jul 12 '17 at 7:03
• @Cheeezburger: With this list PREPOSSESSIONS scores 7.42 (17 subwords, total length 104), BIPARTISANSHIPS scores 6.2 (20 subwords, total length 93) and DEFORESTATIONS scores 5.35 (19 subwords, total length 75). – Jaap Scherphuis Jul 12 '17 at 7:19

I only tried one word but it took longer than I had expected, so I won't try another one for a while. Here goes.

relisted = 8
relist = 6
listed = 6
list = 4
sted = 4
lis = 3
ist = 3
ted = 3
el = 2
li = 2
is = 2
te = 2
re = 2
e = 1
i = 1

The ratio is 5.125

• Did you do this manually or computationally? – Cheeezburger Jul 11 '17 at 3:49
• @Cheeezburger I did it by hand. I am sure you can find better words. – stack reader Jul 11 '17 at 4:05

As Jaap observed, the results are quite different with a dictionary which emphasises short words. Specifically, I used SOWPODS, which doesn't have words longer than 15 characters but does have a lot of obscure short ones.

1. Word with most valid substrings

At 34 words and 14 characters: DEFORESTATIONS, with substrings DE, DEF, DEFOREST, DEFORESTATION, EF, FOR, FORE, FORES, FOREST, FORESTATION, FORESTATIONS, OR, ORE, ORES, RE, RES, REST, RESTATION, RESTATIONS, ES, EST, ST, STAT, STATION, STATIONS, TA, TAT, AT, TI, IO, ION, IONS, ON, ONS

It ties with the closely related 15-character word DISFORESTATIONS.

2. Word with greatest character count of valid substrings

At 176 characters from a 15-character word: BIPARTISANSHIPS, with substrings BI, BIPARTISAN, BIPARTISANSHIP, PA, PAR, PART, PARTI, PARTIS, PARTISAN, PARTISANS, PARTISANSHIP, PARTISANSHIPS, AR, ART, ARTI, ARTIS, ARTISAN, ARTISANS, ARTISANSHIP, ARTISANSHIPS, TI, TIS, IS, SAN, SANS, AN, SH, SHIP, SHIPS, HI, HIP, HIPS

3. Word with greatest [character count of valid substrings]/[word length]

At a ratio of 172/14 ~= 12.29: PREPOSSESSIONS, with substrings PRE, PREP, PREPOSSESS, PREPOSSESSION, RE, REP, REPO, REPOS, REPOSSESS, REPOSSESSION, REPOSSESSIONS, EPOS, PO, POS, POSS, POSSE, POSSES, POSSESS, POSSESSION, POSSESSIONS, OS, SESS, SESSION, SESSIONS, ES, ESS, SI, IO, ION, IONS, ON, ONS
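For readers who want to reproduce these scores, here is a minimal brute-force sketch. The scoring follows the question's rules (distinct proper substrings only); the dictionary below is a toy stand-in of my own, since, as both answers note, the results depend entirely on the word list used:

```python
def subword_stats(word, dictionary):
    """Return (count, total letters, score) over distinct dictionary
    words occurring as proper contiguous substrings of `word`."""
    word = word.lower()
    found = set()
    for i in range(len(word)):
        for j in range(i + 1, len(word) + 1):
            s = word[i:j]
            if s != word and s in dictionary:
                found.add(s)
    total = sum(len(s) for s in found)
    return len(found), total, total / len(word)

# Tiny illustrative dictionary; a real run would load e.g. SOWPODS or Mieliestronk.
demo_words = {"a", "i", "ban", "band", "an", "and", "do", "don", "on"}
print(subword_stats("abandon", demo_words))  # 8 subwords totalling 20 letters
```

With Jaap's conventions (no repeats, the word itself excluded), this reproduces his ABANDON line: 8 subwords totalling 20 letters, score 20/7 = 2.86.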
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33565959334373474, "perplexity": 18882.617796112}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627997335.70/warc/CC-MAIN-20190615202724-20190615224530-00059.warc.gz"}
https://www.physicsforums.com/threads/nomenclature-question.841434/
# Homework Help: Nomenclature question

1. Nov 4, 2015

Hey guys, quick question. I got confused in some nomenclature between some books and wanted to clarify somehow. A book I have has I=Mk^2 as defining a radius of gyration. Is this the same as I-mr^2? Terminology was just confusing me. Thanks guys.

2. Nov 4, 2015

Oops, I meant I=mr^2. Apologies.

3. Nov 4, 2015

### haruspex

It depends what r is supposed to be. Certainly "I=Mk^2, where M is mass, k is the radius of gyration and I is the moment of inertia about the mass centre" is exactly the same as "I=mr^2, where m is mass, r is the radius of gyration and I is the moment of inertia about the mass centre". But most writers reserve r for a directly observable radius, such as the radius of a hoop or of a point mass in a circular orbit.
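To make haruspex's distinction concrete (the sphere figure below is a standard textbook value, not something stated in the thread): given $I = Mk^2$, the radius of gyration is

$$
k = \sqrt{\frac{I}{M}}.
$$

For a uniform solid sphere about a diameter, $I = \tfrac{2}{5}MR^2$, so $k = \sqrt{2/5}\,R \approx 0.632\,R$, which differs from the directly observable radius $R$; whereas for a point mass on a circle of radius $r$, $I = mr^2$ and $k = r$ exactly. That is why the same formula can carry two different meanings depending on what the symbol denotes.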
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.957755446434021, "perplexity": 877.5305673929757}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865702.43/warc/CC-MAIN-20180523180641-20180523200641-00173.warc.gz"}
https://rpg.stackexchange.com/questions/26723/can-i-create-a-pc-with-a-template
# Can I create a PC with a template?

So, I've been thinking - can I make something like a first-level Celestial Sun Elf wizard in a non-house-rule, non-Dragon Magazine game from the start? Are there any templates that can be applied at character creation and, if there are, which ones are they? Especially the ones I don't need to beg my DM for?

• Templates pretty much always need DM approval. – Oblivious Sage Jun 28 '13 at 18:04
• i'm not sure what you're asking here... – acolyte Jun 28 '13 at 18:18
• @acolyte - I think they are saying that they are running a strict RAW game (No home rules, no DragMag stuff), and he wants to know about his PC concept. – JohnP Jun 28 '13 at 18:24
• Basically, yeah, I've found some pretty funny templates and wonder if I can use them in a rulebook-only game from the start of the game – Baka-Mastermind Jun 28 '13 at 18:39

### If it has a Level Adjustment, it is intended to be playable

Creatures and templates that cannot be played (according to Wizards) have LA: —. Any actual numerical LA (LA: 0 to LA: some_number) can be played, by the Level Adjustment rules (basically, you count as a higher level than you actually are for determining your "effective character level," which is used for XP to level up and for what level other characters are treated as sharing with you). Of course, this means that with LA: +2 (as with Celestial), you are at a minimum a 3rd-level character (LA: +2, plus whatever your first HD is), which may affect you if you are playing in a low-level game.

Unfortunately, most LAs are much higher than they really should be. Wizards allowed it, but apparently wanted to discourage player use of these things by overcharging for them. And even if it is on some level "worth it," it's very difficult to play with an LA higher than 1 or 2. Even those can be quite problematic, for you and the game. And many DMs don't like templates for a variety of reasons. So definitely ask your DM about it.
Celestial has LA: +2, and therefore you usually can use it. If you do, you are two levels behind at all times: not even remotely worth it, in my opinion. DR 5-10/magic isn’t worth much: most threats have magic weapons. Darkvision is easily obtained via racial quality, spell, or item. Items and spells can also handle energy resistances. Spell Resistance actually hurts you more than it helps you, in my opinion: you have to spend a Standard Action to lower it if someone wants to heal or buff you. And Smite Evil isn’t very useful to a Wizard, not that it’s particularly useful to anyone when it’s only once per day. If you want to play a sun elf who is a native of the Celestial Realms, just ask your DM if that’s an acceptable backstory for a character. You don’t need to hurt yourself with this template just to have a backstory. • For a sun elf native to the celestial realms I strongly suggest the Otherworldly feat. – Zachiel Jun 28 '14 at 12:16 • @Zachiel Why? To get the Outsider type? You don't need it. If you want the Outsider type (and there are decent reasons to get it), sure, Otherworldly's a good feat, but you really don't need the Outsider type to be from the Celestial Realms; plenty of things there don't have it. – KRyan Jun 28 '14 at 15:43 You can without any issue (besides DM approval) if you are starting at Level 3. Since the Celestial has a +2 ECL. This means that you will start with one class level (e.g. Wizard, Bard) and 2 "effective levels" of Celestial. You would get all of the stats that a Sun Elf gets in addition to the details listed in that page. This isn't ideal as it means that you trade Level 2 and 3 abilities for the special qualities of a Celestial: • Darkvision out to 60 feet. • Damage reduction (see the table below). • Resistance to acid, cold, and electricity (see the table below). • Spell resistance equal to HD + 5 (maximum 25). 
This might be worth it, but it could also cause complications and leave you weaker than the other players in some areas (while potentially stronger in others). So talk with your DM about it and see what he says, but it is RAW and completely legal for you to do.
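The arithmetic both answers rely on is the standard 3.5e effective-character-level formula (the worked example is my illustration, not quoted from either answer):

$$
\text{ECL} = \text{class levels (HD)} + \text{LA}
\qquad\Longrightarrow\qquad
\text{Wizard 1 with Celestial (LA +2)}: \ \text{ECL} = 1 + 2 = 3.
$$

So such a character advances on the XP track of a 3rd-level character while having only one class level's worth of spells and hit points, which is the "two levels behind at all times" cost both answers warn about.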
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24871671199798584, "perplexity": 1732.588946449377}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145648.56/warc/CC-MAIN-20200222023815-20200222053815-00268.warc.gz"}
https://ftp.aimsciences.org/article/doi/10.3934/dcdsb.2007.8.943
American Institute of Mathematical Sciences November 2007, 8(4): 943-970. doi: 10.3934/dcdsb.2007.8.943

Homoclinic trajectories and chaotic behaviour in a piecewise linear oscillator

1 Department of Applied Mathematics, University College, Cork, Ireland

Received September 2006; Revised July 2007; Published August 2007

In this paper we consider the equation $\ddot x+x=\sin(\sqrt{2}t)+s(x)$, where $s(x)$ is a piecewise linear map given by $\min\{5x,1\}$ if $x\ge0$ and by $\max\{-1, 5x\}$ if $x<0$. The existence of chaotic behaviour in the Smale sense inside the instability area is proven. In particular, a transversal homoclinic fixed point is found. The results follow from the application of topological degree theory and the computer-assisted verification of a set of inequalities. Usually such proofs cannot be verified by hand due to the vast amount of computations involved, but the simplicity of our system leads to a small set of inequalities that can be verified by hand.

Citation: Alexei Pokrovskii, Oleg Rasskazov, Daniela Visetti. Homoclinic trajectories and chaotic behaviour in a piecewise linear oscillator. Discrete and Continuous Dynamical Systems - B, 2007, 8 (4): 943-970. doi: 10.3934/dcdsb.2007.8.943
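The oscillator above is easy to explore numerically. The following is a minimal simulation sketch (mine, not the paper's computer-assisted proof machinery): it integrates $\ddot x + x = \sin(\sqrt{2}t) + s(x)$ with a fixed-step classical Runge-Kutta scheme; the step size and initial condition are arbitrary illustrative choices.

```python
import math

def s(x):
    # Piecewise linear term from the paper: min{5x, 1} for x >= 0,
    # max{-1, 5x} for x < 0 (it saturates at +/-1 once |x| >= 0.2).
    return min(5 * x, 1.0) if x >= 0 else max(-1.0, 5 * x)

def rhs(t, y):
    # First-order system for x'' + x = sin(sqrt(2) t) + s(x).
    x, v = y
    return (v, -x + math.sin(math.sqrt(2.0) * t) + s(x))

def rk4_step(t, y, h):
    # One classical fourth-order Runge-Kutta step of size h.
    def nudge(y0, k, c):
        return tuple(yi + c * ki for yi, ki in zip(y0, k))
    k1 = rhs(t, y)
    k2 = rhs(t + h / 2, nudge(y, k1, h / 2))
    k3 = rhs(t + h / 2, nudge(y, k2, h / 2))
    k4 = rhs(t + h, nudge(y, k3, h))
    return tuple(yi + h / 6 * (a + 2 * b + 2 * c + d)
                 for yi, a, b, c, d in zip(y, k1, k2, k3, k4))

def trajectory(y0=(0.0, 0.0), h=0.01, steps=1000):
    # Integrate and return the sampled (t, x) path.
    t, y, path = 0.0, y0, []
    for _ in range(steps):
        path.append((t, y[0]))
        y = rk4_step(t, y, h)
        t += h
    return path
```

Plotting such trajectories gives a feel for the dynamics, but of course a numerical run is only suggestive; the chaos result in the paper rests on the rigorously verified inequalities.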
https://space.stackexchange.com/questions/12660/delta-v-to-low-mars-orbit
# Delta-V to Low Mars Orbit

I'm making a Mars probe in Kerbal Space Program Real Solar System, and I have 13 km/s of Delta-V in the lifter itself. I have a cryogenic second stage with a KVD-1 engine; it has 7.36 km/s of Delta-V, and about 4 km/s is used for the orbital injection. Does my lifter have enough Delta-V? I can upload a picture with the delta-v overall, if that helps.

• "Does my lifter have enough Delta-V?" I'd have thought that KSP was designed to tell you the answer to that. Launch the combo to find out! Nov 11, 2015 at 4:45
• I've now revised my answer using patch conics and get a very different answer. I will leave my previous analysis up for a bit in case anyone can comment on it. Nov 12, 2015 at 3:52

## 2 Answers

Let's break down your problem into a very simplified launch phase from the ground to low Earth orbit at 250 km altitude. Then we'll use patch conics to get an estimate of the $\Delta v$ necessary to reach a low Mars orbit of 80 km altitude.

One method that I like to use to get ball-park estimates of required launch $\Delta v$ is to first consider an instantaneous impulse on the ground that will give you enough velocity to reach the target orbit altitude (like throwing up a ball with enough speed that it just reaches the orbit altitude at its peak), then consider another instantaneous impulse at that point that gives you enough velocity to reach orbit.

Potential energy on the ground and in orbit (where $\mu_{Earth}$ is the gravitational parameter for Earth -- $\mu_{Earth} = 398601.2$ $\frac{\text{km}^3}{s^2}$):

$V_{ground} = -\frac{\mu_{Earth}}{r_{ground}}$

$V_{orbit} = -\frac{\mu_{Earth}}{r_{orbit}}$

We'll assume we start at the mean Earth radius of 6378 km (the actual radius at the Kennedy Space Centre could be substituted but probably won't make a significant difference considering our crude analysis). That gives us a difference in potential energy of 2.357 MJ/kg, which translates to a required initial speed of 2.171 km/s.
Once we reach our peak height of 250 km, we'll be at zero velocity and need to accelerate to an orbit velocity of 7.755 km/s based on the following equation. $v_{orbit} = \sqrt{\frac{\mu_{Earth}}{r_{orbit}}}$ So that gives us a total launch $\Delta v$ of about 9.926 km/s. If you launch from the equator to the East then you will have an initial speed of 0.465 km/s from the Earth's rotation, so that could reduce your launch $\Delta v$ to 9.461 km/s. This is probably about 5-10% higher than the true value, but a good conservative approximation. Next we use patch conics to analyze the interplanetary transfer. The figure below shows the interplanetary trajectory using a Hohmann transfer from Earth to Mars in the centre, where we can just assume our heliocentric radii are those of the planets. The spheres of influence for Earth and Mars are expanded on the left and right, respectively, to show the hyperbolic trajectories in each planet's reference frame as well as the planetary orbits. First we find the semi-major axis of the Mars transfer orbit or MTO, $a_{MTO}$, based on the radii of the orbits of Earth and Mars (which we assume are circular for this analysis). $a_{MTO} = \frac{1}{2} \left( r_{Earth} + r_{Mars} \right)$ Then we can find the velocities of the spacecraft as it departs Earth and when it reaches Mars. $v_{MTO,E} = \sqrt{\mu_{Sun} \left( \frac{2}{r_{Earth}} - \frac{1}{a_{MTO}}\right)}$ $v_{MTO,M} = \sqrt{\mu_{Sun} \left( \frac{2}{r_{Mars}} - \frac{1}{a_{MTO}}\right)}$ In the Earth reference frame, we consider the spacecraft to "depart" when it leaves the sphere of influence or SOI (at a radius of about 924000 km). Assume we can set up our hyperbolic escape trajectory so that our velocity in the heliocentric frame is parallel to Earth's. 
So that means that in the Earth frame, our velocity at the edge of the SOI will be: $v_{SOI,E} = v_{MTO,E} - v_{Earth}$ Where $v_{Earth} = \sqrt{\frac{\mu_{Sun}}{r_{Earth}}}$ Given the radius and velocity at the SOI we can find the semi-major axis of the escape trajectory as well as the required MTO insertion velocity to leave low Earth orbit: $a_{MTO,E} = \left( \frac{2}{r_{SOI,E}} - \frac{v^2_{SOI,E}}{\mu_{Earth}}\right)^{-1}$ $v_{Insertion} = \sqrt{\mu_{Earth} \left( \frac{2}{r_{LEO}} - \frac{1}{a_{MTO,E}}\right)}$ Note that our velocity in LEO is: $v_{LEO} = \sqrt{\frac{\mu_{Earth}}{r_{LEO}}}$ Plugging in our numbers, we get $v_{Insertion} = 11.318$ km/s and $v_{LEO} = 7.755$ km/s, resulting in a required $\Delta v$ of 3.563 km/s to leave LEO and enter MTO. Next we use the same analysis at Mars, where the right-hand side of the above figure shows the spacecraft hyperbolic rendezvous trajectory as well as the target low Mars orbit. First we determine the relative velocity that the spacecraft will have at the edge of the sphere of influence (again assuming that the velocity is initially parallel to Mars'): $v_{SOI,M} = v_{Mars} - v_{MTO,M}$ Next we determine the hyperbolic orbit semi-major axis and rendezvous velocity at the periapsis (which will be at low Mars orbit radius): $a_{MTO,M} = \left( \frac{2}{r_{SOI,M}} - \frac{v^2_{SOI,M}}{\mu_{Mars}}\right)^{-1}$ $v_{Rendezvous} = \sqrt{\mu_{Mars} \left( \frac{2}{r_{LMO}} - \frac{1}{a_{MTO,M}}\right)}$ Note that our velocity in LMO is: $v_{LMO} = \sqrt{\frac{\mu_{Mars}}{r_{LMO}}}$ Plugging in our numbers, we get $v_{Rendezvous} = 5.618$ km/s and $v_{LMO} = 3.514$ km/s, resulting in a required $\Delta v$ of 2.105 km/s to enter LMO from the Mars transfer orbit. With those two manoeuvres, the total $\Delta v$ for interplanetary transfer is 5.668 km/s. Adding the approximated launch $\Delta v$ results in a grand total of 15.129 km/s, which should be within reach of your design. 
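The chain of formulas above is straightforward to script. Below is a rough Python sketch of the whole patched-conic estimate. The constants (gravitational parameters, heliocentric radii, sphere-of-influence radii, parking-orbit radii) are standard textbook approximations that I've filled in rather than values stated in the answer, so small numerical differences from the quoted figures are expected.

```python
import math

# Gravitational parameters [km^3/s^2] and radii [km] -- assumed
# reference values, not taken verbatim from the answer above.
MU_SUN, MU_EARTH, MU_MARS = 1.32712440018e11, 398601.2, 42828.0
R_EARTH_ORBIT, R_MARS_ORBIT = 1.496e8, 2.2794e8    # heliocentric radii
R_SOI_EARTH, R_SOI_MARS = 924000.0, 577400.0       # spheres of influence
R_LEO, R_LMO = 6378.0 + 250.0, 3389.5 + 80.0       # parking orbit radii

def vis_viva(mu, r, a):
    # Speed on a conic of semi-major axis a at radius r (a < 0: hyperbola).
    return math.sqrt(mu * (2.0 / r - 1.0 / a))

def circular_speed(mu, r):
    return math.sqrt(mu / r)

def hyperbolic_sma(mu, r_soi, v_soi):
    # Semi-major axis (negative) from vis-viva at the SOI crossing.
    return 1.0 / (2.0 / r_soi - v_soi**2 / mu)

# Hohmann transfer in the heliocentric frame.
a_mto = 0.5 * (R_EARTH_ORBIT + R_MARS_ORBIT)
v_mto_e = vis_viva(MU_SUN, R_EARTH_ORBIT, a_mto)
v_mto_m = vis_viva(MU_SUN, R_MARS_ORBIT, a_mto)

# Earth departure: burn from LEO onto the escape hyperbola.
v_soi_e = v_mto_e - circular_speed(MU_SUN, R_EARTH_ORBIT)
a_esc = hyperbolic_sma(MU_EARTH, R_SOI_EARTH, v_soi_e)
dv_depart = vis_viva(MU_EARTH, R_LEO, a_esc) - circular_speed(MU_EARTH, R_LEO)

# Mars arrival: brake from the capture hyperbola into LMO.
v_soi_m = circular_speed(MU_SUN, R_MARS_ORBIT) - v_mto_m
a_cap = hyperbolic_sma(MU_MARS, R_SOI_MARS, v_soi_m)
dv_arrive = vis_viva(MU_MARS, R_LMO, a_cap) - circular_speed(MU_MARS, R_LMO)

print(f"LEO -> MTO burn: {dv_depart:.3f} km/s")
print(f"MTO -> LMO burn: {dv_arrive:.3f} km/s")
```

With these constants the script gives burns of roughly 3.56 km/s and 2.10 km/s, matching the figures derived above to within rounding.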
• The wording of the question is ambiguous, but knowing a bit about the game, I think they meant from Earth to Mars orbit. Nov 11, 2015 at 15:53
• Oh of course, oops! Nov 11, 2015 at 16:41
• I am launching from the Kennedy Space Center, if that helps. Should I dogleg to equatorial or should I do a plane change? Nov 11, 2015 at 18:28
• You should definitely curve your LV over to the correct inclination. You could really just start with an appropriate heading (with the correct timing) so that you can maintain a constant heading and save some fuel. However, real launches have other constraints that might prevent that, like flying over populated areas. But launching to an equatorial orbit and then using a plane change will be a lot less efficient. Nov 11, 2015 at 18:50
• You might be interested in this MathJax cheat sheet for formatting the formulas properly: meta.math.stackexchange.com/questions/5020/… . Or a bit later I'll format them once I've finished a couple of things. Nov 11, 2015 at 22:23

I've made a spreadsheet for stuff like this. It assumes circular, coplanar orbits, so the numbers are ball-park. Here is a screen capture:

I've set the ellipse's apoaerion altitude at the Sphere of Influence. This is a capture orbit; the sun's influence won't tear it away any time soon. I've set the ellipse's periaerion altitude at 100 km. The ship passes through Mars' upper atmosphere each periaerion. Atmospheric friction will shed velocity each pass, serving to lower the apoaerion. When the apoaerion reaches 400 km or so, a tiny burn will serve to raise the periapsis above the atmosphere.

So at Mars arrival it will take a 0.7 km/s periaerion burn to achieve capture and eventually a low circular orbit.

• I lost the spreadsheet during a disk formatting; I'm glad I have it again now :) Nov 11, 2015 at 15:55
https://open.kattis.com/contests/xac94q/problems/easiest
## UAPSC Weekly Random Problem Assortment

Start: 2019-02-15 22:48 UTC
End: 2019-02-22 22:48 UTC

# Problem A: The Easiest Problem Is This One

Some people think this is the easiest problem in today's problem set. Some people think otherwise, since it involves sums of digits of numbers and that's difficult to grasp.

If we multiply a number $N$ by another number $m$, the sum of digits typically changes. For example, if $m = 26$ and $N=3029$, then $N\cdot m = 78754$ and the sum of the digits is $31$, while the sum of digits of $N$ is $14$. However, there are some numbers that, if multiplied by $N$, will result in the same sum of digits as the original number $N$. For example, consider $m = 37$, $N=3029$; then $N\cdot m = 112073$, which has sum of digits $14$, the same as the sum of digits of $N$.

Your task is to find the smallest positive integer $p$ among those that will result in the same sum of digits when multiplied by $N$. To make the task a little bit more challenging, the number must also be higher than ten.

## Input

The input consists of several test cases. Each case is described by a single line containing one positive integer $N$, $1\leq N\leq 100\,000$. The last test case is followed by a line containing zero.

## Output

For each test case, print one line with a single integer $p$, the minimal number bigger than 10 such that $N\cdot p$ has the same sum of digits as $N$.

Sample Input 1:
3029
4
5
42
0

Sample Output 1:
37
28
28
25
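Since $N \le 100\,000$ and the multiplier that works tends to be small, a direct brute-force search per test case is fast enough. Here is a sketch solution in Python (my own, not an official reference solution):

```python
import sys

def digit_sum(n):
    # Sum of the decimal digits of n.
    return sum(int(d) for d in str(n))

def smallest_multiplier(n):
    # Smallest p > 10 with digit_sum(n * p) == digit_sum(n).
    target = digit_sum(n)
    p = 11
    while digit_sum(n * p) != target:
        p += 1
    return p

def main():
    # Read test cases until the terminating zero line.
    for line in sys.stdin:
        n = int(line)
        if n == 0:
            break
        print(smallest_multiplier(n))

if __name__ == "__main__":
    main()
```

This reproduces the sample: inputs 3029, 4, 5, 42 give 37, 28, 28, 25.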
https://paperswithcode.com/search?q=author%3AAlan+C.+Bovik
# Making Video Quality Assessment Models Sensitive to Frame Rate Distortions

We consider the problem of capturing distortions arising from changes in frame rate as part of Video Quality Assessment (VQA).

# Estimating the Resize Parameter in End-to-end Learned Image Compression

By conducting extensive experimental tests on existing deep image compression models, we show that our new resizing parameter estimation framework can provide a Bjøntegaard-Delta rate (BD-rate) improvement of about 10% against leading perceptual quality engines.

# Perceptual Quality Assessment of UGC Gaming Videos

In recent years, with the vigorous development of the video game industry, the proportion of gaming videos on major video websites like YouTube has dramatically increased.

# Foveation-based Deep Video Compression without Motion Search

no code implementations, 30 Mar 2022

In our learning-based approach, we implement foveation by introducing a Foveation Generator Unit (FGU) that generates foveation masks which direct the allocation of bits, significantly increasing compression efficiency while making it possible to retain an impression of little to no additional visual loss given an appropriate viewing geometry.

# Subjective and Objective Analysis of Streamed Gaming Videos

A number of studies have been directed towards understanding the perceptual characteristics of professionally generated gaming videos arising in gaming video streaming, online gaming, and cloud gaming.

# FUNQUE: Fusion of Unified Quality Evaluators

Fusion-based quality assessment has emerged as a powerful method for developing high-performance quality models from quality models that individually achieve lower performances.

# FAVER: Blind Quality Prediction of Variable Frame Rate Videos

Video quality assessment (VQA) remains an important and challenging problem that affects many applications at the widest scales.
# Image Quality Assessment using Contrastive Learning

We consider the problem of obtaining image quality representations in a self-supervised manner.

# Self-Supervised Learning of Perceptually Optimized Block Motion Estimates for Video Compression

Block-based motion estimation is integral to inter prediction processes performed in hybrid video codecs.

# High Frame Rate Video Quality Assessment using VMAF and Entropic Differences

In this work we address the problem of frame rate dependent Video Quality Assessment (VQA) when the videos to be compared have different frame rates and compression factors.

# ChipQA: No-Reference Video Quality Prediction via Space-Time Chips

We propose a new model for no-reference video quality assessment (VQA).

# Assessment of Subjective and Objective Quality of Live Streaming Sports Videos

Video live streaming is gaining prevalence among video streaming services, especially for the delivery of popular sporting events.

# Convolutional Block Design for Learned Fractional Downsampling

The layers of convolutional neural networks (CNNs) can be used to alter the resolution of their inputs, but the scaling factors are limited to integer values.

# Space-Time Video Regularity and Visual Fidelity: Compression, Resolution and Frame Rate Adaptation

As a stringent test of the new model, we apply it to the difficult problem of predicting the quality of videos subjected not only to compression, but also to downsampling in space and/or time.

# Regression or Classification? New Methods to Evaluate No-Reference Picture and Video Quality Models

Video and image quality assessment has long been projected as a regression problem, which requires predicting a continuous quality score given an input stimulus.
# On the Space-Time Statistics of Motion Pictures

It is well-known that natural images possess statistical regularities that can be captured by bandpass decomposition and divisive normalization processes that approximate early neural processing in the human visual system.

# RAPIQUE: Rapid and Accurate Video Quality Prediction of User Generated Content

However, these models are either incapable or inefficient for predicting the quality of complex and diverse UGC videos in practical applications.

# A Hitchhiker's Guide to Structural Similarity

The Structural Similarity (SSIM) Index is a very widely used image/video quality model that continues to play an important role in the perceptual evaluation of compression algorithms, encoding recipes and numerous other image/video processing algorithms.

# The VIP Gallery for Video Processing Education

no code implementations, 29 Dec 2020

Towards enhancing DVP education we have created a carefully constructed gallery of educational tools that is designed to complement a comprehensive corpus of online lectures by providing examples of DVP on real-world content, along with a user-friendly interface that organizes numerous key DVP topics ranging from analog video, to human visual processing, to modern video codecs, etc.

# ST-GREED: Space-Time Generalized Entropic Differences for Frame Rate Dependent Video Quality Prediction

We consider the problem of conducting frame rate dependent video quality assessment (VQA) on videos of diverse frame rates, including high frame rate (HFR) videos.

# Learning to Compress Videos without Computing Motion

Our framework exploits the regularities inherent to video motion, which we capture by using displaced frame differences as video representations to train the neural network.

Banding artifacts, which manifest as staircase-like color bands on pictures or video frames, are a common distortion caused by compression of low-textured smooth regions.
# Subjective and Objective Quality Assessment of High Frame Rate Videos

We also conducted a holistic evaluation of existing state-of-the-art Full and No-Reference video quality algorithms, and statistically benchmarked their performance on the new database.

# Perceptually Optimizing Deep Image Compression

Mean squared error (MSE) and $\ell_p$ norms have largely dominated the measurement of loss in neural networks due to their simplicity and analytical properties.

# Capturing Video Frame Rate Variations via Entropic Differencing

High frame rate videos have become increasingly popular in recent years, driven by the strong requirements of the entertainment and streaming industries to provide high quality of experiences to consumers.

# UGC-VQA: Benchmarking Blind Video Quality Assessment for User Generated Content

Recent years have witnessed an explosion of user-generated content (UGC) videos shared and streamed over the Internet, thanks to the evolution of affordable and reliable consumer capture devices, and the tremendous popularity of social media platforms.

# BBAND Index: A No-Reference Banding Artifact Predictor

Banding artifact, or false contouring, is a common video compression impairment that tends to appear on large flat regions in encoded videos.

# A Comparative Evaluation of Temporal Pooling Methods for Blind Video Quality Assessment

Many objective video quality assessment (VQA) algorithms include a key step of temporal pooling of frame-level quality scores.

# ProxIQA: A Proxy Approach to Perceptual Optimization of Learned Image Compression

By building on top of an existing deep image compression model, we are able to demonstrate a bitrate reduction of as much as $31\%$ over MSE optimization, given a specified perceptual quality (VMAF) level.
# Speeding up VP9 Intra Encoder with Hierarchical Deep Learning Based Partition Prediction

1 code implementation, 15 Jun 2019

In the VP9 video codec, the sizes of blocks are decided during encoding by recursively partitioning 64$\times$64 superblocks using rate-distortion optimization (RDO).

# Adversarial Video Compression Guided by Soft Edge Detection

We propose a video compression framework using conditional Generative Adversarial Networks (GANs).

# Large-Scale Study of Perceptual Video Quality

no code implementations, 5 Mar 2018

We demonstrate the value of the new resource, which we call the LIVE Video Quality Challenge Database (LIVE-VQC), by conducting a comparison of leading NR video quality predictors on it.

# A Probabilistic Quality Representation Approach to Deep Blind Image Quality Prediction

1 code implementation, 28 Aug 2017

Recognizing this, we propose a new representation of perceptual image quality, called probabilistic quality representation (PQR), to describe the image subjective score distribution, whereby a more robust loss function can be employed to train a deep BIQA model.

# Learning to Predict Streaming Video QoE: Distortions, Rebuffering and Memory

1 code implementation, 2 Mar 2017

Mobile streaming video data accounts for a large and increasing percentage of wireless network traffic.

# Perceptual Quality Prediction on Authentically Distorted Images Using a Bag of Features Approach

1 code implementation, 15 Sep 2016

Current top-performing blind perceptual image quality prediction models are generally trained on legacy databases of human quality opinion scores on synthetically distorted images.
# Massive Online Crowdsourced Study of Subjective and Objective Picture Quality

Towards overcoming these limitations, we designed and created a new database that we call the LIVE In the Wild Image Quality Challenge Database, which contains widely diverse authentic image distortions on a large number of images captured using a representative variety of modern mobile devices.

# Blind Prediction of Natural Video Quality

We show that the proposed NSS and motion coherency models are appropriate for quality assessment of videos, and we utilize them to design a blind VQA algorithm that correlates highly with human judgments of quality.

# Gradient Magnitude Similarity Deviation: A Highly Efficient Perceptual Image Quality Index

We present a new effective and efficient IQA model, called gradient magnitude similarity deviation (GMSD).
https://onepetro.org/OTCONF/proceedings-abstract/18OTC/2-18OTC/D021S021R007/179256
Predictive equations of normalized shear modulus (G/Gmax) and material damping ratio (D) are presented for calcareous sand, siliceous carbonate sand and carbonate sand of the Bay of Campeche and Tabasco Coastline. This was achieved using a database of 84 resonant column tests and 252 strain-controlled cyclic direct simple shear tests that provide data to define the normalized shear modulus, G/Gmax, and material damping ratio, D, versus cyclic shear strain. The range of cyclic shear strains of the database is from 0.0001% to 1%, and the range of carbonate content (CaCO3) from 10% to 100%. The curves of normalized modulus reduction and damping ratio were organized in three groups according to the percentage of carbonate content: 1) calcareous sands (10% to 50%), 2) siliceous carbonate sand (50% to 90%) and 3) carbonate sands (90% to 100%). Two independent modified hyperbolic relations for normalized modulus reduction and material damping ratio versus cyclic shear strain were developed for each group. The normalized shear modulus was modeled using two parameters: 1) a reference strain defined as the strain at which G/Gmax is equal to 0.5, and 2) a parameter that controls the curvature of the normalized modulus reduction curve. The material damping ratio was modeled using four parameters: 1) a reference strain γrD defined as the strain at which D/Dmax = 0.5, 2) a curvature parameter αD that controls the curvature of the material damping ratio curve, 3) a maximum material damping ratio Dmax, and 4) a minimum material damping ratio Dmin. The new empirical relationships to predict the normalized modulus reduction and material damping ratio curves as a function of effective confining pressure are easy to apply in practice and can be used when site-specific dynamic laboratory testing is not available. The curves of G/Gmax-γ and D-γ are similar between silica sand and calcareous sand.
The curves of siliceous carbonate sand and carbonate sand are very similar, but show a different shape and width than the curves of silica sand and calcareous sand. This indicates that when the carbonate content is smaller than 50% there is only a small effect on the curves of G/Gmax-γ and D-γ, and a considerable effect when the carbonate content is greater than 50%.
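The two-parameter modulus model described above corresponds to the modified hyperbolic form $G/G_{max} = 1/(1+(\gamma/\gamma_r)^{\alpha})$ that is common in the literature. The sketch below illustrates that form plus one plausible four-parameter damping curve; the parameter values in the comments are placeholders, not fitted values from the paper, and the exact damping parameterization in the paper may differ.

```python
def normalized_modulus(gamma, gamma_ref, alpha):
    # Modified hyperbolic modulus reduction: G/Gmax tends to 1 at small
    # strain, equals exactly 0.5 at the reference strain gamma_ref, and
    # decays with curvature exponent alpha at larger strains.
    return 1.0 / (1.0 + (gamma / gamma_ref) ** alpha)

def damping_ratio(gamma, gamma_ref_d, alpha_d, d_min, d_max):
    # One plausible four-parameter damping curve: rises from d_min toward
    # d_max, reaching the midpoint of that rise at gamma_ref_d, with
    # curvature controlled by alpha_d.
    frac = (gamma / gamma_ref_d) ** alpha_d
    return d_min + (d_max - d_min) * frac / (1.0 + frac)
```

With curves of this shape, fitting a group of test data reduces to estimating the reference strain and curvature (plus Dmin and Dmax for damping), which is what makes the published relationships easy to apply when laboratory testing is not available.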
https://brilliant.org/problems/first-isomorphism-theorem/
# First isomorphism theorem

Algebra Level 5

Let ${\mathbb C}^*$ be the group of nonzero complex numbers under multiplication. Consider the homomorphism $f \colon {\mathbb C}^* \to {\mathbb C}^*$ given by $f(z) = z^2.$ Then the First Isomorphism Theorem says that ${\mathbb C}^*/\text{ker}(f) \simeq \text{im}(f).$ Now $\text{ker}(f) = \{ \pm 1\},$ and $\text{im}(f) = {\mathbb C}^*$ (that is, $f$ is onto), so the conclusion is that $${\mathbb C}^*/\{\pm 1\} \simeq {\mathbb C}^*,$$ i.e. ${\mathbb C}^*$ is isomorphic to a nontrivial quotient of itself. What is wrong with this argument?
http://math.stackexchange.com/questions/286755/dates-and-times-with-no-repeated-digits?answertab=votes
# Dates and times with no repeated digits? I have a digital clock that shows the date and time like this: $$\mathsf{YYYY-(M)M-(D)D\qquad (H)H:MM \; [:SS]}$$ That is, the seconds display is optional, and if the month or day or hour is single-digit, it won't display a leading zero. This is a nice year, $2013$, because all the digits are different... that hasn't happened since $1987$. What's more, if I turn off the seconds display, there are some times coming up this year that use nine of the ten digits with no repeats. That's pretty good, but it looks like I'll have to wait a long time to see all ten digits with no repeats, and I wanted to do even better. So I custom-ordered a clock that uses base eleven (i.e., it shows the same year, month, day, hour, minute, and second as the other clock, but using base eleven -- it doesn't use $66$-minute hours or anything like that). I'm waiting patiently for that one to show all eleven digits with no repeats. • When's the next time (this year) we'll see nine distinct digits? • When's the next time we'll see all ten distinct digits? • When's the next time the base-eleven clock will show all eleven distinct digits?
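Brute force settles questions like the first two: format each minute of the year exactly as the clock displays it and test the digit string for repeats. A sketch (the formatting convention is my reading of the question; run the search to obtain the actual dates):

```python
from datetime import datetime, timedelta

def clock_digits(dt):
    # Digits shown with seconds off: YYYY-(M)M-(D)D (H)H:MM,
    # i.e. no leading zeros on month, day, or hour, but MM always two digits.
    return f"{dt.year}{dt.month}{dt.day}{dt.hour}{dt.minute:02d}"

def all_distinct(dt):
    # True when every displayed digit is different.
    d = clock_digits(dt)
    return len(set(d)) == len(d)

def next_distinct(start, min_digits):
    # First minute at or after `start` whose display shows at least
    # `min_digits` digits, all pairwise distinct.
    t = start.replace(second=0, microsecond=0)
    while True:
        d = clock_digits(t)
        if len(d) >= min_digits and len(set(d)) == len(d):
            return t
        t += timedelta(minutes=1)
```

For example, `next_distinct(datetime(2013, 1, 25), 9)` answers the nine-distinct-digit question; the base-eleven clock needs only a per-field base conversion before the same distinctness test.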
https://www.dml.cz/handle/10338.dmlcz/140066
# Article

Keywords: crack growth; phase field model; numerical simulation

Summary: A phase field model for anti-plane shear crack growth in two dimensional isotropic elastic material is proposed. We introduce a phase field to represent the shape of the crack with a regularization parameter $\epsilon >0$ and we approximate the Francfort–Marigo type energy using the idea of Ambrosio and Tortorelli. The phase field model is derived as a gradient flow of this regularized energy. We show several numerical examples of the crack growth computed with an adaptive mesh finite element method.

References:

[1] L. Ambrosio and V. M. Tortorelli: On the approximation of free discontinuity problems. Boll. Un. Mat. Ital. 7 (1992), 6-B, 105–123. MR 1164940
[2] B. Bourdin: The variational formulation of brittle fracture: numerical implementation and extensions. Preprint 2006, to appear in IUTAM Symposium on Discretization Methods for Evolving Discontinuities (T. Belytschko, A. Combescure, and R. de Borst, eds.), Springer.
[3] B. Bourdin: Numerical implementation of the variational formulation of brittle fracture. Interfaces Free Bound. 9 (2007), 411–430. MR 2341850
[4] B. Bourdin, G. A. Francfort, and J.-J. Marigo: Numerical experiments in revisited brittle fracture. J. Mech. Phys. Solids 48 (2000), 4, 797–826. MR 1745759
[5] M. Buliga: Energy minimizing brittle crack propagation. J. Elasticity 52 (1998/99), 3, 201–238. MR 1700752 | Zbl 0947.74055
[6] C. M. Elliott and J. R. Ockendon: Weak and Variational Methods for Moving Boundary Problems. Pitman Publishing Inc., 1982. MR 0650455
[7] G. A. Francfort and J.-J. Marigo: Revisiting brittle fracture as an energy minimization problem. J. Mech. Phys. Solids 46 (1998), 1319–1342. MR 1633984
[8] A. A. Griffith: The phenomenon of rupture and flow in solids. Phil. Trans. Royal Soc. London A 221 (1920), 163–198.
[9] M. Kimura, H. Komura, M. Mimura, H. Miyoshi, T. Takaishi, and D. Ueyama: Adaptive mesh finite element method for pattern dynamics in reaction-diffusion systems. In: Proc. Czech–Japanese Seminar in Applied Mathematics 2005 (M. Beneš, M. Kimura, and T. Nakaki, eds.), COE Lecture Note Vol. 3, Faculty of Mathematics, Kyushu University 2006, pp. 56–68. MR 2277123
[10] M. Kimura, H. Komura, M. Mimura, H. Miyoshi, T. Takaishi, and D. Ueyama: Quantitative study of adaptive mesh FEM with localization index of pattern. In: Proc. Czech–Japanese Seminar in Applied Mathematics 2006 (M. Beneš, M. Kimura, and T. Nakaki, eds.), COE Lecture Note Vol. 6, Faculty of Mathematics, Kyushu University 2007, pp. 114–136. MR 2277123
[11] R. Kobayashi: Modeling and numerical simulations of dendritic crystal growth. Physica D 63 (1993), 410–423. Zbl 0797.35175
[12] A. Schmidt and K. G. Siebert: Design of Adaptive Finite Element Software. The Finite Element Toolbox ALBERTA. (Lecture Notes in Comput. Sci. Engrg. 42.) Springer-Verlag, Berlin 2005. MR 2127659
[13] A. Visintin: Models of Phase Transitions. Birkhäuser-Verlag, Basel 1996. MR 1423808 | Zbl 0903.35097
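For orientation, the Ambrosio–Tortorelli regularization mentioned in the summary is commonly written in a form like the following; this is the generic shape from the literature, and the paper's exact functional (scaling of the crack term, boundary terms) may differ:

$$
E_\epsilon(u,z) \;=\; \frac12\int_\Omega (1-z)^2\,|\nabla u|^2\,dx \;+\; \frac{G}{2}\int_\Omega\left(\epsilon\,|\nabla z|^2+\frac{z^2}{\epsilon}\right)dx,
$$

where $u$ is the anti-plane displacement, $z$ is the phase field (close to $1$ on the crack and $0$ away from it), $G$ is the fracture toughness, and $\epsilon>0$ is the regularization parameter; the model in the paper is derived as a gradient flow of such a regularized energy.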
https://devtalk.blender.org/t/blender-build-failed/14346
# Blender build failed

I was following the instructions on how to build Blender, and I keep getting an error. Here is a paste of my Build.log file: https://pastebin.com/w1UD3PsK

Not sure if the warnings have anything to do with it, but there is an error saying I am missing a file named "shobjidl_core.h". I don't understand how that can happen, since I should have followed the directions and downloaded all files through git clone git://git.blender.org/blender.git and make update. What am I doing wrong? Oh, I am on Windows 7.

Let me see if I can help you figure this out. Anything in (parentheses) and italics isn't for you @ktonegawa. They are notes to myself and developers about possible improvements in documentation.

I take it that you are following the instructions on: Building Blender/Windows - Blender Developer Wiki

At the top of that page, there is a (hard to see, IMO, due to the header's font size right below it) link to go back to: Building Blender - Blender Developer Wiki (Note that the sidebar has this as the very first thing listed, but it isn't a link. If I had a wiki account, that would be my #1 change: make those headers actual links!)

On Building Blender - Blender Developer Wiki, the second header is Resolving Build Failures. (I feel that this should be added as a link at the bottom of the Building Blender/Windows - Blender Developer Wiki page.)

The first section says (bolding mine):

Missing dependencies cause two types of compiler errors. No such file errors mean a header (.h) file is missing, while unresolved symbol errors when linking mean a library is missing. This is usually because either a path to the dependency was not set correctly in the build system, the dependency was not installed, or a wrong version of the dependency was used. Finding out which dependencies are broken may sometimes be difficult. Searching online for the missing filenames or symbols will often give a quick answer.
On systems with package managers the headers and libraries are usually in a separate development package, called for example foo-dev or foo-devel.

Troubleshooting 101: if you don't know what it is, search for it online. Looking at the results, it shows that shobjidl_core.h is a Win32 API file, not a file provided by Blender.

The next step I took was to search my C: drive for shobjidl_core.h. It was found in my C:\Program Files (x86)\Windows Kits\10\Include\10.0.18362.0\um folder. As you are on Windows 7, it would be in a slightly different location. Search for it to see 1. if you even have the file and 2. if so, where is it?

Hi there, thank you for that information. I totally did not see the "Resolving Build Errors" page, and when I was searching for that header file I honestly thought it was Blender related, so when I put "Blender" as a keyword I guess nothing relevant popped up.

I immediately looked for this ShObjIdl_core.h file, and funny enough I found four copies of it on my system, all located here:

C:\Program Files (x86)\Windows Kits\10\Include\10.0.15063.0\um
C:\Program Files (x86)\Windows Kits\10\Include\10.0.16299.0\um
C:\Program Files (x86)\Windows Kits\10\Include\10.0.17763.0\um
C:\Program Files (x86)\Windows Kits\10\Include\10.0.18362.0\um

So if this is the case, how come these files aren't being picked up by Blender when building? What paths do I need to modify to be able to utilize these header files?

Hi, is it possible you are on a 32-bit system? I am on Linux, but IIRC C:\Program Files (x86)\ is for 32-bit software. 32-bit is not supported, but maybe you only need a Windows Kits x64 version.

Cheers, mib

@mib2berlin a perfectly reasonable suggestion if you aren't familiar with the Windows SDK and MSVC. MSVC is only available as a 32-bit program; there is no 64-bit version.
Edit: that is to say that while you are correct in the fact that Blender is only set up to build and run as a 64-bit program on a 64-bit OS, it is still built using a 32-bit program on Windows. Cheers!

To clarify, yes, I am running 64-bit Windows 7. So what do I do next…?

And not sure if this is relevant, but I opened the c:\blender-git\build_windows_x64_vc16_Release\source\blender\blenlib\bf_blenlib.vcxproj file in Visual Studio 2019 and took a look at its External Dependencies; it looks like through this it can locate the shobjidl_core.h header file just fine.

Slight update: I was looking into this more, and I checked the C file where this was erroring out, and to my surprise Visual Studio also seems to be logging the same error of not being able to find that header file: https://pasteboard.co/Jhl87Xb.png

Is it possible that when this whole WIN32 is set it was using the wrong/older directory…? If so, how do I change that within the entire Blender build…?

That is the shobjidl.h header file, not shobjidl_core.h. Also notice that it is looking in the 8.1 folder, but you only have shobjidl_core.h in the 10. I am not sure if building on Windows 7 is supposed to use the Windows 8.1 and Windows 10 SDKs, if you are missing the SDKs you need, or if there is an issue with how the Blender code looks for external dependencies. Time to call in someone who knows what they are doing. @LazyDodo advice?

Looks like the header changed names between SDK versions. Can you try replacing it in storage.c with # include <ShObjIdl.h> and see if it builds?

So I made this change in storage.c and ran the make command again, and it looks like it succeeded, but there is a warning that doesn't sound too great. Can anyone confirm whether this is a safe build to be working with…? https://pastebin.com/LrfVx6Su

If it builds, you're good. I'll get that change in tomorrow so you don't have to do any manual changes.

Okay.
Well, I assume this occurred for me because I have multiple versions of Win32(?) installed for whatever reason. I am hoping the changes you submit will accommodate cases like mine and not completely disregard shobjidl_core.h as a whole.

Anyway, thank you for yours and EAW's support. Greatly appreciated.

Fix landed earlier today; you should be good to go (and it'll still work for people on the Win10 SDK).
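As an aside, the "search your drive for the header" step earlier in the thread can be scripted. This is only a hypothetical convenience sketch: the per-version `um` folder layout of the Windows Kits is assumed, and the path in the comment is the usual default install location, not something taken from this thread.

```python
from pathlib import Path

def find_header(kits_include_root, name="shobjidl_core.h"):
    """Return every copy of `name` under the per-SDK-version `um` folders,
    e.g. <root>/10.0.18362.0/um/ShObjIdl_core.h (case-insensitive match)."""
    root = Path(kits_include_root)
    return sorted(p for p in root.glob("*/um/*")
                  if p.is_file() and p.name.lower() == name.lower())

# Typical default location of the Windows 10 SDK headers (assumed path):
# find_header(r"C:\Program Files (x86)\Windows Kits\10\Include")
```

Running it against the Include root lists exactly which installed SDK versions ship the header, which is the information EAW asked for.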
https://en.wikisource.org/wiki/A_Treatise_on_Electricity_and_Magnetism/Part_II/Chapter_V
# A Treatise on Electricity and Magnetism/Part II/Chapter V

## CHAPTER V. ELECTROLYTIC POLARIZATION.

264.] When an electric current is passed through an electrolyte bounded by metal electrodes, the accumulation of the ions at the electrodes produces the phenomenon called Polarization, which consists in an electromotive force acting in the opposite direction to the current, and producing an apparent increase of the resistance. When a continuous current is employed, the resistance appears to increase rapidly from the commencement of the current, and at last reaches a value nearly constant. If the form of the vessel in which the electrolyte is contained is changed, the resistance is altered in the same way as a similar change of form of a metallic conductor would alter its resistance, but an additional apparent resistance, depending on the nature of the electrodes, has always to be added to the true resistance of the electrolyte.

265.] These phenomena have led some to suppose that there is a finite electromotive force required for a current to pass through an electrolyte. It has been shewn, however, by the researches of Lenz, Neumann, Beetz, Wiedemann[1], Paalzow[2], and recently by those of MM. F. Kohlrausch and W. A. Nippoldt[3], that the conduction in the electrolyte itself obeys Ohm's Law with the same precision as in metallic conductors, and that the apparent resistance at the bounding surface of the electrolyte and the electrodes is entirely due to polarization.

266.] The phenomenon called polarization manifests itself in the case of a continuous current by a diminution in the current, indicating a force opposed to the current. Resistance is also perceived as a force opposed to the current, but we can distinguish between the two phenomena by instantaneously removing or reversing the electromotive force.
The resisting force is always opposite in direction to the current, and the external electromotive force required to overcome it is proportional to the strength of the current, and changes its direction when the direction of the current is changed. If the external electromotive force becomes zero the current simply stops. The electromotive force due to polarization, on the other hand, is in a fixed direction, opposed to the current which produced it. If the electromotive force which produced the current is removed, the polarization produces a current in the opposite direction. The difference between the two phenomena may be compared with the difference between forcing a current of water through a long capillary tube, and forcing water through a tube of moderate length up into a cistern. In the first case if we remove the pressure which produces the flow the current will simply stop. In the second case, if we remove the pressure the water will begin to flow down again from the cistern. To make the mechanical illustration more complete, we have only to suppose that the cistern is of moderate depth, so that when a certain amount of water is raised into it, it begins to overflow. This will represent the fact that the total electromotive force due to polarization has a maximum limit. 267.] The cause of polarization appears to be the existence at the electrodes of the products of the electrolytic decomposition of the fluid between them. The surfaces of the electrodes are thus rendered electrically different, and an electromotive force between them is called into action, the direction of which is opposite to that of the current which caused the polarization. The ions, which by their presence at the electrodes produce the phenomena of polarization, are not in a perfectly free state, but are in a condition in which they adhere to the surface of the electrodes with considerable force. 
The electromotive force due to polarization depends upon the density with which the electrode is covered with the ion, but it is not proportional to this density, for the electromotive force does not increase so rapidly as this density. This deposit of the ion is constantly tending to become free, and either to diffuse into the liquid, to escape as a gas, or to be precipitated as a solid. The rate of this dissipation of the polarization is exceedingly small for slight degrees of polarization, and exceedingly rapid near the limiting value of polarization. 268.] We have seen, Art. 262, that the electromotive force acting in any electrolytic process is numerically equal to the mechanical equivalent of the result of that process on one electrochemical equivalent of the substance. If the process involves a diminution of the intrinsic energy of the substances which take part in it, as in the voltaic cell, then the electromotive force is in the direction of the current. If the process involves an increase of the intrinsic energy of the substances, as in the case of the electrolytic cell, the electromotive force is in the direction opposite to that of the current, and this electromotive force is called polarization. In the case of a steady current in which electrolysis goes on continuously, and the ions are separated in a free state at the electrodes, we have only by a suitable process to measure the intrinsic energy of the separated ions, and compare it with that of the electrolyte in order to calculate the electromotive force required for the electrolysis. This will give the maximum polarization. But during the first instants of the process of electrolysis the ions when deposited at the electrodes are not in a free state, and their intrinsic energy is less than their energy in a free state, though greater than their energy when combined in the electrolyte. 
In fact, the ion in contact with the electrode is in a state which when the deposit is very thin may be compared with that of chemical combination with the electrode, but as the deposit increases in density, the succeeding portions are no longer so intimately combined with the electrode, but simply adhere to it, and at last the deposit, if gaseous, escapes in bubbles, if liquid, diffuses through the electrolyte, and if solid, forms a precipitate.

In studying polarization we have therefore to consider

(1) The superficial density of the deposit, which we may call $\sigma$. This quantity $\sigma$ represents the number of electrochemical equivalents of the ion deposited on unit of area. Since each electrochemical equivalent deposited corresponds to one unit of electricity transmitted by the current, we may consider $\sigma$ as representing either a surface-density of matter or a surface-density of electricity.

(2) The electromotive force of polarization, which we may call $p$. This quantity $p$ is the difference between the electric potentials of the two electrodes when the current through the electrolyte is so feeble that the proper resistance of the electrolyte makes no sensible difference between these potentials.

The electromotive force $p$ at any instant is numerically equal to the mechanical equivalent of the electrolytic process going on at that instant which corresponds to one electrochemical equivalent of the electrolyte. This electrolytic process, it must be remembered, consists in the deposit of the ions on the electrodes, and the state in which they are deposited depends on the actual state of the surface of the electrodes, which may be modified by previous deposits.

Hence the electromotive force at any instant depends on the previous history of the electrode. It is, speaking very roughly, a function of $\sigma$, the density of the deposit, such that $p = 0$ when $\sigma = 0$, but $p$ approaches a limiting value much sooner than $\sigma$ does.
The statement, however, that $p$ is a function of $\sigma$ cannot be considered accurate. It would be more correct to say that $p$ is a function of the chemical state of the superficial layer of the deposit, and that this state depends on the density of the deposit according to some law involving the time.

269.] (3) The third thing we must take into account is the dissipation of the polarization. The polarization when left to itself diminishes at a rate depending partly on the intensity of the polarization or the density of the deposit, and partly on the nature of the surrounding medium, and the chemical, mechanical, or thermal action to which the surface of the electrode is exposed.

If we determine a time $T$ such that at the rate at which the deposit is dissipated, the whole deposit would be removed in a time $T$, we may call $T$ the modulus of the time of dissipation. When the density of the deposit is very small, $T$ is very large, and may be reckoned by days or months. When the density of the deposit approaches its limiting value $T$ diminishes very rapidly, and is probably a minute fraction of a second. In fact, the rate of dissipation increases so rapidly that when the strength of the current is maintained constant, the separated gas, instead of contributing to increase the density of the deposit, escapes in bubbles as fast as it is formed.

270.] There is therefore a great difference between the state of polarization of the electrodes of an electrolytic cell when the polarization is feeble, and when it is at its maximum value. For instance, if a number of electrolytic cells of dilute sulphuric acid with platinum electrodes are arranged in series, and if a small electromotive force, such as that of one Daniell's cell, be made to act on the circuit, the electromotive force will produce a current of exceedingly short duration, for after a very short time the electromotive force arising from the polarization of the cell will balance that of the Daniell's cell.
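In modern notation (this reading is mine, not Maxwell's wording), the "modulus of the time of dissipation" of Art. 269 is the time the whole deposit would take to disappear at the instantaneous rate of dissipation:

$$ T \;=\; \sigma \Big/ \left(-\frac{d\sigma}{dt}\right), $$

so that $T$ is large while the deposit is slight and falls sharply as $\sigma$ nears its limiting value, as the text describes.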
The dissipation will be very small in the case of so feeble a state of polarization, and it will take place by a very slow absorption of the gases and diffusion through the liquid. The rate of this dissipation is indicated by the exceedingly feeble current which still continues to flow without any visible separation of gases. If we neglect this dissipation for the short time during which the state of polarization is set up, and if we call $Q$ the total quantity of electricity which is transmitted by the current during this time, then if $A$ is the area of one of the electrodes, and $\sigma$ the density of the deposit, supposed uniform, $Q = A\sigma$. If we now disconnect the electrodes of the electrolytic apparatus from the Daniell's cell, and connect them with a galvanometer capable of measuring the whole discharge through it, a quantity of electricity nearly equal to $Q$ will be discharged as the polarization disappears. 271.] Hence we may compare the action of this apparatus, which is a form of Ritter's Secondary Pile, with that of a Leyden jar. Both the secondary pile and the Leyden jar are capable of being charged with a certain amount of electricity, and of being afterwards discharged. During the discharge a quantity of electricity nearly equal to the charge passes in the opposite direction. The difference between the charge and the discharge arises partly from dissipation, a process which in the case of small charges is very slow, but which, when the charge exceeds a certain limit, becomes exceedingly rapid. Another part of the difference between the charge and the discharge arises from the fact that after the electrodes have been connected for a time sufficient to produce an apparently complete discharge, so that the current has completely disappeared, if we separate the electrodes for a time, and afterwards connect them, we obtain a second discharge in the same direction as the original discharge. 
This is called the residual discharge, and is a phenomenon of the Leyden jar as well as of the secondary pile. The secondary pile may therefore be compared in several respects to a Leyden jar. There are, however, certain important differences.

The charge of a Leyden jar is very exactly proportional to the electromotive force of the charge, that is, to the difference of potentials of the two surfaces, and the charge corresponding to unit of electromotive force is called the capacity of the jar, a constant quantity. The corresponding quantity, which may be called the capacity of the secondary pile, increases when the electromotive force increases.

The capacity of the jar depends on the area of the opposed surfaces, on the distance between them, and on the nature of the substance between them, but not on the nature of the metallic surfaces themselves. The capacity of the secondary pile depends on the area of the surfaces of the electrodes, but not on the distance between them, and it depends on the nature of the surface of the electrodes, as well as on that of the fluid between them.

The maximum difference of the potentials of the electrodes in each element of a secondary pile is very small compared with the maximum difference of the potentials of those of a charged Leyden jar, so that in order to obtain much electromotive force a pile of many elements must be used.

On the other hand, the superficial density of the charge in the secondary pile is immensely greater than the utmost superficial density of the charge which can be accumulated on the surfaces of a Leyden jar, insomuch that Mr. C. F. Varley[4], in describing the construction of a condenser of great capacity, recommends a series of gold or platinum plates immersed in dilute acid as preferable in point of cheapness to induction plates of tinfoil separated by insulating material.
The form in which the energy of a Leyden jar is stored up is the state of constraint of the dielectric between the conducting surfaces, a state which I have already described under the name of electric polarization, pointing out those phenomena attending this state which are at present known, and indicating the imperfect state of our knowledge of what really takes place. See Arts. 62, 111. The form in which the energy of the secondary pile is stored up is the chemical condition of the material stratum at the surface of the electrodes, consisting of the ions of the electrolyte and the substance of the electrodes in a relation varying from chemical combination to superficial condensation, mechanical adherence, or simple juxtaposition. The seat of this energy is close to the surfaces of the electrodes, and not throughout the substance of the electrolyte, and the form in which it exists may be called electrolytic polarization. After studying the secondary pile in connexion with the Leyden jar, the student should again compare the voltaic battery with some form of the electrical machine, such as that described in Art. 211. Mr. Varley has lately[5] found that the capacity of one square inch is from 175 to 542 microfarads and upwards for platinum plates in dilute sulphuric acid, and that the capacity increases with the electromotive force, being about 175 for 0.02 of a Daniell's cell, and 542 for 1.6 Daniell's cells. But the comparison between the Leyden jar and the secondary pile may be carried still farther, as in the following experiment, due to Buff[6]. It is only when the glass of the jar is cold that it is capable of retaining a charge. At a temperature below 100°C the glass becomes a conductor. If a test-tube containing mercury is placed in a vessel of mercury, and if a pair of electrodes are connected, one with the inner and the other with the outer portion of mercury, the arrangement constitutes a Leyden jar which will hold a charge at ordinary temperatures. 
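Varley's figures invite a quick order-of-magnitude check. The sketch below uses modern units and takes one Daniell cell as roughly $1.1$ volts, a value not given in the text, to estimate the charge $Q = CV$ stored per square inch of platinum plate:

```python
DANIELL_EMF = 1.1  # volts per Daniell cell -- a modern figure, assumed here

def stored_charge(capacity_microfarads, emf_in_daniells):
    """Charge Q = C * V, in coulombs, on one square inch of plate."""
    return capacity_microfarads * 1e-6 * emf_in_daniells * DANIELL_EMF

low = stored_charge(175, 0.02)   # feeble polarization: roughly 4e-6 C
high = stored_charge(542, 1.6)   # near the upper figure: roughly 1e-3 C
```

Even the lower figure is far beyond what tinfoil plates of the same surface could hold, which is the point of Varley's recommendation above.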
If the electrodes are connected with those of a voltaic battery, no current will pass as long as the glass is cold, but if the apparatus is gradually heated a current will begin to pass, and will increase rapidly in intensity as the temperature rises, though the glass remains apparently as hard as ever. This current is manifestly electrolytic, for if the electrodes are disconnected from the battery, and connected with a galvanometer, a considerable reverse current passes, due to polarization of the surfaces of the glass. If, while the battery is in action the apparatus is cooled, the current is stopped by the cold glass as before, but the polarization of the surfaces remains. The mercury may be removed, the surfaces may be washed with nitric acid and with water, and fresh mercury introduced. If the apparatus is then heated, the current of polarization appears as soon as the glass is sufficiently warm to conduct it. We may therefore regard glass at 100°C, though apparently a solid body, as an electrolyte, and there is considerable reason to believe that in most instances in which a dielectric has a slight degree of conductivity the conduction is electrolytic. The existence of polarization may be regarded as conclusive evidence of electrolysis, and if the conductivity of a substance increases as the temperature rises, we have good grounds for suspecting that it is electrolytic. ### On Constant Voltaic Elements. 272.] When a series of experiments is made with a voltaic battery in which polarization occurs, the polarization diminishes during the time that the current is not flowing, so that when it begins to flow again the current is stronger than after it has flowed for some time. 
If, on the other hand, the resistance of the circuit is diminished by allowing the current to flow through a short shunt, then, when the current is again made to flow through the ordinary circuit, it is at first weaker than its normal strength on account of the great polarization produced by the use of the short circuit. To get rid of these irregularities in the current, which are exceedingly troublesome in experiments involving exact measurements, it is necessary to get rid of the polarization, or at least to reduce it as much as possible. It does not appear that there is much polarization at the surface of the zinc plate when immersed in a solution of sulphate of zinc or in dilute sulphuric acid. The principal seat of polarization is at the surface of the negative metal. When the fluid in which the negative metal is immersed is dilute sulphuric acid, it is seen to become covered with bubbles of hydrogen gas, arising from the electrolytic decomposition of the fluid. Of course these bubbles, by preventing the fluid from touching the metal, diminish the surface of contact and increase the resistance of the circuit. But besides the visible bubbles it is certain that there is a thin coating of hydrogen, probably not in a free state, adhering to the metal, and as we have seen that this coating is able to produce an electromotive force in the reverse direction, it must necessarily diminish the electromotive force of the battery. Various plans have been adopted to get rid of this coating of hydrogen. It may be diminished to some extent by mechanical means, such as stirring the liquid, or rubbing the surface of the negative plate. In Smee's battery the negative plates are vertical, and covered with finely divided platinum from which the bubbles of hydrogen easily escape, and in their ascent produce a current of liquid which helps to brush off other bubbles as they are formed. A far more efficacious method, however, is to employ chemical means. These are of two kinds. 
In the batteries of Grove and Bunsen the negative plate is immersed in a fluid rich in oxygen, and the hydrogen, instead of forming a coating on the plate, combines with this substance. In Grove's battery the plate is of platinum immersed in strong nitric acid. In Bunsen's first battery it is of carbon in the same acid. Chromic acid is also used for the same purpose, and has the advantage of being free from the acid fumes produced by the reduction of nitric acid. A different mode of getting rid of the hydrogen is by using copper as the negative metal, and covering the surface with a coat of oxide. This, however, rapidly disappears when it is used as the negative electrode. To renew it Joule has proposed to make the copper plates in the form of disks, half immersed in the liquid, and to rotate them slowly, so that the air may act on the parts exposed to it in turn. The other method is by using as the liquid an electrolyte, the cation of which is a metal highly negative to zinc. In Daniell's battery a copper plate is immersed in a saturated solution of sulphate of copper. When the current flows through the solution from the zinc to the copper no hydrogen appears on the copper plate, but copper is deposited on it. When the solution is saturated, and the current is not too strong, the copper appears to act as a true cation, the anion SO4 travelling towards the zinc. When these conditions are not fulfilled hydrogen is evolved at the cathode, but immediately acts on the solution, throwing down copper, and uniting with SO4 to form oil of vitriol. When this is the case, the sulphate of copper next the copper plate is replaced by oil of vitriol, the liquid becomes colourless, and polarization by hydrogen gas again takes place. The copper deposited in this way is of a looser and more friable structure than that deposited by true electrolysis. 
To ensure that the liquid in contact with the copper shall be saturated with sulphate of copper, crystals of this substance must be placed in the liquid close to the copper, so that when the solution is made weak by the deposition of the copper, more of the crystals may be dissolved. We have seen that it is necessary that the liquid next the copper should be saturated with sulphate of copper. It is still more necessary that the liquid in which the zinc is immersed should be free from sulphate of copper. If any of this salt makes its way to the surface of the zinc it is reduced, and copper is deposited on the zinc. The zinc, copper, and fluid then form a little circuit in which rapid electrolytic action goes on, and the zinc is eaten away by an action which contributes nothing to the useful effect of the battery. To prevent this, the zinc is immersed either in dilute sulphuric acid or in a solution of sulphate of zinc, and to prevent the solution of sulphate of copper from mixing with this liquid, the two liquids are separated by a division consisting of bladder or porous earthenware, which allows electrolysis to take place through it, but effectually prevents mixture of the fluids by visible currents. In some batteries sawdust is used to prevent currents. The experiments of Graham, however, shew that the process of diffusion goes on nearly as rapidly when two liquids are separated by a division of this kind as when they are in direct contact, provided there are no visible currents, and it is probable that if a septum is employed which diminishes the diffusion, it will increase in exactly the same ratio the resistance of the element, because electrolytic conduction is a process the mathematical laws of which have the same form as those of diffusion, and whatever interferes with one must interfere equally with the other. The only difference is that diffusion is always going on, while the current flows only when the battery is in action. 
In all forms of Daniell's battery the final result is that the sulphate of copper finds its way to the zinc and spoils the battery. To retard this result indefinitely, Sir W. Thomson[7] has constructed Daniell's battery in the following form.

Fig. 21.

In each cell the copper plate is placed horizontally at the bottom and a saturated solution of sulphate of zinc is poured over it. The zinc is in the form of a grating and is placed horizontally near the surface of the solution. A glass tube is placed vertically in the solution with its lower end just above the surface of the copper plate. Crystals of sulphate of copper are dropped down this tube, and, dissolving in the liquid, form a solution of greater density than that of sulphate of zinc alone, so that it cannot get to the zinc except by diffusion. To retard this process of diffusion, a siphon, consisting of a glass tube stuffed with cotton wick, is placed with one extremity midway between the zinc and copper, and the other in a vessel outside the cell, so that the liquid is very slowly drawn off near the middle of its depth. To supply its place, water, or a weak solution of sulphate of zinc, is added above when required. In this way the greater part of the sulphate of copper rising through the liquid by diffusion is drawn off by the siphon before it reaches the zinc, and the zinc is surrounded by liquid nearly free from sulphate of copper, and having a very slow downward motion in the cell, which still further retards the upward motion of the sulphate of copper. During the action of the battery copper is deposited on the copper plate, and SO4 travels slowly through the liquid to the zinc with which it combines, forming sulphate of zinc. Thus the liquid at the bottom becomes less dense by the deposition of the copper, and the liquid at the top becomes more dense by the addition of the zinc.
To prevent this action from changing the order of density of the strata, and so producing instability and visible currents in the vessel, care must be taken to keep the tube well supplied with crystals of sulphate of copper, and to feed the cell above with a solution of sulphate of zinc sufficiently dilute to be lighter than any other stratum of the liquid in the cell. Daniell's battery is by no means the most powerful in common use. The electromotive force of Grove's cell is 192,000,000, of Daniell's 107,900,000, and that of Bunsen's 188,000,000. The resistance of Daniell's cell is in general greater than that of Grove's or Bunsen's of the same size. These defects, however, are more than counterbalanced in all cases where exact measurements are required, by the fact that Daniell's cell exceeds every other known arrangement in constancy of electromotive force. It has also the advantage of continuing in working order for a long time, and of emitting no gas.

1. Galvanismus, bd. i.
2. Berlin Monatsbericht, July, 1868.
3. Pogg. Ann. bd. cxxxviii. s. 286 (October, 1869).
4. Specification of C. F. Varley, 'Electric Telegraphs, &c.,' Jan. 1860.
5. Proc. R. S., Jan. 12, 1871.
6. Annalen der Chemie und Pharmacie, bd. xc. 257 (1854).
7. Proc. R. S., Jan. 19, 1871.
https://reference.opcfoundation.org/Core/docs/Amendment11/12.30/
## 12.30 Frame

This abstract Structured DataType is the base used to define multi-dimensional frames. There are no specific elements defined, as shown in Table 181. Subtypes shall add a field called CartesianCoordinates, representing the cartesian coordinates of the frame, using as DataType a subtype of CartesianCoordinates, and a field called Orientation, representing the orientation of the frame, using as DataType a subtype of Orientation. The concrete DataTypes need to be defined by the subtype. Both fields shall have the same number of dimensions.

Table 181 – Frame Structure

| Name | Type | Description |
|---|---|---|
| Frame | Structure | |

Its representation in the AddressSpace is defined in Table 182.

Table 182 – Frame Definition

| Attributes | Value |
|---|---|
| BrowseName | Frame |
| IsAbstract | TRUE |

| References | NodeClass | BrowseName | IsAbstract |
|---|---|---|---|
| HasSubtype | DataType | 3DFrame | FALSE |
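The field layout described above — a coordinates field paired with an orientation field of the same dimensionality — can be sketched as plain data classes. This is a minimal illustration, not the official OPC UA SDK API; all Python names here are hypothetical, and the meaning of the orientation components is left to the concrete subtype, as the specification requires.

```python
# Hypothetical sketch of a concrete Frame subtype (3DFrame).
# These class names mirror the spec's DataType names but are NOT an SDK API.
from dataclasses import dataclass

@dataclass
class CartesianCoordinates3D:
    x: float
    y: float
    z: float

@dataclass
class Orientation3D:
    # Component semantics (e.g. roll/pitch/yaw) are defined by the subtype.
    a: float
    b: float
    c: float

@dataclass
class Frame3D:
    # Both fields have the same number of dimensions, per the spec.
    cartesian_coordinates: CartesianCoordinates3D
    orientation: Orientation3D

frame = Frame3D(CartesianCoordinates3D(1.0, 2.0, 0.5),
                Orientation3D(0.0, 0.0, 1.57))
```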
https://hssliveguru.com/author/hsslive/
## Plus One Chemistry Notes Chapter 2 Structure of Atom

Students can download the Chapter 2 Structure of Atom Notes. These Plus One Chemistry Notes help you to revise the complete Kerala State Syllabus and score more marks in your examinations.

## Kerala Plus One Chemistry Notes Chapter 2 Structure of Atom

### Introduction

The atomic theory of matter was first proposed by John Dalton. His theory, called Dalton's atomic theory, regarded the atom as the ultimate particle of matter.

### Sub-Atomic Particles

### Discovery of Electron

The experiments of Michael Faraday in discharge tubes showed that when a high potential is applied to a gas taken in the discharge tube at very low pressures, certain rays are emitted from the cathode. These rays were called cathode rays. The results of these experiments are summarised below:

1. The cathode rays start from the cathode and move towards the anode.
2. In the absence of an electrical or magnetic field, these rays travel in straight lines. In the presence of an electrical or magnetic field, they behave as negatively charged particles, i.e., they consist of negatively charged particles, called electrons.
3. The characteristics of cathode rays (electrons) do not depend upon the material of the electrodes or the nature of the gas present in the cathode ray tube. Thus, we can conclude that electrons are a basic constituent of all atoms.
### Charge to Mass Ratio of Electron

In 1897, the British physicist J.J. Thomson measured the ratio of electrical charge (e) to the mass of the electron (m<sub>e</sub>) by using a cathode ray tube and applying electrical and magnetic fields perpendicular to each other as well as to the path of the electrons. From the amount of deviation of the particles from their path in the presence of the electrical or magnetic field, the value of e/m was found to be 1.75882 × 10<sup>11</sup> coulomb per kg, or approximately 1.75882 × 10<sup>8</sup> coulomb per gram. The ratio e/m was found to be the same irrespective of the nature of the gas taken in the discharge tube and the material used as the cathode.

### Charge of the Electron

Millikan (1868–1953) devised a method known as the oil drop experiment (1906–14) to determine the charge on the electron. He found the charge on the electron to be −1.6 × 10<sup>−19</sup> C. Combining this charge with Thomson's e/m ratio gives the mass of the electron, m<sub>e</sub> = 9.11 × 10<sup>−31</sup> kg.

### Discovery of Protons and Neutrons

Electrical discharge carried out in a modified cathode ray tube led to the discovery of canal rays. The characteristics of these positively charged particles are listed below:

- Unlike cathode rays, the e/m ratio of the particles depends upon the nature of the gas present in the cathode ray tube.
- Some of the positively charged particles carry a multiple of the fundamental unit of electrical charge.
- The behaviour of these particles in a magnetic or electrical field is opposite to that observed for cathode rays.

The smallest and lightest positive ion was obtained from hydrogen and was called the proton. Later, electrically neutral particles were discovered by Chadwick (1932) by bombarding a thin sheet of beryllium with α-particles, when electrically neutral particles having a mass slightly greater than that of the proton were emitted. He named these particles neutrons.

### Atomic Models

### Thomson Model of Atom

J.J. Thomson was the first to propose a model of the atom.
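The two measurements above determine the electron's mass: dividing Millikan's charge by Thomson's charge-to-mass ratio gives m<sub>e</sub> directly. A quick check of the arithmetic:

```python
# Electron mass from the measured charge and charge-to-mass ratio.
e = 1.6022e-19         # C, electron charge (Millikan's oil drop experiment)
e_over_m = 1.75882e11  # C/kg, charge-to-mass ratio (Thomson's experiment)

m_e = e / e_over_m     # kg; about 9.11e-31 kg
print(f"electron mass ≈ {m_e:.4e} kg")
```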
According to him, the atom is a sphere in which positive charge is spread uniformly and the electrons are embedded in it so as to make the atom electrically neutral. This model is also known as the "plum pudding model". But this model was soon discarded, as it could not explain many of the experimental observations.

### Rutherford's Nuclear Model of Atom

Rutherford and his students (Hans Geiger and Ernest Marsden) bombarded a very thin gold foil with α-particles. The experiment is known as the α-particle scattering experiment. On the basis of the observations, Rutherford drew the following conclusions regarding the structure of the atom:

1. Most of the space in the atom is empty, as most of the α-particles passed through the foil undeflected.
2. A few α-particles were deflected. Since the α-particles are positively charged, the deflection must be due to an enormous repulsive force, showing that the positive charge of the atom is not spread throughout the atom as Thomson had presumed. The positive charge has to be concentrated in a very small volume that repelled and deflected the positively charged α-particles.
3. Calculations by Rutherford showed that the volume occupied by the nucleus is negligibly small compared to the total volume of the atom.

On the basis of the above observations and conclusions, Rutherford proposed the nuclear model of the atom (after the discovery of protons). According to this model:

1. The positive charge and most of the mass of the atom are densely concentrated in an extremely small region. This very small portion of the atom was called the nucleus by Rutherford.
2. The electrons move around the nucleus with a very high speed in circular paths called orbits. Thus, Rutherford's model of the atom resembles the solar system, in which the nucleus plays the role of the sun and the electrons that of the revolving planets.
3. Electrons and the nucleus are held together by electrostatic forces of attraction.
### Atomic Number and Mass Number

Knowing the atomic number Z and mass number A of an element, we can calculate the number of protons, electrons and neutrons present in the atom of the element.

Atomic Number (Z) = Number of protons = Number of electrons
Mass Number (A) − Atomic number (Z) = Number of neutrons

### Isotopes, Isobars and Isotones

Isotopes are atoms of the same element having the same atomic number but different mass numbers. They contain different numbers of neutrons. For example, there are three isotopes of hydrogen, having mass numbers 1, 2 and 3 respectively. All three isotopes have atomic number 1. They are represented as $$_{ 1 }^{ 1 }{ H }$$, $$_{ 1 }^{ 2 }{ H }$$ and $$_{ 1 }^{ 3 }{ H }$$ and named hydrogen or protium, deuterium (D) and tritium (T) respectively.

Isobars are atoms of different elements which have the same mass number. For example, $$_{ 6 }^{ 14 }{ C }$$ and $$_{ 7 }^{ 14 }{ N }$$ are isobars.

Isotones may be defined as atoms of different elements containing the same number of neutrons. For example, $$_{ 6 }^{ 13 }{ C }$$ and $$_{ 7 }^{ 14 }{ N }$$ are isotones.

### Developments Leading to the Bohr's Model of Atom

Niels Bohr improved the model proposed by Rutherford. Two developments played a major role in the formulation of Bohr's model of the atom. These were:

1. Electromagnetic radiation possesses both wave-like and particle-like properties (dual character).
2. Experimental results regarding atomic spectra, which can be explained only by assuming quantized electronic energy levels in atoms.

### Wave Nature of Electromagnetic Radiation

Light is a form of radiation and was once supposed to be made of particles known as corpuscles. As we know, waves are characterised by wavelength (λ), frequency (ν) and velocity of propagation (c), and these are related by the equation c = νλ or ν = $$\frac { c }{ \lambda }$$. The wavelengths of the various electromagnetic radiations increase in the order:
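The Z/A bookkeeping above, and the isotope/isobar/isotone definitions, reduce to simple arithmetic on particle counts. A short sketch:

```python
# Particle counts for a neutral atom from atomic number Z and mass number A.
def particle_counts(Z, A):
    return {"protons": Z, "electrons": Z, "neutrons": A - Z}

# Isotopes: the three hydrogen isotopes share Z = 1 but differ in neutrons.
for A in (1, 2, 3):
    print("H, A =", A, particle_counts(1, A))

# Isobars 14C and 14N share A = 14; isotones 13C and 14N share 7 neutrons.
print(particle_counts(6, 14), particle_counts(7, 14))
print(particle_counts(6, 13), particle_counts(7, 14))
```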
γ rays < X-rays < UV rays < visible < IR < microwaves < radio waves

### Particle Nature of Electromagnetic Radiation: Planck's Quantum Theory

Planck suggested that atoms and molecules could emit (or absorb) energy only in discrete quantities, and not in a continuous manner, as was believed at that time. Planck gave the name quantum to the smallest quantity of energy that can be emitted or absorbed in the form of electromagnetic radiation. The energy (E) of a quantum of radiation is proportional to its frequency (ν) and is expressed by the equation E = hν.

### Photoelectric Effect

When a metal is exposed to a beam of light, electrons are emitted. This phenomenon is called the photoelectric effect. Observations of the photoelectric effect experiment are the following:

- There is no time lag between the striking of the light beam and the ejection of electrons from the metal surface.
- The number of electrons ejected is proportional to the intensity or brightness of the light.
- For each metal, there is a characteristic minimum frequency ν<sub>0</sub> (also known as the threshold frequency) below which the photoelectric effect is not observed. At a frequency ν > ν<sub>0</sub>, the ejected electrons come out with a certain kinetic energy. The kinetic energies of these electrons increase with the increase of the frequency of the light used.

Using Planck's quantum theory, Einstein explained the photoelectric effect. When a photon with sufficient energy strikes an electron, it transfers its energy instantaneously to the electron during the collision, and the electron is ejected without any time lag. The greater the energy (and hence frequency) of the photon, the greater the kinetic energy of the ejected electron. If the minimum energy required to eject an electron is hν<sub>0</sub> and the photon has an energy equal to hν, then the kinetic energy of the photoelectron is given by

hν = hν<sub>0</sub> + ½ m<sub>e</sub>v²

where m<sub>e</sub> is the mass of the electron and hν<sub>0</sub> is called the work function.
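Rearranging the photoelectric equation gives the ejected electron's kinetic energy as h(ν − ν<sub>0</sub>), zero below the threshold. A small numeric check, with an assumed (illustrative) threshold frequency:

```python
h = 6.626e-34  # J s, Planck constant

def photoelectron_ke(nu, nu0):
    """Kinetic energy h*(nu - nu0) of the photoelectron; zero below threshold."""
    return max(h * (nu - nu0), 0.0)

# Assumed threshold frequency 5.0e14 Hz, incident light at 7.0e14 Hz:
ke = photoelectron_ke(7.0e14, 5.0e14)
print(f"KE = {ke:.4e} J")  # h * 2e14 = 1.3252e-19 J
```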
### Dual Behaviour of Electromagnetic Radiation

Light has dual behaviour; that is, it behaves either as a wave or as a particle. Due to its wave nature, it shows the phenomena of interference and diffraction.

### Evidence for the Quantized Electronic Energy Levels: Atomic Spectra

It is observed that when a ray of white light is passed through a prism, the wave with the shorter wavelength bends more than the one with a longer wavelength. Since ordinary white light consists of waves with all the wavelengths in the visible range, a ray of white light is spread out into a series of coloured bands called a spectrum. In a continuous spectrum, light of different colours merges together; for example, violet merges into blue, blue into green, and so on.

### Emission and Absorption Spectra

The spectrum of radiation emitted by a substance that has absorbed energy is called an emission spectrum. Atoms, molecules or ions that have absorbed radiation are said to be "excited". In absorption, a continuum of radiation is passed through a sample which absorbs radiation of certain wavelengths. The missing wavelengths, which correspond to the radiation absorbed by the matter, leave dark spaces in the bright continuous spectrum. The study of emission or absorption spectra is referred to as spectroscopy.

Line spectra or atomic spectra are the spectra in which the emitted radiation is identified by the appearance of bright lines.

### Line Spectrum of Hydrogen

The hydrogen spectrum consists of several series of lines named after their discoverers. Balmer showed in 1885, on the basis of experimental observations, that if spectral lines are expressed in terms of wavenumber ($$\overline { v }$$), then the visible lines of the hydrogen spectrum obey the following formula:

$$\overline { v }$$ = 109,677 $$\left[\frac{1}{2^{2}}-\frac{1}{n^{2}}\right] \mathrm{cm}^{-1}$$ where n = 3, 4, 5, …

The series of lines described by this formula are called the Balmer series.
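Evaluating the Balmer formula for n = 3 gives the wavenumber of the first (red) Balmer line, and its reciprocal gives the wavelength:

```python
R_H = 109677.0  # cm^-1, Rydberg constant for hydrogen

def balmer_wavenumber(n):
    """Wavenumber (cm^-1) of the Balmer line for n = 3, 4, 5, ..."""
    return R_H * (1 / 2**2 - 1 / n**2)

wn = balmer_wavenumber(3)   # ≈ 15233 cm^-1
wavelength_nm = 1e7 / wn    # convert cm^-1 to nm; ≈ 656 nm (red H-alpha line)
print(f"{wn:.1f} cm^-1  ->  {wavelength_nm:.1f} nm")
```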
The value 109,677 cm<sup>−1</sup> is called the Rydberg constant for hydrogen. The first five series of lines, corresponding to n<sub>1</sub> = 1, 2, 3, 4, 5, are known as the Lyman, Balmer, Paschen, Brackett and Pfund series respectively. The line spectrum becomes more complex for heavier atoms.

### Bohr's Model for Hydrogen Atom

Bohr's model for the hydrogen atom says that:

1. The electron revolves around the nucleus only in certain fixed circular paths called stationary states; in a stationary state, the energy of the electron does not change with time. (A diagram would show the Lyman, Balmer and Paschen series of transitions for the hydrogen atom.)

2. The frequency of radiation absorbed or emitted when a transition occurs between two stationary states that differ in energy by ∆E is given by

$$v=\frac{\Delta E}{h}=\frac{E_{2}-E_{1}}{h}$$

where E<sub>1</sub> and E<sub>2</sub> are the energies of the lower and higher allowed energy states respectively.

3. The angular momentum of an electron in a given stationary state is quantized and can be expressed as $$m_{e} v r=n \frac{h}{2 \pi}$$, where n = 1, 2, 3, …

### Bohr's Theory for Hydrogen Atom

1. The stationary states for the electron are numbered n = 1, 2, 3, … These integers are known as principal quantum numbers.

2. The radii of the stationary states are expressed as r<sub>n</sub> = n² a<sub>0</sub>, where a<sub>0</sub> = 52.9 pm.

3. The most important property associated with the electron is the energy of its stationary state. It is given by the expression

$$E_{n}=-R_{H}\left(\frac{1}{n^{2}}\right)$$

where R<sub>H</sub> is called the Rydberg constant and its value is 2.18 × 10<sup>−18</sup> J. The energy of the lowest state, also called the ground state, is E<sub>1</sub> = −2.18 × 10<sup>−18</sup> $$\left(\frac{1}{1^{2}}\right)$$ = −2.18 × 10<sup>−18</sup> J. The energy of the stationary state for n = 2 will be E<sub>2</sub> = −2.18 × 10<sup>−18</sup> J $$\left(\frac{1}{2^{2}}\right)$$ = −0.545 × 10<sup>−18</sup> J. When the electron is free from the influence of the nucleus (n = ∞), the energy is taken as zero. When the electron is attracted by the nucleus and is present in orbit n, energy is emitted and its energy is lowered.
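The energy expression and the transition rule combine to give photon frequencies directly. A quick check of E<sub>1</sub>, E<sub>2</sub> and the n = 2 → 1 transition:

```python
R_H = 2.18e-18  # J, Rydberg constant (energy form)
h = 6.626e-34   # J s, Planck constant

def E(n):
    """Bohr energy of the hydrogen stationary state n."""
    return -R_H / n**2

E1 = E(1)  # -2.18e-18 J, the ground state
E2 = E(2)  # -0.545e-18 J

# Frequency of the photon emitted in the n = 2 -> n = 1 transition:
nu = (E2 - E1) / h  # ≈ 2.47e15 Hz (a Lyman-series line)
print(f"E1 = {E1:.3e} J, E2 = {E2:.3e} J, nu = {nu:.3e} Hz")
```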
That is the reason for the presence of the negative sign, which depicts the electron's stability relative to the reference state of zero energy at n = ∞.

4. Bohr's theory can also be applied to ions containing only one electron, similar to that present in the hydrogen atom, for example He<sup>+</sup>, Li<sup>2+</sup>, Be<sup>3+</sup> and so on. The energies of the stationary states associated with these hydrogen-like species are given by the expression

$$E_{n}=-2.18 \times 10^{-18}\left(\frac{Z^{2}}{n^{2}}\right) \mathrm{J}$$

where Z is the atomic number.

### Explanation of Line Spectrum of Hydrogen

The frequency (ν) associated with the absorption and emission of the photon can be evaluated by using the equation

$$v=\frac{\Delta E}{h}=\frac{R_{H}}{h}\left(\frac{1}{n_{i}^{2}}-\frac{1}{n_{f}^{2}}\right)$$

where n<sub>i</sub> and n<sub>f</sub> are the initial and final orbits.

### Limitations of Bohr's Model

Bohr's model was too simple to account for the following points:

1. It fails to account for the finer details (doublets, that is, two closely spaced lines) of the hydrogen atom spectrum. This model is also unable to explain the spectra of atoms other than hydrogen. Further, Bohr's theory was also unable to explain the splitting of spectral lines in the presence of a magnetic field (Zeeman effect) or an electric field (Stark effect).
2. It could not explain the ability of atoms to form molecules by chemical bonds.

### Towards Quantum Mechanical Model of the Atom

Two important developments which contributed significantly to the formulation of a more suitable and general model for atoms were:

1. Dual behaviour of matter
2. Heisenberg uncertainty principle

### Dual Behaviour of Matter

The French physicist de Broglie proposed that matter, like radiation, should also exhibit dual behaviour, i.e., both particle-like and wave-like properties. This means that, just like the photon, electrons should also have momentum as well as wavelength. From this analogy, de Broglie gave the following relation between the wavelength (λ) and momentum (p) of a material particle:
$$\lambda=\frac{h}{m v}=\frac{h}{p}$$

### Heisenberg's Uncertainty Principle

Werner Heisenberg, a German physicist, in 1927 stated the uncertainty principle, which is a consequence of the dual behaviour of matter and radiation. It states that it is impossible to determine simultaneously the exact position and exact momentum (or velocity) of an electron. Mathematically, it can be given as

$$\Delta x \cdot \Delta p_{x} \geq \frac{h}{4 \pi}$$

where ∆x is the uncertainty in position and ∆p<sub>x</sub> (or ∆v<sub>x</sub>) is the uncertainty in momentum (or velocity) of the particle. If the position of the electron is known with a high degree of accuracy (∆x is small), then the velocity of the electron will be uncertain (∆v<sub>x</sub> is large). On the other hand, if the velocity of the electron is known precisely (∆v<sub>x</sub> is small), then the position of the electron will be uncertain (∆x will be large). Thus, if we carry out some physical measurement on the electron's position or velocity, the outcome will always depict a fuzzy or blurred picture.

### Significance of Uncertainty Principle

The Heisenberg uncertainty principle rules out the existence of definite paths or trajectories of electrons and other similar particles. The trajectory of an object is determined by its location and velocity at various moments. If we know where a body is at a particular instant, and if we also know its velocity and the forces acting on it at that instant, we can tell where the body will be some time later. We therefore conclude that the position of an object and its velocity fix its trajectory. The effect of the Heisenberg uncertainty principle is significant only for the motion of microscopic objects and is negligible for that of macroscopic objects.

### Reasons for the Failure of the Bohr Model

In the Bohr model, an electron is regarded as a charged particle moving in well-defined circular orbits about the nucleus. The wave character of the electron is not considered in the Bohr model.
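Both relations are easy to evaluate for an electron. The de Broglie wavelength of an electron at an illustrative speed of 10⁶ m/s is comparable to atomic dimensions, and the uncertainty bound shows why confining an electron to an atom-sized region forces a large velocity uncertainty:

```python
import math

h = 6.626e-34    # J s, Planck constant
m_e = 9.109e-31  # kg, electron mass

# de Broglie wavelength of an electron moving at 1.0e6 m/s (illustrative speed):
v = 1.0e6
lam = h / (m_e * v)  # ≈ 7.27e-10 m, comparable to atomic dimensions

# Heisenberg bound: confining the electron to dx = 1e-10 m (about one atom)
# forces a minimum velocity uncertainty dv >= h / (4*pi*m_e*dx).
dx = 1e-10
dv_min = h / (4 * math.pi * m_e * dx)  # ≈ 5.8e5 m/s, far from negligible
print(f"lambda = {lam:.3e} m, dv_min = {dv_min:.3e} m/s")
```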
Further, an orbit is a clearly defined path, and this path can be completely defined only if both the position and the velocity of the electron are known exactly at the same time. This is not possible according to the Heisenberg uncertainty principle. The Bohr model of the hydrogen atom, therefore, not only ignores the dual behaviour of matter but also contradicts the Heisenberg uncertainty principle. There was no point in extending the Bohr model to other atoms. In fact, an insight into the structure of the atom was needed which could account for the wave-particle duality of matter and be consistent with the Heisenberg uncertainty principle. This came with the advent of quantum mechanics.

### Quantum Mechanical Model of Atom

Quantum mechanics is a theoretical science that deals with the study of the motions of microscopic objects such as electrons. In the quantum mechanical model of the atom, the behaviour of an electron in an atom is described by an equation known as the Schrödinger wave equation. For a system, such as an atom or molecule, whose energy does not change with time, the Schrödinger equation is written as

Hψ = Eψ

where H is a mathematical operator called the Hamiltonian operator, E is the total energy, and ψ is the amplitude of the electron wave, called the wave function.

### Hydrogen Atom and the Schrödinger Equation

The wave function ψ as such has no physical significance. It only represents the amplitude of the electron wave. However, ψ² may be considered as the probability density of the electron cloud. Thus, by determining ψ² at different distances from the nucleus, it is possible to trace out or identify a region of space around the nucleus where there is a high probability of locating an electron with a specific energy. According to the uncertainty principle, it is not possible to determine simultaneously the position and momentum of an electron in an atom precisely. So Bohr's concept of well-defined orbits for the electron in an atom cannot hold good.
Thus, in the quantum mechanical model, we speak of the probability of finding an electron with a particular energy around the nucleus. There are certain regions around the nucleus where the probability of finding the electron is high. Such regions are called orbitals. Thus, an orbital may be defined as the region in space around the nucleus where there is maximum probability of finding an electron having a specific energy.

### Orbitals and Quantum Numbers

Orbitals in an atom can be distinguished by their size, shape and orientation. An orbital of smaller size means there is more chance of finding the electron near the nucleus. Similarly, shape and orientation mean that there is more probability of finding the electron along certain directions than along others. Atomic orbitals are precisely distinguished by what are known as quantum numbers. Each orbital is designated by three quantum numbers, labelled n, l and m<sub>l</sub>.

The principal quantum number 'n' is a positive integer with values n = 1, 2, 3, … The principal quantum number determines the size and, to a large extent, the energy of the orbital. It also identifies the shell. With the increase in the value of 'n', the number of allowed orbitals increases and is given by n². All the orbitals of a given value of 'n' constitute a single shell of the atom and are represented by the following letters:

n = 1, 2, 3, 4, …
Shell = K, L, M, N, …

The size of an orbital increases with increase of the principal quantum number 'n', and hence the energy of the orbital also increases with increase of n.

The azimuthal quantum number 'l' is also known as the orbital angular momentum or subsidiary quantum number. It defines the three-dimensional shape of the orbital. For a given value of n, l can have n values ranging from 0 to (n − 1); that is, the possible values of l are l = 0, 1, 2, …, (n − 1). Each shell consists of one or more subshells or sub-levels.
The number of subshells in a principal shell is equal to the value of n. For example, in the first shell (n = 1), there is only one subshell, which corresponds to l = 0. There are two subshells (l = 0, 1) in the second shell (n = 2), three (l = 0, 1, 2) in the third shell (n = 3), and so on. Each subshell is assigned an azimuthal quantum number (l). Subshells corresponding to different values of l are represented by the following symbols:

l : 0 1 2 3 4 5 …
Notation for subshell : s p d f g h …

The magnetic orbital quantum number 'm<sub>l</sub>' gives information about the spatial orientation of the orbital with respect to a standard set of coordinate axes. For any subshell (defined by the 'l' value), 2l + 1 values of m<sub>l</sub> are possible, and these values are given by:

m<sub>l</sub> = −l, −(l−1), −(l−2), … 0, … (l−2), (l−1), l

Thus for l = 0, the only permitted value is m<sub>l</sub> = 0 [2(0) + 1 = 1, one s orbital].

Electron spin 's': George Uhlenbeck and Samuel Goudsmit proposed the presence of a fourth quantum number, known as the electron spin quantum number (m<sub>s</sub>). The spin angular momentum of the electron, a vector quantity, can have two orientations relative to the chosen axis. These two orientations are distinguished by the spin quantum number m<sub>s</sub>, which can take the values +½ or −½. These are called the two spin states of the electron and are normally represented by two arrows, ↑ (spin up) and ↓ (spin down). Two electrons that have different m<sub>s</sub> values (one +½ and the other −½) are said to have opposite spins. An orbital cannot hold more than two electrons, and these two electrons should have opposite spins.

### Shapes of Atomic Orbitals

The orbital wave function ψ for an electron in an atom has no physical meaning. It is simply a mathematical function of the coordinates of the electron. According to the German physicist Max Born, the square of the wave function (i.e., ψ²) at a point gives the probability density of the electron at that point.
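The counting rules for n, l and m<sub>l</sub> can be sketched directly: l runs from 0 to n − 1, each l contributes 2l + 1 values of m<sub>l</sub>, and the total number of orbitals in a shell comes out to n².

```python
def orbitals(n):
    """List the (subshell label, m_l) pairs allowed for principal quantum number n."""
    subshell_letters = "spdfgh"  # l = 0, 1, 2, 3, 4, 5
    out = []
    for l in range(n):               # l = 0 .. n-1
        for m in range(-l, l + 1):   # 2l+1 values of m_l: -l .. +l
            out.append((f"{n}{subshell_letters[l]}", m))
    return out

# n = 3 gives one 3s, three 3p and five 3d orbitals: 9 = n**2 in total.
print(len(orbitals(3)), orbitals(3))
```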
For the 1s orbital the probability density is maximum at the nucleus and it decreases sharply as we move away from it. The regions where this probability density function reduces to zero are called nodal surfaces, or simply nodes. In general, it has been found that the ns-orbital has (n – 1) nodes; that is, the number of nodes increases with increase of the principal quantum number n. These probability density variations can be visualised in terms of charge cloud diagrams.

Boundary surface diagrams of constant probability density for different orbitals give a fairly good representation of the shapes of the orbitals. In this representation, a boundary surface or contour surface is drawn in space for an orbital on which the value of the probability density |ψ|² is constant. The boundary surface diagram for an s orbital is actually a sphere centred on the nucleus. In two dimensions, this sphere looks like a circle. It encloses a region in which the probability of finding the electron is about 90%. The s-orbitals are spherically symmetric; that is, the probability of finding the electron at a given distance is equal in all directions.

Unlike s-orbitals, the boundary surface diagrams of p orbitals are not spherical. Instead, each p orbital consists of two sections called lobes that are on either side of the plane that passes through the nucleus. The probability density function is zero on the plane where the two lobes touch each other. The size, shape and energy of the three p orbitals are identical. They differ, however, in the way the lobes are oriented. Since the lobes may be considered to lie along the x, y or z-axis, they are given the designations 2p<sub>x</sub>, 2p<sub>y</sub> and 2p<sub>z</sub>. It should be understood, however, that there is no simple relation between the values of m<sub>l</sub> (-1, 0 and +1) and the x, y and z directions. For our purpose, it is sufficient to remember that, because there are three possible values of m<sub>l</sub>, there are therefore three p orbitals whose axes are mutually perpendicular.
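The claim that an ns-orbital has (n – 1) radial nodes can be checked numerically. The sketch below uses the standard unnormalised hydrogen-atom radial functions for 1s, 2s and 3s (these functional forms are textbook hydrogen-atom results, not taken from these notes) and counts sign changes:

```python
import math

# Unnormalised radial parts of hydrogen ns orbitals (r in Bohr radii).
RADIAL = {
    "1s": lambda r: math.exp(-r),
    "2s": lambda r: (2 - r) * math.exp(-r / 2),
    "3s": lambda r: (27 - 18 * r + 2 * r**2) * math.exp(-r / 3),
}

def radial_nodes(f, r_max=30.0, steps=2999):
    """Count sign changes of f on (0, r_max], i.e. its radial nodes."""
    rs = [r_max * i / steps for i in range(1, steps + 1)]
    vals = [f(r) for r in rs]
    return sum(1 for v0, v1 in zip(vals, vals[1:]) if v0 * v1 < 0)

for name, f in RADIAL.items():
    n = int(name[0])
    print(f"{name}: {radial_nodes(f)} radial nodes (expected {n - 1})")
```

The counts come out as 0, 1 and 2 for 1s, 2s and 3s respectively, matching the (n – 1) rule.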
Like s orbitals, p orbitals increase in size and energy with increase in the principal quantum number. The number of radial nodes is given by (n – 2); that is, the number of radial nodes is one for a 3p orbital, two for a 4p orbital, and so on.

For l = 2, the orbital is known as a d-orbital, and the minimum value of the principal quantum number (n) has to be 3, as the value of l cannot be greater than (n – 1). There are five m<sub>l</sub> values (-2, -1, 0, +1 and +2) for l = 2, and thus there are five d orbitals. The five d-orbitals are designated d<sub>xy</sub>, d<sub>yz</sub>, d<sub>xz</sub>, d<sub>x²-y²</sub> and d<sub>z²</sub>. The shapes of the first four d-orbitals are similar to each other, whereas that of the fifth one, d<sub>z²</sub>, is different from the others; but all five 3d orbitals are equivalent in energy. The d orbitals for which n is greater than 3 (4d, 5d, …) also have shapes similar to the 3d orbitals, but differ in energy and size.

Besides the radial nodes (where the probability density function is zero), the probability density functions for the np and nd orbitals are zero at the plane(s) passing through the nucleus (origin). For example, in the case of the p<sub>z</sub> orbital, the xy-plane is a nodal plane; in the case of the d<sub>xy</sub> orbital, there are two nodal planes passing through the origin and bisecting the xy plane containing the z-axis. These are called angular nodes, and the number of angular nodes is given by l, i.e., one angular node for p orbitals, two angular nodes for d orbitals, and so on. The total number of nodes is given by (n – 1), i.e., the sum of l angular nodes and (n – l – 1) radial nodes.

Energies Of Orbitals

The order of energy of orbitals in a single-electron system is given below:

1s < 2s = 2p < 3s = 3p = 3d < 4s = 4p = 4d = 4f

Orbitals having the same energy are called degenerate.

Filling Of Orbitals In Atom

Aufbau principle: According to this principle, in the ground state of an atom an electron will occupy the orbital of lowest energy, and orbitals are occupied by electrons in order of increasing energy.
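The node-counting rules stated above (angular nodes = l, radial nodes = n – l – 1, total nodes = n – 1) can be collected into a small helper function; an illustrative sketch, not part of the original notes:

```python
# Node counts for a hydrogen-like orbital with quantum numbers (n, l):
# angular nodes = l, radial nodes = n - l - 1, total = n - 1.
def node_counts(n, l):
    if not 0 <= l <= n - 1:
        raise ValueError("need 0 <= l <= n - 1")
    angular = l
    radial = n - l - 1
    return angular, radial, angular + radial

for n, l, label in [(3, 1, "3p"), (4, 1, "4p"), (3, 2, "3d")]:
    ang, rad, total = node_counts(n, l)
    print(f"{label}: {ang} angular + {rad} radial = {total} total")
```

For 3p this gives one radial node and for 4p two, matching the (n – 2) count for p orbitals quoted above.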
Pauli’s exclusion principle: Pauli’s exclusion principle states that ‘no two electrons in an atom can have the same values for all the four quantum numbers’. Since the electrons in an orbital must have the same n, l and m<sub>l</sub> quantum numbers, it follows that an orbital can contain a maximum of two electrons, provided their spin quantum numbers are different. This is an important consequence of Pauli’s exclusion principle: an orbital can hold a maximum of two electrons, and these must have opposite spins.

Hund’s rule of maximum multiplicity: This rule states that electron pairing in orbitals of the same energy will not take place until each available orbital of a given subshell is singly occupied (with parallel spins). The rule can be illustrated by taking the example of the carbon atom. The atomic number of carbon is 6 and its electronic configuration is 1s²2s²2p². The two electrons of the 2p subshell can be distributed in three ways. According to Hund’s rule, the configuration in which the two unpaired electrons occupy the 2p<sub>x</sub> and 2p<sub>y</sub> orbitals with parallel spins is the correct configuration of carbon.

Exceptional configurations of chromium and copper

The electronic configuration of Cr (atomic number 24) is expected to be [Ar] 4s² 3d⁴, but the actual configuration is [Ar] 4s¹ 3d⁵. Similarly, the actual configuration of Cu (atomic number 29) is [Ar] 4s¹ 3d¹⁰ instead of the expected configuration [Ar] 4s² 3d⁹. This is because exactly half-filled or completely filled sub-shells (i.e., d⁵, d¹⁰, f⁷, f¹⁴) have lower energy and hence extra stability.
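The Aufbau filling order can be sketched in code using the Madelung rule (fill sub-shells in order of increasing n + l, with ties broken by lower n). This naive sketch is an illustration only: it produces the *expected* configurations, so for Cr it prints 4s2 3d4 rather than the actual 4s1 3d5 exception discussed above:

```python
# Naive Aufbau filling via the Madelung (n + l) rule; does not model the
# half-filled / fully-filled sub-shell exceptions (Cr, Cu, ...).
LETTERS = "spdf"

def aufbau_configuration(n_electrons, max_n=7):
    # All (n, l) sub-shells up to max_n, restricted to s, p, d, f.
    subshells = [(n, l) for n in range(1, max_n + 1) for l in range(min(n, 4))]
    subshells.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))
    parts, remaining = [], n_electrons
    for n, l in subshells:
        if remaining <= 0:
            break
        e = min(remaining, 2 * (2 * l + 1))  # sub-shell capacity 2(2l+1)
        parts.append(f"{n}{LETTERS[l]}{e}")
        remaining -= e
    return " ".join(parts)

print(aufbau_configuration(6))   # carbon
print(aufbau_configuration(24))  # chromium (naive; actual ends 4s1 3d5)
```

Carbon (6 electrons) comes out as 1s2 2s2 2p2, as quoted under Hund’s rule above.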
## HSSLive Plus One Study Material / Question Bank Kerala HSSLive.Guru providing HSE Kerala Board Syllabus HSSLive Higher Secondary SCERT Plus One Study Material, Question Bank, Quick Revision Notes, Chapter Wise Notes, Chapter Wise Previous Year Important Questions and Answers, Previous Year Sample Question Papers with Answers based on CBSE NCERT syllabus in both English Medium and Malayalam Medium for Class 11. ## HSSLive Plus One Study Material / Question Bank Kerala ### HSSLive Plus One Previous Year Question Papers and Answers Kerala Here HSSLive.Guru have given HSSLive Higher Secondary Kerala Plus One Previous Year Sample Question Papers with Answers. ### HSSLive Plus One Notes Chapter Wise Kerala Here HSSLive.Guru have given HSSLive Higher Secondary Kerala Plus One Chapter Wise Quick Revision Notes. ### HSSLive Plus One Chapter Wise Questions and Answers Kerala Here HSSLive.Guru have given HSSLive Higher Secondary Kerala Plus One Chapter Wise Questions and Answers. ### HSSLive Plus One Chapter Wise Previous Year Questions and Answers Kerala Here HSSLive.Guru have given HSSLive Higher Secondary Kerala Plus One Chapter Wise Previous Year Important Questions and Answers. 
https://math.stackexchange.com/questions/1255074/creating-a-sequence-convergent-to-zero-with-special-characteristic/1256489
# Creating a sequence convergent to zero with special characteristic

Let $\{a_k\}$ and $\{b_k\}$ be positive sequences in $\mathbb{R}$ that both converge to zero. Can we choose $\{c_k\}$ such that it converges to zero and $$0<\lim_{k \to \infty} \frac{a_k}{c_k} = \lim_{k \to \infty} \frac{b_k}{c_k} < +\infty$$

• $a_k$ and $b_k$ are given. We can only decide on $c_k$ – Mehdi Jafarnia Jahromi Apr 27 '15 at 23:14
• I think that would be very difficult for $a_k=e^{-k}$ and $b_k=1/k$. I am pretty certain you would need $\lim_{k\to\infty}a_k/b_k=1$ – Arthur Apr 27 '15 at 23:14
• @arthur: Actually, not that hard. Just take $c_k$ that goes slower to 0 than both $a_k$ and $b_k$. Like $1/\ln(k)$. Both limits will then be +oo (But yes, I'm not sure that's what the OP had in head) – Tryss Apr 27 '15 at 23:19
• @Tryss As long as you think $\infty=\infty$ is a correct statement, then you're right (except that you would want $c_k$ to go to zero faster, not slower). I know many would disagree. – Arthur Apr 27 '15 at 23:22
• I edited the question, we do not need $+\infty$ – Mehdi Jafarnia Jahromi Apr 27 '15 at 23:24

This is not possible in general. For example, consider the sequences $(a_n) = (\frac{1}{n})$ and $(b_n)=(\frac{1}{n^2})$. If possible, let $(c_n)$ be a sequence such that $$0<\lim\limits_{n\to \infty}\frac{a_n}{c_n} =\lim\limits_{n\to \infty}\frac{b_n}{c_n}=l<\infty.$$ Then it would follow that $$\frac{\lim\limits_{n\to \infty}\frac{a_n}{c_n}}{\lim\limits_{n\to \infty}\frac{b_n}{c_n}} = 1 = \lim\limits_{n\to \infty}\frac{a_n}{b_n}=\lim\limits_{n\to \infty}n,$$ which is obviously not true.

$\textbf{Note:}$ The above observation shows that it is crucial to have $\lim\limits_{n\to \infty}\frac{a_n}{b_n}=1$ in the hypothesis, and if this is the case then it can easily be seen that taking the sequence $(c_n) = (b_n)$ will do the job.
$\textbf{Observation}:$ Suppose it is added in the hypothesis that $\lim\limits_{n\to \infty}\frac{a_n}{b_n}=1$, and it is asked whether there is a sequence $(c_n)$ such that $(c_n)$ converges to $0$, $$0<\lim\limits_{n\to \infty}\frac{a_n}{c_n} =\lim\limits_{n\to \infty}\frac{b_n}{c_n}=l<\infty,$$ and $l$ is different from $1$. Even in this case the answer turns out to be affirmative. Suppose $l$ is any positive real number different from $1$ (positivity is needed so that $(c_n)$ is a positive sequence and the common limit lies in $(0,\infty)$). Take the sequence $(c_n) =(\frac{a_n^2}{l\,b_n})$. Then it is clear that $c_n\longrightarrow 0$ and, moreover, $$0<\lim\limits_{n\to \infty}\frac{a_n}{c_n} =\lim\limits_{n\to \infty}\frac{b_n}{c_n}=l<\infty.$$
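A quick numerical sanity check of the construction $(c_n) = (\frac{a_n^2}{l\,b_n})$, using arbitrarily chosen sequences with $\lim a_n/b_n = 1$ and $l = 2$ (the particular sequences here are my own illustration, not from the answer):

```python
# Check c_n = a_n^2 / (l * b_n) numerically for sequences with a_n/b_n -> 1.
# Example sequences: a_n = 1/n + 1/n^2 and b_n = 1/n, so a_n/b_n = 1 + 1/n.
l = 2.0

def a(n): return 1.0 / n + 1.0 / n**2
def b(n): return 1.0 / n
def c(n): return a(n) ** 2 / (l * b(n))

for n in (10, 1000, 100000):
    print(n, a(n) / c(n), b(n) / c(n), c(n))
# Both ratios tend to l = 2, while c(n) tends to 0.
```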
https://www.greencarcongress.com/2023/01/20230128-doeh2.html
## DOE to award $47M to develop affordable clean hydrogen technologies in support of Hydrogen Shot

##### 28 January 2023

The US Department of Energy (DOE) announced up to $47 million in funding (DE-FOA-0002920) to accelerate the research, development, and demonstration (RD&D) of affordable clean hydrogen technologies. Projects funded under this opportunity will reduce costs, enhance hydrogen infrastructure, and improve the performance of hydrogen fuel cells, advancing the Department’s Hydrogen Shot goal of reducing the cost of clean hydrogen to $1 per kilogram within a decade.

This funding opportunity, which is administered by DOE’s Hydrogen and Fuel Cell Technologies Office (HFTO), focuses on RD&D of key hydrogen delivery and storage technologies as well as affordable and durable fuel cell technologies. Fuel cell RD&D projects will focus particularly on applications for heavy-duty trucks, to reduce carbon dioxide emissions and eliminate tailpipe emissions that are harmful to local air quality. Specific topics to be funded in this interest area are:

Topic 1: Hydrogen Carrier Development. This topic seeks applications for R&D of novel hydrogen carriers and hydrogen carrier hydrogenation/dehydrogenation catalysts and catalyst supports, with the goal of providing quantitative cost and performance advantages over conventional compressed gas or liquid hydrogen systems. Hydrogen carriers are a unique storage and delivery medium that have the potential to enable efficient, compact, and low-cost transport, on-site generation, and storage of hydrogen across multiple sectors of the economy. Carriers exhibit a wide range of properties and behaviors, allowing for the matching of different hydrogen-rich materials to the needs of a specific end use. Relevant end uses that address the overall performance needs, such as pressure, temperature, rates of hydrogen release, purity, and cost at scale, must be considered within the topic.
One example of interest includes catalysts that are based on perovskite materials, or that use perovskite materials as catalyst supports. Such materials and other innovative concepts with potential to meet specific metrics are of interest, and projects will be expected to collaborate with HFTO’s HyMARC consortium.

Topic 2: Onboard Storage Systems for Liquid Hydrogen. This topic solicits applications for the development of LH2 storage vessels and the required balance-of-plant hardware to enable low-cost, energy-dense LH2 storage onboard medium- and heavy-duty (MD/HD) transportation applications. Hydrogen fuel cell systems can offer benefits in MD/HD transportation, particularly for long-haul trucks, such as long driving ranges, short refueling times, and high payload capacities. However, to do so, significant quantities of hydrogen are required (e.g., 40–120 kg for long-haul trucks and several hundred kg for other heavy-duty applications such as off-road mining and construction vehicles). As LH2 has a considerably higher energy density compared to 700 bar compressed hydrogen gas, there is significant interest in the development of onboard LH2 storage systems. Analyses have shown the potential of LH2 systems to meet capacity requirements for MD/HD applications and achieve the storage cost target of less than or equal to $8/kWh.

Topic 3: Liquid Hydrogen Transfer/Fueling Components and Systems. This topic seeks applications to develop LH2 transfer and vehicular fueling technologies and approaches to enable high-flow LH2 transfers and/or LH2 fueling for MD and HD transportation applications. Hydrogen fueling stations for MD/HD fuel cell transportation applications, which encompass trucks, buses, off-road, marine, and rail, are expected to dispense tons of hydrogen per day.
The large-scale storage and transfer of LH2 for such end-users requires the development of advanced LH2 transfer and fueling components and systems that address the challenges of hydrogen losses, materials compatibility, and safety, while enabling fueling times comparable to incumbent technologies (i.e., liquid fuels). This will require much higher hydrogen flow rates, for instance over five times greater (at least 10 kg/min average) than those in current light-duty vehicle hydrogen fueling stations.

Topic 4: M2FCT: High Performing, Durable Membrane Electrode Assemblies for Medium- and Heavy-duty Applications. This topic solicits applications that, in coordination with DOE’s Million Mile Fuel Cell Truck (M2FCT) consortium, will develop membrane electrode assemblies (MEAs) that will reduce the cost and enhance the durability and performance of proton-exchange membrane (PEM) fuel cell stacks for MD/HD applications. R&D needs for both applications have been identified with industry, university, and national laboratory expert stakeholder input. The topic targets advances in MEAs to enable significant progress towards meeting 2030 system-level HD truck targets of 25,000-hour durability and $80/kW system cost.

For all topic areas, DOE envisions awarding financial assistance awards in the form of cooperative agreements. The estimated period of performance for each award will be approximately two to four years. DOE encourages applicant teams that include stakeholders within academia, industry, and national laboratories across multiple technical disciplines. Teams are also encouraged to include representation from diverse entities such as minority-serving institutions, labor unions, community colleges, and other entities connected through Opportunity Zones.
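To put the flow-rate figure in context, a rough back-of-the-envelope calculation using the tank sizes and the 10 kg/min rate quoted above (the 1.8 kg/min light-duty rate is my assumption for comparison, not a figure from the article):

```python
# Fueling-time estimate at the 10 kg/min MD/HD average flow rate quoted
# in the announcement, versus an assumed ~1.8 kg/min light-duty rate.
HD_RATE = 10.0  # kg/min (quoted)
LD_RATE = 1.8   # kg/min (assumed for comparison only)

for tank_kg in (40, 80, 120):
    print(f"{tank_kg} kg tank: {tank_kg / HD_RATE:.0f} min at MD/HD rate, "
          f"{tank_kg / LD_RATE:.0f} min at assumed LD rate")
```

Even the largest quoted long-haul tank fills in roughly a quarter of an hour at the MD/HD rate, which is what makes the higher flow rates comparable to liquid-fuel fueling times.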
Here are two conflicting evaluations of pumping hydrogen in existing infrastructure, so make of them what you will: https://www.nrel.gov/docs/fy23osti/81704.pdf Obviously any report from NREL has to be taken seriously, and their evaluation seems to indicate that blending hydrogen in existing NG pipelines is somewhere between difficult and impossible. OTOH, if that were the case, the engineers in the UK grid would have to be complete fools, as they reckon they are ready to go right now, and it should be borne in mind that some districts in Germany, for instance, already and for some years have transported up to something of the order of 10% hydrogen in the NG grid, whilst the old town gas (coal gas) was up to 50% hydrogen donkey's years ago. I present both evaluations though, as I rather despise 'arguing a case', and even when evaluations are contrary to my own take I like to cite their strongest case and most reputable evidence. My own evaluation from past experience is that NREL does exactly the opposite, and loads in all sorts of negative assumptions when hydrogen is mentioned. It should also be noted that they talk primarily about the US grid, where the specs of the pipes etc. are very different to those in Europe. But just the same, if anyone wants relatively credible and sourced material to be 'agin' hydrogen, here you go. It is loads better than most of the sledging I read here, anyway! ;-)

".... and it should be borne in mind that some districts in Germany, for instance....." It should also be borne in mind that these mentally deranged people also made contracts with Putin for the delivery of NG. The results of such a praised feat are well known world-wide.

@yoatman: I do try to be patient, but what on earth do you imagine that has to do with the price of fish? FYI, there are approx. 80 communities in Germany that are completely energy self-sufficient.
They achieved this without any financial assistance from the ignorant leaders of the federal government who placed all their bets on gas from Putin. They concentrated their efforts on REs (wind, solar and biogas). The presently rising energy prices are not affecting them in the least. Foresight is always better than hindsight. This applies to a H2 economy just as well; H2 BS is just another fairy tale from the Oil-Monopoly and their proponents to keep the broad public in their dependency so they can push business as usual. @yoatman: Your mental processes are entirely mysterious to me. Your original critique referenced NG contracts with Putin. Now you have wandered off to areas where renewables are sufficient locally without recourse to transfers from other areas. Yeah, sure, but the thing is that renewables are very dependent on the precise local conditions, climate, population density etc. And whatever you may imagine, there is absolutely no way at all that electricity plus batteries can cope with all power needs in Germany, whatever may be the case in areas closer to the equator.
https://www.mathway.com/examples/trigonometry/radian-measure-and-circular-functions/converting-to-degrees-minutes-and-seconds?id=828
# Trigonometry Examples To convert a decimal degree measure to degrees, minutes, and seconds, the whole number of degrees remains the same. Multiply the decimal part by 60; the whole-number part of the result becomes the minutes. Take the remaining decimal and multiply it by 60 again; the whole-number portion of that result gives the seconds. Combine the three whole numbers, using the symbols for degrees (°), minutes ('), and seconds ('').
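The procedure above can be sketched in a few lines of Python (a minimal illustration; the function and variable names are my own, not Mathway's):

```python
# Convert decimal degrees to (degrees, minutes, seconds) as described above.
def to_dms(decimal_degrees):
    degrees = int(decimal_degrees)           # whole degrees stay the same
    rem = (decimal_degrees - degrees) * 60   # decimal part times 60
    minutes = int(rem)                       # whole part becomes the minutes
    seconds = (rem - minutes) * 60           # remaining decimal times 60 -> seconds
    return degrees, minutes, seconds

d, m, s = to_dms(29.7355)   # -> 29 degrees, 44 minutes, ~7.8 seconds
```

Note that the seconds come back as a float; in practice you would round them to the precision the problem asks for.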
https://www.veritas.com/support/en_US/article.TECH209599
# Drive(s) appears in Microsoft Windows Disk Management but does not appear in System Recovery Console • Article ID:100010295 ### Problem A Drive(s) does not appear in the System Recovery Console but it does appear in Microsoft Windows Disk Management. ### Error Message Error EC8F1780: Cannot successfully reconcile changes since last session. Error EC8F1771: Cannot enumerate the current drives on this system. Error E0BB0147: The operation 'Snap Volume' is not currently enabled for this Volume. (UMI:V-281-3215-6016) ### Cause This can be caused by VTrack not being registered correctly. VTrack is the system driver responsible for tracking changes on the volume for incremental images. This replaces the Vdiff driver in previous builds. ### Solution To resolve this issue, verify that VTRACK is loaded and running: 1. Open the Windows services console and check for the service name SymTrackService. Its status should be STARTED. 2. Browse to C:\Windows\System32, locate MSINFO32.exe. Right-mouse click and choose to run as Administrator. 3. In the System Information window, expand SOFTWARE ENVIRONMENT by clicking the + sign. 4. Highlight System Drivers, then locate VTRACK in the right pane. The status should be started. If either of the conditions in the previous steps is not met, attempt to repair SSR 2013 through Control Panel > Programs and Features. If that fails: 1. Your system must be running SSR 2013 SP2 or later (Click Help>About. Version should be 11.0.2.49853 or later). If not, use LiveUpdate or contact support for manual install instructions. 2. Browse to C:\Program Files\veritas\veritas System Recovery\Shared\Drivers\Driver\Vtrack_2k8 (In case of Windows 2003, see Note below.) 3. Locate VTrack.sys. 4. Copy (do not move) the file to C:\WINDOWS\System32\Drivers. If the file already exists, rename the original file to Vtrack.sys.old. 5. 
Browse to C:\Program Files\veritas\veritas System Recovery\Shared\Drivers\Driver\Vtrack_2k8 (In case of Windows 2003, see Note below.) 6. Locate the file VTrack.inf. Right-mouse click on this file and choose Install. 7. When the install finishes, reboot the system to complete the installation process. (This is a mandatory part of the install process.) Note: In case of Windows 2003 (x64 / x86), the VTrack.sys and VTrack.inf files are located at C:\Program Files\veritas\veritas System Recovery\Shared\Drivers\Driver\Vtrack_2k3 Applies To: System Recovery 2013 and later ### Related Articles Unable to enumerate local drives within SSR console for backup while showing under my computer drive details
https://math.stackexchange.com/questions/805655/show-that-mathbbq-zeta-contains-one-of-the-two-numbers-sqrt-pm5-and
# Show that $\mathbb{Q}(\zeta)$ contains one of the two numbers $\sqrt{\pm5}$ and decide which one is contained in $\mathbb{Q}(\zeta)$. Let $\zeta$ be a primitive 15th root of unity in $\mathbb{C}$; show that $\mathbb{Q}(\zeta)$ contains one of the two numbers $\sqrt{\pm5}$ and decide which one is contained in $\mathbb{Q}(\zeta)$. First consider the element $\zeta^{3}\in\mathbb{Q}(\zeta)$; it is a primitive 5th root of unity since $(\zeta^{3})^{5}=\zeta^{15}=1$. We are allowed to use the following theorem: let $p$ be an odd prime, let $\zeta$ be a primitive $p$th root of unity in $\mathbb{C}$, and let $S=\sum_{a=1}^{p-1}\left(\frac{a}{p}\right)\zeta^{a}$ be the quadratic Gauss sum; then $S^{2}=\left(\frac{-1}{p}\right)p$. In particular, the cyclotomic field $\mathbb{Q}(\zeta)$ contains at least one of the quadratic fields $\mathbb{Q}(\sqrt{p})$ or $\mathbb{Q}(\sqrt{-p})$. The Legendre symbol $\left(\frac{-1}{5}\right)=1$ since $5\equiv 1\pmod 4$, so we get the following: $$S^{2}=\left(\frac{-1}{5}\right)5=5$$ Therefore $\mathbb{Q}(\zeta)$ contains $\mathbb{Q}(\sqrt{5})$ by the theorem above. So I can prove that $\mathbb{Q}(\zeta^{3})$ contains $\mathbb{Q}(\sqrt{5})$, and therefore $\mathbb{Q}(\zeta)$ contains $\mathbb{Q}(\sqrt{5})$, but I don't know how to show that $\sqrt{-5}$ is not contained in $\mathbb{Q}(\zeta)$. I was thinking that it might involve showing that $i\notin\mathbb{Q}(\zeta)$ but I'm not sure how to go about this either. I also looked into showing that if $\mathbb{Q}(\sqrt{-5})\subseteq\mathbb{Q}(\zeta)$ then $[\mathbb{Q}(\zeta)\colon\mathbb{Q}]$ is divisible by $$[\mathbb{Q}(\sqrt{5},\sqrt{-5})\colon\mathbb{Q}]=4$$ But I cannot use this to disprove anything because the minimal polynomial of $\zeta$ is $x^8-x^7+x^5-x^4+x^3-x+1$ and clearly 4 does divide 8. Any help would be appreciated, thank you. If $\sqrt 5$ and $\sqrt{-5}$ were both contained in $\mathbb Q(\zeta)$, then $\mathbb Q(\zeta)$ would also contain their quotient $\sqrt{-5}/\sqrt 5 = i$. But the only roots of unity in $\mathbb Q(\zeta)$ are of the form $\pm \zeta^k$ (the $30$th roots of unity). 
Since 4 does not divide 30, $i \not\in \mathbb Q(\zeta)$.
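The Gauss-sum identity used above can be sanity-checked numerically for $p=5$; the short Python sketch below hard-codes the Legendre symbols mod 5 (the squares mod 5 are 1 and 4) and verifies $S^2 = 5$:

```python
import cmath

# Numeric check of S^2 = ((-1)/5) * 5 = 5 for p = 5.
legendre = {1: 1, 2: -1, 3: -1, 4: 1}          # (a/5) for a = 1..4
zeta5 = cmath.exp(2j * cmath.pi / 5)           # a primitive 5th root of unity
S = sum(legendre[a] * zeta5 ** a for a in range(1, 5))
# S is (numerically) real and S^2 = 5, so sqrt(5) = +/- S
# lies in Q(zeta5), which sits inside Q(zeta15).
assert abs(S ** 2 - 5) < 1e-9
assert abs(S.imag) < 1e-9
```

Of course this is only a floating-point check of the identity, not a proof; the algebraic argument is the theorem quoted in the question.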
https://buboflash.eu/bubo5/show-dao2?d=149630105
Tags #odersky-programming-in-scala-2ed #scala Question When multiple operators of the same precedence appear side by side in an expression, the [...] of the operators determines the way operators are grouped. Answer: associativity
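The idea behind the answer can be shown with a tiny executable example (illustrated here in Python rather than Scala, purely for convenience; the grouping rule itself is language-general):

```python
# Associativity determines grouping among operators of equal precedence.

# '-' is left-associative: 8 - 3 - 2 groups as (8 - 3) - 2, not 8 - (3 - 2).
assert 8 - 3 - 2 == (8 - 3) - 2 == 3
assert 8 - (3 - 2) == 7

# '**' is right-associative: 2 ** 3 ** 2 groups as 2 ** (3 ** 2).
assert 2 ** 3 ** 2 == 2 ** (3 ** 2) == 512
assert (2 ** 3) ** 2 == 64
```

In Scala specifically, Odersky's rule is that an operator's associativity is decided by its last character: operators ending in a colon (such as `::` for list cons) are right-associative, all others left-associative.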
http://mathhelpforum.com/advanced-algebra/184130-representatives-classes-theorem.html
# Math Help - Representatives of classes theorem 1. ## Representatives of classes theorem Suppose {a1, a2,......, am} is a complete set of representatives for Z/mZ Show if gcd(b, m) > 1, then {ba1, ba2, ......, bam} is not a complete set of representatives. So far I have [a]m ≡ [n]m [ba]m ≡ [bn]m ≠ [n]m not sure if this constitutes much of a proof though. 2. ## Re: Representatives of classes theorem Originally Posted by scruz10 Suppose {a1, a2,......, am} is a complete set of representatives for Z/mZ Show if gcd(b, m) > 1, then {ba1, ba2, ......, bam} is not a complete set of representatives. So far I have [a]m ≡ [n]m [ba]m ≡ [bn]m ≠ [n]m not sure if this constitutes much of a proof though. This basically comes down to the fact that a number $0\leq i<m$ generates $\mathbb{Z}/m\mathbb{Z}$ if and only if $\gcd(m, i)=1$. (Hint: use the Euclidean algorithm). Can you see how your result follows from this? 3. ## Re: Representatives of classes theorem That's pretty much what I have to prove. I already proved the first part where gcd(b,m)=1. I just don't know how to prove it is not a full set if it is greater than 1 Assume [ba]m ≡ [bn]m ≡ [n]m bn - n = mx n(b - 1) = mx b - 1 = mx/n b - mx/n = 1 Therefore (b,m) = 1 only thing is I can't tell if x/n is an integer. 4. ## Re: Representatives of classes theorem Originally Posted by scruz10 That's pretty much what I have to prove. I already proved the first part where gcd(b,m)=1. I just don't know how to prove it is not a full set if it is greater than 1 Assume [ba]m ≡ [bn]m ≡ [n]m bn - n = mx n(b - 1) = mx b - 1 = mx/n b - mx/n = 1 Therefore (b,m) = 1 only thing is I can't tell if x/n is an integer. Assume it is a full set, then you have that $ba_i + cm=1$ for some $i$ and some $c\in \mathbb{Z}$ (by the definition of modulus), so... 5. ## Re: Representatives of classes theorem what you want to show is that the mapping $[a_j]\to [ba_j]$ (j = 1,....,m) is not 1-1 if gcd(b,m) ≠ 1. 
suppose that $[0] = [m] = [a_s]$, and let gcd(b,m) = d > 1. since d divides m, there is some 1 ≤ u < m with du = m (we are using the fact that d > 1 here). note that [u] ≠ [m]. we'll use this later. since d divides b, there is some integer v with dv = b. let $[a_t] = [u]$ (we can do this because we have a complete set of representatives, so u is in one of them). clearly, $[ba_s] = [b][a_s] = [b][0] = [0b] = [0]$. (if $a_s$ is a multiple of m, surely $ba_s$ is as well). however, $[ba_t] = [b][a_t] = [b][u] = [bu] = [(dv)u] = [v(du)]$ $= [vm] = [v][m] = [v][0] = [0v] = [0]$. so $[ba_s] = [ba_t]$ but $[a_s] \neq [a_t]$. so since the map $[a_j]\to [ba_j]$ is not 1-1, it cannot be onto (since our set of representatives is finite), so $\{ba_1,ba_2,\dots ,ba_m\}$ is not a complete set of representatives. (i'm not usually so fussy about the brackets, but since the aj are presumably numbers, i use the brackets to indicate the entire equivalence class. hopefully you understand already that multiplication is well-defined on the equivalence classes, that is, that [a][b] = [ab], no matter "which" representatives we choose). 6. ## Re: Representatives of classes theorem Originally Posted by Deveno what you want to show is that the mapping $[a_j]\to [ba_j]$ (j = 1,....,m) is not 1-1 if gcd(b,m) ≠ 1. [...] Seriously awesome....thanks
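Deveno's counting argument can be checked with concrete numbers; the sketch below uses my own example values m = 6 and b = 2 (any pair with gcd(b, m) > 1 works):

```python
from math import gcd

# If gcd(b, m) > 1, the map [a] -> [b*a] on Z/mZ is not one-to-one,
# so {b*a1, ..., b*am} cannot be a complete set of representatives.
m, b = 6, 2                       # gcd(2, 6) = 2 > 1
reps = range(m)                   # 0..5: a complete set of representatives
images = {(b * a) % m for a in reps}
assert gcd(b, m) > 1
assert images == {0, 2, 4}        # only 3 of the 6 classes are hit
assert len(images) < m            # not onto, hence not complete

# Contrast: gcd(5, 6) = 1 gives a bijection on the residue classes.
assert {(5 * a) % m for a in reps} == set(range(m))
```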
http://www.science.gov/topicpages/o/oak+silkworm+antheraea.html
These are representative sample records from Science.gov related to your search topic. For comprehensive and current results, perform a real-time search at Science.gov. 1 PubMed Central Supercritical carbon dioxide (SC-CO2) extraction of oil from oak silkworm pupae was performed in the present research. Response surface methodology (RSM) was applied to optimize the parameters of SC-CO2 extraction, including extraction pressure, temperature, time and CO2 flow rate on the yield of oak silkworm pupal oil (OSPO). The optimal extraction condition for oil yield within the experimental range of the variables researched was at 28.03 MPa, 1.83 h, 35.31 °C and 20.26 L/h as flow rate of CO2. Under this condition, the oil yield was predicted to be 26.18%. The oak silkworm pupal oil contains eight fatty acids, and is rich in unsaturated fatty acids and α-linolenic acid (ALA), accounting for 77.29% and 34.27% in the total oil respectively. PMID:22408458 Pan, Wen-Juan; Liao, Ai-Mei; Zhang, Jian-Guo; Dong, Zeng; Wei, Zhao-Jun 2012-01-01 2 PubMed Central To better understand the molecular mechanism underlying diapause in Antheraea pernyi (A.pernyi), we cloned a novel diapause-associated protein 3 (DAP3) gene from A.pernyi by reverse transcription-polymerase chain reaction (RT-PCR) and studied its biological functions. Sequence analysis revealed that this gene encodes 171 amino acids and has a conserved domain of Copper/Zinc superoxide dismutase (Cu/Zn-SOD). Western blot and qRT-PCR results showed that DAP3 was mainly expressed in the pupal stage, and gradually decreased as diapause developed. DAP3 was also expressed in 1st and 5th instar larvae of A.pernyi. In tissues of 5th instar larvae of A.pernyi, DAP3 was mainly expressed in the epidermis, followed by the head, hemolymph and fat body. 
To identify the SOD activity of DAP3, we constructed a prokaryotic expression vector by inserting the coding region sequence into plasmid pET-28a (+) and obtained the purified rHIS-DAP3 fusion protein by Ni-NTA affinity column. Importantly, we found the SOD activity of DAP3 fusion protein was approximately 0.6674 U/g. To further confirm the SOD activity of DAP3 in vivo, we induced the oxidative stress model of pupae by UV irradiation. The results showed that both the mRNA and protein level of DAP3 significantly increased by UV irradiation. Furthermore, the SOD activity of the total lysate of pupae increased significantly at 10 min post UV irradiation and transiently returned to normal level afterwards. These results suggested that DAP3 might be a novel protein with SOD activity involved in regulation of diapause in A.pernyi. PMID:24613963 Yu, Wei; Shu, Jianhong; Zhang, Yaozhou 2014-01-01 3 PubMed As a protective shell against environmental damage and attack by natural predators, the silkworm cocoon has outstanding mechanical properties. In particular, this multilayer non-woven composite structure can be exceptionally tough to enhance the chance of survival for silkworms while supporting their metabolic activity. Peel, out-of-plane compression and nano-indentation tests and micro-structure analysis were performed on four types of silkworm cocoon walls (domesticated Bombyx mori, semi-domesticated Antheraea assamensis and wild Antheraea pernyi and Antheraea mylitta silkworm cocoons) to understand the structure and mechanical property relationships. The wild silkworm cocoons were shown to be uniquely tough composite structures. The maximum work-of-fracture for the wild cocoons (A. pernyi and A. mylitta) was approximately 1000 J/m², which was almost 10 times the value for the domesticated cocoon (Bombyx mori) and 3–4 times the value for the semi-domesticated cocoon (A. assamensis). 
Calcium oxalate crystals were found to deposit on the outer surfaces of the semi-domesticated and wild cocoons. They did not show influence in enhancing the interlaminar adhesion between cocoon layers but exhibited much higher hardness than the cocoon pelades. PMID:23706202 Zhang, J; Kaur, J; Rajkhowa, R; Li, J L; Liu, X Y; Wang, X G 2013-08-01 4 PubMed Biological materials are hierarchically organized complex composites, which embrace multiple practical functionalities. As an example, the wild silkworm cocoon provides multiple protective functions against environmental and physical hazards, promoting the survival chance of moth pupae that resides inside. In the present investigation, the microstructure and thermal property of the Chinese tussah silkworm (Antheraea pernyi) cocoon in both warm and cold environments under windy conditions have been studied by experimental and numerical methods. A new computational fluid dynamics model has been developed according to the original fibrous structure of the Antheraea pernyi cocoon to simulate the unique heat transfer process through the cocoon wall. The structure of the Antheraea pernyi cocoon wall can promote the disorderness of the interior air, which increases the wind resistance by stopping most of the air flowing into the cocoon. The Antheraea pernyi cocoon is wind-proof due to the mineral crystals deposited on the outer layer surface and its hierarchical structure with low porosity and high tortuosity. The research findings have important implications to enhancing the thermal function of biomimetic protective textiles and clothing. PMID:25280854 Jin, Xing; Zhang, Jin; Gao, Weimin; Li, Jingliang; Wang, Xungai 2014-09-01 5 PubMed The tasar silkworm, Antheraea mylitta Drury, Andhra local ecorace is an exclusive race of Andhra Pradesh. It is on the verge of extinction due to difficulty of acclimatisation at breeding and rearing stages. As an attempt to protect this race, a method of total indoor rearing has been done. 
In this context, the estimation of free amino acids and the excretory products (urea and uric acid) were compared during the fourth and fifth instars of tasar silkworm, reared under outdoor and indoor conditions. The study has revealed that amino acids decreased in the fat body in outdoor and indoor reared larvae in contrast to that in the haemolymph where it has gradually increased from first to third crops. This is an important finding as it reveals that indoor worms seem to adopt proteolytic activity in the haemolymph. Secondly, in the fifth instar the excretory products are more compared to fourth instar in the indoor reared worms. During fifth instar, formation of nitrogenous products lessens as silk synthesis enhances. The present study reveals that decrease in uric acid in fifth instar implies increase in growth rate and silk synthesis in both outdoor and indoor worms. The findings of the present investigation are helpful in the conservation and protection of the A. mylitta, Andhra local ecorace. PMID:19297987 Shamitha, G; Rao, A Purushotham 2008-11-01 6 NSDL National Science Digital Library Adult silkworm moths lay eggs to reproduce. The eggs hatch into silkworm larvae. The larvae spin silk cocoons and use them as they change from larvae to silkworm moths. Silkworm larvae exclusively eat mulberry leaves and their cocoons are used by humans to make silk products such as silk fabric. Olivia Worland (Purdue University;Biological Sciences) 2008-06-03 7 PubMed To understand how the increase in atmospheric CO2 from human activity may affect leaf damage by forest insects, we examined host plant preference and larval performance of a generalist herbivore, Antheraea polyphemus Cram., that consumed foliage developed under ambient or elevated CO2. Larvae were fed leaves from Quercus alba L. and Quercus velutina Lam. grown under ambient CO2 or ambient plus 200 µl/liter CO2 using free air carbon dioxide enrichment (FACE). 
Lower digestibility of foliage, greater protein precipitation capacity in frass, and lower nitrogen concentration of larvae indicate that growth under elevated CO2 reduced the food quality of oak leaves for caterpillars. Consuming leaves of either oak species grown under elevated CO2 slowed the rate of development of A. polyphemus larvae. When given a choice, A. polyphemus larvae preferred Q. velutina leaves grown under ambient CO2; feeding on foliage of this species grown under elevated CO2 led to reduced consumption, slower growth, and greater mortality. Larvae compensated for the lower digestibility of Q. alba leaves grown under elevated CO2 by increasing the efficiency of conversion of ingested food into larval mass. Despite equivalent consumption rates, larvae grew larger when they consumed Q. alba leaves grown under elevated compared with ambient CO2. Reduced consumption, slower growth rates, and increased mortality of insect larvae may explain lower total leaf damage observed previously in plots in this forest exposed to elevated CO2. By subtly altering aspects of leaf chemistry, the ever-increasing concentration of CO2 in the atmosphere will change the trophic dynamics in forest ecosystems. PMID:17540072 Knepp, Rachel G; Hamilton, Jason G; Zangerl, Arthur R; Berenbaum, May R; DeLucia, Evan H 2007-06-01 8 PubMed Central In the present study, the total hydroperoxides, catalase, glutathione-s-transferase, and ascorbic acid contents were determined in different developmental stages of the non-diapause and the diapause generation of the tropical tasar silkworm, Antheraea mylitta Drury (Lepidoptera: Saturniidae). The results showed stage-specific significantly higher levels of total hydroperoxides, catalase, and ascorbic acid contents in the non-diapause as compared to the diapause generation (p < 0.05). However, a significantly enhanced level of glutathione-S-transferase activity was observed in mature 5th instar larvae of the diapause generation (p < 0.05). 
In the case of pupae, significantly higher levels of total hydroperoxides, catalase, and glutathione-s-transferase activity were observed in the non-diapause generation (p < 0.05). These results could be the effect of intensive metabolic transformation that takes place in tissues of the non-diapause generation and causes increased production of reactive oxygen species, such as hydroperoxides. The results suggest that antioxidants play an important role in protecting cells against reactive oxygen species. PMID:24786341 Jena, Karmabeer; Kar, Prasanta K.; Babu, Chittithoti S.; Giri, Shantakar; Singh, Shyam S.; Prasad, Bhagwan C. 2013-01-01 9 PubMed Conventional scaffold fabrication techniques result in narrow pore architectures causing a limited interconnectivity and use of porogens, which affects the bio- or cyto-compatibility. To ameliorate this, cryogels are immensely explored due to their macro-porous nature, ease in fabrication, using ice crystals as porogens, the shape property, easy reproducibility and cost-effective fabrication technique. Cryogels in the present study are prepared from nonmulberry Indian muga silk gland protein fibroin of Antheraea assamensis using two different fabrication temperatures (−20 and −80 °C). Anionic surfactant sodium dodecyl sulfate is used to solubilize fibroin, which in turn facilitates gelation by accelerating the β-sheet formation. Ethanol is employed to stabilize the 3D network and induces bimodal porosity. The gels thus formed demonstrate increased β-sheet content (FTIR) and a considerable effect of pre-freezing temperatures on 3D micro-architectures. The cryogels are capable of absorbing large amounts of water and withstanding mechanical compression without structure deformation. Further, cell impregnated cryogels well support the viability of human hepatocarcinoma cells (live/dead assay). 
The formation of cellular aggregates (confocal laser and scanning electron microscope), derivation in metabolic activity and proliferation rate are obtained in constructs fabricated at different temperatures. In summary, the present work reveals promising insights in the development of a biomimetic functional template for biomedical therapeutics and liver tissue engineering. PMID:24002731 Kundu, Banani; Kundu, S C 2013-10-01 10 E-print Network RESEARCH ARTICLE Open Access Comparative genomics of parasitic silkworm microsporidia. The pébrine disease of domesticated silkworms results in great economic losses in the silkworm industry. Microsporidia infecting silkworms, including the undomesticated silkworm Antheraea pernyi, were sequenced and compared with their distantly related species. Keeling, Patrick 11 PubMed Central In this study we successfully constructed a full-length cDNA library from Chinese oak silkworm, Antheraea pernyi, the most well-known wild silkworm used for silk production and insect food. Total RNA was extracted from a single fresh female pupa at the diapause stage. The titer of the library was 5 × 10⁵ cfu/ml and the proportion of recombinant clones was approximately 95%. Expressed sequence tag (EST) analysis was used to characterize the library. A total of 175 clustered ESTs consisting of 24 contigs and 151 singlets were generated from 250 effective sequences. Of the 175 unigenes, 97 (55.4%) were known genes but only five from A. pernyi, 37 (21.2%) were known ESTs without function annotation, and 41 (23.4%) were novel ESTs. By EST sequencing, a gene coding KK-42-binding protein in A. pernyi (named as ApKK42-BP; GenBank accession no. FJ744151) was identified and characterized. Protein sequence analysis showed that ApKK42-BP was not a membrane protein but an extracellular protein with a signal peptide at position 1-18, and contained two putative conserved domains, abhydro_lipase and abhydrolase_1, suggesting it may be a member of lipase superfamily. 
Expression analysis based on number of ESTs showed that ApKK42-BP was an abundant gene in the period of diapause stage, suggesting it may also be involved in pupa-diapause termination. PMID:19564928 Li, Yu-Ping; Xia, Run-Xi; Wang, Huan; Li, Xi-Sheng; Liu, Yan-Qun; Wei, Zhao-Jun; Lu, Cheng; Xiang, Zhong-Huai 2009-01-01 12 NSDL National Science Digital Library Silkworm moths are the adult form of silkworm larvae. They emerge from the silk cocoons to mate. Mating is their only purpose and they do not eat or drink water. The females will lay hundreds of tiny white eggs. Gerd A.T. Müller (None;) 2002-05-18 13 NSDL National Science Digital Library Silkworm larvae spin silk cocoons to live in while they go through metamorphosis. They change from silkworm larvae into white silk moths. The silk cocoons are valuable to humans and can be made into silk fabric. Roman Neumüller (None;) 2006-07-05 14 NSDL National Science Digital Library Silkworm larvae hatch from eggs. They have 13 segments, split up into the head, thorax, and abdomen regions. The walking legs are on the thorax region and the prolegs are on the abdomen region. The larvae have a false eye on one of the segments to appear larger, spiracles on each segment to breathe through, and spinnerets to spin silk with near the head. Małgorzata Miłaszewska (None;) 2007-08-04 15 PubMed Central Background The insect predator, Arma chinensis, is capable of effectively controlling many pests, such as Colorado potato beetle, cotton bollworm, and mirid bugs. Our previous study demonstrated several life history parameters were diminished for A. chinensis reared on an artificial diet compared to a natural food source like the Chinese oak silk moth pupae. The molecular mechanisms underlying the nutritive impact of the artificial diet on A. chinensis health are unclear. So we utilized transcriptome information to better understand the impact of the artificial diet on A. chinensis at the molecular level.
Methodology/Principal Findings Illumina HiSeq2000 was used to sequence 4.79 and 4.70 Gb of the transcriptome from pupae-fed and artificial diet-fed A. chinensis libraries, respectively, and a de novo transcriptome assembly was performed (Trinity short read assembler). This resulted in 112,029 and 98,724 contigs, clustered into 54,083 and 54,169 unigenes for pupae-fed and diet-fed A. chinensis, respectively. Unigenes from each sample's assembly underwent sequence splicing and redundancy removal to acquire non-redundant unigenes. We obtained 55,189 unigenes of A. chinensis, including 12,046 distinct clusters and 43,143 distinct singletons. Unigene sequences were aligned by BLASTx to nr, Swiss-Prot, KEGG and COG (E-value < 10⁻⁵), and further aligned by BLASTn to nt (E-value < 10⁻⁵), retrieving proteins of highest sequence similarity with the given unigenes along with their protein functional annotations. In total, 22,964, 7,898, 18,069, 15,416, 8,066 and 5,341 unigenes were annotated in nr, nt, Swiss-Prot, KEGG, COG and GO, respectively. We compared gene expression variations and found thousands of genes were differentially expressed between pupae-fed and diet-fed A. chinensis. Conclusions/Significance Our study provides abundant genomic data and offers comprehensive sequence information for studying A. chinensis. Additionally, the physiological roles of the differentially expressed genes enable us to predict effects of some dietary ingredients and subsequently propose formulation improvements to artificial diets. PMID:23593338 Zou, Deyu; Coudron, Thomas A.; Liu, Chenxi; Zhang, Lisheng; Wang, Mengqing; Chen, Hongyin 2013-01-01 16 PubMed Variability is a common feature of natural silk fibres, caused by a range of natural processing conditions.
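The BLAST annotation step described in this abstract (aligning unigenes against reference databases and keeping the best-matching hit below an E-value cutoff of 10⁻⁵) can be sketched as follows. This is a hedged illustration, not the study's pipeline: the field layout assumes standard BLAST tabular output (-outfmt 6), and the sample records and identifiers are made up.

```python
# Minimal sketch: filter BLAST tabular output (-outfmt 6) by an
# E-value cutoff and keep the best (lowest-E-value) hit per query.
# Sample records below are illustrative, not data from the study.

E_VALUE_CUTOFF = 1e-5

def best_hits(blast_lines, cutoff=E_VALUE_CUTOFF):
    """Return {query_id: (subject_id, evalue)} for the lowest-E-value
    hit per query, discarding hits whose E-value exceeds the cutoff."""
    best = {}
    for line in blast_lines:
        if not line.strip():
            continue
        fields = line.split("\t")
        query, subject = fields[0], fields[1]
        evalue = float(fields[10])  # outfmt-6 column 11 is the E-value
        if evalue > cutoff:
            continue                # fails the <1e-5 threshold
        if query not in best or evalue < best[query][1]:
            best[query] = (subject, evalue)
    return best

sample = [
    "unigene001\tsp|P12345\t98.2\t240\t4\t0\t1\t240\t10\t249\t1e-80\t290",
    "unigene001\tsp|Q99999\t80.0\t240\t48\t0\t1\t240\t10\t249\t1e-30\t120",
    "unigene002\tsp|A00001\t40.0\t100\t60\t2\t1\t100\t5\t104\t0.5\t30",
]
hits = best_hits(sample)
```

Here unigene001 keeps its strongest hit and unigene002 is dropped entirely, since its only hit (E-value 0.5) fails the cutoff.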
Better understanding of variability will not only be favourable for explaining the enviable mechanical properties of animal silks but will provide valuable information for the design of advanced artificial and biomimetic silk-like materials. In this work, we have investigated the origin of variability in forcibly reeled Antheraea pernyi silks from different individuals using dynamic mechanical thermal analysis (DMTA) combined with the effect of polar solvent penetration. Quasi-static tensile curves in different media have been tested to show the considerable variability of tensile properties between samples from different silkworms. The DMTA profiles (as a function of temperature or humidity) through the glass transition region of different silks as well as dynamic mechanical properties after high temperature and water annealing are analysed in detail to identify the origin of silk variability in terms of molecular structures and interactions, which indicate that different hydrogen bonded structures exist in the amorphous regions and they are notably different for silks from different individuals. Solubility parameter effects of solvents are quantitatively correlated with the different glass transition values. Furthermore, the overall ordered fraction is shown to be a key parameter to quantify the variability in the different silk fibres, which is consistent with DMTA and FTIR observations. PMID:25030083 Wang, Yu; Guan, Juan; Hawkins, Nick; Porter, David; Shao, Zhengzhong 2014-09-01 17 E-print Network we examined host plant preference and larval performance of a generalist herbivore, Antheraea polyphemus Cram., that consumed foliage developed under ambient or elevated CO2. Larvae PLANT-INSECT INTERACTIONS Foliage of Oaks Grown Under Elevated CO2 Reduces Performance of Antheraea polyphemus (Lepidoptera: Saturniidae) RACHEL G. KNEPP,1 JASON G. HAMILTON,2 ARTHUR R. ZANGERL,3 MAY R DeLucia, Evan H. 18 NSDL National Science Digital Library Adult silkworms lay eggs to reproduce.
Silkworm larvae hatch from these eggs. The larvae constantly eat only one thing: mulberry leaves. The larvae will spin silk cocoons for metamorphosis. The adults mate after emerging from the cocoon and the female will lay many small eggs. Hubert Ludwig (None;) 2004-11-27 19 PubMed Antheraea pernyi (A. pernyi) silk fibroin, which is spun from a wild silkworm, has increasingly attracted interest in the field of tissue engineering. The aim of this study was to investigate the nucleation of hydroxyapatite (HAp) on A. pernyi fibroin film. Von Kossa staining proved that A. pernyi fibroin had Ca binding activity. The A. pernyi fibroin film was mineralized with HAp crystals by alternative soaking in calcium and phosphate solutions. Spherical crystals were nucleated on the A. pernyi fibroin film according to scanning electron microscope imaging results. The FT-IR and X-ray diffraction spectra confirmed that these spherical crystals were HAp. The results of in vitro cell culture using MG-63 cells demonstrated that the mineralized A. pernyi fibroin film showed excellent cytocompatibility and sound improvement of MG-63 cell viability. PMID:24211958 Yang, Mingying; Shuai, Yajun; Zhou, Guanshan; Mandal, Namita; Zhu, Liangjun 2014-01-01 20 Commercial silkworm silk is presumed to be much weaker and less extensible than spider dragline silk, which has been hailed as a 'super-fibre'. But we show here that the mechanical properties of silkworm silks can approach those of spider dragline silk when reeled under controlled conditions.
We suggest that silkworms might be able to produce threads that compare well with Zhengzhong Shao; Fritz Vollrath 2002-01-01 21 PubMed Binding properties of six heterologously expressed pheromone-binding proteins (PBPs) identified in the silkmoths Antheraea polyphemus and Antheraea pernyi were studied using tritium-labelled pheromone components, (E,Z)-6,11-hexadecadienyl acetate ((3)H-Ac1) and (E,Z)-6,11-hexadecadienal ((3)H-Ald), common to both species. In addition, a known ligand of PBP and inhibitor of pheromone receptor cells, the tritium-labelled esterase inhibitor decyl-thio-1,1,1-trifluoropropanone ((3)H-DTFP), was tested. The binding of ligands was measured after native gel electrophoresis and cutting gel slices. In both species, PBP1 and PBP3 showed binding of (3)H-Ac1. In competition experiments with (3)H-Ac1 and the third unlabelled pheromone component, (E,Z)-4,9-tetradecadienyl acetate (Ac2), the PBP1 showed preferential binding of Ac1, whereas PBP3 preferentially bound Ac2. The PBP2 of both species bound (3)H-Ald only. All of the six PBPs strongly bound (3)H-DTFP. Among unlabelled pheromone derivatives, alcohols were revealed to be the best competitors for (3)H-Ac1 and (3)H-Ald bound to PBPs. No pH influence was found for (3)H-Ac1 binding to, or its release from, the PBP3 of A. polyphemus and A. pernyi between pH 4.0 and pH 7.5. The data indicate binding preference of each of the three PBP-subtypes (1-3) for a specific pheromone component and support the idea that PBPs contribute to odour discrimination, although to a smaller extent than receptor activation. PMID:12879348 Maida, R; Ziegelberger, G; Kaissling, K-E 2003-09-01 22 PubMed Freestanding membranes created from Bombyx mori silk fibroin (BMSF) offer a potential vehicle for corneal cell transplantation since they are transparent and support the growth of human corneal epithelial (HCE) cells.
Fibroin derived from the wild silkworm Antheraea pernyi (APSF) might provide a superior material by virtue of containing putative cell-attachment sites that are absent from BMSF. Thus we have investigated the feasibility of producing transparent, freestanding membranes from APSF and have analysed the behaviour of HCE cells on this material. No significant differences in cell numbers or phenotype were observed in short term HCE cell cultures established on either fibroin. Production of transparent freestanding APSF membranes, however, proved to be problematic as cast solutions of APSF were more prone to becoming opaque, displayed significantly lower permeability and were more brittle than BMSF-membranes. Cultures of HCE cells established on either membrane developed a normal stratified morphology with cytokeratin pair 3/12 being immuno-localized to the superficial layers. We conclude that while it is feasible to produce transparent freestanding membranes from APSF, the technical difficulties associated with this biomaterial, along with an absence of enhanced cell growth, currently favour the continued development of BMSF as a preferred vehicle for corneal cell transplantation. Nevertheless, it remains possible that refinement of techniques for processing APSF might yet lead to improvements in the handling properties and performance of this material. PMID:24565906 Hogerheyde, Thomas A; Suzuki, Shuko; Stephenson, Sally A; Richardson, Neil A; Chirila, Traian V; Harkin, Damien G; Bray, Laura J 2014-04-01 23 NASA Astrophysics Data System (ADS) Commercial silkworm silk is presumed to be much weaker and less extensible than spider dragline silk, which has been hailed as a 'super-fibre'. But we show here that the mechanical properties of silkworm silks can approach those of spider dragline silk when reeled under controlled conditions. 
We suggest that silkworms might be able to produce threads that compare well with spider silk by changing their spinning habits, rather than by having their silk genes altered. Shao, Zhengzhong; Vollrath, Fritz 2002-08-01 24 PubMed The silkworm, Bombyx mori, played an important role in the old Silk Road that connected ancient Asia and Europe. However, to date, there have been few studies of the origins and domestication of this species using molecular methods. In this study, DNA sequences of mitochondrial and nuclear loci were used to infer the phylogeny and evolutionary history of the domesticated silkworm and its relatives. All of the phylogenetic analyses indicated a close relationship between the domesticated silkworm and the Chinese wild silkworm. Domestication was estimated to have occurred about 4100 years ago (ya), and the radiation of the different geographic strains of B. mori about 2000 ya. The Chinese wild silkworm and the Japanese wild silkworm split about 23600 ya. These estimates are in good agreement with the fossil evidence and historical records. In addition, we show that the domesticated silkworm experienced a population expansion around 1000 ya. The divergence times and the population dynamics of silkworms presented in this study will be useful for studies of lepidopteran phylogenetics, in the genetic analysis of domestic animals, and for understanding the spread of human civilizations. PMID:22744178 Sun, Wei; Yu, Hongsong; Shen, Yihong; Banno, Yutaka; Xiang, Zhonghuai; Zhang, Ze 2012-06-01 25 PubMed Antheraea pernyi silk fibroin (SF) hydrolysate was characterized using UV-VIS spectrometer, amino acid composition and heavy metal contents to explore its potential sources for food or cosmetic additives. The hydrolyzed A. pernyi SF was separated into two parts: (a) SFA, alanine-rich fraction and (b) SFB, tyrosine-rich fraction. SFB exhibited strong absorption peaks at 210 and 280 nm due to the presence of tyrosine.
Heavy metal analysis showed that arsenic and mercury were not detected. Other heavy metals, including lead and cadmium, were recorded only in trace amounts. Therefore, A. pernyi SF hydrolysate could be safely used as sources of food, cosmetic and pharmaceuticals. PMID:20937302 Lee, Kwang-gill; Kweon, HaeYong; Yeo, Joo-hong; Woo, SoonOk; Han, SangMi; Kim, Jong-Ho 2011-01-01 26 NASA Astrophysics Data System (ADS) The possibility of silkworm (Bombyx mori) protein as a base material of biomimetic actuator was investigated in this paper. Silkworm films were prepared from high concentrations of regenerated fibroin in aqueous solution. Films with thickness of about 100 μm were prepared for coating electrodes. The cast silk films were coated by very thin gold electrode on both sides of the film. Tensile test of cast film showed bi-modal trend, which is typical stress-strain relation of polymeric film. As a test of a possible biomimetic actuator, silkworm film actuator provides bending deformations according to the magnitude and frequency of the applied electric field. Although the present bending deformation of silkworm film actuator is smaller than that of Electro-Active Paper actuator, it provides the possibility of biomimetic actuator. Jin, Hyoung-Joon; Myung, Seung Jun; Kim, Heung Soo; Jung, Woochul; Kim, Jaehwan 2006-03-01 27 IN 1924, Watanabe1 postulated that silkworm eggs overwinter as a result of receiving an 'inhibitory' substance from the mother moths, but the existence of the substance has not hitherto been substantiated experimentally. On the other hand, I have found2 that the suboesophageal ganglion of the silkworm is responsible for the hibernation of silkworm eggs; but the organ that furnishes the Kinsaku Hasegawa 1957-01-01 28 E-print Network Snmp-1, a Novel Membrane Protein of Olfactory Neurons of the Silk Moth Antheraea polyphemus of the wild silk moth Antheraea polyphemus.
We have purified and cloned a prominent 67-kDa protein which we of olfactory neuron receptor membranes of the wild silk moth Antheraea polyphemus. The morphology of the A Vogt, Richard G. 29 PubMed Wolbachia naturally infects a wide variety of arthropods, where it plays important roles in host reproduction. It was previously reported that Wolbachia did not infect silkworm. By means of PCR and sequencing we found in this study that Wolbachia is indeed present in silkworm. Phylogenetic analysis indicates that Wolbachia infection in silkworm may have occurred via transfer from parasitic wasps. Furthermore, Southern blotting results suggest a lateral transfer of the wsp gene into the genomes of some wild silkworms. By antibiotic treatments, we found that tetracycline and ciprofloxacin can eliminate Wolbachia in the silkworm and Wolbachia is important to ovary development of silkworm. These results provide clues towards a more comprehensive understanding of the interaction between Wolbachia and silkworm and possibly other lepidopteran insects. PMID:25249781 Zha, Xingfu; Zhang, Wenji; Zhou, Chunyan; Zhang, Liying; Xiang, Zhonghuai; Xia, Qingyou 2014-09-01 30 PubMed Central Background In contrast to wild species, which have typically evolved phenotypes over long periods of natural selection, domesticates rapidly gained human-preferred agronomic traits in a relatively short-time frame via artificial selection. Under domesticated conditions, many traits can be observed that cannot only be due to environmental alteration. In the case of silkworms, aside from genetic divergence, whether epigenetic divergence played a role in domestication is an unanswered question. The silkworm is still an enigma in that it has two DNA methyltransferases (DNMT1 and DNMT2) but their functionality is unknown. Even in particular the functionality of the widely distributed DNMT1 remains unknown in insects in general. 
Results By embryonic RNA interference, we reveal that knockdown of silkworm Dnmt1 caused decreased hatchability, providing the first direct experimental evidence of functional significance of insect Dnmt1. In light of this fact, and given that DNA methylation is correlated with gene expression in silkworms and that some agronomic traits in domesticated organisms are not stable, we comprehensively compare silk gland methylomes of 3 domesticated (Bombyx mori) and 4 wild (Bombyx mandarina) silkworms to identify differentially methylated genes between the two. We observed 2-fold more differentially methylated cytosines (mCs) in domesticated silkworms as compared to their wild counterparts, suggesting a trend of increasing DNA methylation during domestication. Further study of more domesticated and wild silkworms narrowed down the domesticates' epimutations, and we were able to identify a number of differential genes. One such gene showing demethylation in domesticates correspondingly displays lower gene expression, and more interestingly, has experienced a selective sweep. A methylation-increased gene seems to result in higher expression in domesticates and the function of its Drosophila homolog was previously found to be essential for cell volume regulation, indicating a possible correlation with the enlargement of silk glands in domesticated silkworms. Conclusions Our results imply epigenetic influences at work during domestication, which gives insight into long-standing historical controversies regarding acquired inheritance. PMID:24059350 2013-01-01 31 SciTech Connect The basic arrangement is shown for a combination silkworm breeding house and solar hothouse with subsoil irrigation and accumulation of heat; it employs a semicylindrical film covering. The process of accumulation of solar heat in the subsoil pebble stores, in water-heater banks, and in the soil is described. Vardiashvili, A.B.; Muradov, M.; Kim, V.D.
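The methylome comparison described in this abstract (identifying differentially methylated cytosines between domesticated and wild silkworms) can be illustrated with a minimal sketch. The position keys, methylation values, and the 0.3 difference threshold below are assumptions for illustration only, not the study's actual pipeline or parameters.

```python
# Hedged sketch: given per-position methylation levels (0.0-1.0) for two
# groups, report positions whose level differs by at least `min_diff`,
# labelled by the direction of change in group B relative to group A.

def differential_mcs(meth_a, meth_b, min_diff=0.3):
    """Return {position: "hyper"/"hypo"} for cytosines whose methylation
    level differs by at least `min_diff` between the two groups."""
    diffs = {}
    for pos in meth_a.keys() & meth_b.keys():  # only positions covered in both
        delta = meth_b[pos] - meth_a[pos]
        if abs(delta) >= min_diff:
            diffs[pos] = "hyper" if delta > 0 else "hypo"
    return diffs

# Illustrative (made-up) methylation calls keyed by (chromosome, position)
wild = {("chr1", 100): 0.10, ("chr1", 205): 0.80, ("chr2", 50): 0.50}
domesticated = {("chr1", 100): 0.70, ("chr1", 205): 0.75, ("chr2", 50): 0.10}
dm = differential_mcs(wild, domesticated)
```

With these toy values, ("chr1", 100) comes out hypermethylated in the domesticated group, ("chr2", 50) hypomethylated, and ("chr1", 205) is excluded because its 0.05 difference falls below the threshold.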
1980-01-01 32 PubMed Acoustic signals produced by caterpillars have been documented for over 100 years, but in the majority of cases their significance is unknown. This study is the first to experimentally examine the phenomenon of audible sound production in larval Lepidoptera, focusing on a common silkmoth caterpillar, Antheraea polyphemus (Saturniidae). Larvae produce airborne sounds, resembling 'clicks', with their mandibles. Larvae typically signal multiple times in quick succession, producing trains that last over 1 min and include 50-55 clicks. Individual clicks within a train are on average 24.7 ms in duration, often consisting of multiple components. Clicks are audible in a quiet room, measuring 58.1-78.8 dB peSPL at 10 cm. They exhibit a broadband frequency that extends into the ultrasound spectrum, with most energy between 8 and 18 kHz. Our hypothesis that clicks function as acoustic aposematic signals was supported by several lines of evidence. Experiments with forceps and domestic chicks correlated sound production with attack, and an increase in attack rate was positively correlated with the number of signals produced. In addition, sound production typically preceded or accompanied defensive regurgitation. Bioassays with invertebrates (ants) and vertebrates (mice) revealed that the regurgitant is deterrent to would-be predators. Comparative evidence revealed that other Bombycoidea species, including Actias luna (Saturniidae) and Manduca sexta (Sphingidae), also produce airborne sounds upon attack, and that these sounds precede regurgitation. The prevalence and adaptive significance of warning sounds in caterpillars is discussed. PMID:17337712 Brown, Sarah G; Boettner, George H; Yack, Jayne E 2007-03-01 33 The Silkworm Knowledgebase (SilkDB) is a web-based repository for the curation, integration and study of silkworm genetic and genomic data.
With the recent accomplishment of a 6X draft genome sequence of the domestic silkworm (Bombyx mori), SilkDB provides an integrated representation of the large-scale, genome-wide sequence assembly, cDNAs, clusters of expressed sequence tags (ESTs), transposable elements (TEs), mutants, single Jing Wang; Qingyou Xia; Ximiao He; Mingtao Dai; Jue Ruan; Jie Chen; Guo Yu; Haifeng Yuan; Yafeng Hu; Ruiqiang Li; Tao Feng; Chen Ye; Cheng Lu; Jun Wang; Songgang Li; Gane Ka-shu Wong; Huanming Yang; Jian Wang; Zhonghuai Xiang; Zeyang Zhou; Jun Yu 2005-01-01 34 PubMed Central A tropical climate prevails in most of the sericultural areas in India, where temperature increases during the summer lead to adverse effects on temperate bivoltine silkworm rearing and cause crop losses. Screening for thermotolerance in the silkworm, Bombyx mori L. (Lepidoptera: Bombycidae) is an essential prerequisite for the development of thermotolerant breeds/hybrids. In the current study, the aim was to identify potential bivoltine silkworm strains specific for tolerance to high temperature. The third day of fifth stage silkworm larvae of bivoltine strains were subjected to high temperature of 36 ± 1 °C with RH of 50 ± 5% for six hours (10:00-16:00) every day until spinning for three consecutive generations. Highly significant differences were found among all genetic traits of bivoltine silkworm strains in the treated groups. Three groups of silkworm resulted including susceptible, moderately tolerant, and tolerant by utilizing pupation rate or survival rate with thermal stress as the index for thermotolerance. Furthermore, based on the overall silkworm rearing performance of nine quantitative genetic traits such as larval weight, cocoon yield by number and weight, pupation, single cocoon and shell weight, shell ratio, filament length and denier, three bivoltine silkworm strains, BD2-S, SOF-BR and BO2 were developed as having the potential for thermotolerance.
The data from the present study enhance knowledge for the development of thermotolerant silkworm breeds/hybrids and their effective commercial utilization in the sericulture industry. PMID:22225406 Kumari, Savarapu Sugnana; Subbarao, Sure Venkata; Misra, Sunil; Murty, Upadyayula Suryanarayana 2011-01-01 35 PubMed By feeding the silkworms with the nano Fe3O4 powder together with mulberry leaves, we directly obtained silkworm spun pristine magnetic silk fiber, MSF. Compared with the normal SF, this MSF not only has the expected magnetic properties, but also has enhanced thermal stability and mechanical properties, e.g. stress and strain. PMID:24269584 Wang, Jun-Ting; Li, Lu-Lu; Feng, Lei; Li, Jin-Fan; Jiang, Lin-Hai; Shen, Qing 2014-02-01 36 NASA Astrophysics Data System (ADS) Silkworm provides an ideal model system for study of calcium oxalate crystallization in kidney-like organs, called Malpighian tubules. During their growth and development, silkworm larvae accumulate massive amounts of calcium oxalate crystals in their Malpighian tubules with no apparent harm to the organism. This manuscript reports studies of crystal structure in the tubules along with analyses identifying molecular constituents of tubule exudate. Wyman, Aaron J.; Webb, Mary Alice 2007-04-01 37 A new method for localizing odour sources by mimicking the behaviour of silkworm moths is proposed. A male silkworm moth is able to localize its female counterpart by tracking airborne sexual pheromone. Through the observation of this behaviour, we have confirmed that wing vibrations are effective in enhancing the directivity of the odour stimulus. An artificial system with this mechanism H. Ishida; K. Hayashi; M. Takakusaki; T. Nakamoto; T. Moriizumi; R. Kanzaki 1995-01-01 38 The silkworm (Bombyx mori L.) is a lepidopteran insect with a long history of significant agricultural value.
We have constructed the first amplified fragment length polymorphism (AFLP) genetic linkage map of the silkworm B. mori at a LOD score of 2.5. The mapping AFLP markers were genotyped in 47 progeny from a backcross population of the cross no. 782 3 Yuan-De Tan; Chunling Wan; Yufang Zhu; Chen Lu; Zhonghuai Xiang; Hong-Wen Deng 39 The domesticated silkworm, Bombyx mori serves as an ideal representative of lepidopteran species for a variety of scientific studies. As a result, databases have been created to organize information pertaining to the silkworm genome that is subject to constant updating. Of these, four main databases are important for storing nucleotide information in the form of genomic data, ESTs and microsatellites. Nicole Koshy; Kangayam M. Ponnuvel; Randhir K. Sinha; S. M. H. Qadri 40 NASA Astrophysics Data System (ADS) In order to investigate the possibility of utilizing silkworms for space agriculture, rearing of silkworms was examined under hypobaric and hypoxia conditions. In terms of structural mechanics, the lower inner pressure of a Martian greenhouse has the advantage of reducing requirements on the physical properties of the mechanical members of the pressurized structure. The main objective of this study is to know the influence of lower total pressure and hypoxia conditions on the silkworm. Silkworms were reared under the following four hypobaric and hypoxia conditions: 10kPa pure oxygen, 20kPa pure oxygen, 10kPa oxygen and 10kPa nitrogen, and 10kPa oxygen and 90kPa nitrogen. After rearing them to the pupa stage, growth of silkworms was found to be poor under all hypobaric hypoxia conditions compared to those grown under the normal atmospheric condition, the control group. The growth under a total pressure of 20kPa was slightly faster. Hashimoto, Hirofumi; Nakayama, Shin; Yamashita, Masamichi; Space Agriculture Task Force, J. 41 Summary Three virus-like particles have been isolated from diseased larvae of Antheraea eucalypti.
Serological tests established that one of them was indistinguishable from cricket paralysis virus (CrPV). CrPV isolated from crickets and from Antheraea were cross-infectious, and crickets could acquire lethal doses of the virus by feeding on infected Antheraea larvae. In addition to two species of Teleogryllus, three other Carl Reinganum 1975-01-01 42 PubMed Females of the sibling silkmoth species Antheraea polyphemus and A. pernyi use the same three sex pheromone components in different ratios to attract conspecific males. Accordingly, the sensory hairs on the antennae of males contain three receptor cells sensitive to each of the pheromone components. In agreement with the number of pheromones used, three different pheromone-binding proteins (PBPs) could be identified in pheromone-sensitive hairs of both species by combining biochemical and molecular cloning techniques. MALDI-TOF MS of sensillum lymph droplets from pheromone-sensitive sensilla trichodea of male A. polyphemus revealed the presence of three major peaks with m/z of 15702, 15752 and 15780 and two minor peaks of m/z 15963 and 15983. In Western blots with four antisera raised against different silkmoth odorant-binding proteins, immunoreactivity was found only with an anti-(Apol PBP) serum. Free-flow IEF, ion-exchange chromatography and Western blot analyses revealed at least three anti-(Apol PBP) immunoreactive proteins with pI values between 4.4 and 4.7. N-Terminal sequencing of these three proteins revealed two proteins (Apol PBP1a and Apol PBP1b) identical in the first 49 amino acids to the already known PBP (Apol PBP1) [Raming, K. , Krieger, J. & Breer, H. (1989) FEBS Lett. 256, 2215-2218] and a new PBP having only 57% identity with this amino-acid region. Screening of antennal cDNA libraries with an oligonucleotide probe corresponding to the N-terminal end of the new A. polyphemus PBP, led to the discovery of full length clones encoding this protein in A. polyphemus (Apol PBP3) and in A. 
pernyi (Aper PBP3). By screening the antennal cDNA library of A. polyphemus with a digoxigenin-labelled A. pernyi PBP2 cDNA [Krieger, J., Raming, K. & Breer, H. (1991) Biochim. Biophys. Acta 1088, 277-284] a homologous PBP (Apol PBP2) was cloned. Binding studies with the two main pheromone components of A. polyphemus and A. pernyi, the (E,Z)-6, 11-hexadecadienyl acetate (AC1) and the (E,Z)-6,11-hexadecadienal (ALD), revealed that in A. polyphemus both Apol PBP1a and the new Apol PBP3 bound the 3H-labelled acetate, whereas no binding of the 3H-labelled aldehyde was found. In A. pernyi two PBPs from sensory hair homogenates showed binding affinity for the AC1 (Aper PBP1) and the ALD (Aper PBP2), respectively. PMID:10806387 Maida, R; Krieger, J; Gebauer, T; Lange, U; Ziegelberger, G 2000-05-01 43 PubMed Central This study was conducted to confirm the possible use of female Yangwonjam as a host for synnemata production of Isaria tenuipes in eight local areas in Korea. Silkworm pupation rate, infection rate and synnemata characteristics of I. tenuipes were examined. Normal silkworms had a higher pupation rate than silkworms inoculated with I. tenuipes. The pupae survival percentage of normal silkworm in cocoons was 92.5~97.6%, whereas it ranged from 91.1~95.6% in silkworms sprayed with I. tenuipes. Female Yangwonjam showed the highest survival percentage at 97.6% among the silkworm varieties tested. I. tenuipes infection rate of larvae of 5th instar newly-exuviated silkworms was 89.2~90.7% in the spring rearing season and 98.2~99.3% in the autumn rearing season. Synnemata production of I. tenuipes was excellent in female Yangwonjam with an incidence rate of 98.0% followed by male Yangwonjam (94.1%) and Baegokjam (93.3%) in the spring rearing season. Synnemata living weight ranged from 1.44~0.94 g in the spring rearing season. The female Yangwonjam had the heaviest synnemata weight (1.44 g) in the spring rearing season. The synnemata of I. 
tenuipes produced on pupae were white or milky-white in color, and were similar in shape and color to wild synnemata collected in Korea. PMID:22783097 Ji, Sang-Duk; Sung, Gyoo-Byung; Kang, Pil-Don; Kim, Kee-Young; Choi, Yong-Soo; Kim, Nam-Suk; Woo, Soon-Ok; Han, Sang-Mi; Ha, Nam-Gyu 2011-01-01 45 PubMed Female Attacus atlas respond electrophysiologically to both of the Antheraea polyphemus pheromone components (E,Z)-6,11-hexadecadienyl acetate and (E,Z)-6,11-hexadecadienal.
Moreover, they possess a pheromone-binding protein (PBP) and general odorant-binding proteins (GOBPs), as well as a pheromone-degrading sensillar esterase and aldehyde oxidase enzymes. They show no electroantennogram responses to their own gland extract. In contrast, female A. polyphemus do not respond to their own or to A. atlas pheromone. Male A. atlas do not detect any of the A. polyphemus compounds but only the conspecific female gland extracts. Both male A. atlas and female A. polyphemus possess PBP and GOBP but lack the pheromone-degrading esterases of male Antheraea. The results indicate that the two species use quite distinct classes of chemicals as pheromones. In spite of this, the N-terminal amino acid sequences of the PBPs show homology of 68%. PMID:11124211 Maida, R; Ziesmann, J 2001-01-01 46 A method to detect peptidoglycan and (1→3)-β-D-glucan with silkworm larvae plasma (SLP) derived from the hemolymph of the silkworm, Bombyx mori, was developed. SLP contains all of the factors of the pro-phenol oxidase cascade, an important self-defense mechanism of insects. Peptidoglycan or (1→3)-β-D-glucan initiates the cascade, in which pro-phenol oxidase is finally activated to phenol oxidase. Masakazu Tsuchiya; Nobuo Asahi; Fukiko Suzuoki; Masaaki Ashida; Shuji Matsuura 1996-01-01 47 PubMed Central Background Gene flow plays an important role in the domestication history of domesticated species. However, little is known about the demographic history of the domesticated silkworm involving gene flow with its wild relative. Results In this study, four model-based evolutionary scenarios to describe the demographic history of B. mori were hypothesized. Using the Approximate Bayesian Computation method and DNA sequence data from 29 nuclear loci, we found that the gene-flow-at-bottleneck model is the most likely scenario for silkworm domestication.
The starting time of silkworm domestication was estimated to be approximately 7,500 years ago; the time of domestication termination was 3,984 years ago. Using coalescent simulation analysis, we also found that bi-directional gene flow occurred during silkworm domestication. Conclusions Estimates of silkworm domestication time are nearly consistent with the archeological evidence and our previous results. Importantly, we found that bi-directional gene flow might occur during silkworm domestication. Our findings add a dimension to highlight the important role of gene flow in domestication of crops and animals. PMID:25123546 2014-01-01 48 SciTech Connect The effects of fluorides on mulberry and silkworm were investigated. The results showed that polluted mulberry leaves which contain more than 30 parts per million fluorides (dry wt.) may induce acute damage to silkworm. 6 tables. Wang Chia-hsi; Qian Da-fu; Li Zheng-fang; Gao Xu-ping 1980-01-01 49 E-print Network LETTER Larval Legs of Mulberry Silkworm Bombyx mori Are Prototypes for the Adult Legs Amit Singh,1; silkworm; limb development; lepidoptera; imaginal disc; drosophila INTRODUCTION During evolution there has during pupal metamorphosis to give rise to adult derivatives (Cohen, 1993). The mulberry silkworm Bombyx Singh, Amit 50 E-print Network Study of Protein Conformation and Orientation in Silkworm and Spider Silk Fibers Using Raman mori and Samia cynthia ricini silkworms, and from the spider Nephila edulis. It is shown that.19 ( 0.02, respectively, even though the two types of silkworm fibroins strongly differ in their primary Pézolet, Michel 51 E-print Network Targeted mutagenesis in the silkworm Bombyx mori using zinc finger nuclease mRNA injection Yoko insects. Yet many methods remain to be adapted to non-drosophilid species. The silkworm, B.
mori, has established for silkworm, including stable transgenesis of the germline (Tamura et al., 2000) targeted gene Žurovec, Michal 52 PubMed Central Background Silkworm is the basis of the sericultural industry and the model organism in insect genetics study. Mapping quantitative trait loci (QTLs) underlying economically important traits of silkworm is of high significance for promoting silkworm molecular breeding and advancing our knowledge on the genetic architecture of the Lepidoptera. Yet, the currently used mapping methods are not well suited to silkworm, because they ignore the recombination difference in meiosis between the two sexes. Results A mixed linear model including QTL main effects, epistatic effects, and QTL × sex interaction effects was proposed for mapping QTLs in an F2 population of silkworm. The number and positions of QTLs were determined by F-test and model selection. The Markov chain Monte Carlo (MCMC) algorithm was employed to estimate and test genetic effects of QTLs and QTL × sex interaction effects. The effectiveness of the model and statistical method was validated by a series of simulations. The results indicate that when markers are distributed sparsely on chromosomes, our method will substantially improve estimation accuracy as compared to the normal chiasmate F2 model. We also found that a sample size of hundreds was sufficiently large to unbiasedly estimate all the four types of epistases (i.e., additive-additive, additive-dominance, dominance-additive, and dominance-dominance) when the paired QTLs reside on different chromosomes in silkworm. Conclusion The proposed method could accurately estimate not only the additive, dominance and digenic epistatic effects but also their interaction effects with sex, correcting the potential bias and precision loss in the current QTL mapping practice of silkworm and thus representing an important addition to the arsenal of QTL mapping tools.
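The abstract above describes MCMC estimation of QTL effects in an F2 population. As a purely illustrative toy (not the study's mixed linear model, and with simulated data rather than real silkworm phenotypes), a random-walk Metropolis sampler recovering a single additive QTL effect might look like this:

```python
import math
import random
import statistics

random.seed(42)

# Toy F2 population: genotypes coded -1/0/+1 for the two homozygotes and
# the heterozygote (1:2:1 ratio); phenotype = additive effect * genotype
# plus Gaussian environmental noise.
TRUE_ADDITIVE = 2.0
N = 400
genotypes = [random.choice([-1, 0, 0, 1]) for _ in range(N)]
phenotypes = [TRUE_ADDITIVE * g + random.gauss(0.0, 1.0) for g in genotypes]

def log_likelihood(a):
    """Gaussian log-likelihood of the phenotypes given additive effect a."""
    return -0.5 * sum((y - a * g) ** 2 for g, y in zip(genotypes, phenotypes))

def metropolis(n_iter=5000, step=0.1, burn_in=1000):
    """Random-walk Metropolis sampler for the additive effect (flat prior)."""
    a = 0.0
    ll = log_likelihood(a)
    samples = []
    for _ in range(n_iter):
        proposal = a + random.gauss(0.0, step)
        ll_prop = log_likelihood(proposal)
        # Always accept uphill moves; accept downhill with prob exp(delta).
        if ll_prop >= ll or random.random() < math.exp(ll_prop - ll):
            a, ll = proposal, ll_prop
        samples.append(a)
    return samples[burn_in:]

posterior_mean = statistics.mean(metropolis())
print(f"posterior mean additive effect: {posterior_mean:.2f}")
```

The posterior mean should land close to the simulated effect of 2.0; the real method additionally models dominance, epistasis, and sex-interaction terms.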
PMID:21276233 2011-01-01 53 PubMed The mulberry-silkworm ecosystem is one of the important agro-ecosystems in China. Based on the principles and methods of emergy analysis, this paper studied the interior structure of the mulberry-silkworm ecosystem and its relationship with the exterior environment and economy. Some emergy indices for this ecosystem were quantitatively calculated, and compared with those of the agro-ecosystem in China. The results showed that the emergy investment ratio, emergy yield ratio, environmental loading ratio and emergy sustainability index were 3.78, 4.68, 0.18 and 26.0, respectively, suggesting the low environmental pressure and good ecological benefit of the mulberry-silkworm ecosystem in China. High technology was required to further decrease the labor force input and enhance the comprehensive utilization of sericultural resources. PMID:16706044 Chen, Mingang; Jin, Peihua; Huang, Lingxia; Lu, Xingmeng 2006-02-01 54 PubMed The imaginal antenna of the male silkmoth Antheraea polyphemus is a featherlike structure; its flagellum consists of about 30 stem segments each giving off two pairs of side branches. The antenna develops during the pupal stage (lasting in total about 21 days) from a leaf-shaped anlage by incisions proceeding from the periphery towards the prospective antennal stem. Primary incisions, starting about 3 days after apolysis, form double branches, which are then split into single branches by parallel running secondary incisions. The initial pattern of tracheae and peripheral nerves is completely rearranged during these morphogenetic processes which are finished 9-10 days after apolysis. In Antheraea the dorsal and ventral epithelial monolayers of the antennal anlage are successively subdivided during development into a pattern of repetitive epithelial zones. Within the first day after apolysis alternating stripes of sensillogenic and non-sensillogenic epithelium are differentiating.
Then the latter are further subdivided, and at last four different stripelike zones (I-IV) can be discriminated. Long basal protrusions of the epidermal cells ('epidermal feet'), and most probably haemocytes, seem to be involved in the reconstruction of the epithelium: both show characteristic arrangements within the antennal anlage during successive developmental stages. PMID:18621244 Steiner, C; Keil, T A 1993-06-01 55 PubMed This study was conducted to select a silkworm variety suitable for synnemata production of Isaria tenuipes. Four kinds of the mulberry silkworm varieties, Bombyx mori, were hybridized using a Japanese parental line and a Chinese parental line, and used to test for synnemata formation in I. tenuipes. The larval period of normal silkworms was 22 hr longer than that of silkworms inoculated with this fungus. Among the silkworm varieties tested, Hachojam had the shortest larval period with 23.02 days. The non-cocooning silkworm had a shorter larval period than the cocoon-producing silkworms. The pupation rate of normal silkworms was about 9% higher than that of silkworms sprayed with I. tenuipes. Hachojam had the highest infection rate at 99.8%, but no significant difference was observed for the infection rate by silkworm variety. The production of synnemata was the best in JS171 × CS188 with an incidence rate of 99.3%, followed by Hachojam, and Chugangjam. The synnemata produced from Hachojam were the heaviest and were white or milky-white in color. PMID:23956651 Kang, Pil-Don; Sung, Gyoo-Byung; Kim, Kee-Young; Kim, Mi-Ja; Hong, In-Pyo; Ha, Nam-Gyu 2010-09-01 56 E-print Network populace, and is prominent in literature, often as a symbol of strength or character. There are many susceptibility of red oaks to both oak wilt and sudden oak death (SOD), these differences in wood anatomy may Harrington, Thomas C.
57 NASA Astrophysics Data System (ADS) A simple subunit of the bioregenerative life support system (BLSS) consisting of the ground-controlled mulberry (Morus alba L.) and the silkworms was set up on the ground. The mulberry tree could provide nutritious mulberry fruits for astronauts and its leaves as the main feedstuff for the silkworms until their third instar. Astronauts utilized curled lettuce (Lactuca sativa L.) stem as vegetables and the silkworms beyond the third instar could be fed on 65% of the inedible leaves of the lettuce. About 71.4% protein was detected in the silkworm larval powder; thus, 105 silkworms could satisfy the requirement of one person per day. In addition, 18 kinds of amino acids were determined in the obtained silkworm powder. Moreover, the R-criterion was suggested to estimate and optimize the animal feeding facilities. The scenario of treating the wastes is also proposed in this paper. Our results may be valuable for the establishment of a complex BLSS in the future. Yu, XiaoHui; Liu, Hong; Tong, Ling 58 PubMed Although the overall cytoskeletal morphology of the olfactory dendrite in the antennae of the silkmoths Antheraea polyphemus and A. pernyi is known, the cytoskeleton proteins that structurally and functionally support these structures remain to be identified. In this paper, we describe the identification of tubulin, actin and intermediate filament-like proteins in the olfactory dendrites, and motor proteins such as kinesin and unconventional myosin in the antennal branches by the use of antibodies. We also show that the tubulins within the olfactory dendrites and in the antennal branches are acetylated. This study provides valuable information concerning the possible role of these proteins in transduction, transport and motility, as is evident in other systems.
PMID:8905709 Kumar, G L; Maida, R; Keil, T A 1996-08-12 59 PubMed We studied in individual males of Antheraea polyphemus the activity of the sensillar esterase, a pheromone-degrading enzyme present in the sensillum lymph surrounding the olfactory receptor cells. In parallel, receptor potentials from single pheromone-sensitive sensilla trichodea were recorded. Our screening revealed a large variability of the enzyme activity in individuals with similar electrophysiological responses. In some moths the sensillar esterase was not detectable, i.e., present at 100-fold lower activity. However, such variable esterase activity showed no correlation with the time course of the receptor potential. Thus, enzymatic pheromone degradation does not seem to be involved in the rapid pheromone inactivation at the end of the stimulus, but rather serves as the final pheromone sequestration step. PMID:7605955 Maida, R; Ziegelberger, G; Kaissling, K E 1995-03-27 60 PubMed Behavioral and electrophysiological evidence has suggested that sex pheromone is rapidly inactivated within the sensory hairs soon after initiation of the action-potential spike. We report the isolation and characterization of a sex-pheromone-degrading enzyme from the sensory hairs of the silkmoth Antheraea polyphemus. In the presence of this enzyme at physiological concentration, the pheromone [(6E,11Z)-hexadecadienyl acetate] has an estimated half-life of 15 msec. Our findings suggest a molecular model for pheromone reception in which a previously reported pheromone-binding protein acts as a pheromone carrier, and an enzyme acts as a rapid pheromone inactivator, maintaining a low stimulus noise level within the sensory hairs. PMID:3001718 Vogt, R G; Riddiford, L M; Prestwich, G D 1985-12-01 61 PubMed The intersegmental muscles of the giant silkmoth Antheraea polyphemus (Cramer) can undergo two forms of degenerative changes: a wasting atrophy that lasts about 6 days or rapid dissolution that is completed within 30 h.
Muscle atrophy is induced by a dramatic decline in the endogenous titres of the steroid moulting hormone 20-hydroxyecdysone. 20-Hydroxyecdysone appears to act as a trophic factor for the muscles as infusion or injection of this steroid blocks further atrophy of the muscle. The normal decline of 20-hydroxyecdysone also allows the muscles to become competent to respond to the peptide eclosion hormone. Eclosion hormone is then released and acts directly on these muscles to induce rapid cell death which is morphologically and physiologically distinct from steroid-regulated atrophy. PMID:6491588 Schwartz, L M; Truman, J W 1984-07-01 62 PubMed Two proteins of the IP3 transduction pathway were identified by Western blots in homogenates of isolated pheromone-sensitive sensilla of the silkmoth Antheraea polyphemus. A 110 kDa protein was recognized by an antiserum raised against the Drosophila phospholipase C beta (PLC beta p121) and an 80 kDa protein was labelled by an antiserum against a synthetic peptide of a conserved region of protein kinase C (PKC). Incubation of homogenized sensory hairs with the main sex pheromone component, (E,Z)-6,11-hexadecadienyl acetate, resulted in a 6-fold increase in the activity of PKC compared to controls without pheromone. In contrast, incubation with pheromone did not affect the activity of protein kinase A (PKA). Activation of PKC by the membrane-permeable dioctanoylglycerol led to excitation of the pheromone-sensitive receptor neurons. These data support the current concept that pheromone perception of moths is mediated by the IP3 transduction pathway. PMID:10852242 Maida, R; Redkozubov, A; Ziegelberger, G 2000-06-01 63 The nature of the centromere and the orientation in meiosis of silkworm chromosomes were investigated using the trivalent of the F1 hybrid between the wild and domestic silkworm and X-ray-induced aberrant chromosomes as well as normal silkworm chromosomes.
The results of the experiments were as follows: (1) Pro-metaphase chromosomes showed no distinct primary constriction even after treatment with hypotonic solution, Akio Murakami; Hirotami T. Imai 1974-01-01 64 PubMed The Streptomyces bacteriophage, φC31, uses a site-specific integrase enzyme to perform efficient recombination. The recombination system uses specific sequences to integrate exogenous DNA from the phage into a host. The sequences are known as the attP site in the phage and the attB site in the host. The system can be used as a genetic manipulation tool. In this study it has been applied to the transformation of cultured BmN cells and the construction of transgenic Bombyx mori individuals. A plasmid, pSK-attB/Pie1-EGFP/Zeo-PASV40, containing a cassette designed to express an egfp-zeocin fusion gene, was co-transfected into cultured BmN cells with a helper plasmid, pSK-Pie1/NLS-Int/NSL. Expression of the egfp-zeocin fusion gene was driven by an ie-1 promoter, downstream of a φC31 attB site. The helper plasmid encoded the φC31 integrase enzyme, which was flanked by two nuclear localization signals. Expression of the egfp-zeocin fusion gene could be observed in transformed cells. The two plasmids were also transferred into silkworm eggs to obtain transgenic silkworms. Successful integration of the fusion gene was indicated by the detection of green fluorescence, which was emitted by the silkworms. Nucleotide sequence analysis demonstrated that the attB site had been cut, to allow recombination between the attB and endogenous pseudo attP sites in the cultured silkworm cells and silkworm individuals. PMID:24990696 Yin, Yajuan; Cao, Guangli; Xue, Renyu; Gong, Chengliang 2014-10-01 65 NASA Astrophysics Data System (ADS) Silkworm could be an alternative to provide edible animal protein in a Controlled Ecological Life Support System (CELSS) for long-term manned space missions.
Silkworms can consume non-edible plant residue and convert plant nutrients to high quality edible animal protein for astronauts. The preliminary investigation of silkworm culture was carried out in an Earth environment. The silkworms were fed with artificial silkworm diet and the leaves of stem lettuce (Lactuca sativa L. var. angustana Irish) separately and the nutritional structure of silkworm was investigated and compared. The culture experiments showed that: (1) Stem lettuce leaves could be used as food for silkworms. The protein content of silkworm fed with lettuce leaves can reach 70% of dry mass. (2) The protein content of silkworm powder produced by the fifth instar silkworm larvae was 70%, which was similar to the protein content of silkworm pupae. The powder of the fifth instar silkworm larvae can be utilized by astronauts. (3) The biotransformation rate of silkworm larvae between the third instar and the fifth instar could reach above 70%. The biotransformation cycle of silkworm was determined as 24 days. (4) Using the stem lettuce leaves to raise silkworm, the coarse fiber content of silkworm excrements reached about 33%. The requirements of space silkworm culture equipment, feeding approaches and feeding conditions were also preliminarily designed and calculated. It is estimated that 2.2 m³ of culture space could satisfy the daily animal protein demand of seven astronauts. Yang, Yunan; Tang, Liman; Tong, Ling; Liu, Yang; Liu, Hong; Li, Xiaomin 2010-09-01 66 PubMed Injection of a Japanese cedar pollen suspension into silkworm hemolymph kills the silkworms. A certain species of bacteria proliferated in the hemolymph of the dead silkworms. A 16S rDNA analysis demonstrated that the proliferating bacteria were Bacillus cereus, Bacillus thuringiensis, Bacillus weihenstephanensis, and Bacillus amyloliquefaciens. Among them, B. cereus, B. thuringiensis, and B. weihenstephanensis exhibited hemolysis against sheep red blood cells and were lethal to mice.
A culture filtrate of B. amyloliquefaciens showed enzyme activity toward the pectic membrane of cedar pollen. These results suggest that silkworms as an animal model are useful for evaluating the pathogenicity of bacteria attached to cedar pollen. PMID:24071577 Hu, Yuan; Hamamoto, Hiroshi; Sekimizu, Kazuhisa 2013-08-01 67 E-print Network 167 Keywords. Bombyx mori; Distal-less (Dll); nubbin (nub), silkworm; wing development; wingless organization of appendages which develop by various mechanisms. In the mulberry silkworm, Bombyx mori a pair 68 Host-pathogen interactions are complex relationships, and a central challenge is to reveal the interactions between pathogens and their hosts. Bacillus bombysepticus (Bb), which produces spores and parasporal crystals, was first isolated from the corpses of infected silkworms (Bombyx mori). Natural Bb infection of the silkworm can cause acute fuliginosa septicaemia, generally killing the silkworm larvae within one Lulin Huang; Tingcai Cheng; Pingzhen Xu; Daojun Cheng; Ting Fang; Qingyou Xia; Georg Häcker 2009-01-01 69 E-print Network First-Order, Networked Control Models of Swarming Silkworm Moths Musad A. Haque, Magnus Egerstedt to predict observed, biological behaviors. In particular, we study the silkworm moth, Bombyx mori, and we by the female moths, as is the case in actual silkworm moths as well. I. INTRODUCTION The research on multi Egerstedt, Magnus 70 E-print Network In this letter, we adopt a new approach combining theoretical modeling with silk stretching measurements to explore the mystery of the structures between silkworm and spider silks, leading to the differences in mechanical response against stretching.
Hereby the typical stress-strain profiles are reproduced by implementing the newly discovered and verified "β-sheet splitting" mechanism, which primarily varies the secondary structure of protein macromolecules; our modeling and simulation results show good agreement with the experimental measurements. Hence, it can be concluded that the post-yielding mechanical behaviors of both kinds of silks result from the splitting of crystalline regions, while the high extensibility of spider dragline is attributed to the tiny β-sheets that exist solely in spider silk fibrils. This research reveals for the first time the structural factors leading to the significant difference between spider and silkworm silks in mechanical response to the stretching force. Addition... Wu, Xiang; Du, Ning; Xu, Gang-Qin; Li, Bao-Wen 2009-01-01 71 We describe the generation of transgenic silkworms that produce cocoons containing recombinant human collagen. A fusion cDNA was constructed encoding a protein that incorporated a human type III procollagen mini-chain with C-propeptide deleted, a fibroin light chain (L-chain), and an enhanced green fluorescent protein (EGFP). This cDNA was ligated downstream of the fibroin L-chain promoter and inserted into a piggyBac Masahiro Tomita; Hiroto Munetsuna; Tsutomu Sato; Takahiro Adachi; Rika Hino; Masahiro Hayashi; Katsuhiko Shimizu; Namiko Nakamura; Toshiki Tamura; Katsutoshi Yoshizato 2002-01-01 72 PubMed Sacrificing model animals is required for developing effective drugs before being used in human beings. In Japan today, at least 4,210,000 mice and other mammals are sacrificed to a total of 6,140,000 per year for the purpose of medical studies. All the animals treated in Japan, including test animals, are managed under the control of the "Act on Welfare and Management of Animals". Under the principle of this Act, no person shall kill, injure, or inflict cruelty on animals without due cause.
"Animal" addressed in the Act can be defined as a "vertebrate animal". If we can make use of invertebrate animals in testing instead of vertebrate ones, that would be a remarkable solution for the issue of animal welfare. Furthermore, there are numerous advantages of using invertebrate animal models: less space and small equipment are enough for taking care of a large number of animals and thus are cost-effective, they can be easily handled, and many biological processes and genes are conserved between mammals and invertebrates. Today, many invertebrates have been used as animal models, but silkworms have many beneficial traits compared to mammals as well as other insects. In a Genome Pharmaceutical Institute study, we were able to achieve a lot making use of silkworms as model animals. We would like to suggest that pharmaceutical companies and institutes consider the use of the silkworm as a model animal that is beneficial both financially, through cost cutting, and ethically, with respect to animal welfare. PMID:23006994 Sekimizu, N; Paudel, A; Hamamoto, H 2012-08-01 73 The effectiveness of silkworm hemolymph was investigated as a substitute for fetal bovine serum (FBS) in insect cell culture. Cells were adapted to grow in reduced FBS medium supplemented with silkworm hemolymph through a gradual adaptation process. FBS concentration in the medium could be reduced to 1% without a decrease in cell growth rate or maximum cell concentration by adding 5% Sung Ho Ha; Tai Hyun Park; Sam-Eun Kim 1996-01-01 74 We report a draft sequence for the genome of the domesticated silkworm (Bombyx mori), covering 90.9% of all known silkworm genes. Our estimated gene count is 18,510, which exceeds the 13,379 genes reported for Drosophila melanogaster. Comparative analyses to fruitfly, mosquito, spider, and butterfly reveal both similarities and differences in gene content.
Qingyou Xia; Zeyang Zhou; Cheng Lu; Daojun Cheng; Fangyin Dai; Bin Li; Ping Zhao; Xingfu Zha; Tingcai Cheng; Chunli Chai; Guoqing Pan; Jinshan Xu; Chun Liu; Ying Lin; Jifeng Qian; Yong Hou; Zhengli Wu; Guanrong Li; Minhui Pan; Chunfeng Li; Yihong Shen; Xiqian Lan; Lianwei Yuan; Tian Li; Hanfu Xu; Guangwei Yang; Yongji Wan; Yong Zhu; Maode Yu; Weide Shen; Dayang Wu; Zhonghuai Xiang; Jun Yu; Jun Wang; Ruiqiang Li; Jianping Shi; Heng Li; Guangyuan Li; Jianning Su; Xiaoling Wang; Guoqing Li; Zengjin Zhang; Qingfa Wu; Jun Li; Qingpeng Zhang; Ning Wei; Jianzhe Xu; Haibo Sun; Le Dong; Dongyuan Liu; Shengli Zhao; Xiaolan Zhao; Qingshun Meng; Fengdi Lan; Xiangang Huang; Yuanzhe Li; Lin Fang; Changfeng Li; Dawei Li; Yongqiao Sun; Zhenpeng Zhang; Zheng Yang; Yanqing Huang; Yan Xi; Qiuhui Qi; Dandan He; Haiyan Huang; Xiaowei Zhang; Zhiqiang Wang; Wenjie Li; Yuzhu Cao; Yingpu Yu; Hong Yu; Jinhong Li; Jiehua Ye; Huan Chen; Yan Zhou; Bin Liu; Jing Wang; Jia Ye; Hai Ji; Shengting Li; Peixiang Ni; Jianguo Zhang; Yong Zhang; Hongkun Zheng; Bingyu Mao; Wen Wang; Chen Ye; Songgang Li; Jian Wang; Gane Ka-Shu Wong; Huanming Yang 2004-01-01 75 NASA Astrophysics Data System (ADS) The main subject studied in this dissertation is a non-destructive testing method for silkworm cocoon quality, based on digital image processing and photoelectric technology. Through image collection and the analysis, processing and calculation of data from the tested silkworm cocoons with the non-destructive testing technology, internet applications automatically reckon all items of the classification indexes. Finally, we can conclude the classification result and the purchase price of the silkworm cocoons.
According to the domestic classification standard of silkworm cocoons, the author investigates various testing methods of silkworm cocoons which are used or have been explored at present, and devises a non-destructive testing scheme for silkworm cocoons based on digital image processing and photoelectric technology. The project design of the experiment is discussed. The precisions of all the implements are demonstrated. I establish manifold mathematical models, compare them with each other and analyze the precision with database technology to get the best mathematical model to figure out the weight of the dried silkworm cocoon shells. The classification methods of all the complementary items are designed well and truly. The testing method has less error and reaches an advanced level of the present domestic non-destructive testing technology of silkworm cocoons. Gan, Yong; Kong, Qing-hua; Wei, Li-fu 2008-03-01 76 We have isolated and characterized microsatellites (simple sequence repeat (SSR) loci) from the silkworm genome. The screening of a partial genomic library by the conventional hybridization method led to the isolation of 28 microsatellite-harbouring clones. The abundance of (CA)n repeats in the silkworm genome was akin to those reported in the other organisms such as honey bee, pig, and K. Damodar Reddy; E. G. Abraham; J. Nagaraju 1999-01-01 77 PubMed Central Background Antheraea mylitta cytoplasmic polyhedrosis virus (AmCPV), a cypovirus of the Reoviridae family, infects the non-mulberry Indian silkworm, Antheraea mylitta, and contains eleven double-stranded RNA segments in its genome (S1-S11). Some of its genome segments (S1-S3, and S6-S11) have been previously characterized but the genome segment encoding the viral guanylyltransferase, which helps in RNA capping, has not been characterized.
S5 consisted of 2180 nucleotides, with one long ORF of 1818 nucleotides, and could encode a protein of 606 amino acids with a molecular mass of ~65 kDa (p65). Bioinformatics analysis showed the presence of KLRS and HxnH motifs, as observed in some other reoviral guanylyltransferases, and suggests that S5 may encode the viral guanylyltransferase. The ORF of S5 was expressed in E. coli as a 65 kDa His-tagged fusion protein, purified through Ni-NTA chromatography, and a polyclonal antibody was raised. Immunoblot analysis of virion particles with the purified antibody showed a specific immunoreactive band, suggesting that p65 is a viral structural protein. Functional analysis showed that recombinant p65 possesses guanylyltransferase activity, and transfers a GMP moiety to 5' diphosphate (A/G)-ended viral RNA after the formation of a p65-GMP complex for capping. Kinetic analysis showed the Km of this enzyme for GTP and RNA was 34.24 μM and 98.35 nM, respectively. Site-directed mutagenesis at K21A in the KLRS motif, and H93A or H105A in the HxnH motif, completely abolished the autoguanylylation activity and indicates the importance of these residues at these sites. Thermodynamic analysis showed the p65-GTP interaction was primarily driven by enthalpy (ΔH = −399.1 ± 4.1 kJ/mol) whereas the p65-RNA interaction was driven by favorable entropy (0.043 ± 0.0049 kJ/mol). Conclusion Viral capping enzymes play a critical role in post-transcriptional or post-replication modification in the case of RNA viruses. Our results of cloning, sequencing and functional analysis of AmCPV S5 indicate that S5-encoded p65, through its guanylyltransferase activity, can transfer a guanine residue to the 5' end of viral RNA for capping. Further studies will help to understand the complete capping process of cypoviral RNA during viral replication within the viral capsid. PMID:24649879 2014-01-01 78 NSDL National Science Digital Library Oak Wilt: A Threat to Red Oaks & White Oaks Species was created by Dr. David L. Roberts at Michigan State University Extension. Dr.
Roberts' concise site contains brief sections addressing oak wilt distribution, field diagnosis, management, disease cycle, and more. This guide contains extensive links to images and other informational extension sites that will help you make informed decisions regarding the health of your trees. The site compiles a great deal of research on oak wilt and is an excellent resource for students and professionals alike. Roberts, David L. 2008-02-22 79 PubMed The inhibitor of apoptosis proteins (IAP) play an important role in cell apoptosis. We cloned two novel IAP family members, Ap-iap1 and Ap-iap2, from the Antheraea pernyi nucleopolyhedrovirus (ApNPV) genome. Ap-IAP1 contains two baculoviral IAP repeat (BIR) domains followed by a RING domain, but Ap-IAP2 has only one BIR domain and a RING. The result of transient expression in Spodoptera frugiperda (Sf21) showed that Ap-iap1 blocked cell apoptosis induced by actinomycin D treatment and also rescued the p35-deficient Autographa californica nucleopolyhedrovirus (AcNPV) to replicate in Sf9 cells, while Ap-iap2 does not have this function. Several Ap-IAP1 truncations were constructed to test the activity of the BIRs or RING motif to inhibit cell apoptosis. The results indicated that the BIRs or RING of Ap-IAP1 functioned equally in inhibiting cell apoptosis. Therefore, deletion of both of the above domains could not block apoptosis induced by actinomycin D or rescue the replication of AcMNPV Delta p35. We also screened two phage-display peptides that might interact with Ap-IAP1. PMID:20437152 Yan, Feng; Deng, Xiaobei; Yan, Junpeng; Wang, Jiancheng; Yao, Lunguang; Lv, Songya; Qi, Yipeng; Xu, Hua 2010-04-01 80 PubMed Surface modification of silk fibroin (SF) materials using an environmentally friendly and non-hazardous process to tailor them for specific application as biomaterials has drawn a great deal of interest in the field of biomedical research.
To further explore this area of research, in this report a polypropylene (PP) grafted muga (Antheraea assama) SF (PP-AASF) suture is developed using plasma treatment and a plasma graft polymerization process. For this purpose, AASF is first sterilized by argon (Ar) plasma treatment, followed by grafting PP onto its surface. AASF is a non-mulberry variety having qualities superior to mulberry SF and is still unexplored in the context of suture biomaterials. AASF, Ar plasma treated AASF (AASF-Ar) and PP-AASF are subjected to various characterization techniques for better comparison, and an attempt is made to correlate the results with their observed properties. Excellent mechanical strength, hydrophobicity, antibacterial behavior, and remarkable wound healing activity of PP-AASF over AASF and AASF-Ar make it a promising candidate for application as a sterilized suture biomaterial. 2013 Wiley Periodicals, Inc. Biopolymers 101: 355-365, 2014. PMID:23913788 Gogoi, Dolly; Choudhury, Arup Jyoti; Chutia, Joyanti; Pal, Arup Ratan; Khan, Mojibur; Choudhury, Manash; Pathak, Pallabi; Das, Gouranga; Patil, Dinkar S 2014-04-01 81 SciTech Connect Segment 10 (S10) of the 11 double-stranded RNA genome segments from Antheraea mylitta cytoplasmic polyhedrosis virus (AmCPV), encoding a novel polyhedrin polypeptide, was converted to cDNA, cloned, and sequenced. Three cDNA clones consisting of 1502 (AmCPV10-1), 1120 (AmCPV10-2), and 1415 (AmCPV10-3) nucleotides, encoding polyhedrins of 254, 339, and 319 amino acids with molecular masses of 29, 39, and 37 kDa, respectively, were obtained and verified by Northern analysis. These clones showed 70-94% sequence identity among themselves but none with any sequences in databases.
The expression of AmCPV10-1 cDNA-encoded polyhedrin in Sf-9 cells was detected by immunoblot analysis, and the formation of polyhedra by electron microscopy, as observed in AmCPV-infected gut cells, but no expression of AmCPV10-2 or AmCPV10-3 cDNA was detected, indicating that during AmCPV replication, along with functional S10 RNA, some defective variant forms of S10 RNAs are packaged in virion particles. Sinha-Datta, Uma [Department of Biotechnology, Indian Institute of Technology, Kharagpur 721302 (India); Chavali, Venkata Ramana Murthy [Department of Biotechnology, Indian Institute of Technology, Kharagpur 721302 (India); Ghosh, Ananta K. [Department of Biotechnology, Indian Institute of Technology, Kharagpur 721302 (India)]. E-mail: [email protected] 2005-07-08 82 PubMed Central The outstanding properties of spider dragline silk are likely to be determined by a combination of the primary sequences and the secondary structure of the silk proteins. Antheraea pernyi silk has sequences more similar to spider dragline silk than the silk of its domestic counterpart, Bombyx mori. This gives it much potential as a resource for biospinning spider dragline silk. This paper further assessed this possibility by examining the mechanical properties and structures of A. pernyi silks prepared by forcible reeling. It is surprising that the stress-strain curves of the A. pernyi fibers show a sigmoidal shape similar to those of spider dragline silk. Under a controlled reeling speed of 95 mm/s, the breaking energy was 1.04 × 10^5 J/kg, the tensile strength was 639 MPa and the initial modulus was 9.9 GPa. It should be noted that this breaking energy of the A. pernyi silk approaches that of spider dragline silk. The tensile properties, the optical orientation and the β-sheet structure contents of the silk fibers are remarkably increased by raising the spinning speed up to 95 mm/s.
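The breaking energy reported in the abstract above is the area under the stress-strain curve per unit mass. A minimal sketch of that calculation, using trapezoidal integration (the curve points and the assumed fiber density of 1300 kg/m3 are illustrative placeholders, not measured data from the paper):

```python
def breaking_energy(stress_pa, strain, density_kg_m3):
    """Breaking energy per unit mass (J/kg): area under the stress-strain
    curve (J/m^3, trapezoid rule) divided by the fiber density (kg/m^3)."""
    area = 0.0
    for i in range(1, len(strain)):
        area += 0.5 * (stress_pa[i] + stress_pa[i - 1]) * (strain[i] - strain[i - 1])
    return area / density_kg_m3

# Toy sigmoidal-ish curve ending near the reported 639 MPa strength;
# values are hypothetical, chosen only to illustrate the arithmetic.
strain = [0.0, 0.05, 0.15, 0.30]
stress = [0.0, 200e6, 350e6, 639e6]  # Pa
toughness = breaking_energy(stress, strain, 1300.0)  # J/kg, order of 10^4-10^5
```

The result for these toy numbers lands in the same 10^4-10^5 J/kg range as the abstract's reported value, which is the point of the sketch rather than a reproduction of the measurement.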
PMID:20454537 Zhang, Yaopeng; Yang, Hongxia; Shao, Huili; Hu, Xuechao 2010-01-01 83 PubMed One subtype of the pheromone binding proteins of the silkmoth Antheraea polyphemus (ApolPBP1) has been analysed exploiting the two endogenous tryptophan residues as fluorescent probes. The intrinsic fluorescence exhibited a rather narrow spectrum with a maximum at 336 nm. Site-directed mutagenesis experiments revealed that one of the tryptophan residues (Trp37) is located in a hydrophobic environment whereas Trp127 is more solvent exposed, as was predicted by modeling the ApolPBP1 sequence on the proposed structure of the Bombyx mori pheromone binding protein. Monitoring the interaction of ApolPBP1 as well as its Trp mutants with the three species-specific pheromone compounds by recording the endogenous fluorescence emission revealed profound differences; whereas (E6,Z11)-hexadecadienal induced a dose-dependent quenching of the fluorescence, both (E6,Z11)-hexadecadienyl-1-acetate and (E4,Z9)-tetradecadienyl-1-acetate elicited an augmentation of the endogenous fluorescence. These data indicate that although ApolPBP1 can bind all three pheromones, there are substantial differences concerning their interaction with the protein, which may have important functional implications. PMID:11804795 Bette, Stefanie; Breer, Heinz; Krieger, Jürgen 2002-03-01 84 PubMed Single-channel patch-clamp techniques were used to identify and characterize a Ca2+-activated nonspecific cation channel (CAN channel) on insect olfactory receptor neurones (ORNs) from antennae of male Antheraea polyphemus. The CAN channel was found both in acutely isolated ORNs from developing pupae and in membrane vesicles from mature ORNs that presumably originated from inner dendritic segments. Amplitude histograms of the CAN single-channel currents presented well-defined peaks corresponding to at least four channel substates, each having a conductance of about 16 pS.
Simultaneous gating of the substates was achieved by intracellular Ca2+ with an EC50 value of about 80 nmol l-1. Activity of the CAN channel could be blocked by application of amiloride (IC50 < 100 nmol l-1). Moreover, in the presence of 1 μmol l-1 Ca2+, opening of the CAN channel was totally suppressed by 10 μmol l-1 cyclic GMP, whereas ATP (1 mmol l-1) was without effect. We suggest that the CAN channel plays a specific role in modulation of cell excitability and in shaping the voltage response of ORNs. PMID:22141156 Zufall, F; Hatt, H 1991-11-01 85 PubMed In pheromone-sensitive hairs of the male silkmoth Antheraea polyphemus, two electrophoretically distinct pheromone-binding proteins (PBPs) are present. They show no amino acid sequence diversity according to peptide mapping, but differ in their redox state, as shown by free-sulfhydryl-group-specific cleavage at cysteine residues with 2-nitro-5-thiocyanobenzoic acid. In kinetic studies, the pheromone was initially bound mainly by the reduced PBP but later by the oxidized PBP, in which all six cysteine residues form disulfide bonds. This redox shift was observed only in the homogenate of isolated olfactory hairs, where proteins of the sensillum lymph and receptive dendrites are present. In control experiments with purified binding proteins, the proportion of pheromone bound to the oxidized PBP did not increase with increasing incubation time, suggesting that disulfide formation does not occur spontaneously but is mediated by the sensory hairs, possibly by interaction with the receptor cell membrane. These data suggest that arriving hydrophobic pheromone molecules are first bound by the reduced PBP and transported through the aqueous sensillum lymph towards the receptor molecules of the dendritic membrane. The oxidized complex might not be able to activate further receptors and, thus, effectively deactivate the pheromone molecules within the sensillum lymph.
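EC50 and IC50 values like those reported for the CAN channel above summarize a sigmoidal dose-response relation. A minimal sketch of how such a value maps concentration to fractional response, assuming a simple Hill equation (the Hill coefficient n = 1 is an assumption for illustration, not a value from the abstract):

```python
def hill_activation(conc, ec50, n=1.0):
    """Fractional response predicted by a Hill dose-response curve:
    conc^n / (ec50^n + conc^n), which is 0.5 at conc == ec50."""
    return conc ** n / (ec50 ** n + conc ** n)

# EC50 of ~80 nmol/l Ca2+ (from the abstract above); at the EC50 the
# predicted activation is half-maximal by definition.
ec50_ca = 80e-9
half = hill_activation(ec50_ca, ec50_ca)      # 0.5
tenfold = hill_activation(10 * ec50_ca, ec50_ca)  # approaches saturation
```

A steeper (cooperative) channel would be modeled by raising n above 1, which sharpens the transition around the EC50 without moving its midpoint.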
PMID:7588707 Ziegelberger, G 1995-09-15 86 PubMed Three different clones have been isolated from a genomic library of the silkmoth Antheraea polyphemus by employing a subtractive hybridization technique. The clones, with inserts of 13-16 kb of DNA each, code for mRNAs expressed in the wing epidermis during JH-induced second pupal cuticle deposition. While two of the clones code for a single mRNA each, the third one codes for two mRNAs. All four mRNAs code for distinct polypeptides that can be precipitated with antibodies raised against pupal cuticular proteins. These genes are activated at the same period of pupal development and their transcripts follow similar patterns of accumulation. Although these genes are expressed in a tissue- and time-specific manner, attesting to their pupal wing epidermal specificity, three of them are expressed in the adult wing epidermis also, but not at the larval stage. While DNAs from other silkmoths and insects hybridize to these genes, only one of the A. polyphemus genes hybridizes to RNA from second pupal wings of two other silkmoths tested. PMID:7517270 Kumar, M N; Sridhara, S 1994-03-01 87 PubMed Female moths produce blends of odorant chemicals, called pheromones. These precise chemical mixtures both attract males and elicit appropriate mating behaviors. To locate females, male moths must rapidly detect changes in environmental pheromone concentration. Therefore, the regulation of pheromone concentration within the antennae, their chief organ of smell, is important. We describe antennal-specific aldehyde oxidases from the moths Antheraea polyphemus and Bombyx mori that are capable of catabolizing long-chain, unsaturated aldehydes such as their aldehyde pheromones. These soluble enzymes are associated uniquely with male and female antennae and have molecular masses of 175 and 130 kDa, respectively. The A. polyphemus aldehyde oxidase has been localized to the olfactory sensilla which contain the pheromone receptor cell dendrites.
These same sensilla contain a previously described sensilla-specific esterase that degrades the acetate ester component of the A. polyphemus pheromone. We propose that sensillar pheromone-degrading enzymes modulate pheromone concentration in the receptor space and hence play a dynamic role in the pheromone-mediated reproductive behaviors of these animals. PMID:2246254 Rybczynski, R; Vogt, R G; Lerner, M R 1990-11-15 88 PubMed The fine structure of the superposition eye of the saturniid moth Antheraea polyphemus Cramer was investigated by electron microscopy. Each of the approximately 10,000 ommatidia consists of the same structural components, but regarding the arrangement of the ommatidia and the rhabdom structure therein, two regions of the eye have to be distinguished. In a small dorsal rim area, the ommatidia are characterized by rectangularly shaped rhabdoms containing parallel microvilli arranged in groups that are oriented perpendicular to each other. In all other ommatidia, the proximal parts of the rhabdoms show radially arranged microvilli, whereas the distal parts may reveal different patterns, frequently with microvilli in two directions or sometimes even in one direction. Moreover, the microvilli of all distal cells are arranged parallel to meridians of the eyes. By virtue of these structural features, the eyes should enable this moth not only to discriminate the plane of polarized light but also to orient via the skylight polarization pattern, depending on moon position. The receptor cells exhibit only small alterations during daylight within the natural diurnal cycle. However, under illumination with different monochromatic lights of physiological intensity, receptor cells can be unbalanced: changes in the ultrastructure of the rhabdomeres and the cytoplasm of such cells are evident. The effects are different in the daytime and at night.
These findings are discussed in relation to the breakdown and regeneration of microvilli and the influence of the diurnal cycle. They are compared with results on photoreceptor membrane turnover in eyes of other arthropod species. PMID:3383218 Anton-Erxleben, F; Langer, H 1988-05-01 89 PubMed Vascularization is a crucial challenge in tissue engineering. One solution for this problem is to implant scaffolds that contain functional genes that promote vascularization by providing angiogenic growth factors via a gene delivery carrier. Poly(ethylenimine) (PEI) is a gene delivery carrier with high transfection efficiency but with cytotoxicity. To solve this problem, we utilized Antheraea pernyi silk fibroin (ASF), which has favorable cytocompatibility and biodegradability, RGD sequences and a negative charge, in conjunction with PEI, as the delivery vector for a vascular endothelial growth factor (VEGF) 165-angiopoietin-1 (Ang-1) dual-gene simultaneous-expression plasmid, creating an ASF/PEI/pDNA complex. The results suggested that the zeta potential of the ASF/PEI/pDNA complex was significantly lower than that of the PEI/pDNA complex. Decreased nitrogen and increased oxygen on the surface of the complex demonstrated that the ASF had successfully combined with the surface of the PEI/pDNA. Furthermore, the complexes resisted digestion by nucleic acid enzymes and degradation by serum. L929 cells were cultured and transfected in vitro, and reduced cytotoxicity was found when the cells were transfected with ASF/PEI/pDNA compared with PEI/pDNA. In addition, the transfection efficiency and VEGF secretion increased. In general, this study provides a novel method for decreasing the cytotoxicity of PEI gene delivery vectors and increasing the transfection efficiency of angiogenesis-related genes.
PMID:24867887 Ma, Caili; Lv, Linlin; Liu, Yu; Yu, Yanni; You, Renchuan; Yang, Jicheng; Li, Mingzhong 2014-06-01 90 PubMed Central The silkworm, Bombyx mori, is one of the major insect model organisms, and its draft and fine genome sequences became available in 2004 and 2008, respectively. Transposable elements (TEs) constitute ~40% of the silkworm genome. To better understand the roles of TEs in the organization, structure and evolution of the silkworm genome, we used a combination of de novo, structure-based and homology-based approaches for identification of the silkworm TEs and identified 1308 silkworm TE families. These TE families and their classification information were organized into a comprehensive and easy-to-use web-based database, BmTEdb. Users can browse, search and download the sequences in the database. Sequence analyses such as BLAST, HMMER and EMBOSS GetORF are also provided in BmTEdb. This database will facilitate studies of silkworm genomics, TE functions in the silkworm and comparative analysis of insect TEs. Database URL: http://gene.cqu.edu.cn/BmTEdb/. PMID:23886610 Xu, Hong-En; Zhang, Hua-Hao; Xia, Tian; Han, Min-Jin; Shen, Yi-Hong; Zhang, Ze 2013-01-01 91 PubMed A heightened immune response, in which immune responses are primed by repeated exposure to a pathogen, is an important characteristic of vertebrate adaptive immunity. In the present study, we examined whether invertebrate animals also exhibit a primed immune response. The LD50 of Gram-negative enterohemorrhagic Escherichia coli O157:H7 Sakai in silkworms was increased 100-fold by pre-injection of heat-killed Sakai cells. Silkworms pre-injected with heat-killed cells of a Gram-positive bacterium, Staphylococcus aureus, did not acquire resistance to Sakai. Silkworms pre-injected with enterohemorrhagic E. coli peptidoglycans, cell surface components of bacteria, were resistant to Sakai infection. Silkworms pre-injected with S.
aureus peptidoglycans, however, were not resistant to Sakai. Silkworms pre-injected with heat-killed Sakai cells showed persistent resistance to Sakai infection even after pupation. Repeated injection of heat-killed Sakai cells into the silkworms induced earlier and greater production of antimicrobial peptides than a single injection of heat-killed Sakai cells. These findings suggest that silkworm recognition of Gram-negative peptidoglycans leads to a primed immune reaction and increased resistance to a second round of bacterial infection. PMID:24706746 Miyashita, Atsushi; Kizaki, Hayato; Kawasaki, Kiyoshi; Sekimizu, Kazuhisa; Kaito, Chikara 2014-05-16 92 NASA Astrophysics Data System (ADS) In this study, we found for the first time that silkworm eggs were able to survive in vacuum for a long period of time. Subsequently, low-energy Ar+ ions with different energies and fluences were used to bombard silkworm eggs so as to explore the resulting biological effects. Results showed that (i) the exposure of silkworm eggs to vacuum for up to 10 min did not cause a significant impact on the hatching rates, while the irradiation of silkworm eggs by Ar+ ions of 25 keV or 30 keV with fluences ranging from 2.6 × 10^15 ion/cm2 to 8 × 2.6 × 10^15 ion/cm2 caused a significant impact on the hatching rates, and the hatching rates decreased with increases in fluence and energy; (ii) the irradiation of silkworm eggs by Ar+ ions of 30 keV with a fluence of 8 × 2.6 × 10^15 ion/cm2 or 9 × 2.6 × 10^15 ion/cm2 resulted in noticeable etching of the egg shell surface, which could be observed by scanning electron microscopy; and (iii) the irradiation of silkworm eggs by Ar+ ions of 30 keV with a fluence of 9 × 2.6 × 10^15 ion/cm2 generated several mutant phenotypes, which were observed in 5th instar silkworms and a moth.
Xu, Jiaping; Wu, Yuejin; Liu, Xuelan; Yuan, Hang; Yu, Zengliang 2009-06-01 93 PubMed Central Porphyromonas gingivalis, a pathogen that causes inflammation in human periodontal tissue, killed silkworm (Bombyx mori, Lepidoptera) larvae when injected into the blood (hemolymph). Silkworm lethality was not rescued by antibiotic treatment, and heat-killed bacteria were also lethal. Heat-killed bacteria of mutant P. gingivalis strains lacking virulence factors also killed silkworms. Silkworms died after injection of peptidoglycans purified from P. gingivalis (pPG), and pPG toxicity was blocked by treatment with mutanolysin, a peptidoglycan-degrading enzyme. pPG induced silkworm hemolymph melanization at the same dose as that required to kill the animal. pPG injection increased caspase activity in silkworm tissues. pPG-induced silkworm death was delayed by injecting melanization-inhibiting reagents (a serine protease inhibitor and 1-phenyl-2-thiourea), antioxidants (N-acetyl-L-cysteine, glutathione, and catalase), and a caspase inhibitor (Ac-DEVD-CHO). Thus, pPG induces excessive activation of the innate immune response, which leads to the generation of reactive oxygen species and apoptotic cell death in the host tissue. PMID:20702417 Ishii, Kenichi; Hamamoto, Hiroshi; Imamura, Katsutoshi; Adachi, Tatsuo; Shoji, Mikio; Nakayama, Koji; Sekimizu, Kazuhisa 2010-01-01 94 PubMed Male moths respond to conspecific female-released pheromones with remarkable sensitivity and specificity, due to highly specialized chemosensory neurons in their antennae. In Antheraea silkmoths, three types of sensory neurons have been described, each responsive to one of three pheromone components. Since three different pheromone binding proteins (PBPs) have also been identified, the antenna of Antheraea seems to provide a unique model system for detailed analyses of the interplay between the various elements underlying pheromone reception.
Efforts to identify pheromone receptors of Antheraea polyphemus have led to the identification of a candidate pheromone receptor (ApolOR1). This receptor was found predominantly expressed in male antennae, specifically in neurons located beneath pheromone-sensitive sensilla trichodea. The ApolOR1-expressing cells were found to be surrounded by supporting cells co-expressing all three ApolPBPs. The response spectrum of ApolOR1 was assessed by means of calcium imaging using HEK293 cells stably expressing the receptor. It was found that at nanomolar concentrations ApolOR1 cells responded to all three pheromones when the compounds were solubilized by DMSO, and also when DMSO was substituted by one of the three PBPs. However, at picomolar concentrations, cells responded only in the presence of the subtype ApolPBP2 and the pheromone (E,Z)-6,11-hexadecadienal. These results are indicative of a specific interplay of a distinct pheromone component with an appropriate binding protein and its related receptor subtype, which may be considered the basis for the remarkable sensitivity and specificity of the pheromone detection system. PMID:20011135 Forstner, Maike; Breer, Heinz; Krieger, Jürgen 2009-01-01 95 PubMed Central The silkworm (Bombyx mori L.) is a lepidopteran insect with a long history of significant agricultural value. We have constructed the first amplified fragment length polymorphism (AFLP) genetic linkage map of the silkworm B. mori at a LOD score of 2.5. The mapping AFLP markers were genotyped in 47 progeny from a backcross population of the cross no. 782 x od100. A total of 1248 (60.7%) polymorphic AFLP markers were detected with 35 PstI/TaqI primer combinations. Each of the primer combinations generated an average of 35.7 polymorphic AFLP markers. A total of 545 (44%) polymorphic markers were consistent with the expected segregation ratio of 1:1 at the significance level of P = 0.05. Of the 545 polymorphic markers, 356 were assigned to 30 linkage groups.
The number of markers on linkage groups ranged from 4 to 36. There were 21 major linkage groups with 7-36 markers and 9 relatively small linkage groups with 4-6 markers. The 30 linkage groups varied in length from 37.4 to 691.0 cM. The total length of this AFLP linkage map was 6512 cM. Genetic distances between two neighboring markers on the same linkage group ranged from 0.2 to 47 cM with an average of 18.2 cM. The sex-linked gene od was located between the markers P1T3B40 and P3T3B27 at the end of group 3, indicating that AFLP linkage group 3 was the Z (sex) chromosome. This work provides an essential basic map for constructing a denser linkage map and for mapping genes underlying agronomically important traits in the silkworm B. mori L. PMID:11238411 Tan, Y D; Wan, C; Zhu, Y; Lu, C; Xiang, Z; Deng, H W 2001-01-01 96 PubMed Central The activity of sericulture is declining due to the reduction of mulberry production area in sericulture-practicing countries, which leads to adverse effects on silkworm rearing and cocoon production. Screening for nutrigenetic traits in the silkworm, Bombyx mori L. (Lepidoptera: Bombycidae), is an essential prerequisite for better understanding and development of nutritionally efficient breeds/hybrids, which show less food consumption with higher conversion efficiency. The aim of this study was to identify nutritionally efficient polyvoltine silkworm strains using the germplasm breeds RMW2, RMW3, RMW4, RMG3, RMG1, RMG4, RMG5, RMG6 and APM1 as the control. The 1st day 5th stage silkworm larvae of polyvoltine strains were subjected to standard gravimetric analysis until spinning for three consecutive generations covering 3 different seasons on 19 nutrigenetic traits. Highly significant (p ≤ 0.001) differences were found among all nutrigenetic traits of polyvoltine silkworm strains in the experimental groups.
The nutritionally efficient polyvoltine silkworm strains were identified by utilizing the nutrition consumption index and efficiency of conversion of ingesta/cocoon traits as the index. Higher nutritional efficiency conversions were found in the polyvoltine silkworm strains than in the control for efficiency of conversion of ingesta to cocoon and shell. Comparatively smaller consumption index, respiration and metabolic rate, with superior relative growth rate, and smaller quantities of food ingesta and digesta required per gram of cocoon and shell were found; the lowest amounts were in the new polyvoltine strains compared to the control. Furthermore, based on the overall nutrigenetic traits utilized as indices or biomarkers, three polyvoltine silkworm strains (RMG4, RMW2, and RMW3) were identified as having the potential for nutritionally efficient conversion. The data from the present study advance our knowledge toward the development of nutritionally efficient silkworm breeds/hybrids and their effective commercial utilization in the sericulture industry. PMID:22934597 Ramesha, Chinnaswamy; Lakshmi, Hothur; Kumari, Savarapu Sugnana; Anuradha, Chevva M.; Kumar, Chitta Suresh 2012-01-01 97 PubMed Central The activity of sericulture is declining due to the reduction of mulberry production area in sericulture-practicing countries, which leads to adverse effects on silkworm rearing and cocoon production. Screening for nutrigenetic traits in the silkworm, Bombyx mori L. (Lepidoptera: Bombycidae), is an essential prerequisite for better understanding and development of nutritionally efficient breeds/hybrids, which show less food consumption with higher conversion efficiency. The aim of this study was to identify nutritionally efficient polyvoltine silkworm strains using the germplasm breeds RMW2, RMW3, RMW4, RMG3, RMG1, RMG4, RMG5, RMG6 and APM1 as the control.
The 1st day 5th stage silkworm larvae of polyvoltine strains were subjected to standard gravimetric analysis until spinning for three consecutive generations covering three different seasons on 19 nutrigenetic traits. Highly significant (p ≤ 0.001) differences were found among all nutrigenetic traits of polyvoltine silkworm strains in the experimental groups. The nutritionally efficient polyvoltine silkworm strains were identified by utilizing the nutrition consumption index and efficiency of conversion of ingesta/cocoon traits as the index. Higher nutritional efficiency conversions were found in the polyvoltine silkworm strains than in the control for efficiency of conversion of ingesta to cocoon and shell. Comparatively smaller consumption index, respiration and metabolic rate, with superior relative growth rate, and smaller quantities of food ingesta and digesta required per gram of cocoon and shell were shown; the lowest amounts were in the new polyvoltine strains compared to the control. Furthermore, based on the overall nutrigenetic traits utilized as indices or biomarkers, three polyvoltine silkworm strains (RMG4, RMW2, and RMW3) were identified as having the potential for nutritionally efficient conversion. The data from the present study advance our knowledge toward the development of nutritionally efficient silkworm breeds/hybrids and their effective commercial utilization in the sericulture industry. PMID:22938037 Chinnaswamy, Ramesha; Lakshmi, Hothur; Kumari, Savarapu S.; Anuradha, Chebba M.; Kumar, Chitta S. 2012-01-01 98 SciTech Connect The bioengineering design principles evolved in silkworm cocoons make them ideal natural prototypes and models for structural composites. Cocoons depend for their stiffness and strength on the connectivity of bonding between their constituent materials of silk fibers and sericin binder.
Strain-activated mechanisms for loss of bonding connectivity in cocoons can be translated directly into a surprisingly simple yet universal set of physically realistic as well as predictive quantitative structure-property relations for a wide range of technologically important fiber and particulate composite materials. Chen Fujia; Porter, David; Vollrath, Fritz [Department of Zoology, University of Oxford, Oxford OX1 3PS (United Kingdom)] 2010-10-15 99 PubMed Central A micro burette, a micro pipette, and methods for their use in quantitative toxicological investigations on mandibulate insects are described. It is suggested that the form of curves relating speed of toxic action to dosage may be explained by postulating suitable changes in the rate of distribution, excretion, and cell penetration of the poison as dosage varies. The speed of toxic action of pentavalent arsenic in the silkworm is proportional to an integral power of the dosage at lower concentrations, and to a fractional power of the dosage at higher concentrations. PMID:19872265 Campbell, F. L. 1926-01-01 100 PubMed Pupal wing tissue of the American silkmoth Antheraea polyphemus has been used as a model system to study 20-hydroxyecdysone and juvenile hormone control of cuticle protein synthesis. Juvenile hormone does not affect either the content or the rate of synthesis of RNA and protein of the wing tissue, both of which show linear increases during the first few days of hormonal treatment. Based on the fractionation of total RNA on oligo-dT columns, the percentage of mRNA remains the same throughout development after both hormone treatments. However, both the amount of poly-A+ RNA in the wing tissue and its content of poly-A show considerable increases as a function of development. The products of translation of the various poly-A+ RNA populations in the cell-free wheatgerm system have been analyzed by one- and two-dimensional gel electrophoresis and fluorography.
Qualitative changes occur during the first 24 h; the production of an mRNA coding for a protein of approx. 40,000 daltons is stimulated and the production of an mRNA coding for a protein of 29,000 daltons is greatly reduced. Only a few differences are observed between samples from the two hormone treatments. Over the next 5-15 days of development, mainly quantitative changes are observed. Juvenile hormone application results in quantitative changes in specific mRNAs, but no new mRNAs unique to juvenile hormone action are observed. The data are consistent with the concept that, in altering the epidermal developmental program, juvenile hormone is apparently modulating the action of 20-hydroxyecdysone. PMID:6173272 Katula, K; Gilbert, L I; Sridhara, S 1981-12-01 101 PubMed Central The regulation of antagonistic OVO isoforms is critical for germline formation and differentiation in Drosophila. However, little is known about genes related to ovary development. In this study, we cloned the Bombyx mori ovo gene and investigated its four alternatively spliced isoforms. BmOVO-1, BmOVO-2 and BmOVO-3 all had four C2H2-type zinc fingers, but differed at the N-terminal ends, while BmOVO-4 had a single zinc finger. Bmovo-1, Bmovo-2 and Bmovo-4 showed the highest levels of mRNA in ovaries, while Bmovo-3 was primarily expressed in testes. The mRNA expression pattern suggested that Bmovo expression was related to ovary development. RNAi and transgenic techniques were used to analyze the biological function of Bmovo. The results showed that when the Bmovo gene was downregulated, oviposition number decreased. Upregulation of Bmovo-1 in the gonads of transgenic silkworms increased oviposition number and elevated the trehalose contents of hemolymph and ovaries. We concluded that Bmovo-1 was involved in protein synthesis, contributing to the development of ovaries and oviposition number in silkworms.
PMID:25119438 Cao, Guangli; Huang, Moli; Xue, Gaoxu; Qian, Ying; Song, Zuowei; Gong, Chengliang 2014-01-01 102 PubMed Central Serine protease inhibitors (serpins) are a superfamily of proteins, most of which control protease-mediated processes by inhibiting their cognate enzymes. Sequencing of the silkworm genome provides an opportunity to investigate serpin structure, function, and evolution at the genome level. There are thirty-four serpin genes in Bombyx mori. Six are highly similar to their Manduca sexta orthologs that regulate innate immunity. Three alternative exons in the serpin1 gene and four in serpin28 encode a variable region including the reactive site loop. Splicing of serpin2 pre-mRNA yields variations in serpin2A, 2A′ and 2B. Sequence similarity and intron positions reveal the evolutionary pathway of seven serpin genes in group C. RT-PCR indicates an increase in the mRNA levels of serpin1, 3, 5, 6, 9, 12, 13, 25, 27, 32 and 34 in fat body and hemocytes of larvae injected with bacteria. These results suggest that the silkworm serpins play regulatory roles in defense responses. PMID:19150649 Zou, Zhen; Picheng, Zhao; Weng, Hua; Mita, Kazuei; Jiang, Haobo 2009-01-01 103 PubMed Central Background Molecular genetic studies of Bombyx mori have led to profound advances in our understanding of the regulation of development. The Bombyx mori brain, as a main endocrine organ, plays important regulatory roles in various biological processes. Microarray technology allows genome-wide analysis of gene expression patterns in silkworm brains. Results We report microarray-based gene expression profiles in silkworm brains at four stages: V7, P1, P3 and P5. A total of 4,550 genes were transcribed in at least one selected stage. Of these, clustering algorithms separated the expressed genes into stably expressed genes and variably expressed genes.
The results of the gene ontology (GO) and Kyoto encyclopedia of genes and genomes (KEGG) analysis of stably expressed genes showed that the ribosomal and oxidative phosphorylation pathways were principal pathways. Secondly, four clusters of genes with significantly different expression patterns were observed in the 1,175 variably expressed genes. Thirdly, thirty-two neuropeptide genes, six neuropeptide-like precursor genes, and 117 cuticular protein genes were expressed in selected developmental stages. Conclusion Major characteristics of the transcriptional profiles in the brains of Bombyx mori at specific developmental stages were presented in this study. Our data provided useful information for future research. PMID:21247463 2011-01-01 104 PubMed We have investigated the structural features of three pheromone binding protein (PBP) subtypes from Antheraea polyphemus and monitored possible changes induced upon interaction with the Antheraea pheromonal compounds 4E,9Z-14:Ac [(E4,Z9)-tetradecadienyl-1-acetate], 6E,11Z-16:Ac [(E6,Z11)-hexadecadienyl-1-acetate], and 6E,11Z-16:Al [(E6,Z11)-hexadecadienal]. Circular dichroism and second derivative UV-difference spectroscopy data demonstrate that the structure of subtype PBP1 significantly changes upon binding of 4E,9Z-14:Ac. The related 6E,11Z-16:Ac was less effective and 6E,11Z-16:Al showed only a small effect. In contrast, in subtype PBP2 pronounced structural changes were only induced by the 6E,11Z-16:Al, and the subtype PBP3 did not show any considerable changes in response to the pheromonal compounds. The UV-spectroscopic data suggest that histidine residues are likely to be involved in the ligand-induced structural changes of the proteins, and this notion was confirmed by site-directed mutagenesis experiments. These results demonstrate that appropriate ligands induce structural changes in PBPs and provide evidence for ligand specificity of these proteins.
PMID:12488967 Mohl, Claudia; Breer, Heinz; Krieger, Jürgen 2002-10-01 105 PubMed The silkworm is an important economic insect. Poisoning of silkworms by organophosphate pesticides causes tremendous losses to sericulture. In this study, Solexa sequencing was performed to profile the gene expression changes in the midgut of silkworms in response to 24 h of phoxim exposure, and the impact on detoxification, apoptosis and immune defense was addressed. The results showed that 254 genes displayed at least 2.0-fold changes in expression levels, with 148 genes up-regulated and 106 genes down-regulated. Cytochrome P450 played an important role in detoxification. Histopathology examination and transmission electron microscopy revealed swollen mitochondria and disappearance of the cristae of mitochondria, which are important features of insect apoptotic cells. Cytochrome C release from mitochondria into the cytoplasm was confirmed. In addition, the Toll and immune deficiency (IMD) signaling pathways were all shown by qRT-PCR to be inhibited. Our results could help to better understand the impact of phoxim exposure on the silkworm. PMID:23899924 Gu, ZhiYa; Zhou, YiJun; Xie, Yi; Li, FanChi; Ma, Lie; Sun, ShanShan; Wu, Yu; Wang, BinBin; Wang, JuMei; Hong, Fashui; Shen, WeiDe; Li, Bing 2014-02-01 106 PubMed Central Background Our previous studies suggest silkworms can be used as model animals instead of mammals in pharmacologic studies to develop novel therapeutic medicines. We examined the usefulness of the silkworm larvae Bombyx mori as an animal model for evaluating tissue injury induced by various cytotoxic drugs. Drugs that induce hepatotoxic effects in mammals were injected into the silkworm hemocoel, and alanine aminotransferase (ALT) activity was measured in the hemolymph 1 day later. Results Injection of CCl4 into the hemocoel led to an increase in ALT activity. The increase in ALT activity was attenuated by pretreatment with N-acetyl-L-cysteine.
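The 2.0-fold cutoff used in the phoxim expression-profiling study above amounts to a simple ratio filter over per-gene expression levels. A minimal sketch of such a filter, with gene names and counts invented purely for illustration:

```python
# Hypothetical per-gene expression levels (e.g., normalized tag counts)
# in control vs. phoxim-treated midgut; names and values are invented.
control = {"cyp450_a": 120.0, "cyp450_b": 40.0, "toll_r": 200.0, "actin": 510.0}
treated = {"cyp450_a": 300.0, "cyp450_b": 95.0, "toll_r": 80.0, "actin": 500.0}

def classify(control, treated, threshold=2.0):
    """Split genes into up- and down-regulated sets at a fold-change threshold."""
    up, down = [], []
    for gene in control:
        fold = treated[gene] / control[gene]
        if fold >= threshold:              # at least threshold-fold increase
            up.append(gene)
        elif fold <= 1.0 / threshold:      # at least threshold-fold decrease
            down.append(gene)
    return up, down

up, down = classify(control, treated)
print(up)    # ['cyp450_a', 'cyp450_b']
print(down)  # ['toll_r']
```

Genes whose ratio falls between the two cutoffs (here, actin) are treated as unchanged, which mirrors how a "at least 2.0-fold change" criterion partitions an expression profile.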
Injection of benzoic acid derivatives, ferric sulfate, sodium valproate, tetracycline, amiodarone hydrochloride, methyldopa, ketoconazole, pemoline (Betanamin), N-nitroso-fenfluramine, and D-galactosamine also increased ALT activity. Conclusions These findings indicate that silkworms are useful for evaluating the effects of chemicals that induce tissue injury in mammals. PMID:23137391 2012-01-01 107 The silkworm Bombyx mori is one of the most well-studied insects in terms of both genetics and physiology and is recognized as the model lepidopteran insect. To develop an efficient system for analyzing gene function in the silkworm, we investigated the feasibility of using the GAL4/UAS system in conjunction with piggyBac vector-mediated germ-line transformation for targeted gene expression. To drive Morikazu Imamura; Junichi Nakai; Satoshi Inoue; Guo Xing Quan; Toshio Kanda; Toshiki Tamura 108 NASA Astrophysics Data System (ADS) Based on our previous work on light penetration-based silkworm gender identification, we find that unwanted optical noise scattered from the area surrounding the silkworm pupa and the transparent support is sometimes analyzed and misinterpreted, leading to incorrect silkworm gender identification. To alleviate this issue, we place a small rectangular hole on a transparent support so that it not only helps the user precisely place the silkworm pupa but also functions as a region of interest (ROI) for blocking unwanted optical noise and for roughly locating the abdomen region in the image for ease of image processing. Apart from the external ROI, we also assign a smaller ROI inside the image in order to remove strong scattering light from all edges of the external ROI and at the same time speed up our image processing operations. With only the external ROI in use, our experiment shows a measured 86% total accuracy in identifying gender of 120 silkworm pupae with a measured average processing time of 38 ms.
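The two-stage ROI scheme described above — an external crop that blocks edge scatter, then a smaller internal crop that shrinks the working area and speeds up processing — can be sketched as a pair of array slices. The image size, ROI coordinates, and intensity values below are invented for illustration only:

```python
import numpy as np

def apply_rois(image, outer, inner):
    """Crop an image to an outer ROI, then to a smaller inner ROI.

    `outer` and `inner` are (row_start, row_end, col_start, col_end)
    tuples; `inner` is given relative to the outer crop. The coordinates
    are illustrative -- the study does not publish its ROI geometry.
    """
    r0, r1, c0, c1 = outer
    cropped = image[r0:r1, c0:c1]      # external ROI: drop edge scatter
    r0, r1, c0, c1 = inner
    return cropped[r0:r1, c0:c1]       # internal ROI: smaller work area

# Toy 100x100 "image": bright scatter on one border, signal in the middle.
img = np.zeros((100, 100))
img[45:55, 45:55] = 1.0                # pretend abdomen signal
img[0, :] = 5.0                        # pretend edge scatter

roi = apply_rois(img, (10, 90, 10, 90), (20, 60, 20, 60))
print(roi.shape)    # (40, 40)
print(roi.max())    # 1.0 -- the bright edge scatter has been excluded
```

Because later steps only touch the 40x40 inner region rather than the full frame, both the noise rejection and the speed-up reported in the study follow naturally from this kind of cropping.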
Combining the external ROI and the image ROI together improves the total accuracy in identifying silkworm gender to 95%, with a faster measured processing time of 18 ms. Kamtongdee, Chakkrit; Sumriddetchkajorn, Sarun; Sa-ngiamsak, Chiranut 2013-06-01 109 PubMed Although there are many studies showing a key role of transposable elements (TEs) in adaptive evolution of higher organisms, little is known about the molecular mechanisms. In this study, we found that a partial TE (Taguchi) was inserted into the cis-regulatory region of the silkworm ecdysone oxidase (EO) gene, which encodes a crucial enzyme that reduces the titer of molting hormone (20-hydroxyecdysone, 20E). The TE insertion occurred during domestication of the silkworm, and the frequency of the TE insertion in the domesticated silkworm (Bombyx mori) is high, 54.24%. The linkage disequilibrium in the TE-inserted strains of the domesticated silkworm was elevated. Molecular population genetics analyses suggest that this TE insertion is adaptive for the domesticated silkworm. A luminescent reporter assay shows that the TE inserted in the cis-regulatory region of the EO gene functions as a 20E-induced enhancer of gene expression. Further, a phenotypic bioassay indicates that silkworms with the TE insertion exhibited a more stable developmental phenotype than silkworms without the TE insertion when suffering from food shortage. Thus, the inserted TE in the cis-regulatory region of the EO gene increased developmental uniformity of silkworm individuals through regulating 20E metabolism, partially explaining the transformation of a developmental trait during silkworm domestication. Our results emphasize the exceptional role of gene expression regulation in the developmental transition of domesticated animals.
PMID:25213334 Sun, Wei; Shen, Yi-Hong; Han, Min-Jin; Cao, Yun-Feng; Zhang, Ze 2014-12-01 110 NASA Astrophysics Data System (ADS) Silk production has evolved to be energetically efficient and functionally optimized, yielding a material that can outperform most industrial fibres, particularly in toughness. Spider silk has hitherto defied all attempts at reproduction, despite advances in our understanding of the molecular mechanisms behind its superb mechanical properties. Spun fibres, natural and man-made, rely on the extrusion process to facilitate molecular orientation and bonding. Hence a full understanding of the flow characteristics of native spinning feedstock (dope) will be essential to translate natural spinning to artificial silk production. Here we show remarkable similarity between the rheologies for native spider-dragline and silkworm-cocoon silk, despite their independent evolution and substantial differences in protein structure. Surprisingly, both dopes behave like typical polymer melts. This observation opens the door to using polymer theory to clarify our general understanding of natural silks, despite the many specializations found in different animal species. Holland, C.; Terry, A. E.; Porter, D.; Vollrath, F. 2006-11-01 111 PubMed Integrated phylogenetic and developmental analyses should enhance our understanding of morphological evolution and thereby improve systematists' ability to utilize morphological characters, but case studies are few. The eggshell (chorion) of Lepidoptera (Insecta) has proven especially tractable experimentally for such analyses because its morphogenesis proceeds by extracellular assembly of proteins. This study focuses on a morphological novelty, the aeropyle crown, that arises at the end of choriogenesis in the wild silkmoth genus Antheraea. Aeropyle crowns are cylindrical projections, ending in prominent prongs, that surround the openings of breathing tubes (aeropyle channels) traversing the chorion. 
They occur over the entire egg surface in some species, are localized to a circumferential band in many others, and in some are missing entirely, thus exhibiting variation typical of discrete characters analyzed in morphological phylogenetics. Seeking an integrated developmental-phylogenetic view, we first survey aeropyle crown variation broadly across Antheraea and related genera. We then map these observations onto a robust phylogeny, based on three nuclear genes, to test the adequacy of character codings for aeropyle crown variation and to estimate the frequency and direction of change in those characters. Thirdly, we draw on previous studies of choriogenesis, supplemented by new data on gene expression, to hypothesize developmental-genetic bases for the inferred chorion character transformations. Aeropyle crowns are inferred to arise just once, in the ancestor of Antheraea, but to undergo four or more subsequent reductions without regain, a pattern consistent with Dollo's Law. Spatial distribution shows an analogous trend, though less clear-cut, toward reduction of coverage by aeropyle crowns. These trends suggest either that there is little or no natural selection on the details of the aeropyle crown structure or that evolution toward functional optima is ongoing, although no direct evidence exists for either. Genetic, biochemical, and microscopy studies point to at least two developmental changes underlying the origin of the aeropyle crown, namely, reinitiation of deposition of chorionic lamellae after the end of normal choriogenesis (i.e., heterochrony), and sharply increased production of underlying "filler" proteins that push the nascent final lamellae upward to form the crown (i.e., heterotopy). Identification of a unique putative cis-regulatory element shared by unrelated genes involved in aeropyle crown formation suggests a possible simple mechanism for repeated evolutionary reduction and spatial restriction of aeropyle crowns.
PMID:16012096 Regier, Jerome C; Paukstadt, Ulrich; Paukstadt, Laela H; Mitter, Charles; Peigler, Richard S 2005-04-01 112 PubMed Central The insect olfactory system, particularly the peripheral sensory system for sex pheromone reception in male moths, is highly selective, but specificity determinants at the receptor level are hitherto unknown. Using the Xenopus oocyte recording system, we conducted a thorough structure-activity relationship study with the sex pheromone receptor of the silkworm moth, Bombyx mori, BmorOR1. When co-expressed with the obligatory odorant receptor co-receptor (BmorOrco), BmorOR1 responded in a dose-dependent fashion to both bombykol and its related aldehyde, bombykal, but the threshold of the latter was about one order of magnitude higher. Solubilizing these ligands with a pheromone-binding protein (BmorPBP1) did not enhance selectivity. By contrast, both ligands were trapped by BmorPBP1, leading to dramatically reduced responses. The silkworm moth pheromone receptor was highly selective towards the stereochemistry of the conjugated diene, with robust response to the natural (10E,12Z)-isomer and very little or no response to the other three isomers. Shifting the conjugated diene towards the functional group or elongating the carbon chain rendered these molecules completely inactive. In contrast, an analogue shortened by two omega carbons elicited the same or slightly higher responses than bombykol. Flexibility of the saturated C1-C9 moiety is important for function, as addition of a double or triple bond in position 4 led to reduced responses. The ligand is hypothesized to be accommodated by a large hydrophobic cavity within the helical bundle of transmembrane domains. PMID:22957053 Xu, Pingxi; Hooper, Antony M.; Pickett, John A.; Leal, Walter S. 2012-01-01 113 PubMed Males of the giant silk moth Antheraea polyphemus Cramer (Lepidoptera: Saturniidae) were video-recorded in a sustained-flight wind tunnel in a constant plume of sex pheromone.
The plume was experimentally truncated, and the moths, on losing pheromone stimulus, rapidly changed their behaviour from up-tunnel zig-zag flight to lateral casting flight. The latency of this change was in the range 300-500 ms. Video and computer analysis of flight tracks indicates that these moths effect this switch by increasing their course angle to the wind while decreasing their air speed. Combined with previous physiological and biochemical data concerning pheromone processing within this species, this behavioural study supports the argument that the temporal limit for this behavioural response latency is determined at the level of genetically coded kinetic processes located within the peripheral sensory hairs. PMID:3209970 Baker, T C; Vogt, R G 1988-07-01 114 PubMed Female sex pheromones applied to freshly isolated, living antennae of male Antheraea polyphemus and Bombyx mori led to an increase of cGMP. A 1:1 mixture of 2 pheromone components of Antheraea polyphemus blown for 10 sec in physiological concentrations over their antennal branches raised cGMP levels about 1.34-fold (+/- 0.08 SEM, n = 23) from a basal level of 3.0 +/- 0.6 (SEM, n = 20) pmol/mg protein. Similarly, bombykol elicited a 1.29-fold (+/- 0.13 SEM, n = 23) cGMP increase in antennae of male Bombyx mori from a basal level of 2.7 +/- 0.5 (SEM, n = 24) pmol/mg protein. No cross-sensitivity was found with respect to pheromones from either species. In antennae of female silkmoths, the cGMP response was missing upon stimulation with their own respective pheromones according to the known lack of pheromone receptor cells in the female. cAMP levels in the male antennae of 14.2 +/- 2.9 (SEM, n = 4) pmol/mg protein in A. polyphemus and 15.0 +/- 3.0 (SEM, n = 5) pmol/mg protein in B. mori were not affected by pheromone stimulation. Within 1-60 sec, the extent of cGMP increase in B. mori was independent of the duration of pheromone exposure. 
The levels of cGMP in pheromone-stimulated antennae of both species remained elevated for at least 10 min, i.e., much longer than the duration of the receptor potential measured in single-cell recordings. Guanylate cyclase activity was identified in homogenates of male and female antennae from both species. The Km of the guanylate cyclase from male B. mori for the preferential substrate MnGTP was 175 microM.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:1970356 Ziegelberger, G; van den Berg, M J; Kaissling, K E; Klumpp, S; Schultz, J E 1990-04-01 115 PubMed Central Background Carboxylesterase is a multifunctional superfamily and ubiquitous in all living organisms, including animals, plants, insects, and microbes. It plays important roles in xenobiotic detoxification, and pheromone degradation, neurogenesis and regulating development. Previous studies mainly used Dipteran Drosophila and mosquitoes as model organisms to investigate the roles of the insect COEs in insecticide resistance. However, genome-wide characterization of COEs in phytophagous insects and comparative analysis remain to be performed. Results Based on the newly assembled genome sequence, 76 putative COEs were identified in Bombyx mori. Relative to other Dipteran and Hymenopteran insects, alpha-esterases were significantly expanded in the silkworm. Genomics analysis suggested that BmCOEs showed chromosome preferable distribution and 55% of which were tandem arranged. Sixty-one BmCOEs were transcribed based on cDNA/ESTs and microarray data. Generally, most of the COEs showed tissue specific expressions and expression level between male and female did not display obvious differences. Three main patterns could be classified, i.e. midgut-, head and integument-, and silk gland-specific expressions. Midgut is the first barrier of xenobiotics peroral toxicity, in which COEs may be involved in eliminating secondary metabolites of mulberry leaves and contaminants of insecticides in diet. 
For the head- and integument-specific class, most of the members were homologous to odorant-degrading enzyme (ODE) and antennal esterase. RT-PCR verified that the ODE-like esterases were also highly expressed in larval antennae and maxillae, and thus they may play important roles in degradation of plant volatiles or other xenobiotics. Conclusion B. mori has the largest number of insect COE genes characterized to date. Comparative genomic analysis suggested that the gene expansion mainly occurred in silkworm alpha-esterases. Expression evidence indicated that the expanded genes were specifically expressed in midgut, integument and head, implying that these genes may have important roles in detoxifying secondary metabolites of mulberry leaves, contaminants in diet, and odorants. Our results provide some new insights into functions and evolutionary characteristics of COEs in phytophagous insects. PMID:19930670 2009-01-01 116 NSDL National Science Digital Library Learners work in teams to investigate how scientists use physical characteristics to classify living things. First, learners examine drawings of a variety of leaves from different species of oak trees and work to develop the characteristics of a "typical" oak leaf. Then, learners examine samples of oak leaves and work to classify them. This activity uses drawings of leaves, but it could also work with a collection of real leaves. History, American M. 2001-01-01 117 NASA Astrophysics Data System (ADS) This paper focuses on the problem of a configuration providing complete nutrition for humans in a Controlled Ecological Life Support System (CELSS) used in space bases.
The possibility of feeding silkworms to provide high-quality edible animal protein for taikonauts during long-term spaceflights and lunar-based missions was investigated from several aspects, including the nutrition structure of silkworms, feeding method, processing methods, feeding equipment, growing conditions, and the changes in space environmental conditions caused by the silkworms. The originally inedible silk is also regarded as a protein source. A possible process for producing edible silk protein is put forward in this paper. After being processed, the silk can be converted to edible protein for humans. The conclusion provides a promising approach to solving the protein supply problem for taikonauts living in space during an extended exploration period. Yang, Yunan; Tang, Liman; Tong, Ling; Liu, Hong 2009-04-01 118 PubMed Insect cytokine paralytic peptide (PP) upregulates the expression of immune-related genes and contributes to host defense in the silkworm Bombyx mori. The present findings demonstrated that PP promotes nitric oxide (NO) production and induces the expression of NO synthase. A pharmacologic NO synthase inhibitor suppressed the PP-dependent (i) induction of immune-related genes, (ii) activation of p38 mitogen-activated protein kinase, and (iii) killing delay of silkworm larvae by Staphylococcus aureus. The upstream mechanism of NO synthesis in insect immunity has been unknown, and the present results suggest for the first time that an insect cytokine induces NO and contributes to self-defense. PMID:23178406 Ishii, Kenichi; Adachi, Tatsuo; Hamamoto, Hiroshi; Oonishi, Tadahiro; Kamimura, Manabu; Imamura, Katsutoshi; Sekimizu, Kazuhisa 2013-03-01 119 PubMed Central Bombyx mori cypovirus is a major pathogen which causes significant losses in silkworm cocoon harvests because the virus particles are embedded in micrometer-sized protein crystals called polyhedra and can remain infectious in harsh environmental conditions for years.
However, the remarkable stability of polyhedra can be exploited in slow-release carriers of cytokines for tissue engineering. Here we show complete healing of critical-sized bone defects by bone morphogenetic protein-2 (BMP-2) encapsulated in polyhedra. Although absorbable collagen sponge (ACS) safely and effectively delivers recombinant human BMP-2 (rhBMP-2) into healing tissue, the current therapeutic regimens release rhBMP-2 at an initially high rate after which the rate declines rapidly. ACS impregnated with BMP-2 polyhedra had enough osteogenic activity to promote complete healing in critical-sized bone defects, but ACS with a high dose of rhBMP-2 showed incomplete bone healing, indicating that polyhedral microcrystals containing BMP-2 promise to advance the state of the art of bone healing. PMID:23226833 Matsumoto, Goichi; Ueda, Takayo; Shimoyama, Junko; Ijiri, Hiroshi; Omi, Yasushi; Yube, Hisato; Sugita, Yoshihiko; Kubo, Katsutoshi; Maeda, Hatsuhiko; Kinoshita, Yukihiko; Arias, Duverney Gaviria; Shimabukuro, Junji; Kotani, Eiji; Kawamata, Shin; Mori, Hajime 2012-01-01 120 PubMed Central Objectives This study aimed to investigate the profile of sensitization to silkworm moth (Bombyx mori) and 9 other common inhalant allergens among patients with allergic diseases in southern China. Methods A total of 175 patients were tested for serum sIgE against silkworm moth in addition to combinations of other allergens: Dermatophagoides pteronyssinus, Dermatophagoides farinae, Blomia tropicalis, Blattella germanica, Periplaneta americana, cat dander, dog dander, Aspergillus fumigatus and Artemisia vulgaris by using the ImmunoCAP system. Correlation between sensitization to silkworm moth and to the other allergens was analyzed. Results Of the 175 serum samples tested, 86 (49.14%) were positive for silkworm moth sIgE.
With high concordance rates, these silkworm moth-sensitized patients were concomitantly sensitized to Dermatophagoides pteronyssinus (94.34%), Dermatophagoides farinae (86.57%), Blomia tropicalis (93.33%), Blattella germanica (96.08%), and Periplaneta americana (79.41%). Moreover, there was a correlation in serum sIgE level between silkworm moth and Dermatophagoides pteronyssinus (r = 0.518), Dermatophagoides farinae (r = 0.702), Blomia tropicalis (r = 0.701), Blattella germanica (r = 0.878), and Periplaneta americana (r = 0.531) among patients co-sensitized to silkworm moth and each of these five allergens. Conclusion In southern Chinese patients with allergic diseases, we showed a high prevalence of sensitization to silkworm moth, and a co-sensitization between silkworm moth and five other common inhalant allergens. Further serum inhibition studies are warranted to verify whether cross-reactivity exists among these allergens. PMID:24787549 Wei, Nili; Huang, Huimin; Zeng, Guangqiao 2014-01-01 121 Three molecular forms of prothoracicotropic hormone were isolated from the head of the adult silkworm, Bombyx mori, and the amino acid sequence of 19 amino acid residues in the amino terminus of these prothoracicotropic hormones was determined. These residues exhibit significant homology with insulin and insulin-like growth factors. Hiromichi Nagasawa; Hiroshi Kataoka; Akira Isogai; Saburo Tamura; Akinori Suzuki; Hironori Ishizaki; Akira Mizoguchi; Yuko Fujiwara; Atsushi Suzuki 1984-01-01 122 We have determined the complete amino acid sequence of 4K-PTTH-II, one of three forms of the Mr 4400 prothoracicotropic hormone of the silkworm Bombyx mori, active to brainless pupae of Samia cynthia ricini. Like vertebrate insulin, it consists of two nonidentical peptide chains (A and B chains). The A chain consists of 20 amino acid residues.
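The per-allergen coefficients reported in the sensitization study above are Pearson correlation coefficients between paired serum sIgE measurements. A minimal sketch of that computation, using invented sIgE-like values rather than the study's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented sIgE-like values (kU/L) for two allergens, illustration only.
a = [0.4, 1.2, 3.5, 10.0, 25.0]
b = [0.6, 1.0, 4.1, 9.2, 23.0]
print(round(pearson_r(a, b), 3))
```

A value near 1 indicates that patients with high sIgE to one allergen tend to have proportionally high sIgE to the other, which is how co-sensitization strength is summarized in the study.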
The B chain is Hiromichi Nagasawa; Hiroshi Kataoka; Akira Isogai; Saburo Tamura; Akinori Suzuki; Akira Mizoguchi; Yuko Fujiwara; Atsushi Suzuki; Susumu Y. Takahashi; Hironori Ishizaki 1986-01-01 123 The effects of starvation and feeding on the release of bombyxin, a peptide of insulin superfamily in insects, from the larval brain of the silkworm Bombyx mori were investigated. Following starvation, the bombyxin titer in the hemolymph of larvae decreased, whereas its content in the brain increased. On the other hand, refeeding of the starved larvae resulted in an increase Makoto Masumura; Shin'Ichiro Satake; Hironao Saegusa; Akira Mizoguchi 2000-01-01 124 To estimate the size of the haploid genome of the silkworm, Bombyx mori (Lepidoptera), amounts of Feulgen-DNA staining in individual nuclei of primary spermatocytes, spermatids, maturing sperm, and larval or pupal hemocytes were determined with an integrating microdensitometer and compared with the Feulgen-DNA levels found for chicken erythrocyte nuclei, or the sperm and erythrocyte nuclei of Xenopus laevis that were Ellen M. Rasch 1974-01-01 125 Diapause hormone (DH) is a neuropeptide hormone which is secreted from the suboesophageal ganglion (SG) and is responsible for induction of embryonic diapause of the silkworm, Bombyx mori. DH is isolated from SGs and determined to be a 24 amino acid peptide amide. The cDNA encodes the polyprotein precursor from which DH, pheromone biosynthesis activating neuropeptide (PBAN) and three other Okitsugu Yamashita 1996-01-01 126 THE concentration of protein in the blood of insects changes remarkably in the course of metamorphosis1. I have shown that the concentration of blood protein increases after the middle period of the last larval instar2 in the silkworm. 
It is not yet clear, however, what organ is concerned in the synthesis of blood protein in larval stage, although there are Hajime Shigematsu 1958-01-01 127 IN the course of biochemical studies1 on the embryonic diapause of the Bombyx silkworm, it has been shown that the glycogen content of the egg decreases markedly at the onset of diapause and reaches the lowest level at about thirty days after oviposition. When diapause is broken by cold treatment, glycogen increases progressively even at low temperature and regains the Haruo Chino 1957-01-01 128 The expression of cecropin and lysozyme genes is induced in response to bacterial peptidoglycan in the fat body of the silkworm, Bombyx mori. Specific inhibitors of either phospholipase A2, cyclooxygenase or lipoxygenase significantly inhibit the induction of the immune genes both in vivo and in cultured fat body as detected by means of Northern hybridization. Arachidonic acid injected into the Isao Morishima; Yoshiaki Yamano; Kenji Inoue; Noriyuki Matsuo 1997-01-01 129 PubMed Digital gene expression (DGE) was performed to investigate the gene expression profiles of 4008 and p50 silkworm strains at 48 h after oral infection with BmCPV. 3,668,437 clean tags were identified in the BmCPV-infected p50 silkworms and 3,540,790 clean tags in the control p50. By contrast, 4,498,263 clean tags were identified in the BmCPV-infected 4008 silkworms and 4,164,250 clean tags in the control 4008. A total of 691 differentially expressed genes were detected in the infected 4008 DGE library and 185 were detected in the infected p50 DGE library, respectively. The expression profiles identified some important differentially expressed genes involved in signal transduction, enzyme activity and apoptotic changes, some of which were verified using quantitative real-time PCR (qRT-PCR). These results provide important clues on the molecular mechanism of BmCPV invasion and resistance mechanism of silkworms against BmCPV infection. 
PMID:24525400 Gao, Kun; Deng, Xiang-Yuan; Qian, He-Ying; Qin, Guang-Xing; Hou, Cheng-Xiang; Guo, Xi-Jie 2014-04-15 130 PubMed Identification of the resistance mechanism of insects against Bacillus thuringiensis Cry1A toxin is becoming an increasingly challenging task. This fact highlights the need for establishing new methods to further explore the molecular interactions of Cry1A toxin with insects and the receptor-binding region of Cry1A toxins for their wider application as biopesticides and a gene source for gene-modified crops. In this contribution, a quantum dot-based near-infrared fluorescence imaging method has been applied for direct dynamic tracking of the specific binding of the Cry1A toxins, Cry1Aa and Cry1Ac, to the midgut tissue of silkworm. The in vitro fluorescence imaging displayed the higher binding specificity of Cry1Aa-QD probes compared to Cry1Ac-QD to the brush border membrane vesicles of midgut from silkworm. The in vivo imaging demonstrated that more Cry1Aa-QDs binding to silkworm midgut could be effectively and distinctly monitored in living silkworms. Furthermore, frozen section analysis clearly indicated the broader receptor-binding region of Cry1Aa compared to that of Cry1Ac in the midgut part. These observations suggest that the insecticidal activity of Cry toxins may depend on the receptor-binding sites, and this noninvasive, visual near-infrared fluorescence imaging could provide a new avenue to study the resistance mechanism to maintain the insecticidal activity of B. thuringiensis toxins. PMID:24252542 Li, Na; Wang, Jing; Han, Heyou; Huang, Liang; Shao, Feng; Li, Xuepu 2014-02-15 131 Living organisms require mechanisms regulating reactive oxygen species (ROS) such as hydrogen peroxide and superoxide anion. Catalase is one of the regulatory enzymes and facilitates the degradation of hydrogen peroxide to oxygen and water. Biochemical information on an insect catalase is, however, insufficient.
Using mRNA from fat body of the silkworm, Bombyx mori, a cDNA encoding a putative catalase was Kohji Yamamoto; Yutaka Banno; Hiroshi Fujii; Fumio Miake; Nobuhiro Kashige; Yoichi Aso 2005-01-01 132 PubMed In the male silkmoth Antheraea polyphemus, the formation of the side branches of the quadripectinate antennal flagellum was disturbed by an experimental manipulation. Normally the side branches develop in the pupa via deep incisions which proceed from the periphery towards the centerline of the leaf-shaped antennal anlage. Local removal of the uppermost, pigmented cuticular layers of the pupal antennal pocket ('cuticular window') led to a local standstill of branch formation in the manipulated region of the pocket, most probably caused by increased evaporation of water through the remaining layers of meso- and endocuticle. These parts of the antenna retained an unbranched, plate-like shape. This early morphogenetic stage was conserved by the secretion of antennal cuticle. Besides cuticle formation, development of sensilla is not impeded by the manipulation. In the plate-shaped regions, the initial pattern formed by the sensilla in the antennal epidermis is preserved, because they maturate at their birth places. In the individual segments, the pattern of sensilla shows a mirror-like symmetry with respect to the segmental midline. From the edge to the midline, we found large s. trichodea, followed by small s. trichodea, s. basiconica, and s. coeloconica on the dorsal side whereas on the ventral side, there are only large s. trichodea and s. campaniformia. We conclude that the development of the featherlike antennal shape on the one hand and the development of sensilla and cuticle on the other hand are independent processes. PMID:18621301 Steiner, C; Keil, T A 1995-06-01 133 PubMed The sensilla trichodea of the silkmoth Antheraea polyphemus are innervated by three types of receptor neurons each responding specifically to one of three pheromone components. 
The sensillum lymph of these sensilla surrounding the sensory dendrites contains three different types of pheromone-binding proteins (PBPs) in high concentrations. The sensilla trichodea of the silkmoth Bombyx mori are supplied by two receptor neurons each tuned specifically to one of the two pheromone components bombykol and bombykal, but only one type of PBP has been found so far in these sensilla. Recombinant PBPs of both silkmoth species in various combinations with pheromone components were applied to the receptor neurons via tip-opened sensilla during electrophysiological recordings. Over a fairly broad range of pheromone concentrations, the responses of the receptor neurons depended on both the pheromone component and the type of PBP. Therefore, the PBPs appear to contribute to the excitation of the receptor neurons. Furthermore, bombykal in combination with the expressed PBP of B. mori failed to activate the corresponding receptor neuron of B. mori, but did so if combined with one of the PBPs of A. polyphemus. Therefore, a still unknown binding protein involved in bombykal transport might be present in B. mori. PMID:14977808 Pophof, Blanka 2004-02-01 134 PubMed Pheromone-binding proteins (PBPs), located in the sensillum lymph of pheromone-responsive antennal hairs, are thought to transport the hydrophobic pheromones to the chemosensory membranes of olfactory neurons. It is currently unclear what role PBPs may play in the recognition and discrimination of species-specific pheromones. We have investigated the binding properties and specificity of PBPs from Mamestra brassicae (MbraPBP1), Antheraea polyphemus (ApolPBP1), Bombyx mori (BmorPBP), and a hexa-mutant of MbraPBP1 (Mbra1-M6), mutated at residues of the internal cavity to mimic that of BmorPBP, using the fluorescence probe 1-aminoanthracene (AMA). AMA binds to MbraPBP1 and ApolPBP1; however, no binding was observed with either BmorPBP or Mbra1-M6.
The latter result indicates that relatively limited modifications to the PBP cavity actually interfere with AMA binding, suggesting that AMA binds in the internal cavity. Several pheromones are able to displace AMA from the MbraPBP1- and ApolPBP1-binding sites, without, however, any evidence of specificity for their physiologically relevant pheromones. Moreover, some fatty acids are also able to compete with AMA binding. These findings bring into doubt the currently held belief that all PBPs are specifically tuned to distinct pheromonal compounds. PMID:11274212 Campanacci, V; Krieger, J; Bette, S; Sturgis, J N; Lartigue, A; Cambillau, C; Breer, H; Tegoni, M 2001-06-01 135 PubMed Odorant-degrading enzymes have been postulated to participate in the fast deactivation of insect pheromones. These proteins are expressed specifically in the sensillar lymph of insect antennae in such low amounts that, hitherto, isolation and protein-based cDNA cloning has not been possible. Using degenerate primers based on conserved amino acid sequences of insect carboxylesterases and juvenile hormone esterases, we were able to amplify partial cDNA fragments, which were then used for the design of gene-specific primers for RACE. This bioinformatics approach led us to the cloning of cDNAs, encoding a putative odorant-degrading enzyme (Apol-ODE) and a putative integumental esterase (Apol-IE) from the wild silkmoth, Antheraea polyphemus. Apol-ODE had a predicted molecular mass of 59,994 Da, pI of 6.63, three potential N-glycosylation sites, and a putative catalytic site Ser characterized by the sequence Gly(195)-Glu-Ser-Ala-Gly-Ala. Apol-IE gave calculated molecular mass of 61,694 Da, pI of 7.49, two potential N-glycosylation sites, and a putative active site with the sequence Gly(214)-Tyr-Ser-Ala-Gly. The transcript of Apol-ODE was detected by RT-PCR in male antennae and branches (sensillar tissues), but not in female antennae and other control tissues. 
Apol-IE was detected in male and female antennae as well as legs. PMID:12429129 Ishida, Yuko; Leal, Walter S 2002-12-01 136 PubMed In this study, the full-length cDNA of a peptidoglycan recognition protein named BmPGRP-S3 was identified from the silkworm, Bombyx mori, by rapid amplification of cDNA ends. The cDNA is 807 bp long and comprises a 5'-untranslated region (UTR) of 112 bp and a 3'-UTR of 92 bp that includes a poly-adenylation signal sequence (AATAAA) and a poly(A) tail. The longest open reading frame (ORF) of BmPGRP-S3 is 603 bp and encodes a polypeptide of 200 amino acids with a predicted molecular weight of 22.3 kDa, including a PGRP domain. Sequence similarity and phylogenetic analysis indicated that BmPGRP-S3 belongs to the group of insect PGRPs and is closest to BmPGRP-S4, with 68% identity. Fluorescent quantitative real-time PCR revealed that BmPGRP-S3 mRNA transcripts were present in all tissues but were highest in the midgut. In silkworm larvae infected with B. mori cytoplasmic polyhedrosis virus (BmCPV), the relative expression level of BmPGRP-S3 was upregulated. The DNA segment encoding the mature BmPGRP-S3 peptide was inserted into the expression plasmid pET-28a(+) to construct a recombinant expression plasmid. Western blot results revealed that mature BmPGRP-S3 could be detected in the hemolymph and midgut, which are the most important immune tissues in the silkworm. All these results suggested that BmPGRP-S3 may play an important role in the immune response of the silkworm to BmCPV infection and provided helpful information for further study of the function of BmPGRP-S3 in the silkworm. PMID:25218236 Gao, Kun; Deng, Xiang-Yuan; Qian, He-Ying; Qin, Guang-Xing; Hou, Cheng-Xiang; Guo, Xi-Jie 2014-11-15 137 PubMed Central Background Lepidoptera insects have a novel development process comprising several metamorphic stages during their life cycle compared with vertebrate animals.
Unlike most Lepidoptera insects that live on nectar during the adult stage, adults of the silkworm Bombyx mori do not eat anything and die after egg-laying. In addition, the midguts of Lepidoptera insects produce antimicrobial proteins during the wandering stage, when the larval tissues undergo numerous changes. The exact mechanisms responsible for these phenomena remain unclear. Principal Findings We used the silkworm as a model and performed genome-wide transcriptional profiling of the midgut between the feeding stage and the wandering stage. Many genes concerned with metabolism, digestion, and ion and small molecule transportation were down-regulated during the wandering stage, indicating that the wandering-stage midgut loses its normal functions. Microarray profiling, qRT-PCR and western blot confirmed the production of antimicrobial proteins (peptides) in the midgut during the wandering stage. Different genes of the immune deficiency (Imd) pathway were up-regulated during the wandering stage. However, some key genes belonging to the Toll pathway showed no change in their transcription levels. Unlike that of the butterfly Pachliopta aristolochiae, the midgut of the silkworm moth retains a layer of cells, indicating that midgut development from the wandering stage onward is unusual. Cell division in the midgut was observed only for a short time during the wandering stage. However, there was extensive cell apoptosis before pupation. The imbalance of cell division and apoptosis probably drives the continuous degeneration of the midgut in the silkworm from the wandering stage onward. Conclusions This study provides insight into the mechanism of the degeneration of the silkworm midgut and the production of innate immunity-related proteins during the wandering stage. The imbalance of cell division and apoptosis induces irreversible degeneration of the midgut. The Imd pathway probably regulates the production of antimicrobial peptides in the midgut during the wandering stage.
PMID:22937093 Xiao, Guohua; Yang, Bing; Zhang, Jie; Li, Xuquan; Guan, Jingmin; Shao, Qimiao; Beerntsen, Brenda T.; Zhang, Peng; Wang, Chengshu; Ling, Erjun 2012-01-01 138 A 50-kDa hemolymph protein, with strong affinity for the cell wall of Gram(-) bacteria, was purified from the hemolymph of the silkworm, Bombyx mori. The cDNA encoding this Gram(-) bacteria-binding protein (GNBP) was isolated from an immunized silkworm fat body cDNA library and sequenced. Comparison of the deduced amino acid sequence with known sequences revealed that GNBP contained a region Won-Jae Lee; Jiing-Dwan Lee; Vladimir V. Kravchenko; Richard J. Ulevitch; Paul T. Brey 1996-01-01 139 The silkworm is one of the most attractive hosts for large-scale production of eukaryotic proteins as well as recombinant baculoviruses for gene transfer to mammalian cells. The bacmid system of Autographa californica nuclear polyhedrosis virus (AcNPV) has already been established and widely used. However, AcNPV is unable to infect the silkworm. We developed the first practical Bombyx mori Tomoko Motohashi; Tsukasa Shimojima; Tatsuo Fukagawa; Katsumi Maenaka; Enoch Y. Park 2005-01-01 140 In order to obtain an overall view of the silkworm response to Bombyx mori cytoplasmic polyhedrosis virus (BmCPV) infection, a microarray system comprising 22,987 oligonucleotide 70-mer probes was employed to compare differentially expressed genes in the midguts of BmCPV-infected and normal silkworm larvae. At 72 h post-inoculation, 258 genes exhibited at least 2.0-fold differences in expression level. Out of these, 135 genes Ping Wu; Xiu Wang; Guang-xing Qin; Ting Liu; Yun-Feng Jiang; Mu-Wang Li; Xi-Jie Guo 2011-01-01 141 E-print Network Managing acute oak decline Practice Note FCPN015 April 2010 Pedunculate oak. 'Dieback' or 'decline' is the name used to describe poor health in oak trees.
The symptoms of oak decline have affected oak trees in Britain for most of the past century. Both species of oak 142 PubMed SNMP-1 (sensory neuron membrane protein 1) is an olfactory-specific membrane-bound protein which is homologous with the CD36 receptor family. Previous light-level immunocytochemical studies suggested that SNMP-1 was localized in the dendrites and distal cell body of sex-pheromone-specific olfactory receptor neurons (ORN); these studies further suggested SNMP-1 was expressed in only one of two to three neurons in male-specific pheromone-sensitive trichoid sensilla. To better understand the expression and localization of SNMP-1, an immunocytochemical study was performed using electron microscopy to visualize the distribution of SNMP-1 among the neurons of several classes of olfactory sensilla of both male and female antennae of the silkmoth Antheraea polyphemus. SNMP-1 antigenicity was primarily restricted to the receptive dendritic membranes of ORNs of all sensilla types examined and was observed in cytosolic granules, but not plasma membranes, of the cell soma. Mean labeling densities ranged from 1 to 16 gold particles per micrometer of dendrite circumference; dendrites of trichoid and intermediate sensilla showed significantly higher labeling densities than those of basiconic sensilla. Larger dendrites of trichoid sensilla showed significantly higher mean labeling densities (13-16/micron) than smaller-diameter dendrites (3-7/micron). Immunofluorescence studies using baculovirus-expressed SNMP-1 and multiphoton laser scanning microscopy (MPLSM) indicated that rSNMP-1, which was post-translationally processed to the in vivo molecular weight, was inserted into the plasma membrane in a topography presenting extracellular epitopes.
These studies suggest SNMP-1 is a common feature of the ORNs, is asymmetrically expressed among functionally distinct neurons, and possesses a topography which permits interaction with components of the extracellular sensillum lymph. PMID:11320659 Rogers, M E; Steinbrecht, R A; Vogt, R G 2001-03-01 143 PubMed Pheromone-binding proteins (PBPs) located in the antennae of male moth species play an important role in olfaction. They are carrier proteins, believed to transport volatile hydrophobic pheromone molecules across the aqueous sensillar lymph to the membrane-bound G protein-coupled olfactory receptor proteins. The roles of PBPs in molecular recognition and the mechanisms of pheromone binding and release are poorly understood. Here, we report the NMR structure of a PBP from the giant silk moth Antheraea polyphemus. This is the first structure of a PBP with specific acetate-binding function in vivo. The protein consists of nine alpha-helices: alpha1a (residues 2-5), alpha1b (8-12), alpha1c (16-23), alpha2 (27-34), alpha3a (46-52), alpha3b (54-59), alpha4 (70-79), alpha5 (84-100) and alpha6 (107-125), held together by three disulfide bridges: 19-54, 50-108 and 97-117. A large hydrophobic cavity is located inside the protein, lined with side-chains from all nine helices. The acetate-binding site is located at the narrow end of the cavity formed by the helices alpha3b and alpha4. The pheromone can enter this cavity through an opening between the helix alpha1a, the C-terminal end of the helix alpha6, and the loop between alpha2 and alpha3a. We suggest that Trp37 may play an important role in the initial interaction with the ligand. Our analysis also shows that Asn53 plays the key role in recognition of acetate pheromones specifically, while Phe12, Phe36, Trp37, Phe76, and Phe118 are responsible for non-specific binding, and Leu8 and Ser9 may play a role in ligand chain length recognition. 
PMID:15003458 Mohanty, Smita; Zubkov, Sergey; Gronenborn, Angela M 2004-03-19 144 PubMed The imaginal antenna of the male silkmoth Antheraea polyphemus is a feather-shaped structure consisting of about 30 flagellomeres, each of which gives off two pairs of side branches. During the pupal stage (lasting for 3 weeks), the antenna develops from a leaf-shaped, flattened epidermal sac ('antennal blade') via two series of incisions which proceed from the periphery towards the prospective antennal stem. The development of the peripheral nervous system was studied by staining the neurons with an antibody against horseradish peroxidase as well as by electron microscopy. The epithelium is subdivided in segmentally arranged sensillogenic regions alternating with non-sensillogenic regions. Immediately after apolysis, clusters consisting of 5 sensory neurons each and belonging to the prospective sensilla chaetica can be localized at the periphery of the antennal blade in the sensillogenic regions. During the first day following apolysis, the primordia of ca. 70,000 olfactory sensilla arise in the sensillogenic regions. Axons from their neurons are collected in segmentally arranged nerves which run towards the CNS along the dorsal as well as the ventral epidermis and are enveloped by a glial sheath. This 'primary innervation pattern' is completed within the second day after apolysis. A first wave of incisions ('primary incisions') subdivide the antennal blade into segmental 'double branches' without disturbing the innervation pattern. Then a second wave of incisions ('secondary incisions') splits the double branches into single antennal branches. During this process, the segmental nerves and their glial sheaths are disintegrated. The axons are then redistributed into single branch nerves while their glial sheath is reconstituted, forming the 'secondary', or adult, innervation pattern. 
The epidermis is backed by a basal lamina which is degraded after outgrowth of the axons, but is reconstituted after formation of the single antennal branches. PMID:18621300 Steiner, C; Keil, T A 1995-06-01 145 E-print Network This publication explains how to minimize damage to live oak trees by the oak leaf roller and an associated caterpillar species, which occur throughout Texas but are most damaging in the Hill Country and South Texas.... Drees, Bastiaan M. 2004-03-26 146 PubMed Synthesis and secretion of the insect molting hormone ecdysteroid in the prothoracic glands (PGs) are stimulated by the prothoracicotropic hormone (PTTH) secreted by the brain. Bombyxins, insulin-like peptides of the silkworm Bombyx mori, show prothoracicotropic activity when administered to the saturniid silkworm Samia cynthia ricini, but they are inactive in B. mori itself. Recently, the genes for the bombyxin homologs of S. cynthia ricini (referred to as Samia bombyxin-related peptides, SBRPs) were cloned. To examine the prothoracicotropic activity of SBRPs on S. cynthia ricini, we synthesized two representative molecules, SBRP-A1 and -B1. They promoted pupa-to-adult development with ED50 values of 50 and 10 ng/pupa (EC50 values of 5 and 1 nM), respectively. PMID:10600544 Nagata, K; Maruyama, K; Kojima, K; Yamamoto, M; Tanaka, M; Kataoka, H; Nagasawa, H; Isogai, A; Ishizaki, H; Suzuki, A 1999-12-20 147 PubMed Central Juvenile hormone (JH) coordinates with 20-hydroxyecdysone (20E) to regulate larval growth and molting in insects. However, little is known about how this cooperative control is achieved during larval stages. Here, we induced silkworm superlarvae by applying the JH analogue (JHA) methoprene and used a microarray approach to survey the mRNA expression changes in response to JHA in the silkworm integument.
We found that JHA application significantly increased the expression levels of most genes involved in basic metabolic processes and protein processing and decreased the expression of genes associated with oxidative phosphorylation in the integument. Several key genes involved in the pathways of insulin/insulin-like growth factor signaling (IIS) and 20E signaling were also upregulated after JHA application. Taken together, we suggest that JH may mediate the nutrient-dependent IIS pathway by regulating various metabolic pathways and further modulate 20E signaling. PMID:24809046 Cheng, Daojun; Peng, Jian; Meng, Meng; Wei, Ling; Kang, Lixia; Qian, Wenliang; Xia, Qingyou 2014-01-01 148 E-print Network Contemporary California Indians, Oaks, and Sudden Oak Death (Phytophthora ramorum) Beverly R. the symbolic context of the foods in terms of ecological and social relationships that connect people to places of plant species affected by Sudden Oak Death (Phytophthora ramorum). An overview follows of the impact Standiford, Richard B. 149 PubMed Central Host-pathogen interactions are complex relationships, and a central challenge is to reveal the interactions between pathogens and their hosts. Bacillus bombysepticus (Bb), which produces spores and parasporal crystals, was first isolated from the corpses of infected silkworms (Bombyx mori). Natural Bb infection of the silkworm causes an acute fuliginosa septicaemia that generally kills silkworm larvae within one day in the hot and humid season. This silkworm pathogen can therefore be used to investigate host responses to infection. Gene expression profiling of whole silkworm larvae at four time-points after Bb infection was performed to gain insight into the mechanism of the Bb-associated whole-body host response. A genome-wide survey of host genes showed that many genes and pathways are modulated after infection.
GO analysis of the induced genes indicated that their functions could be divided into 14 categories. KEGG pathway analysis identified six types of basal metabolic pathways that were regulated, including genetic information processing and transcription, carbohydrate metabolism, amino acid and nitrogen metabolism, nucleotide metabolism, metabolism of cofactors and vitamins, and xenobiotic biodegradation and metabolism. Similar to Bacillus thuringiensis (Bt), Bb can also induce a silkworm poisoning-related response. In this process, genes encoding midgut peritrophic membrane proteins, aminopeptidase N receptors and sodium/calcium exchange proteins were modulated. For the first time, we found that Bb infection upregulated many genes involved in the juvenile hormone synthesis and metabolism pathway. Bb also triggered host immune responses, including the cellular immune response and the serine protease cascade melanization response. Real-time PCR analysis showed that Bb induces a systemic immune response in the silkworm, mainly via the Toll pathway. Antimicrobial peptides (AMPs) of the Attacin, Lebocin, Enbocin, Gloverin and Moricin families were upregulated at 24 hours post-infection. PMID:19956592 Huang, Lulin; Cheng, Tingcai; Xu, Pingzhen; Cheng, Daojun; Fang, Ting; Xia, Qingyou 2009-01-01 150 PubMed Japanese agricultural scientist Toyama Kametaro's report about the Mendelian inheritance of silkworm cocoon color in Studies on the Hybridology of Insects (1906) spurred changes in Japanese silk production and thrust Toyama and his work into a scholarly exchange with American entomologist Vernon Kellogg. Toyama's work, based on research conducted in Japan and Siam, came under international scrutiny at a time when analyses of inheritance flourished after the "rediscovery" of Mendel's laws of heredity in 1900.
The hybrid silkworm studies in Asia attracted the attention of Kellogg, who was concerned with how experimental biology would be used to study the causes of natural selection. He challenged Toyama's conclusions that Mendelism alone could explain the inheritance patterns of silkworm characters such as cocoon color because they had been subject to hundreds of years of artificial selection, or breeding. This examination of the intersection of Japanese sericulture and American entomology probes how practical differences in scientific interests, societal responsibilities, and silkworm materiality were negotiated throughout the processes of legitimating Mendelian genetics on opposite sides of the Pacific. The ways in which Toyama and Kellogg assigned importance to certain silkworm properties show how conflicting intellectual orientations arose in studies of the same organism. Contestation about Mendelism took place not just on a theoretical level, but the debate was fashioned through each scientist's rationale about the categorization of silkworm breeds and races and what counted as "natural". This further mediated the acceptability of the silkworm not as an experimental organism, but as an appropriately "natural" insect with which to demonstrate laws of inheritance. All these shed light on the challenges that came along with the use of agricultural animals to convincingly articulate new biological principles. PMID:20665229 Onaga, Lisa 2010-01-01 151 Antiviral assays of chemically and biologically synthesized silver nanoparticles were made against BmNPV (Bombyx mori Nuclear Polyhedrosis Virus). Reduction of silver ions by sodium citrate and Spirulina platensis led to the formation of spherical silver nanoparticles of 40–60 and 7–16 nm size, respectively. Single cell protein (Spirulina platensis)-synthesized silver nanoparticles showed the strongest antiviral activity. Immunological studies made on the silkworm Bombyx K. Govindaraju; S. Tamilselvan; V. Kiruthiga; G.
Singaravelu 152 PubMed Central Microsporidia have attracted much attention because they infect a variety of species ranging from protists to mammals, including immunocompromised patients with AIDS or cancer. Aside from the study on Nosema ceranae, few works have focused on elucidating the mechanisms of the host response to microsporidia infection. Nosema bombycis, the pathogen of silkworm pébrine, causes great economic losses to the silkworm industry. Detailed understanding of the host (Bombyx mori) response to infection by N. bombycis is helpful for prevention of this disease. A genome-wide survey of the gene expression profile at 2, 4, 6 and 8 days post-infection by N. bombycis was performed and results showed that 64, 244, 1,328 and 1,887 genes were induced, respectively. Up to 124 genes, which are involved in basal metabolism pathways, were modulated. Notably, B. mori genes that play a role in juvenile hormone synthesis and metabolism pathways were induced, suggesting that the host may accumulate JH as a response to infection. Interestingly, N. bombycis can inhibit the silkworm serine protease cascade melanization pathway in hemolymph, which may be due to the secretion of serpins by the microsporidium. N. bombycis also induced up-regulation of several cellular immune factors, among which CTL11 has been suggested to be involved in both spore recognition and immune signal transduction. Microarray and real-time PCR analysis indicated the activation of silkworm Toll and JAK/STAT pathways. The notable up-regulation of antimicrobial peptides, including gloverins, lebocins and moricins, strongly indicated that antimicrobial peptide defense mechanisms were triggered to resist the invasive microsporidia. An analysis of N. bombycis-specific response factors suggested their important roles in anti-microsporidia defense. Overall, this study primarily provides insight into the potential molecular mechanisms for the host-parasite interaction between B. mori and N.
bombycis and may provide a foundation for further work on host-parasite interaction between insects and microsporidia. PMID:24386341 Pan, Guoqing; Li, Zhihong; Han, Bing; Xu, Jinshan; Lan, Xiqian; Chen, Jie; Yang, Donglin; Chen, Quanmei; Sang, Qi; Ji, Xiaocun; Li, Tian; Long, Mengxian; Zhou, Zeyang 2013-01-01 153 Thirteen diverse strains of the silkworm Bombyx mori were analysed using the simple sequence repeat anchored polymerase chain reaction (SSR-anchored PCR) or Inter-SSR-PCR (ISSR-PCR). A set of four 5′-anchored and two 3′-anchored repeat primers amplified a total of 239 bands, out of which 184 (77%) were polymorphic. The 5′-anchored primers revealed more distinct polymorphic markers than the 3′-anchored primers and K. DAMODAR REDDY; J. NAGARAJU; E. G. ABRAHAM 1999-01-01 154 Silkworm larvae plasma (SLP) reagent is activated by peptidoglycan (PG), a fragment of both the gram-positive and gram-negative bacterial cell wall, as well as β-glucan (BG), a component of fungi. It is possible to measure contamination by gram-positive bacteria quantitatively by combining the conventional limulus amebocyte lysate (LAL) and PG measurement methods. Therefore, a more highly accurate analysis of dialysate K. Tsuchida; Y. Takemoto; S. Yamagami; H. Edney; M. Niwa; M. Tsuchiya; T. Kishimoto; S. Shaldon 1997-01-01 155 Background: Insects use volatile organic molecules to communicate messages with remarkable sensitivity and specificity. In one of the most studied systems, female silkworm moths (Bombyx mori) attract male mates with the pheromone bombykol, a volatile 16-carbon alcohol. In the male moth's antennae, a pheromone-binding protein conveys bombykol to a membrane-bound receptor on a nerve cell. The structure of the pheromone-binding Benjamin H Sandler; Larisa Nikonova; Walter S Leal; Jon Clardy 2000-01-01 156 PubMed The silkworm is a poikilothermic animal, whose growth and development are significantly influenced by environmental temperature.
To identify genes and metabolic pathways involved in the heat-stress response, digital gene expression analysis was performed on the midgut of the thermotolerant silkworm variety '932' and thermosensitive variety 'HY' after exposure to high temperature (932T and HYT). Deep sequencing yielded 6,211,484, 5,898,028, 5,870,395 and 6,088,303 reads for the 932, 932T, HY and HYT samples, respectively. The annotated genes associated with these tags numbered 4357, 4378, 4296 and 4658 for the 932, 932T, HY and HYT samples, respectively. In the HY-vs-932, 932-vs-932T, and HY-vs-HYT comparisons, 561, 316 and 281 differentially expressed genes were identified, which could be assigned to 179, 140 and 123 biological pathways, respectively. It was found that some biological pathways, including oxidative phosphorylation and those related to glucose and lipid metabolism, are greatly affected by high temperature, which may lead to a decrease in the ingestion of fresh mulberry. When subjected to an early period of continuous heat stress, HSP genes, such as HSP19.9, HSP23.7, HSP40-3, HSP70, HSP90 and HSP70 binding protein, are up-regulated but are then reduced after 24 h, and the thermotolerant '932' strain has higher levels of mRNA of some HSPs, except HSP70, than the thermosensitive variety during continuous high-temperature treatment. It is suggested that HSPs and the levels of their expression may play important roles in the resistance to high-temperature stress among silkworm varieties. This study has generated important reference tools that can be used to further analyze the mechanisms that underlie thermotolerance differences among silkworm varieties.
PMID:25046138 Li, Qing Rong; Xiao, Yang; Wu, Fu Quan; Ye, Ming Qiang; Luo, Guo Qing; Xing, Dong Xu; Li, Li; Yang, Qiong 2014-10-01 157 A series of experiments was conducted to determine whether the interconversion of glycogen and sorbitol at the initiation and termination of diapause in eggs of the silkworm, Bombyx mori, was influenced by low temperatures (5 °C and 1 °C). The conversion of glycogen to sorbitol and glycerol at the initiation of diapause was not affected by exposure to 5 °C and 1 °C. Chilling diapause Toshiharu Furusawa; Masayoshi Shikata; Okitsugu Yamashita 1982-01-01 158 BACKGROUND: MicroRNAs (miRNAs) are expressed by a wide range of eukaryotic organisms, and function in diverse biological processes. Numerous miRNAs have been identified in Bombyx mori, but the temporal expression profiles of miRNAs corresponding to each stage transition over the entire life cycle of the silkworm remain to be established. To obtain a comprehensive overview of the correlation between miRNA Shiping Liu; Liang Zhang; Qibin Li; Ping Zhao; Jun Duan; Daojun Cheng; Zhonghuai Xiang; Qingyou Xia 2009-01-01 159 PubMed Central Here we report the detection and localisation of chitin in the cuticle of the spinning ducts of both the spider Nephila edulis and the silkworm Bombyx mori. Our observations demonstrate that the duct walls of both animals contain chitin notwithstanding totally independent evolutionary pathways of the systems. We conclude that chitin may well be an essential component for the construction of spinning ducts; we further conclude that in both species chitin may indicate the evolutionary origin of the spinning ducts. PMID:24015298 Davies, Gwilym J. G.; Knight, David P.; Vollrath, Fritz 2013-01-01 160 The effect of heterosis was studied in several quantitative traits of clone breed and interbreed silkworm hybrids exposed to electromagnetic irradiation (λ = 1.6 cm, power density 700 W/cm²) during postdiapause embryonic development.
The influence of the type of reproduction on the manifestation of irradiation effects in the next generation was also examined. In hybrids, the resistance to low-intensity high-frequency Ye. A. Boyko; S. V. Sukhanov; V. G. Shakhbazov 2004-01-01 161 BACKGROUND: The major royal jelly proteins/yellow (MRJP/YELLOW) family possesses several physiological and chemical functions in the development of Apis mellifera and Drosophila melanogaster. Each protein of the family has a conserved domain named MRJP. However, there is no report of MRJP/YELLOW family proteins in the Lepidoptera. RESULTS: Using the YELLOW protein sequence of Drosophila melanogaster to BLAST the silkworm EST database, Ai-Hua Xia; Qing-Xiang Zhou; Lin-Lin Yu; Wei-Guo Li; Yong-Zhu Yi; Yao-Zhou Zhang; Zhi-Fang Zhang 2006-01-01 162 NASA Astrophysics Data System (ADS) Most animals have the ability to adapt, to some extent and in different ways, to variations or disturbances in their environment. In our experiments, we forced a silkworm caterpillar to spin two, three or four thin cocoons by taking it out from the cocoon being constructed. The mechanical properties of these cocoons were studied by static tensile tests and dynamic mechanical thermal analysis. Though external disturbances may cause a decrease in the total weight of silk spun by the silkworm, interestingly, a gradual enhancement was found in the mechanical properties of these thin cocoons. Scanning electron microscopy observations of the fractured specimens of the cocoons showed that several different energy dissipation mechanisms occur simultaneously at macro-, meso-, and micro-scales, yielding a superior capacity of cocoons to absorb the energy of possible attacks from the outside and to efficiently protect the pupa against damage.
Through evolution over millions of years, therefore, the silkworm Bombyx mori seems to have gained the ability to adapt to external disturbances and to redesign a new cocoon with an optimized protective function when its first cocoon has been damaged for some reason. Huang, S. Q.; Zhao, H. P.; Feng, X. Q.; Cui, W.; Lin, Z.; Xu, M. Q. 2008-04-01 163 SciTech Connect Guanine nucleotide-binding protein (G protein) coupled receptors (GPCRs) are frequently expressed by a baculovirus expression vector system (BEVS). We recently established a novel BEVS using the bacmid system of Bombyx mori nucleopolyhedrovirus (BmNPV), which is directly applicable for protein expression in silkworms. Here, we report the first example of GPCR expression in silkworms by the simple injection of BmNPV bacmid DNA. Human nociceptin receptor, an inhibitory GPCR, and its fusion protein with the inhibitory G protein alpha subunit (Giα) were both successfully expressed in the fat bodies of silkworm larvae as well as in the BmNPV viral fraction. Its yield was much higher than that from Sf9 cells. The microsomal fractions including the nociceptin receptor fusion, which are easily prepared by centrifugation steps alone, exhibited [³⁵S]GTPγS-binding activity upon specific stimulation by nociceptin. Therefore, this rapid method is easy to use and has a high expression level, and thus will be an important tool for human GPCR production.
Kajikawa, Mizuho; Sasaki, Kaori [Medical Institute of Bioregulation, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka 812-8582 (Japan)]; Wakimoto, Yoshitaro; Toyooka, Masaru [Department of Chemistry and Chemical Biology, Graduate School of Engineering, Gunma University, 1-5-1 Tenjin-cho, Kiryu, Gunma 376-8515 (Japan)]; Motohashi, Tomoko; Shimojima, Tsukasa [National Institute of Genetics, 1111 Yata, Mishima, Shizuoka 411-8540 (Japan)]; Takeda, Shigeki [Department of Chemistry and Chemical Biology, Graduate School of Engineering, Gunma University, 1-5-1 Tenjin-cho, Kiryu, Gunma 376-8515 (Japan)]; Park, Enoch Y. [Laboratory of Biotechnology, Integrated Bioscience Section, Graduate School of Science and Technology, Shizuoka University, 836 Oya, Suruga-ku, Shizuoka, Shizuoka 422-8529 (Japan)]; Maenaka, Katsumi, E-mail: [email protected] [Medical Institute of Bioregulation, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka 812-8582 (Japan)] 2009-07-31 164 PubMed Insect head is comprised of important sensory systems to communicate with internal and external environment and endocrine organs such as brain and corpus allatum to regulate insect growth and development.
To comprehensively understand how all these components act and interact within the head, it is necessary to investigate their molecular basis at the protein level. Here, the spectra of peptides digested from silkworm larval heads were obtained by liquid chromatography tandem mass spectrometry (LC-MS/MS) and were analyzed by bioinformatics methods. In total, 539 proteins were identified at a low false discovery rate (FDR) by searching against an in-house database with the SEQUEST and X!Tandem algorithms, followed by trans-proteomic pipeline (TPP) validation. Forty-three proteins had a theoretical isoelectric point (pI) greater than 10, making them too difficult to separate by two-dimensional gel electrophoresis (2-DE). Four chemosensory proteins, one odorant-binding protein, two diapause-related proteins, and many cuticle proteins, interestingly including pupal cuticle proteins, were identified. The proteins involved in nervous system development, stress response, apoptosis and so forth were related to the physiological status of the head. Pathway analysis revealed that many proteins were highly homologous to human proteins involved in neurodegenerative disease pathways, probably presaging the forthcoming metamorphosis of the silkworm. These data and analysis methods are expected to benefit proteomics research on the silkworm and other insects. PMID:20198493 Li, Jianying; Hosseini Moghaddam, S Hossein; Chen, Xiang; Chen, Ming; Zhong, Boxiong 2010-08-01 165 PubMed Insect haemocytes play significant roles in innate immunity. The silkworm, a lepidopteran species, is often selected as the model for studies into the functions of haemocytes in immunity; however, our understanding of the role of haemocytes remains limited because the lack of haemocyte promoters for transgene expression makes genetic manipulations difficult. In the present study, we aimed to establish transgenic silkworm strains expressing GAL4 in their haemocytes.
First, we identified three genes with strong expression in haemocytes, namely, lp44, Haemocyte Protease 1 (HP1) and hemocytin. Transgenic silkworms expressing GAL4 under the control of the putative promoters of these genes were then established and expression was examined. Although GAL4 expression was not detected in haemocytes of the HP1-GAL4 or hemocytin-GAL4 strains, lp44-GAL4 exhibited a high level of GAL4 expression, particularly in oenocytoids. GAL4 expression was also detected in the midgut but in no other tissues, indicating that GAL4 expression in this strain is mostly oenocytoid-specific. Thus, we have identified a promoter that enables oenocytoid expression of genes of interest. Additionally, the lp44-GAL4 strain could also be used for other types of research, such as the functional analysis of genes in oenocytoids, which would facilitate advances in our understanding of insect immunity. PMID:24237591 Tsubota, T; Uchino, K; Kamimura, M; Ishikawa, M; Hamamoto, H; Sekimizu, K; Sezutsu, H 2014-04-01 166 E-print Network SUDDEN OAK DEATH Field Meeting Saturday, December 3, 1:00 p.m. Joaquin Miller Park - Craib Picnic Area (if raining, come to the Ranger Station) Learn about Sudden Oak Death (SOD) management. SOD is a fungus-like mold that is killing oak trees in coastal California. Several samples California at Berkeley, University of 167 Tree-ring analysis revealed 33 living white oaks (Quercus alba) in Iowa that began growing before 1700. Cores of wood 4 mm in diameter, each extracted from a radius of a tree trunk, were analyzed. The oldest white oak, found in northeastern Warren County, began growing about 1570 and is thus over 410 years old. A chinkapin oak (Quercus muehlenbergii) was D. N. Duvick; T. J.
Blasing 1983-01-01 168 E-print Network Fire Management of Oak Forests for Wildlife L-347 Natural Resource Ecology and Management Oklahoma and Ecosystem Management L-270--Snags, Cavity Trees, and Downed Logs Species such as this rare Diana fritillary Natural History More than 17 million acres of post oak - blackjack oak forest 169 E-print Network Oak Ridge Reservation Annual Site Environmental Report DOE/ORO/2445 2012 Oak Ridge Reservation Annual Site Environmental Complex Oak Ridge National Laboratory East Tennessee Technology Park Pennycook, Steve 170 The w-3oe silkworm mutant has white eyes and eggs due to the absence of ommochrome pigments in the eye pigment cells and serosa cells. The mutant is also characterized by translucent larval skin resulting from a deficiency in the transportation of uric acid, which acts as a white pigment in larval epidermal cells. A silkworm homolog of the fruitfly white Natuo Komoto; Guo-Xing Quan; Hideki Sezutsu; Toshiki Tamura 2009-01-01 171 NSDL National Science Digital Library This printable key to oak leaves helps students see the variety of shapes and sizes found within a plant family. The one-page PDF handout has 12 hand drawings of leaves. You can find the scientific names (genus and species) for all of them in the Biodiversity Counts Educators Guide. 172 This master's project was undertaken in the interdisciplinary studies department with the goal of providing a community resource detailing the saga of three New Deal-era murals in a Royal Oak school and the artists who painted them. This thesis also delves into whether the project of redeeming the murals can provide deeper understanding into the community in which we live.
Deborah S Anderson 2007-01-01 173 NASA Astrophysics Data System (ADS) The object of the current work was to study low-energy Ar⁺ ion beam interactions with silkworm eggs and thus provide further understanding of the mechanisms involved in ion bombardment-induced direct gene transfer into silkworm eggs. In this paper, using low-energy Ar⁺ ion beam bombardment combined with the piggyBac transposon, we developed a novel method to induce gene transfer in the silkworm. Using bombardment conditions optimized for egg incubation (25 keV with ion fluences of 2.6 × 10¹⁵ ions/cm² in the dry state under vacuum), the vector pBac{3×P3-EGFPaf} and the helper plasmid pHA3pig were successfully transferred into silkworm eggs. Our results obtained by PCR assay and genomic Southern blotting analysis of the G1 generations provide evidence that a low-energy ion beam can generate craters that act as pathways for exogenous DNA molecules into silkworm eggs. Ling, Lin; Liu, Xuelan; Xu, Jiaping; You, Zhengying; Zhou, Jingbo 2011-09-01 174 E-print Network (19). "Tannin" is a generic name for a group of complex structures widely distributed in the higher plants. The tannins yield gallic acid when subjected to acid hydrolysis. Commercial "tannic acid" is an example of a tannin and is obtained from... of oak poisoning were observed. Purified shin oak tannin and commercial tannic acid were fed to rabbits in parallel studies. The serum tannin levels (18) (expressed as "tannic acid" equivalent) were determined periodically. The times of death... Dollahite, J. W.; Housholder, G. T.; Camp, B. J. 1966-01-01 175 Federal Register 2010, 2011, 2012, 2013 ...Notice of Inventory Completion: The Region of Three Oaks Museum, Three Oaks, MI...SUMMARY: The Region of Three Oaks Museum has completed an...with the human remains may contact The Region of Three Oaks Museum. Repatriation...
2012-04-19 176 E-print Network Getting to Know OAK 1 Revised: 10/16/2013 Getting to Know OAK What is OAK? Vanderbilt's course management-8524 [email protected] School of Nursing VUSN OAK Support Website http://www.nursing://www.vanderbilt.edu/oak/ and add to your Bookmarks or Favorites. Though we do our best to keep outages to a minimum, unscheduled 177 PubMed β-N-acetylglucosaminidase (GlcNAcase) is a key enzyme in the chitin decomposition process. In this study, we investigated the gene expression profile of GlcNAcases and the regulation mechanism for one of these genes, BmGlcNAcase1, in the silkworm. We performed sequence analysis of GlcNAcase. Using the dual-spike-in qPCR method, we examined the expression of Bombyx β-N-acetylglucosaminidases (BmGlcNAcases) in various tissues of the silkworm as well as expression changes after stimulation with ecdysone. Using the Bac-to-Bac system and luciferase reporter vectors, we further analyzed the promoter sequence of BmGlcNAcase1. The results showed that these proteins have a highly conserved catalytic domain. The expression levels of the BmGlcNAcase genes varied in different tissues, and were increased 48 h after exposure to ecdysone. The BmGlcNAcase1 gene promoter with 5'-end serial deletions showed different levels of activity in various tissues, higher in the blood, skin and fat body. Deletion of the region from -347 to -223 upstream of the BmGlcNAcase1 gene abolished its promoter activity. This region contains the binding sites for key transcription factors including Hb, BR-C Z, the HSF and the typical TATA-box element. These results indicate that the BmGlcNAcases are expressed at different levels in different tissues of the silkworm, but all are subject to regulation by ecdysone. The BmGlcNAcase1 promoter analysis lays a foundation for further study of the gene expression patterns.
PMID:25001591 Zhai, Yuan-Fen; Huang, Ming-Xia; Wu, Yu; Zhao, Guo-Dong; Du, Jie; Li, Bing; Shen, Wei-de; Wei, Zheng-Guo 2014-10-01 178 Oak decline and related mortality have periodically plagued upland oak-hickory forests, particularly oak species in the red oak group, across the Ozark Highlands of Missouri, Arkansas and Oklahoma since the late 1970s. Advanced tree age and periodic drought, as well as Armillaria root fungi and oak borer attack, are believed to contribute to oak decline and mortality. Declining trees first Zhaofei Fan; John M. Kabrick; Martin A. Spetich; Stephen R. Shifley; Randy G. Jensen 2008-01-01 179 SciTech Connect We constructed the fibroin H-chain expression system to produce recombinant proteins in the cocoon of transgenic silkworms. Feline interferon (FeIFN) was used for production and to assess the quality of the product. Two types of FeIFN fusion protein, each with N- and C-terminal sequences of the fibroin H-chain, were designed to be secreted into the lumen of the posterior silk glands. The expression of the FeIFN/H-chain fusion gene was regulated by the fibroin H-chain promoter domain. The constructs were introduced into transgenic silkworms with the piggyBac transposon-derived vector, and the resulting silkworms produced normal-sized cocoons containing each FeIFN/H-chain fusion protein. Although the native protein produced by the transgenic silkworms had almost no antiviral activity, the products treated with PreScission protease to remove the fibroin H-chain-derived N- and C-terminal sequences had very high antiviral activity. This H-chain expression system, using transgenic silkworms, could be an alternative method to produce active recombinant proteins and silk-based biomaterials. Kurihara, H. [Toray Industries, Inc., New Frontiers Research Laboratories, 1111 Tebiro, Kamakura, Kanagawa 248-8555 (Japan)]. E-mail: [email protected]; Sezutsu, H.
[Transgenic Silkworm Research Center, National Institute of Agrobiological Sciences, 1-2 Owashi, Tsukuba, Ibaraki 305-8634 (Japan); Tamura, T. [Transgenic Silkworm Research Center, National Institute of Agrobiological Sciences, 1-2 Owashi, Tsukuba, Ibaraki 305-8634 (Japan); Yamada, K. [Toray Industries, Inc., New Frontiers Research Laboratories, 1111 Tebiro, Kamakura, Kanagawa 248-8555 (Japan) 2007-04-20 180 1. The morphology of descending interneurons (DNs) which have arborizations in the lateral accessory lobe (LAL) of the protocerebrum, the higher order olfactory center, and have an axon in the ventral nerve cord (VNC), was characterized in the male silkworm moth, Bombyx mori. 2. Two clusters (group I, group II) of DNs which have arborizations mainly in the LALs were morphologically characterized. The R. Kanzaki; A. Ikeda; T. Shibuya 1994-01-01 181 In the previous papers of this series (Williams, 1946b, 1947, 1948a) an endocrine basis was described for the production and termination of pupal diapause in the Cecropia silkworm. The onset of diapause was correlated with a temporary failure of the brain in secreting a hormone required for the initiation of adult development. The ultimate release of this “brain hormone” CARROLL M. WILLIAMS 182 Many electrophoretic variants of hemolymph inhibitors of proteases from Aspergillus melleus and pancreatic α-chymotrypsin were found using 126 silkworm strains. Six inhibitors of the fungal protease were detected and eight of chymotrypsin; the distribution of inhibitors among Japanese, Chinese, and European races was investigated. Comparison of electrophoretic patterns from F1 hybrids and parents showed that the offspring produce inhibitors of M. Eguchi; K. Ueda; M. Yamashita 1984-01-01 183 PubMed The organophosphorus pesticide poisoning of the silkworm Bombyx mori is one of the major events causing serious damage to sericulture.
Some antioxidant enzymes play roles in regulating the generation of reactive oxygen species (ROS) by pesticides, including phoxim and chlorpyrifos, but relatively little is known about their effects on the silkworm peroxiredoxin family genes. Here, five peroxiredoxin (Prx) genes have been identified in the silkworm genome, and the Prx genes of the silkworm and their mammalian homologs have an apparent ortholog relationship. Based on the genomic DNA sequence, the putative 5'-flanking regions of the five BmPrxs were obtained and the transcription factor binding sites were predicted. Their expression profiles in the midgut of silkworms exposed to different concentrations of phoxim and chlorpyrifos for 24 h, 48 h and 72 h were investigated using quantitative RT-PCR (qRT-PCR). The results showed that the five BmPrxs and the dual oxidase (BmDUOX) gene were all expressed in the midgut of the silkworm. After feeding with 0.375 mg/L and 0.75 mg/L phoxim, the transcription levels of BmPrx3 and BmPrx5, which localize to mitochondria, reached their peak levels at an early time point (24 h). However, the transcription levels of BmPrx4 and BmPrx6, which are predicted to be secreted from the cell and to localize to the cytosol, respectively, reached their peak levels at a later time point (72 h). Similarly, under chlorpyrifos stress, the transcription levels of the mitochondrial BmPrx3 and BmPrx5 reached their peak levels at the early time point (24 h), whereas those of BmPrx4 and BmPrx6 reached their peak levels at the later time point (72 h). These results revealed that the mitochondrial BmPrxs were able to protect cells from oxidative stress caused by OPs even more efficiently than the cytosolic ones. In addition, BmDUOX was also induced by phoxim and chlorpyrifos.
Overall, our results indicate a complex regulation of Prx expression; the Prxs play important roles in maintaining the redox equilibrium of the silkworm and reducing oxidative damage caused by pesticides. PMID:25175646 Shi, Gui-Qin; Zhang, Ze; Jia, Kun-Lun; Zhang, Kun; An, Dong-Xu; Wang, Gang; Zhang, Bao-Long; Yin, He-Nan 2014-09-01 184 PubMed RNA interference (RNAi)-mediated viral inhibition has been used in several organisms for improving viral resistance. In the present study, we report the use of transgenic RNAi to prevent Bombyx mori nucleopolyhedrovirus (BmNPV) multiplication in the transgenic silkworm B. mori. We targeted the BmNPV immediate-early-1 (ie-1) and late expression factor-1 (lef-1) genes in transiently transfected BmN cells, in a stably transformed BmN cell line and in transgenic silkworms. We generated four piggyBac-based vectors containing short double-stranded ie-1 RNA (sdsie-1), short double-stranded lef-1 RNA (sdslef-1), long double-stranded ie-1 RNA (ldsie-1) and both sdsie-1 and sdslef-1 (sds-ie1-lef1) expression cassettes. Strong viral repression was observed in the transiently transfected cells and in the stably transformed BmN cells transfected with sdsie-1, sdslef-1, ldsie-1 or sds-ie1-lef1. The decrease of the ie-1 mRNA level in the sds-ie1-lef1 transiently transfected cells was the most pronounced among cells transfected with the different vectors. The inhibitory effect on viral multiplication decreased in a viral dose-dependent manner; the infection ratio of cells transfected with sdsie-1, sdslef-1, ldsie-1 and sds-ie1-lef1 decreased by 18.83%, 13.73%, 6.93% and 30.63%, respectively, compared with control cells 5 days after infection.
We generated transgenic silkworms using the transgenic vector piggyantiIE-lef1-neo with the sds-ie1-lef1 expression cassette; the fourth-instar larvae of transgenic silkworms of generation G5 exhibited stronger resistance to BmNPV, and the mortalities for the transgenic silkworms and control silkworms were 60% and 100%, respectively, at 11 days after inoculation with BmNPV (10⁶ occlusion bodies per ml). These results suggest that double-stranded RNA expression of essential genes of BmNPV is a feasible method for breeding silkworms with a high antiviral capacity. PMID:24173242 Zhang, P; Wang, J; Lu, Y; Hu, Y; Xue, R; Cao, G; Gong, C 2014-01-01 185 PubMed Protein kinase B (PKB) also known as Akt is involved in many signal transduction pathways. As alterations of the PKB pathway are found in a number of human malignancies, PKB is considered an important drug target for cancer therapy. However, production of sufficient amounts of active PKB for biochemical and structural studies is very costly because of the necessity of using a higher organism expression system to obtain phosphorylated PKB. Here, we report efficient production of active PKB? using the BmNPV bacmid expression system with silkworm larvae. Following direct injection of bacmid DNA, recombinant PKB? protein was highly expressed in the fat bodies of larvae, and could be purified using a GST-tag and then cleaved. A final yield of approximately 1 mg PKB?/20 larvae was recorded. Kinase assays showed that the recombinant PKB? possessed high phosphorylation activity. We further confirmed phosphorylation on the activation loop by mass spectrometric analysis. Our results indicate that the silkworm expression system is of value for preparation of active-form PKB? with phosphorylation on the activation loop. This efficient production of the active protein will facilitate further biochemical and structural studies and stimulate subsequent drug development.
PMID:25125290 Maesaki, Ryoko; Satoh, Ryosuke; Taoka, Masato; Kanaba, Teppei; Asano, Tsunaki; Fujita, Chiharu; Fujiwara, Toshinobu; Ito, Yutaka; Isobe, Toshiaki; Hakoshima, Toshio; Maenaka, Katsumi; Mishima, Masaki 2014-01-01 186 PubMed The W chromosome of the silkworms Bombyx mori or B. mandarina is recombinationally isolated from the Z chromosome and the autosomes. We previously characterized a female-specific randomly amplified polymorphic DNA (RAPD), designated W-Yamato, derived from the W chromosome of the wild silkworm Bombyx mandarina. To further analyse the W chromosome of B. mandarina, we obtained a lambda phage clone that contains the W-Yamato RAPD sequence and sequenced the 16.7 kb DNA insert. We found that this DNA comprises a nested structure of at least seven elements: six retrotransposons and one transposable element-like sequence. The transposable element-like sequence is inserted into a micropia-like retrotransposon (Karate). The Karate and the non-long terminal repeat (non-LTR) retrotransposon BMC1 are inserted into a 412-like retrotransposon (Judo). Furthermore, this Judo, and two non-LTR retrotransposons (Kurosawa and Kendo) are inserted into a Pao-like retrotransposon (Yamato). These results indicate that the retrotransposons inserted into the W chromosome are not efficiently removed but accumulate gradually as strata without recombination. PMID:12144695 Abe, H; Sugasaki, T; Terada, T; Kanehara, M; Ohbayashi, F; Shimada, T; Kawai, S; Mita, K; Oshiki, T 2002-08-01 187 PubMed A novel platelet aggregation inhibitory peptide, named BB octapeptide, was isolated from the stiff silkworm (Bombyx batryticatus) by gel filtration, anion-exchange, and reverse-phase high performance liquid chromatography. The molecular mass of the peptide was determined to be 885 Da using electrospray ionization mass spectrometry, and the sequence was identified as Asp-Pro-Asp-Ala-Asp-Ile-Leu-Gln using the Edman degradation method.
To test its biological activity, the peptide was chemically synthesized using the Fmoc solid-phase synthesis method. BB octapeptide inhibited rabbit platelet aggregation induced by collagen and epinephrine, with IC50 values of 91.14 µM and 104.50 µM, respectively. After intravenous administration in mice (30 mg/kg, 4 days), BB octapeptide showed ex vivo efficacy in inhibiting platelet aggregation similar to that of aspirin (10 mg/kg). In addition, this peptide prevented paralysis and death in a pulmonary thromboembolism model and significantly reduced ferric chloride-induced thrombus formation in rats. Moreover, it exhibited low cytotoxicity in a cellular model. In conclusion, this is the first report of a novel platelet aggregation inhibitory peptide isolated from the stiff silkworm (B. batryticatus). Due to its excellent efficacy in reducing platelet aggregation and its low toxicity, it can be a valuable lead compound for new drug design and development. PMID:24361453 Kong, Yi; Xu, Cheng; He, Zhi-Long; Zhou, Qiu-Mei; Wang, Jin-Bin; Li, Zhi-Yu; Ming, Xin 2014-03-01 188 PubMed Digital Gene Expression profiling was performed to investigate the midgut transcriptome profile of the 4008 silkworm strain orally infected with BmCPV. A total of 4,498,263 and 4,258,240 clean tags were obtained from the control and BmCPV-infected larvae. A total of 752 differentially expressed genes were detected, of which 649 were upregulated and 103 were downregulated. Analysis of the Kyoto Encyclopedia of Genes and Genomes pathways showed that 334 genes were involved in the ribosome and RNA transport pathways. Moreover, 408 of the 752 differentially expressed genes have a GO category and can be categorized into 41 functional groups according to molecular function, cellular component and biological process. Differentially expressed genes involved in signaling, gene expression, metabolic process, cell death, binding, and catalytic activity changes were detected in the expression profiles.
Quantitative real-time PCR was performed to verify the expression of these genes. The upregulated expression levels of the Calreticulin, FK506-binding protein, and protein kinase C inhibitor genes probably led to a calcium-dependent apoptosis in the BmCPV-infected cells. The results of this study may serve as a basis for future research not only on the molecular mechanism of BmCPV invasion but also on the anti-BmCPV mechanism of the silkworm. PMID:24211674 Gao, Kun; Deng, Xiang-yuan; Qian, He-ying; Qin, Guangxing; Guo, Xi-jie 2014-01-01
190 PubMed Central In insects, hemocytes are considered the only source of plasma prophenoloxidase (PPO). PPO also exists in the hemocytes of the hematopoietic organ that is connected to the wing disc of Bombyx mori. It is unknown whether there are other cells or tissues that can produce PPO and release it into the hemolymph besides circulating hemocytes. In this study, we use the silkworm as a model to explore this possibility. Through tissue staining and biochemical assays, we found that wing discs contain PPO that can be released into the culture medium in vitro. An in situ assay showed that some cells in the cavity of wing discs have PPO1 and PPO2 mRNA. We conclude that the hematopoietic organ may wrongly release hemocytes into wing discs, since they are connected through many tubes, as reported in a previous paper. In wing discs, the infiltrating hemocytes produce and release PPO, probably through cell lysis, and the PPO is later transported into the hemolymph. Therefore, this might be another source of plasma PPO in the silkworm: some infiltrated hemocytes sourced from the hematopoietic organ release PPO via wing discs. PMID:22848488 Diao, Yupu; Lu, Anrui; Yang, Bing; Hu, Wenli; Peng, Qing; Ling, Qing-Zhi; Beerntsen, Brenda T.; Söderhäll, Kenneth; Ling, Erjun 2012-01-01 191 PubMed Central Background The major royal jelly proteins/yellow (MRJP/YELLOW) family possesses several physiological and chemical functions in the development of Apis mellifera and Drosophila melanogaster. Each protein of the family has a conserved domain named MRJP. However, there is no report of MRJP/YELLOW family proteins in the Lepidoptera.
Results Using the YELLOW protein sequence of Drosophila melanogaster to BLAST the silkworm EST database, we found a gene family composed of seven members, each with a conserved MRJP domain, and named it the YELLOW protein family of Bombyx mori. We completed the cDNA sequences with the RACE method. The protein of each member possesses a MRJP domain and a putative cleavable signal peptide consisting of a hydrophobic sequence. Phylogenetically, the whole Bm YELLOW protein family forms a monophyletic group, distinctly separate from those of Drosophila melanogaster and Apis mellifera. We then showed the tissue expression profiles of the Bm YELLOW protein family genes by RT-PCR. Conclusion A Bombyx mori YELLOW protein family is found to be composed of at least seven members. The low homology and unique pattern of gene expression of each member of the family lead us to predict that the members of the Bm YELLOW protein family play important physiological roles in silkworm development. PMID:16884544 Xia, Ai-Hua; Zhou, Qing-Xiang; Yu, Lin-Lin; Li, Wei-Guo; Yi, Yong-Zhu; Zhang, Yao-Zhou; Zhang, Zhi-Fang 2006-01-01 192 PubMed Central We report the establishment of an efficient and heritable gene mutagenesis method in the silkworm Bombyx mori using a modified type II clustered regularly interspaced short palindromic repeats (CRISPR) with associated protein (Cas9) system. Using four loci Bm-ok, BmKMO, BmTH, and Bmtan as candidates, we proved that genome alterations at specific sites could be induced by direct microinjection of specific guide RNA and Cas9-mRNA into silkworm embryos. Mutation frequencies of 16.7–35.0% were observed in the injected generation, and DNA fragment deletions were also noted. Bm-ok mosaic mutants were used to test for mutant heritability due to the easily determined translucent epidermal phenotype of Bm-ok-disrupted cells. Two crossing strategies were used.
In the first, injected Bm-ok moths were crossed with wild-type moths, and a 28.6% frequency of germline mutation transmission was observed. In the second strategy, two Bm-ok mosaic mutant moths were crossed with each other, and 93.6% of the offspring carried mutations in both alleles of the Bm-ok gene (compound heterozygous). In summary, the CRISPR/Cas9 system can act as a highly specific and heritable gene-editing tool in Bombyx mori. PMID:25013902 Roy, Bhaskar; Dai, Junbiao; Miao, Yungen; Gao, Guanjun 2014-01-01 193 PubMed Central The carotenoid-binding protein (CBP) of the domesticated silkworm, Bombyx mori, a major determinant of cocoon color, is likely to have been substantially influenced by domestication of this species. We analyzed the structure of the CBP gene in multiple strains of B. mori, in multiple individuals of the wild silkworm, B. mandarina (the putative wild ancestor of B. mori), and in a number of other lepidopterans. We found the CBP gene copy number in genomic DNA to vary widely among B. mori strains, ranging from 1 to 20. The copies of CBP are of several types, based on the presence of a retrotransposon or partial deletion of the coding sequence. In contrast to B. mori, B. mandarina was found to possess a single copy of CBP without the retrotransposon insertion, regardless of habitat. Several other lepidopterans were found to contain sequences homologous to CBP, revealing that this gene is evolutionarily conserved in the lepidopteran lineage. Thus, domestication can generate significant diversity of gene copy number and structure over a relatively short evolutionary time. PMID:21242537 Sakudoh, Takashi; Nakashima, Takeharu; Kuroki, Yoko; Fujiyama, Asao; Kohara, Yuji; Honda, Naoko; Fujimoto, Hirofumi; Shimada, Toru; Nakagaki, Masao; Banno, Yutaka; Tsuchida, Kozo 2011-01-01 194 PubMed Juvenile hormone (JH) has the ability to repress the precocious metamorphosis of insects during their larval development.
Krüppel homolog 1 (Kr-h1) is an early JH-inducible gene that mediates this action of JH; however, the fine hormonal regulation of Kr-h1 and the molecular mechanism underlying its antimetamorphic effect are little understood. In this study, we attempted to elucidate the hormonal regulation and developmental role of Kr-h1. We found that the expression of Kr-h1 in the epidermis of penultimate-instar larvae of the silkworm Bombyx mori was induced by JH secreted by the corpora allata (CA), whereas the CA were not involved in the transient induction of Kr-h1 at the prepupal stage. Tissue culture experiments suggested that the transient peak of Kr-h1 at the prepupal stage is likely to be induced cooperatively by JH derived from gland(s) other than the CA and the prepupal surge of ecdysteroid, although involvement of unknown factor(s) could not be ruled out. To elucidate the developmental role of Kr-h1, we generated transgenic silkworms overexpressing Kr-h1. The transgenic silkworms grew normally until the spinning stage, but their development was arrested at the prepupal stage. The transgenic silkworms from which the CA were removed in the penultimate instar did not undergo precocious pupation or larval-larval molt but fell into prepupal arrest. This result demonstrated that Kr-h1 is indeed involved in the repression of metamorphosis but that Kr-h1 alone is incapable of implementing normal larval molt. Moreover, the expression profiles and hormonal responses of early ecdysone-inducible genes (E74, E75, and Broad) in transgenic silkworms suggested that Kr-h1 is not involved in the JH-dependent modulation of these genes, which is associated with the control of metamorphosis.
PMID:24508345 Kayukawa, Takumi; Murata, Mika; Kobayashi, Isao; Muramatsu, Daisuke; Okada, Chieko; Uchino, Keiro; Sezutsu, Hideki; Kiuchi, Makoto; Tamura, Toshiki; Hiruma, Kiyoshi; Ishikawa, Yukio; Shinoda, Tetsuro 2014-04-01 195 PubMed In a previous study, we isolated 1,119 bp of upstream promoter sequence from Bmlp3, a gene encoding a member of the silkworm 30 K storage protein family, and demonstrated that it was sufficient to direct fat body-specific expression of a reporter gene in a transgenic silkworm, thus highlighting the potential use of this promoter for both functional genomics research and biotechnology applications. To test whether the Bmlp3 promoter can be used to produce recombinant proteins in the fat body of silkworm pupae, we generated a transgenic line of Bombyx mori which harbors a codon-optimized Aspergillus niger phytase gene (phyA) under the control of the Bmlp3 promoter. Here we show that the Bmlp3 promoter drives high levels of phyA expression in the fat body, and that the recombinant phyA protein is highly active (99.05 and 54.80 U/g in fat body extracts and fresh pupa, respectively). We also show that the recombinant phyA has two optimum pH ranges (1.5-2.0 and 5.5-6.0), and two optimum temperatures (55 and 37 °C). The activity of recombinant phyA was lost after high-temperature drying, but treatment with boiling water was less harmful: residual activity was approximately 84% of the level observed in untreated samples. These results offer an opportunity not only for better utilization of large amounts of silkworm pupae generated during silk production, but also provide a novel method for mass production of low-cost recombinant phytase using transgenic silkworms. PMID:24719047 Xu, Hanfu; Liu, Yaowen; Wang, Feng; Yuan, Lin; Wang, Yuancheng; Ma, Sanyuan; Bene, Helen; Xia, QingYou 2014-08-01 196 : Pine-oak forests cover 14.2 million hectares in Mexico, a country that has the richest pine and oak diversity in the world.
These diverse forests are a source of goods and services for rural and urban society, but they are being degraded and deforested. A cause of degradation is the alteration of the fire regime caused by fire exclusion or Dante Arturo Rodríguez-Trejo; Ronald L. Myers 2010-01-01 197 E-print Network Oak pinhole borer Platypus cylindrus (Coleoptera: Curculionidae) The oak pinhole borer, Platypus of a continuing supply of breeding material in the form of weakened oaks suffering from oak dieback and decline. P. cylindrus appears to establish only in trees that are severely stressed or already dead 198 E-print Network processionary is often most abundant on urban trees, along forest edges and in amenity woodlands. Oak. This coloration provides an effective camouflage against the bark of oak trees on which the adults often rest. Oak Processionary Moth Thaumetopoea processionea (Notodontoidea Thaumetopoeidae) The oak 199 E-print Network Proceedings of the Sudden Oak Death Third Science Symposium Welcome to the Sudden Oak Death Third Science Symposium. Looking back at the first sudden oak death was first isolated and identified as the causal agent of sudden oak death. It was the summer of 2000 Standiford, Richard B. 200 E-print Network Oak Tree Preservation in Thousand Oaks, California1 William F. Elmendorf2 Abstract: The City- sake, the oak tree. First adopted in 1972 as an Emergency City Council Proclamation, the City's Oak preservation ordinances within the State of California. The current Oak Tree Ordinance has undergone twenty Standiford, Richard B. 201 E-print Network Changes in Soil Quality Due to Grazing and Oak Tree Removal in California Blue Oak Woodlands1 Trina of grazing and oak tree removal on soil quality and fertility were examined in a blue oak (Quercus douglasii on soil quality; however, oak tree removal resulted in a decrease in most soil quality parameters Standiford, Richard B.
202 PubMed Raman microspectroscopy has been used for the first time to determine quantitatively the orientation of the beta-sheets in silk monofilaments from Bombyx mori and Samia cynthia ricini silkworms, and from the spider Nephila edulis. It is shown that, for systems with uniaxial symmetry such as silk, it is possible to determine the order parameters P2 and P4 of the orientation distribution function from intensity ratios of polarized Raman spectra. The equations allowing the calculation of P2 and P4 using polarized Raman microspectroscopy for a vibration with a cylindrical Raman tensor were first derived and then applied to the amide I band that is mostly due to the C=O stretching vibration of the peptide groups. The shape of the Raman tensor for the amide I vibration of the beta-sheets was determined from an isotropic film of Bombyx mori silk treated with methanol. For both the Bombyx mori and Samia cynthia ricini fibroin fibers, the values of P2 and P4 obtained are equal to -0.36 +/- 0.03 and 0.19 +/- 0.02, respectively, even though the two types of silkworm fibroins strongly differ in their primary sequences. For the Nephila edulis dragline silk, values of P2 and P4 of -0.32 +/- 0.02 and 0.13 +/- 0.02 were obtained, respectively. These results clearly indicate that the carbonyl groups are highly oriented perpendicular to the fiber axis and that the beta-sheets are oriented parallel to the fiber axis, in agreement with previous X-ray and NMR results. The most probable distribution of orientation was also calculated from the values of P2 and P4 using the information entropy theory. For the three types of silk, the beta-sheets are highly oriented parallel to the fiber axis. The orientation distributions of the beta-sheets are nearly Gaussian functions with a width of 32 degrees and 40 degrees for the silkworm fibroins and the spider dragline silk, respectively. 
In addition to these results, the comparison of the Raman spectra recorded for the different silk samples and the polarization dependence of several bands has helped clarify some important band assignments. PMID:15530039 Rousseau, Marie-Eve; Lefèvre, Thierry; Beaulieu, Lilyane; Asakura, Tetsuo; Pézolet, Michel 2004-01-01 203 NASA Astrophysics Data System (ADS) Subfossil oak wood found in a dried-up bog in Bavaria, Germany, was studied by Mössbauer spectroscopy. The bog oaks contain substantial amounts of iron taken up from the bog waters and presumably forming complexes with the tanning agents in the oak wood. The iron is mainly Fe3+ and much of this exhibits an uncommonly large quadrupole splitting of up to 1.6 mm/s that can tentatively be explained by the formation of oxo-bridged iron dimers. Only rarely, mainly in the dense wood of the roots of bog oaks, was divalent iron found. When the wood was ground to a powder the divalent iron oxidized to Fe3+ within hours. This suggests that iron is taken up from the bog water as Fe2+ and oxidizes only when the wood emerges from the water and comes into contact with air. van Bürck, Uwe; Wagner, Friedrich E.; Lerf, Anton 2012-03-01 204 SciTech Connect Tree-ring analysis revealed 33 living white oaks (Quercus alba) in Iowa that began growing before 1700. Cores of wood 4 mm in diameter, each extracted from a radius of a tree trunk, were analyzed. The oldest white oak, found in northeastern Warren County, began growing about 1570 and is thus over 410 years old. A chinkapin oak (Quercus muehlenbergii) was also found which was more than 300 years old. Ring widths from the white oaks are well correlated with total precipitation for the twelve months preceding completion of ring formation in July. Reconstructions of annual (August-July) precipitation for 1680-1979, based on the tree rings, indicate that the driest annual period in Iowa was August 1799-July 1800, and that the driest decade began about 1816.
Climatic information of this kind, pre-dating written weather records, can be used to augment those records and provide a longer baseline of information for use by climatologists and hydrologic planners. Duvick, D.N.; Blasing, T.J. 1983-01-01 205 MedlinePLUS Poison ivy, oak, and sumac: rash from poison ivy; signs and symptoms. 206 E-print Network Types of Oak Decline: in the Acute form of oak decline (AOD), above-ground parts of oak trees (stems and leaves) are affected; Chronic Oak Decline (COD) is the term given to oak trees that decline and develop disease 207 PubMed The glutathione S-transferase (GST) superfamily is involved in the detoxification of various xenobiotics. A silkworm GST, belonging to a previously reported Epsilon-class GST family, was identified, named bmGSTE, cloned, and produced in Escherichia coli. Investigation of this enzyme's properties showed that it was able to catalyse the conjugation of glutathione (GSH) with 1-chloro-2,4-dinitrobenzene and ethacrynic acid, and also that it possessed GSH-dependent peroxidase activity. The enzyme's highly conserved amino acid residues, including Ser11, His53, Val55, Ser68 and Arg112, were of interest regarding their possible involvement in its catalytic activity. These residues were replaced with alanine by site-directed mutagenesis, and subsequent kinetic analysis of bmGSTE mutants indicated that His53, Val55, and Ser68 were important for enzyme function. PMID:23803169 Yamamoto, K; Aso, Y; Yamada, N 2013-10-01 208 PubMed Four glycine-rich protein (GRP) genes were identified from expressed sequence tags of the maxillary galea of the silkworm. All four genes were expressed in the maxillary palp, antenna, labrum, and labium, but none of the genes were expressed in most internal organs.
Expression of one of the genes, termed bmSIGRP, was further increased approximately fivefold in the mouth region (including the maxilla, antenna, labrum, labium, and mandible) after 24 h of starvation. bmSIGRP expression peaked at 24 h and gradually declined during the subsequent 2 days. When a synthetic diet not containing proteins was fed, bmSIGRP expression increased significantly in the mouth region to levels similar to those observed in starved larvae. Synthetic diets that lacked vitamins or salts but contained amino acids did not significantly affect bmSIGRP expression. These results suggest that amino acid depletion increases bmSIGRP expression. PMID:25095972 Taniai, Kiyoko; Hirayama, Chikara; Mita, Kazuei; Asaoka, Kiyoshi 2014-10-01 209 PubMed In the bivoltine strain of the silkworm, Bombyx mori, embryonic diapause is induced transgenerationally as a maternal effect. Progeny diapause is determined by the environmental temperature during embryonic development of the mother; however, its molecular mechanisms are largely unknown. Here, we show that the Bombyx TRPA1 ortholog (BmTrpA1) acts as a thermosensitive transient receptor potential (TRP) channel that is activated at temperatures above ~21 °C and affects the induction of diapause in progeny. In addition, we show that embryonic RNAi of BmTrpA1 affects diapause hormone release during pupal-adult development. This study is unique in identifying a thermosensitive TRP channel that acts as a molecular switch for a relatively long-term predictive adaptive response, inducing an alternative phenotype in seasonal polyphenism. PMID:24639527 Sato, Azusa; Sokabe, Takaaki; Kashio, Makiko; Yasukochi, Yuji; Tominaga, Makoto; Shiomi, Kunihiro 2014-04-01 210 PubMed Axon guidance molecule Slit is critical for the axon repulsion in neural tissues, which is evolutionarily conserved from planarians to humans. However, the function of Slit in the silkworm Bombyx mori was unknown.
Here we showed that the structure of Bombyx mori Slit (BmSlit) was different from that in most other species in its C-terminal sequence. BmSlit was localized in the midline glial cell, the neuropil, the tendon cell, the muscle and the silk gland and colocalized with BmRobo1 in the neuropil, the muscle and the silk gland. Knock-down of Bmslit by RNA interference (RNAi) resulted in abnormal development of axons and muscles. Our results suggest that BmSlit has a repulsive role in axon guidance and muscle migration. Moreover, the localization of BmSlit in the silk gland argues for its important function in the development of the silk gland. PMID:25285792 Yu, Qi; Li, Xiao-Tong; Liu, Chun; Cui, Wei-Zheng; Mu, Zhi-Mei; Zhao, Xiao; Liu, Qing-Xin 2014-01-01 212 PubMed Central The importance of olfactory senses in food preference in fifth instar larvae of Antheraea assamensis Helfer (Lepidoptera: Saturniidae) was examined by subjecting larvae with only antennae or maxillary palpi after microsurgery to food and odor choice tests. Mean percent consumption, total consumption, and choice indices were used as parameters for drawing conclusions. The foods used were two hosts, two non-hosts, and a neutral medium (water). Both antennae and maxillary palpi were fully competent in preference for host plants, Persea bombycina Kostermans (Laurales: Lauraceae) and Litsea polyantha Juss, over the non-hosts, Litsea grandifolia Teschner and Ziziphus jujuba Miller (Rosales: Rhamnaceae). Both were competent in rejecting the non-hosts, L. grandifolia and Z. jujuba. The odor choice test was carried out using a Y-tube olfactometer and showed similar results to the ingestive tests. The results indicate the necessity of functional integration of a combination of olfactory and gustatory sensilla present in different peripheral organs in food acceptance by A. assamensis larvae. PMID:23909481 Bora, D. S.; Deka, B. 2013-01-01 213 PubMed Using a PCR strategy, we have cloned the cDNA for prothoracicotropic hormone (PTTH) from the giant silkmoth, Antheraea pernyi. The A. pernyi PTTH cDNA encodes a preprohormone of 221 amino acids that is 51 and 71% identical at the amino acid level with Bombyx mori and Samia cynthia ricini PTTHs, respectively. Bacterially expressed, recombinant A. pernyi PTTH stimulates adult development when injected into debrained pupae. PTTH protein (ca. 30 kDa by Western blot) and mRNA (ca. 0.9 kb by Northern blot) are expressed in brain.
Immunocytochemistry and in situ hybridization show that PTTH protein and mRNA are colocalized in L-NSC III from Day 4 of embryogenesis through adult life, with little variation in either protein or mRNA levels at the various ecdyses. A pair of cells expressing immunoreactivity for the circadian clock protein PER is located in the same region as PTTH-expressing L-NSC III in A. pernyi brain. However, double-label immunocytochemical studies show that PTTH and PER are located in different cells. The close anatomical location between PTTH- and PER-expressing cells suggests routes of communication between these two cell populations that may be important for the circadian control of PTTH release. PMID:8812139 Sauman, I; Reppert, S M 1996-09-15 214 PubMed While olfactory neurons of silk moths are well known for their exquisite sensitivity to sex pheromone odorants, molecular mechanisms underlying this sensitivity are poorly understood. In searching for proteins that might support olfactory mechanisms, we characterized the protein profile of olfactory neuron receptor membranes of the wild silk moth Antheraea polyphemus. We have purified and cloned a prominent 67-kDa protein which we have named Snmp-1 (sensory neuron membrane protein-1). Northern blot analysis suggests that Snmp-1 is uniquely expressed in antennal tissue; in situ hybridization and immunocytochemical analyses show that Snmp-1 is expressed in olfactory neurons and that the protein is localized to the cilia, dendrites, and somata but not the axons. Snmp-1 mRNA expression increases significantly 1-2 days before the end of adult development, coincident with the functional maturation of the olfactory system. Sequence analysis suggests Snmp-1 is homologous with the CD36 protein family, a phylogenetically diverse family of receptor-like membrane proteins. 
CD36 family proteins are characterized as having two transmembrane domains and interacting with proteinaceous ligands; Snmp-1 is the first member of this family identified in nervous tissue. These findings argue that Snmp-1 has an important role in olfaction; possible roles of Snmp-1 in odorant detection are discussed. PMID:9169446 Rogers, M E; Sun, M; Lerner, M R; Vogt, R G 1997-06-01 215 PubMed The antenna of the male silkmoth Antheraea polyphemus is a featherlike structure consisting of a central stem and ca. 120 side branches, which altogether carry about 70,000 olfactory sensilla. We investigate the development during the pupal phase. At the end of diapause, the antennal rudiment consists of a leaf-shaped, one-layered epidermal sac. It is supplied with oxygen via a central main trachea, which gives off numerous thin side branches. These are segmentally arranged into bundles which run to the periphery of the antennal blade. When the epidermis retracts from the pupal cuticle (apolysis; stage 1), it consists of cells which are morphologically uniform. The epidermal cells form a network of long, irregular basal protrusions (epidermal feet), which crisscross the antennal lumen. During the first day post-apolysis (stage 2), the antennal epidermis differentiates into alternating thick 'sensillogenic' and thin 'non-sensillogenic' areas arranged in stripes which run in parallel to the tracheal bundles. Numerous dark, elongated cells, which might be the sensillar stem cells, are scattered in the sensillogenic epithelium. A number of very early sensilla has been found at the distal edges of the sensillogenic stripes in positions which later will be occupied by sensilla chaetica. The whole antennal blade is enveloped by the transparent ecdysial membrane, consisting of the innermost layers of the pupal cuticle which are detached during apolysis. 
PMID:18620306 Keil, T A; Steiner, C 1990-01-01 216 NASA Astrophysics Data System (ADS) A simple and green technique has been developed to prepare hierarchical biomorphic ZrO2-CeO2, using silkworm silk as the template. Different from traditional immersion techniques, the whole synthesis process depends more on the restriction or direction functions of the silkworm silk template. The analytic results showed that ZrO2-CeO2 exhibited a well-crystallized hierarchically interwoven hollow fiber structure 16–28 μm in diameter. The grain size of the sample calcined at 800 °C was about 14 nm. Consequently, a three-dimensional interwoven meshwork is formed under the direction of the biotemplate. The action mechanism is summarily discussed here. It may open biomorphic ZrO2-CeO2 nanomaterials with hierarchical interwoven structures to more applications, such as catalysis. Zhang, Zong-jian; Li, Jia; Sun, Fu-sheng; Dickon, H. L. Ng; Luen Kwong, Fung 2010-06-01 217 PubMed Central A region within 35 nucleotides upstream of the transcription initiation site of a variety of silkworm Class III templates is absolutely required for transcription in vitro. To determine whether the activity of this region can be attributed to a particular sequence element, we systematically replaced 4-5 bp segments of the region upstream of a silkworm tRNA(cAla) gene. We show that replacement of either of two AT-rich blocks markedly impairs promoter function, whereas replacement of other sequences has little or no effect. Additional mutants were constructed to test whether base composition or sequence is important for function of the AT blocks. We find that some sequences are more effective than others, but that various AT-rich sequences can direct transcription at a high level. Possible mechanisms by which such elements could act are discussed.
PMID:8290347 Palida, F A; Hale, C; Sprague, K U 1993-01-01 218 We have developed a system for stable germline transformation in the silkworm Bombyx mori L. using piggyBac, a transposon discovered in the lepidopteran Trichoplusia ni. The transformation constructs consist of the piggyBac inverted terminal repeats flanking a fusion of the B. mori cytoplasmic actin gene BmA3 promoter and the green fluorescent protein (GFP). A nonautonomous helper plasmid encodes the piggyBac Toshiki Tamura; Chantal Thibert; Corinne Royer; Toshio Kanda; Abraham Eappen; Mari Kamba; Natuo Kmoto; Jean-Luc Thomas; Bernard Mauchamp; Gérard Chavancy; Paul Shirk; Malcolm Fraser; Jean-Claude Prudhomme; Pierre Couble 2000-01-01 219 We describe a transposable element, called Bmmar1, from the genome of the silkworm moth, Bombyx mori. This element has features of the Tc1-mariner superfamily of transposable elements. Bmmar1 was first detected as a fragment in the 5′ region of the larval serum protein (BmLSP) gene. Six genomic clones characterized each differed from a consensus sequence by 35 insertions and deletions, Hugh M. Robertson; Michele L. Asplund 1996-01-01 220 A dense linkage map was constructed for the silkworm, Bombyx mori, containing 1018 genetic markers on all 27 autosomes and the Z chromosome. Most of the markers, covering ~2000 cM, were randomly amplified polymorphic DNAs amplified with primer-pairs in combinations of 140 commercially available decanucleotides. In addition, eight known genes and five visible mutations were mapped. Bombyx homologues of Yuji Yasukochi 221 1. A group of extracellularly recorded descending interneurons in the ventral nerve cord of the male silkworm moth Bombyx mori share a common flip-flopping input. In response to repeated stimuli these flip-flopping interneurons switch back and forth between long lasting high and low firing rates (Figs. 1, 2). 2. Changes in the level of the female pheromone bombykol in an airstream directed at the Robert M.
Olberg 1983-01-01 222 The effects of an insect insulin-related peptide, bombyxin, on carbohydrate metabolism were investigated in the silkworm Bombyx mori. Bombyxin lowered the concentration of the major hemolymph sugar, trehalose, in a dose-dependent manner when injected into neck-ligated larvae. Bombyxin also caused elevated trehalase activity in the midgut and muscle, suggesting that bombyxin induces hypotrehalosemia by promoting the hydrolysis of hemolymph trehalose Shin'Ichiro Satake; Makoto Masumura; Hironori Ishizaki; Koji Nagata; Hiroshi Kataoka; Akinori Suzuki; Akira Mizoguchi 1997-01-01 223 Toxicological data on silkworm Bombyx mori are quite comparable to those of other lepidopteran pest insects; therefore, it is considered a suitable model for exploring effects of any new synthetic formulations. In this study, female V instar larvae of silk moth B. mori were chosen to evaluate the lethal and sublethal toxicity effects of RH-2485 (methoxyfenozide), a non-steroidal ecdysteroid Ayyamperumal Rajathi; Jeyaraj Pandiarajan; Muthukalingan Krishnan 2010-01-01 224 PubMed Central We identified a novel gene encoding a Bombyx mori thymosin (BmTHY) protein from a cDNA library of silkworm pupae, which has an open reading frame (ORF) of 399 bp encoding 132 amino acids. It was found by bioinformatics that the BmTHY gene consisted of three exons and two introns and BmTHY was highly homologous to thymosin betas (Tβ). BmTHY has a conserved motif LKHTET with only one amino acid difference from LKKTET, which is involved in Tβ binding to actin. A His-tagged BmTHY fusion protein (rBmTHY) with a molecular weight of approximately 18.4 kDa was expressed and purified to homogeneity. The purified fusion protein was used to produce anti-rBmTHY polyclonal antibodies in a New Zealand rabbit. Subcellular localization revealed that BmTHY can be found in both Bm5 cell (a silkworm ovary cell line) nucleus and cytoplasm but is primarily located in the nucleus.
Western blotting and real-time RT-PCR showed that during silkworm developmental stages, BmTHY expression levels are highest in moths, followed by larvae, and lowest in pupae and eggs. BmTHY mRNA was universally distributed in most fifth-instar larval tissues (except testis). However, BmTHY was expressed in the head, ovary and epidermis during the larval stage. BmTHY formed complexes with actin monomer, inhibited actin polymerization and cross-linked to actin. All the results indicated BmTHY might be an actin-sequestering protein and participate in silkworm development. PMID:22383992 Zhang, Wenping; Zhang, Changrong; Lv, Zhengbing; Fang, Dailing; Wang, Dan; Nie, Zuoming; Yu, Wei; Lan, Hanglian; Jiang, Caiying; Zhang, Yaozhou 2012-01-01 225 PubMed Central Bombyx mori cytoplasmic polyhedrosis virus (BmCPV) is one of the most important pathogens of silkworm. MicroRNAs (miRNAs) have been demonstrated to play key roles in regulating host-pathogen interaction. However, there are limited reports on miRNA expression profiles during insect pathogen challenges. In this study, four small RNA libraries from BmCPV-infected midgut of silkworm at 72 h post-inoculation and 96 h post-inoculation and their corresponding control midguts were constructed and deep sequenced. A total of 316 known miRNAs (including miRNA*) and 90 novel miRNAs were identified. Fifty-eight miRNAs displayed significant differential expression between the infected and the normal midgut. PMID:23844171 Wu, Ping; Han, Shaohua; Chen, Tao; Qin, Guangxing; Li, Long; Guo, Xijie 2013-01-01 226 PubMed Heterochromatin protein 1 (HP1) is an evolutionarily conserved protein across different eukaryotic species and is crucial for heterochromatin establishment and maintenance. The silkworm, Bombyx mori, encodes two HP1 proteins, BmHP1a and BmHP1b.
In order to investigate the role of BmHP1a in transcriptional regulation, we performed genome-wide analyses of the transcriptome, transcription start sites (TSSs), chromatin modification states and BmHP1a-binding sites of the silkworm ovary-derived BmN4 cell line. We identified a number of BmHP1a-binding loci throughout the silkworm genome and found that these loci included TSSs and frequently co-occurred with neighboring euchromatic histone modifications. In addition, we observed that genes with BmHP1a-associated TSSs were relatively highly expressed in BmN4 cells. RNA interference-mediated BmHP1a depletion resulted in the transcriptional repression of highly expressed genes with BmHP1a-associated TSSs, whereas genes not coupled with BmHP1a-binding regions were less affected by the treatment. These results demonstrate that BmHP1a binds near TSSs of highly expressed euchromatic genes and positively regulates their expression. Our study revealed a novel mode of transcriptional regulation mediated by HP1 proteins. PMID:25237056 Shoji, Keisuke; Hara, Kahori; Kawamoto, Munetaka; Kiuchi, Takashi; Kawaoka, Shinpei; Sugano, Sumio; Shimada, Toru; Suzuki, Yutaka; Katsuma, Susumu 2015-02-01 227 PubMed A new microsporidium isolated from Megacopta cribraria was characterized by both biological characteristics and phylogenetic analysis. Moreover, its pathogenicity to silkworms was also studied. The spores are oval in shape and measured 3.64 ± 0.2 × 2.20 ± 0.2 μm in size. Its ultrastructure is characteristic of the genus Nosema: a diplokaryon, 13-14 polar filament coils and a posterior vacuole. Its life cycle includes meronts, sporonts, sporoblasts and mature spores, with a typical diplokaryon in each stage and propagation by binary fission. A phylogenetic tree based on SSU rRNA and rRNA ITS gene sequence analysis further indicated that the parasite is closely related to Nosema bombycis and should be placed in the genus Nosema and sub-group 'true' Nosema.
Furthermore, the microsporidium heavily infects the lepidopteran silkworm and can be transmitted per os (horizontally) and transovarially (vertically). Our findings showed that the microsporidium belongs to the 'true' Nosema group within the genus Nosema and heavily infects silkworms. Based on the information obtained during this study, we named this new microsporidium isolated from M. cribraria as Nosema sp. MC. PMID:25173855 Xing, Dongxu; Li, Li; Liao, Sentai; Luo, Guoqing; Li, Qingrong; Xiao, Yang; Dai, Fanwei; Yang, Qiong 2014-11-01 228 PubMed The objective of the present study was to investigate the utilization of pupal waste and silkworm litter separately as production media for the mass cultivation of the potential biopesticide, Bacillus thuringiensis (Bt). Bt is the most successful commercial biopesticide, accounting for 90% of all biopesticides sold all over the world. Biochemical analysis of the dry pupal waste revealed it to consist of 4% carbohydrates, 44.9% proteins and 40% lipids. Similarly, the biochemical composition of dry silkworm litter was found to be 4% carbohydrates, 57.5% proteins and 30.5% lipids. B. thuringiensis NCIM No. 2159 was mass cultivated in a semi-solid-state fermentation at pH 7.0 and 32 °C. Changes in the pH and biochemical composition of the substrates were evaluated during the course of the fermentation. The reliability of the two substrates as production media was evaluated by determination of growth at regular intervals. Maximum growth was recorded at 96 h of incubation, showing a spore count on the order of 3.5 × 10^10 and 3.0 × 10^10 CFU/g in pupal waste and silkworm litter, respectively. PMID:23403062 Patil, Sarvamangala R; Amena, S; Vikas, A; Rahul, P; Jagadeesh, K; Praveen, K 2013-03-01 229 PubMed Antimicrobial peptides (AMPs), both synthetic and from natural sources, have raised interest recently as potential alternatives to antibiotics.
Cyto-insectotoxin (Cit1a) is a 69-amino-acid antimicrobial peptide isolated from the venom of the central Asian spider Lachesana tarabaevi. The synthetic gene Cit1a fused with the enhanced green fluorescent protein (EGFP) gene was expressed as the EGFP-Cit1a fusion protein using a cysteine protease-deleted Bombyx mori nucleopolyhedrovirus (BmNPV-CP(-)) bacmid in silkworm larvae and pupae. The antimicrobial effect of the purified protein was assayed using disk diffusion and broth microdilution methods. The minimum inhibitory concentration of EGFP-Cit1a was also measured against several bacterial strains and showed antimicrobial activity similar to that of the synthetic Cit1a reported earlier. The EGFP-Cit1a fusion protein showed antibiotic activity toward gram-positive and gram-negative bacteria at the micromolar concentration level. These results show that active Cit1a can be produced and purified in silkworm, although this peptide is insecticidal. This study demonstrates the potential of active Cit1a purified from silkworms for use as an antimicrobial agent. PMID:24728600 Ali, M P; Yoshimatsu, Katsuhiko; Suzuki, Tomohiro; Kato, Tatsuya; Park, Enoch Y 2014-08-01 230 PubMed Biotin-dependent human acetyl-CoA carboxylases (ACCs) are integral in homeostatic lipid metabolism. By securing posttranslational biotinylation, ACCs perform coordinated catalytic functions allosterically regulated by phosphorylation/dephosphorylation and citrate. The production of authentic recombinant ACCs is needed to provide a reliable tool for molecular studies and drug discovery. Here, we examined whether human ACC2 (hACC2), an isoform of ACC produced using the silkworm BmNPV bacmid system, is equipped with proper posttranslational modifications to carry out catalytic functions, as the silkworm harbors an inherent posttranslational modification machinery. Purified hACC2 possessed genuine biotinylation capacity, probed by biotin-specific streptavidin and biotin antibodies.
In addition, phosphorylated hACC2 displayed limited catalytic activity, whereas dephosphorylated hACC2 revealed enhanced enzymatic activity. Moreover, hACC2 polymerization, analyzed by native PAGE analysis and atomic force microscopy imaging, was allosterically regulated by citrate, and phosphorylation/dephosphorylation modulated the citrate-induced hACC2 polymerization process. Thus, the silkworm BmNPV bacmid system provides a reliable eukaryotic protein production platform for structural and functional analysis and therapeutic drug discovery applications, implementing suitable posttranslational biotinylation and phosphorylation. PMID:24740690 Hwang, In-Wook; Makishima, Yu; Kato, Tatsuya; Park, Sungjo; Terzic, Andre; Park, Enoch Y 2014-10-01 231 PubMed To investigate the molecular mechanisms underlying the low fibroin production of the ZB silkworm strain, we used both SDS-PAGE-based and gel-free-based proteomic techniques and a transcriptomic sequencing technique. Combining the data from the two different proteomic techniques was preferable in characterizing the differences between the ZB silkworm strain and the original Lan10 silkworm strain. The correlation analysis showed that individual proteins and transcripts did not correspond well; however, the differentially changed proteins and transcripts showed a similar direction of regulation in function at the pathway level. In the ZB strain, numerous ribosomal proteins and transcripts were down-regulated, along with the transcripts of translation-related elongation factors and genes of important components of fibroin. The proteasome pathway was significantly enhanced in the ZB strain, indicating that protein degradation began on the third day of the fifth instar, when fibroin would normally have been produced plentifully in the Lan10 strain.
From the proteome and transcriptome levels of the ZB strain, the energy-metabolism-related pathways, oxidative phosphorylation, glycolysis/gluconeogenesis, and the citrate cycle were enhanced, suggesting that energy metabolism was vigorous in the ZB strain while silk production was low. This may be due to inefficient energy employment in fibroin synthesis in the ZB strain. These results suggest that the decrease in silk production might be related to a reduced capacity for fibroin synthesis, the degradation of proteins, and inefficient energy utilization. PMID:24428189 Wang, Shao-Hua; You, Zheng-Ying; Ye, Lu-Peng; Che, Jiaqian; Qian, Qiujie; Nanjo, Yohei; Komatsu, Setsuko; Zhong, Bo-Xiong 2014-02-01 232 E-print Network Oak Ridge National Laboratory: Past, Present, and Future. Keith Joy, Small Business Manager. Excellence in science and innovative solutions to complex problems. 233 E-print Network Bleeding coast live oaks were under active attack by ambrosia and bark beetles (Coleoptera) in a consistent and predictable sequence: bleeding, then beetle colonization, followed by emergence of Hypoxylon. Past beetle activity was recorded during each March sampling period, from 2000 to 2003. Standiford, Richard B. 234 NSDL National Science Digital Library This peer-reviewed article from BioScience is about the disappearance of white oaks in the US. Before European settlement, vast areas of deciduous forest in what is now the eastern United States were dominated by oak species. Among these species, white oak (Quercus alba) reigned supreme. White oak tended to grow at lower elevations but was distributed across a broad range of sites, from wet mesic to subxeric, and grew on all but the wettest and most xeric, rocky, or nutrient-poor soils.
A comparison of witness tree data from early land surveys and data on modern-day forest composition revealed a drastic decline in white oak throughout the eastern forest. By contrast, other dominant oaks, such as red oak (Quercus rubra) and chestnut oak (Quercus prinus), often exhibited higher frequency in recent studies than in surveys of the original forest. The frequency of red oak witness trees before European settlement was quite low, generally under 5% in most forests. Red oak's distribution was apparently limited by a lower tolerance to fire and drought and a greater dependence on catastrophic disturbances than that of white oak. During the late 19th and early 20th centuries, much of the eastern forest was decimated by land clearing, extensive clear-cutting, catastrophic fires, chestnut blight, and then fire suppression and intensive deer browsing. These activities had the greatest negative impact on the highly valued white oak, while promoting the expansion of red oak and chestnut oak. More recently, however, recruitment of all the dominant upland oaks has been limited on all but the most xeric sites. Thus, the dynamic equilibrium in the ecology of upland oaks that existed for thousands of years has been destroyed in the few centuries following European settlement. Abrams, Marc D. 2003-10-01 235 PubMed Polyethylenimine (PEI) has attracted much attention as a DNA condenser, but its toxicity and non-specific targeting limit its potential. To overcome these limitations, Antheraea pernyi silk fibroin (ASF), a natural protein rich in arginyl-glycyl-aspartic acid (RGD) peptides that contains negative surface charges in a neutral aqueous solution, was used to coat PEI/DNA complexes to form ASF/PEI/DNA ternary complexes. Coating these complexes with ASF caused fewer surface charges and greater size compared with the PEI/DNA complexes alone.
In vitro transfection studies revealed that incorporation of ASF led to greater transfection efficiencies in both HEK (human embryonic kidney) 293 and HCT (human colorectal carcinoma) 116 cells, albeit with less electrostatic binding affinity for the cells. Moreover, the transfection efficiency in the HCT 116 cells was higher than that in the HEK 293 cells under the same conditions, which may be due to the target binding affinity of the RGD peptides in ASF for integrins on the HCT 116 cell surface. This result indicated that the RGD binding affinity in ASF for integrins can enhance the specific targeting affinity to compensate for the reduction in electrostatic binding between ASF-coated PEI carriers and cells. Cell viability measurements showed higher cell viability after transfection of ASF/PEI/DNA ternary complexes than after transfection of PEI/DNA binary complexes alone. Lactate dehydrogenase (LDH) release studies further confirmed the improvement in the targeting effect of ASF/PEI/DNA ternary complexes to cells. These results suggest that ASF-coated PEI is a preferred transfection reagent and useful for improving both the transfection efficiency and cell viability of PEI-based nonviral vectors. PMID:24776757 Liu, Yu; You, Renchuan; Liu, Guiyang; Li, Xiufang; Sheng, Weihua; Yang, Jicheng; Li, Mingzhong 2014-01-01 236 PubMed Central The release of prothoracicotropic hormone, PTTH, or its blockade is the major endocrine switch regulating the developmental channel either to metamorphosis or to pupal diapause in the Chinese silk moth, Antheraea pernyi. We have cloned cDNAs encoding two types of serotonin receptors (5HTRA and B). 5HTRA- and 5HTRB-like immunohistochemical reactivities (-ir) were colocalized with PTTH-ir in two pairs of neurosecretory cells at the dorsolateral region of the protocerebrum (DL). These receptors were therefore suspected of causal involvement in PTTH release/synthesis.
The level of 5HTRB mRNA responded to 10 cycles of long-day activation, falling to 40% of the level before activation, while that of 5HTRA was not affected by long-day activation. Under LD 16:8 and 12:12, the injection of dsRNA against 5HTRB resulted in early diapause termination, whereas that of dsRNA against 5HTRA did not affect the rate of diapause termination. The injection of dsRNA against 5HTRB induced PTTH accumulation, indicating that 5HTRB binding also suppresses PTTH synthesis. This conclusion was supported pharmacologically; the injection of luzindole, a melatonin receptor antagonist, plus 5-HT inhibited photoperiodic activation under LD 16:8, while that of 5,7-DHT induced emergence in a dose-dependent fashion under LD 12:12. The results suggest that 5HTRB may lock PTTH release/synthesis, maintaining diapause. This could also work as a diapause induction mechanism. PMID:24223937 Takeda, Makio 2013-01-01 237 PubMed In moths, pheromone-binding proteins (PBPs) are responsible for the transport of the hydrophobic pheromones to the membrane-bound receptors across the aqueous sensillar lymph. We report here that recombinant Antheraea polyphemus PBP1 (ApolPBP1) picks up hydrophobic molecule(s) endogenous to the Escherichia coli expression host that keep the protein in the "open" (bound) conformation at high pH but switch it to the "closed" (free) conformation at low pH. This finding has bearing on the solution structures of undelipidated lepidopteran moth PBPs determined thus far. Picking up a hydrophobic molecule from the host expression system could be a common feature of lipid-binding proteins. Thus, delipidation is critical for bacterially expressed lipid-binding proteins. We have shown for the first time that the delipidated ApolPBP1 exists primarily in the closed form at all pH levels. Thus, current views on the pH-induced conformational switch of PBPs hold true only for the ligand-bound open conformation of the protein.
Binding of various ligands to delipidated ApolPBP1 studied by solution NMR revealed that the protein in the closed conformation switches to the open conformation only at or above pH 6.0, with a protein-to-ligand stoichiometry of approximately 1:1. Mutation of His(70) and His(95) to alanine drives the equilibrium toward the open conformation even at low pH for the ligand-bound protein by eliminating the histidine-dependent pH-induced conformational switch. Thus, the delipidated double mutant can bind ligand even at low pH, in contrast to the wild-type protein, as revealed by fluorescence competitive displacement assay using 1-aminoanthracene and solution NMR. PMID:19758993 Katre, Uma V; Mazumder, Suman; Prusti, Rabi K; Mohanty, Smita 2009-11-13 238 PubMed In the moth Antheraea polyphemus, the motor nerves were severed at the onset of adult development. During the subsequent breakdown of the isolated motor stumps, elongated vesicles similar in structure to channels of smooth ER appear in large numbers in the axoplasm. Their nature as well as the functional aspects of early axonal changes are discussed. From the 7th day onward, two types of axonal breakdown become prominent. The first is characterized by swelling axon profiles, distorted vesicles and strongly shrunken mitochondria, while shrinking axon profiles containing tightly packed mitochondria and unaltered vesicles are typical of the second. Both types presumably take place independently of each other in different axon terminals. Axons and the contents of at least the first type are finally removed by transformation into lamellar bodies. Glial processes obviously behave independently of degenerating terminals; they lose any contact with them and never act as phagocytes for axon remnants. During the whole period of breakdown, undifferentiated contacts between nerve fibers and muscle anlagen are present, but synaptic structures as in normally developing dlm have never been observed.
This fact, in comparison with earlier studies, suggests a lack of trophic nervous activity on the muscle anlagen tissue. A short time after removal of the isolated stumps, new nerve tracts appear between dlm-fibers (which are, of course, strongly retarded in development). They are presumably sensory wing nerves which lack a guide structure to the central target, due to axotomy. Neuromuscular contacts or even junctions formed by axons of these nerves have occasionally been detected on the dlm. Their nature is discussed. Wallerian axon degeneration is compared to the normal, metamorphic breakdown of the innervation of the larval dlm-precursor. In contrast to the former, glial processes here remain in contact with the terminals. Glia and axons first swell. Then most glial processes are transformed into lamellar bodies, whereas neurites shrink and become electron-dense. Axonal organelles remain intact for a long period. PMID:1201608 Nüesch, H; Stocker, R F 1975-12-10 239 PubMed The NMR structure of the Antheraea polyphemus pheromone-binding protein 1 at pH 4.5, ApolPBP1A, was determined at 20 °C. The structure consists of six alpha-helices, which are arranged in a globular fold that encapsulates a central helix alpha7 formed by the C-terminal polypeptide segment 131-142. The 3D arrangement of these helices is anchored by the three disulfide bonds 19-54, 50-108 and 97-117, which were identified by NMR. Superposition of the ApolPBP1A structure with the structure of the homologous pheromone-binding protein of Bombyx mori at pH 4.5, BmorPBPA, yielded an rmsd of 1.7 Å calculated for the backbone heavy-atoms N, Calpha and C' of residues 10-142. In contrast, the present ApolPBP1A structure is different from a recently proposed molecular model for a low-pH form of ApolPBP1 that does not contain the central helix alpha7.
ApolPBP1 exhibits a pH-dependent transition between two different globular conformations in slow exchange on the NMR chemical shift timescale similar to BmorPBP, suggesting that the two proteins use the same mechanism of ligand binding and ejection. The extensive sequence homology observed for pheromone-binding proteins from moth species further implies that the previously proposed mechanism of ligand ejection involving the insertion of a C-terminal helix into the pheromone-binding site is a general feature of pheromone signaling in moths. PMID:17884092 Damberger, Fred F; Ishida, Yuko; Leal, Walter S; Wüthrich, Kurt 2007-11-01 240 PubMed The ultrastructure of neuromuscular connections on developing dorsolongitudinal flight muscles was studied in the moth Antheraea polyphemus. Undifferentiated membrane contacts between axon terminals and muscle-fiber anlagen are present in the diapause pupa. They persist during the period of nerve outgrowth, which probably provides a pathway of contact guidance. By the 4th day of adult development, some of these contact areas have differentiated into structures similar to neuromuscular junctions, although differentiation of muscle structure does not start earlier than the eighth day. Dense-cored vesicles are abundant in many axon terminals at the beginning of development. They later decrease in number quite rapidly. The significance of the above-mentioned early junctions, their possible mode of action and the role of the dense-cored vesicles are discussed. It is proposed that they exercise a stimulating (trophic) influence on the growth of the undifferentiated muscular tissue. The imaginal neuromuscular junctions are formed during the second half of adult development. Clusters of vesicles and electron-dense depositions along the inner face of the axo- and sarcolemma seem to initiate junction formation. Glial processes then grow between the axo- and sarcolemma and divide the large contact area into several small segments.
Mutual invaginations and protrusions of the sarcolemma and the glial cell membrane subsequently form an extensive "rete synapticum." Six days before eclosion, the glial and sarcoplasmic parts of the rete synapticum are similar in size. Up to eclosion, all glial processes shrink and increase in electron density. Most of the observations are discussed also in relation to findings in vertebrates. PMID:1149098 Stocker, R F; Nüesch, H 1975-06-01 241 PubMed The antenna of the male silkmoth Antheraea polyphemus develops from a one-layered, flattened epidermal sac during the pupal phase. Within the first day post-apolysis (developmental stages 1 and 2), this epithelium differentiates into 'sensillogenic' and 'nonsensillogenic' regions, while numerous slender 'dark cells' interpreted as the precursor cells of sensilla arise in the former. Approximately between the first and second day post-apolysis (developmental stage 3), the dark cells retract to the apical pole of the epidermis, assume a round shape, and undergo a series of differential mitoses with spindles usually oriented parallel to the epidermal surface. These mitoses finally yield the Anlagen of the olfactory sensilla trichodea, each consisting of mostly 6-7 dark cells arranged side by side. In most of the Anlagen, 3-4 of these cells are situated more basally, each giving off a slender apical process, which together are arranged in a fascicle. These are the prospective 2-3 sensory neurons plus the thecogen cell, which most probably is a sister cell of the former. Three additional cells are arranged more apically and partly enclose the fascicle of presumed sensory and thecogen cell processes. These are interpreted as the trichogen plus 2 tormogen cells, one of the latter degenerating later during development. In the basal region of the sensillogenic epidermis, massive signs of cell degeneration have been found.
At stage 3, the basal epidermal feet in the non-sensillogenic regions have assumed a more uniform orientation as compared with the preceding stages. PMID:18620326 Keil, T A; Steiner, C 1990-01-01 242 E-print Network Science Education Programs at Oak Ridge National Laboratory. Virgin Islands and 56 foreign countries were represented. This publication spotlights the Science Education Programs at ORNL, which are administered by Oak Ridge Associated Universities. Ronquist, Fredrik 243 E-print Network Original article: Oak tree improvement in Indiana. MV Coggeshall, Indiana Department of Natural Resources, Vallonia State Nursery, Vallonia, IN 47281 USA. The intent of this paper is to present an overview of the oak tree improvement program in the state. Paris-Sud XI, Université de 244 E-print Network 23% Oak Woodland, 7% Oak Barrens, and tree layer as a subdominant. Quercus palustris was also a subdominant in Oak Barrens and Wet Prairie. Tree density averaged 90 trees/ha in Oak Woodland, 14 in Oak Savanna, and 2 in Oak Barrens. Gottgens, Hans 245 In this study, human β-1,4-N-acetylglucosaminyltransferase (β4GnT) fused with GFPuv (GFPuv-β4GnT) was expressed using both a transformed cell system and silkworm larvae. A Tn-pXgp-GFPuv-β4GnT cell line, isolated after expression vector transfection, produced 106 mU/ml of β4GnT activity in suspension culture. When a Bombyx mori nucleopolyhedrovirus containing a GFPuv-β4GnT fusion gene (BmNPV-CP(-)/GFPuv-β4GnT) bacmid was injected into silkworm larvae, β4GnT activity in larval hemolymph Makoto Nakajima; Tatsuya Kato; Shin Kanamasa; Enoch Y. Park 2009-01-01 246 PubMed A new continuous cell line from ovarian tissue of the commercial variety "Kolar Gold" of the silkworm, Bombyx mori, was established and designated DZNU-Bm-12.
The tissue was grown in MGM-448 insect cell culture medium supplemented with 10% fetal bovine serum (FBS) and 3% heat-inactivated B. mori hemolymph at 25 ± 1 °C. The migration of partially attached small round refractive cells from the fragments of ovarioles began from the beginning of explantation. The cells multiplied partially attached in the primary culture initially, and some of them became freely suspended after 20 passages. The cells were adapted to MGM-448 and TNM-FH media, each with 10% FBS, and the population doubling time of the cell line was about 36 and 24 hr, respectively. The chromosome number was near diploid at initial passages and slightly increased at the 176th passage, but a few tetraploids and hexaploids were also observed. DNA profiles using simple sequence repeat loci established the differences between DZNU-Bm-12 and DZNU-Bm-1 and the most widely used Bm-5 and BmN cell lines. The cell line was found to be susceptible to B. mori nucleopolyhedrovirus (BmNPV), with 85-90% of the cells harboring BmNPV and having an average of 3-17 OBs/infected cell. We suggest the usefulness of this cell line in the BmNPV-based baculoviral expression system and also for studying in vitro virus replication. PMID:19357932 Khurad, Arun M; Zhang, Min-Juan; Deshmukh, Chanchal G; Bahekar, Ravindra S; Tiple, Ashish D; Zhang, Chuan-Xi 2009-09-01 247 PubMed The Scribble protein complex genes, consisting of the lethal giant larvae (Lgl), discs large (Dlg) and scribble (Scrib) genes, are components of an evolutionarily conserved genetic pathway that links cell polarity in cells of humans and Drosophila. The tissue expression and developmental changes of the Scribble protein complex genes were documented using the qRT-PCR method. The Lgl and Scrib genes could be detected in all the experimental tissues, including fat body, midgut, testis/ovary, wingdisc, trachea, malpighian tubule, hemolymph, prothoracic gland and silk gland.
The Dlg gene, mainly expressed in testis/ovary, could not be detected in prothoracic gland and hemolymph. In fat body, there were two stages of higher expression of the three genes. The expression of the Lgl and Scrib genes in wingdisc peaked on the 1st day of the 5th instar, whereas that of the Dlg gene peaked on the 3rd day of the 5th instar. The above results indicate that the Scribble complex genes are involved in the process of molting and development of the wingdisc in the silkworm. This will be useful in the future for the elucidation of the detailed biological function of the three genes Scrib, Dlg and Lgl in B. mori. PMID:24705163 Qi, Hai-Sheng; Liu, Shu-Min; Li, Sheng; Wei, Zhao-Jun 2013-01-01 248 PubMed Genetic transformation and genome editing technologies have been successfully established in the lepidopteran insect model, the domesticated silkworm, Bombyx mori, providing great potential for functional genomics and practical applications. However, the current lack of cis-regulatory elements in B. mori gene manipulation research limits further exploitation in functional gene analysis. In the present study, we characterized a B. mori endogenous promoter, Bmvgp, which is a 798-bp DNA sequence adjacent to the 5'-end of the vitellogenin gene (Bmvg). PiggyBac-based transgenic analysis shows that Bmvgp precisely directs expression of a reporter gene, enhanced green fluorescent protein (EGFP), in a sex-, tissue- and stage-specific manner. In transgenic animals, EGFP expression can be detected in the female fat body from larval-pupal ecdysis to the following pupal and adult stage. Furthermore, in vitro and in vivo experiments revealed that EGFP expression can be activated by 20-hydroxyecdysone, which is consistent with endogenous Bmvg expression. These data indicate that Bmvgp is an effective endogenous cis-regulatory element in B. mori.
PMID:24828437 Xu, J; Wang, Y Q; Li, Z Q; Ling, L; Zeng, B S; You, L; Chen, Y Z; Aslam, A F M; Huang, Y P; Tan, A J 2014-10-01 249 PubMed Biomaterials that serve as scaffolds for cell proliferation and differentiation are increasingly being used in wound repair. In this study, the potential regenerative properties of a 3-D scaffold containing soluble silkworm gland hydrolysate (SSGH) and human collagen were evaluated. The scaffold was generated by solid-liquid phase separation and a freeze-drying method using a homogeneous aqueous solution. The porosity, swelling behavior, protein release, cytotoxicity, and antioxidative properties of scaffolds containing various ratios of SSGH and collagen were evaluated. SSGH/collagen scaffolds had a high porosity of 61-81%, and swelling behavior studies demonstrated a 50-75% increase in swelling, along with complete protein release in the presence of phosphate-buffered saline. Cytocompatibility of the SSGH/collagen scaffold was demonstrated using mesenchymal stem cells from human umbilical cord. Furthermore, SSGH/collagen efficiently attenuated oxidative stress-induced cell damage. In an in vivo mouse model of wound healing, the SSGH/collagen scaffold accelerated wound re-epithelialization over a 15-day period. Overall, the microporous SSGH/collagen 3-D scaffold maintained optimal hydration of the exposed tissues and decreased wound healing time. These results contribute to the generation of advanced wound healing materials and may have future therapeutic implications. PMID:24503353 Kim, Kyu-Oh; Lee, Youngjun; Hwang, Jung-Wook; Kim, Hojin; Kim, Sun Mi; Chang, Sung Woon; Lee, Heui Sam; Choi, Yong-Soo 2014-04-01 250 PubMed Topical application of fenoxycarb (1 μg per animal) at 129 or 132 h of the fifth instar larvae of the silkworm, Bombyx mori, did not induce morphological abnormalities in the pupal stage, but these animals became dauer (permanent) pupae. This condition of B.
mori and the endocrine events leading to permanent pupae are discussed in this work. Application of fenoxycarb at 132 h of the fifth instar elicited a high ecdysteroid titre in the pharate pupal stage and a steadily high ecdysteroid titre in the pupal stage. The fenoxycarb-induced permanent pupae had non-degenerating prothoracic glands that secreted low amounts of ecdysteroid and did not respond to recombinant prothoracicotropic hormone (rPTTH) late in the pupal stage. The Bombyx PTTH titre in the haemolymph, determined by a time-resolved fluoroimmunoassay, was lower than that of controls at the time of pupal ecdysis, but higher than controls later in the pupal stage in fenoxycarb-treated animals. After application of fenoxycarb, its haemolymph level, measured by ELISA, reached a peak at pupal ecdysis, then remained low. These results suggest that the fenoxycarb-mediated induction of permanent pupae is only partially a brain-centred phenomenon. It also involves alterations in the hormonal interplay that govern both the initiation of pupal-adult differentiation and changes in the steroidogenic pathway of the prothoracic glands of B. mori. PMID:12770048 Dedos, S G; Szurdoki, F; Székács, A; Mizoguchi, A; Fugo, H 2002-09-01 251 PubMed Central Albendazole is a broad-spectrum parasiticide with high effectiveness and low host toxicity. No method is currently available for measuring albendazole and its metabolites in silkworm hemolymph. This study describes a rapid, selective, sensitive, synchronous and reliable detection method for albendazole and its metabolites in silkworm hemolymph using ultrafast liquid chromatography tandem triple quadrupole mass spectrometry (UFLC-MS/MS). The method is liquid-liquid extraction followed by UFLC separation and quantification in an MS/MS system with positive electrospray ionization in multiple reaction monitoring mode.
Precursor-to-product ion transitions were monitored at 266.100 to 234.100 for albendazole (ABZ), 282.200 to 208.100 for albendazole sulfoxide (ABZSO), 298.200 to 159.100 for albendazole sulfone (ABZSO2) and 240.200 to 133.100 for albendazole amino sulfone (ABZSO2-NH2). Calibration curves had good linearities, with R2 of 0.9905–0.9972. Limits of quantitation (LOQs) were 1.32 ng/mL for ABZ, 16.67 ng/mL for ABZSO, 0.76 ng/mL for ABZSO2 and 5.94 ng/mL for ABZSO2-NH2. Recoveries were 93.12%–103.83% for ABZ, 66.51%–108.51% for ABZSO, 96.85%–105.6% for ABZSO2 and 96.46%–106.14% for ABZSO2-NH2 (RSDs <8%). Accuracy, precision and stability tests showed acceptable variation in quality control (QC) samples. This analytical method successfully determined albendazole and its metabolites in silkworm hemolymph in a pharmacokinetic study. The results of single-dose treatment suggested that the concentrations of ABZ, ABZSO and ABZSO2 increased and then fell, while the ABZSO2-NH2 level was low without obvious change. Different trends were observed for multi-dose treatment, with concentrations of ABZSO and ABZSO2 rising over time. PMID:25255321 Li, Li; Xing, Dong-Xu; Li, Qing-Rong; Xiao, Yang; Ye, Ming-Qiang; Yang, Qiong 2014-01-01 252 PubMed Although SW-AT-1, a serpin-type trypsin inhibitor from silkworm (Bombyx mori), was identified in a previous study, its structure-function relationship has not been studied. In this study, SW-AT-1 was cloned from the body wall of the silkworm and expressed in E. coli. rSW-AT-1 inhibited both trypsin and chymotrypsin in a concentration-dependent manner. The association rate constant for rSW-AT-1 and trypsin is 1.31 × 10^-5 M^-1 s^-1, and for rSW-AT-1 and chymotrypsin it is 2.85 × 10^-6 M^-1 s^-1. Circular dichroism (CD) assay showed 33% α-helices, 16% β-sheets, 17% turns, and 31% random coils in the secondary structure of the protein.
Enzymatic and CD analysis indicated that rSW-AT-1 was stable over a wide pH range between 4 and 10, and exhibited the highest activity under weakly acidic or alkaline conditions. The predicted three-dimensional structure of SW-AT-1 by PyMOL (v1.4) revealed a deductive reactive centre loop (RCL) near the C-terminus, which was extended from the body of the molecule. In addition to the trypsin cleavage site in the RCL, matrix-assisted laser desorption ionization time of flight mass spectrometry indicated that the chymotrypsin cleavage site of SW-AT-1 was between F336 and T337 in the RCL. Directed mutagenesis indicated that both the N- and C-terminal sides of the RCL have effects on the activity, and that G327 and E329 played an important role in the proper folding of the RCL. The physiological role of SW-AT-1 in the defense responses of the silkworm is also discussed. PMID:24901510 Liu, Cheng; Han, Yue; Chen, Xi; Zhang, Wei 2014-01-01 254 SciTech Connect This report presents brief descriptions of the following programs at Oak Ridge National Laboratory: The effects of pollution and climate change on forests; automation to improve the safety and efficiency of rearming battle tanks; new technologies for DNA sequencing; ORNL probes the human genome; ORNL as a supercomputer research center; paving the way to superconcrete made with polystyrene; a new look at supercritical water used in waste treatment; and small mammals as environmental monitors. Krause, C.; Pearce, J.; Zucker, A. (eds.) 1992-01-01 255 PubMed Pheromone-binding proteins (PBPs) in lepidopteran moths selectively transport the hydrophobic pheromone molecules across the sensillar lymph to trigger the neuronal response. Moth PBPs are known to bind ligand at physiological pH and release it at acidic pH while undergoing a conformational change. Two molecular switches are considered to play a role in this mechanism: (i) protonation of His(70) and His(95), situated at one end of the binding pocket, and (ii) switch of the unstructured C-terminus at the other end of the binding pocket to a helix that enters the pocket. We have reported previously the role of the histidine-driven switch in ligand release for Antheraea polyphemus PBP1 (ApolPBP1). Here we show that the C-terminus plays a role in the ligand release and binding mechanism of ApolPBP1.
The C-terminus truncated mutants of ApolPBP1 (ApolPBP1ΔP129-V142 and ApolPBP1H70A/H95AΔP129-V142) exist only in the bound conformation at all pH levels, and they fail to undergo pH- or ligand-dependent conformational switching. Although these proteins could bind ligands even at acidic pH, unlike wild-type ApolPBP1, they had ~4-fold reduced affinity for the ligand at both acidic and physiological pH compared to that of wild-type ApolPBP1 and ApolPBP1H70A/H95A. Thus, apart from helping in ligand release at acidic pH, the C-terminus in ApolPBP1 also plays an important role in ligand binding and/or locking the ligand in the binding pocket. Our results are in stark contrast to those reported for BmorPBP and AtraPBP, where C-terminus truncated proteins had similar or increased pheromone binding affinity at any pH. PMID:23327454 Katre, Uma V; Mazumder, Suman; Mohanty, Smita 2013-02-12 256 PubMed During adult development of the male silkmoth Antheraea polyphemus, the anlagen of olfactory sensilla arise within the first 2 days post-apolysis in the antennal epidermis (stage 1-3). Approximately on the second day, the primary dendrites as well as the axons grow out from the sensory neurons (stage 4). The trichogen cells start to grow apical processes approximately on the third day, and these hair-forming 'sprouts' reach their definite length around the ninth day (stages 5-6). Then the secretion of cuticle begins, the cuticulin layer having formed on day 10 (stage 7a). The primary dendrites are shed, the inner dendritic segments as well as the thecogen cells retract from the prospective hair bases, and the inner tormogen cells degenerate around days 10/11 (stage 7b). The hair shafts of the basiconic sensilla are completed around days 12/13 (stage 7c), and those of the trichoid sensilla around days 14/15 (stage 7d).
The trichogen sprouts retract from the hairs after having finished cuticle formation, and the outer dendritic segments grow out into the hairs: in the basiconic sensilla directly through, and in the trichoid sensilla alongside, the sprouts. The trichogen sprouts contain numerous parallel-running microtubules. Besides their cytoskeletal function, these are most probably involved in the transport of membrane vesicles. During the phase of cuticle deposition, large numbers of vesicles are transported anterogradely from the cell bodies into the sprouts, where they fuse with the apical cell membrane and release their electron-dense contents (most probably cuticle precursors) to the outside. As the cuticle grows in thickness, the surface area of the sprouts is reduced by endocytosis of coated vesicles. When finally the sprouts retract from the completed hairs, the number of endocytotic vesicles is further increased and numerous membrane cisterns seem to be transported retrogradely along the microtubules to the cell bodies. Here the membrane material will most probably be used again in the formation of the sensillum lymph cavities. Thus, the trichogen cells are characterized by an intensive membrane recycling. The sensillum lymph cavities develop between days 16-20 (stage 8), mainly via apical invaginations of the trichogen cells. The imago emerges on day 21. PMID:18621189 Keil, T A; Steiner, C 1991-01-01 258 PubMed One of the most important agents causing lethal disease in the silkworm is the Bombyx mori nucleopolyhedrovirus (BmNPV), while low-dose rare earths are demonstrated to increase immune capacity in animals. However, very little is known about the effects of added CeCl(3) on decreasing BmNPV infection of silkworm. The present study investigated the effects of added CeCl(3) to an artificial diet on resistance of fifth-instar larvae of silkworm to BmNPV infection.
Our findings indicated that added CeCl(3) significantly decreased the inhibition of growth and mortality of fifth-instar larvae caused by BmNPV infection. Furthermore, the added CeCl(3) obviously decreased the lipid peroxidation level and accumulation of reactive oxygen species such as O(2)(-), H(2)O(2), •OH, and NO, and increased the activities of antioxidant enzymes including superoxide dismutase, catalase, ascorbate peroxidase and glutathione peroxidase, as well as ascorbate and glutathione contents, in the BmNPV-infected fifth-instar larvae. In addition, the added CeCl(3) could significantly promote acetylcholine esterase activity and attenuate the activity of inducible nitric oxide synthase in the BmNPV-infected fifth-instar larvae. These findings suggested that added CeCl(3) may relieve oxidative damage and neurotoxicity of silkworm caused by BmNPV infection via increasing antioxidant capacity and acetylcholine esterase activity. PMID:22076733 Li, Bing; Xie, Yi; Cheng, Zhe; Cheng, Jie; Hu, Rengping; Cui, Yaling; Gong, Xiaolan; Shen, Weide; Hong, Fashui 2012-06-01 259 PubMed For ultrathin metallic films, either supported or free-standing, the inside nanocrystalline nature significantly reduces the electron and thermal transport. Quantum mechanical reflection of electrons at the grain boundary reduces the electrical conductivity further than the thermal conductivity, leading to a Lorenz number in the order of 7.0 × 10⁻⁸ W Ω K⁻², much higher than that of the bulk counterpart. We report on a finding that for ultrathin (0.6-6.3 nm) iridium films coated on degummed silkworm silk fibroin, the electron transport is around 100-200% higher than that of the same film on glass fiber, even though the grain size of the Ir film on silkworm silk is smaller than that on glass fiber. At the same time, the thermal conductivity of the Ir film is smaller or close to that of the film on glass fiber.
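The elevated Lorenz number discussed here follows from the Wiedemann-Franz relation L = κ/(σT). A minimal sketch with illustrative film values (placeholders, not data from the study above), compared against the free-electron Sommerfeld value:

```python
# Wiedemann-Franz relation: L = kappa / (sigma * T).
# kappa, sigma and T below are hypothetical placeholders, not measurements.

SOMMERFELD_L0 = 2.44e-8  # W Ohm K^-2, free-electron theoretical Lorenz number

def lorenz_number(kappa, sigma, temperature):
    """kappa: W m^-1 K^-1; sigma: S m^-1; temperature: K."""
    return kappa / (sigma * temperature)

# Grain-boundary reflection suppresses sigma more strongly than kappa in a
# nanocrystalline film, which inflates L above the Sommerfeld value.
L_film = lorenz_number(kappa=30.0, sigma=1.43e6, temperature=300.0)
print(f"L = {L_film:.2e} W Ohm K^-2 "
      f"({L_film / SOMMERFELD_L0:.1f}x the Sommerfeld value)")
```

With these placeholder numbers the film's Lorenz number comes out close to the ~7 × 10⁻⁸ W Ω K⁻² scale mentioned for nanocrystalline films.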
Its Lorenz number is found close to that of bulk crystalline Ir despite the nanocrystalline structure in the Ir films. This is similar to the behavior of metallic glasses. Our study of gold films on silkworm silk reveals the same trend of change as compared to that on glass fiber. Electron hopping and tunneling in silkworm silk is speculated to be responsible for the observed electron transport. The finding points out that silk could provide a better substrate for flexible electronics with significantly faster electron transport. PMID:24988039 Lin, Huan; Xu, Shen; Zhang, Yu-Qing; Wang, Xinwei 2014-07-23 260 E-print Network of 100s of thousands of oak and tan oak trees. California bay laurel (Umbellularia californica) has been for predicting risk of P. ramorum spread from bay laurel to oak and tan oak trees, an important consideration225 Susceptibility to Sudden Oak Death in California Bay Laurel1 Brian Anacker,2 Nathan Rank,2 Standiford, Richard B. 261 E-print Network oak trees in California (modified from Pavlik and others 1991). precipitation occurring primarily). Dominant trees in the oak woodland include blue oak (Quercus douglasii), valley oak (Q. lobata), interiorHerbaceous Responses to Livestock Grazing in Californian Oak Woodlands: A Review for Habitat Standiford, Richard B. 262 E-print Network of thousands of oak and tanoak trees (coast live oak, Quercus agrifolia, California black oak, Q. kelloggii and Garbelotto 2003). The primary symptom of sudden oak death on affected trees is the production of a viscousORIGINAL ARTICLE Spatial pattern dynamics of oak mortality and associated disease symptoms Kelly, Maggi 263 NASA Astrophysics Data System (ADS) Recently, many studies have been conducted on exploitation of natural materials for modern product development and bioengineering applications. 
Apart from plant-based materials (such as sisal, hemp, jute, bamboo and palm fibre), animal-based fibre is a kind of sustainable natural material for making novel composites. Silkworm silk fibre extracted from cocoon has been well recognized as a promising material for bio-medical engineering applications because of its superior mechanical and bioresorbable properties. However, when producing silk fibre reinforced biodegradable/bioresorbable polymer composites, hydrophilic sericin has been found to cause poor interfacial bonding with most polymers, which affects the resultant properties of the composites. Besides, sericin layers on the fibroin surface may also cause an adverse effect on biocompatibility and hypersensitivity to silk for implant applications. Therefore, a proper pre-treatment should be done for sericin removal. Degumming is a surface modification process which allows a wide control of the silk fibre's properties, making it possible to use the silk fibre for the development and production of novel bio-composites with unique/specific mechanical and biodegradable properties. In this paper, a cleaner and environmentally friendly surface modification technique for tussah silk in polymer based composites is proposed. The effectiveness of different degumming parameters including degumming time and temperature on tussah silk is discussed through the analyses of their mechanical and morphological properties. Based on the results obtained, it was found that the mechanical properties of tussah silk are affected by the degumming time due to the change of the fibre structure and fibroin alignment. Ho, Mei-po; Wang, Hao; Lau, Kin-tak 2012-02-01 264 PubMed Central Insect molting and metamorphosis are intricately governed by two hormones, ecdysteroids and juvenile hormones (JHs). JHs prevent precocious metamorphosis and allow the larva to undergo multiple rounds of molting until it attains the proper size for metamorphosis.
In the silkworm, Bombyx mori, several moltinism mutations have been identified that exhibit variations in the number of larval molts; however, none of them have been characterized molecularly. Here we report the identification and characterization of the gene responsible for the dimolting (mod) mutant that undergoes precocious metamorphosis with fewer larval-larval molts. We show that the mod mutation results in complete loss of JHs in the larval hemolymph and that the mutant phenotype can be rescued by topical application of a JH analog. We performed positional cloning of mod and found a null mutation in the cytochrome P450 gene CYP15C1 in the mod allele. We also demonstrated that CYP15C1 is specifically expressed in the corpus allatum, an endocrine organ that synthesizes and secretes JHs. Furthermore, a biochemical experiment showed that CYP15C1 epoxidizes farnesoic acid to JH acid in a highly stereospecific manner. Precocious metamorphosis of mod larvae was rescued when the wild-type allele of CYP15C1 was expressed in transgenic mod larvae using the GAL4/UAS system. Our data therefore reveal that CYP15C1 is the gene responsible for the mod mutation and is essential for JH biosynthesis. Remarkably, precocious larval-pupal transition in mod larvae does not occur in the first or second instar, suggesting that authentic epoxidized JHs are not essential in very young larvae of B. mori. Our identification of a JH-deficient mutant in this model insect will lead to a greater understanding of the molecular basis of the hormonal control of development and metamorphosis.
PMID:22412378 Daimon, Takaaki; Kozaki, Toshinori; Niwa, Ryusuke; Kobayashi, Isao; Furuta, Kenjiro; Namiki, Toshiki; Uchino, Keiro; Banno, Yutaka; Katsuma, Susumu; Tamura, Toshiki; Mita, Kazuei; Sezutsu, Hideki; Nakayama, Masayoshi; Itoyama, Kyo; Shimada, Toru; Shinoda, Tetsuro 2012-01-01 265 E-print Network Petition to Participate in Oakes College Commencement for Students Not Affiliated with Oakes College Name: ___________________________________________________ First Middle Last Major: __________________________________________________ Single Double Combined Minors are not announced/included in program College Affiliation Belanger, David P. 266 PubMed Central Although the 30K family proteins are important anti-apoptotic molecules in silkworm hemolymph, the underlying mechanism remains to be investigated. This is especially the case in human umbilical vein endothelial cells (HUVECs). In this study, a 30K protein, 30Kc6, was successfully expressed and purified using the Bac-to-Bac baculovirus expression system in silkworm cells. Furthermore, the 30Kc6 expressed in Escherichia coli was used to generate a polyclonal antibody. Western blot analysis revealed that the antibody could react specifically with the purified 30Kc6 expressed in silkworm cells. The in vitro cell apoptosis model of HUVEC induced by oxidized low density lipoprotein (Ox-LDL) and the in vivo atherosclerosis rabbit model were constructed and employed to analyze the protective effects of the silkworm protein 30Kc6 on these models. The results demonstrated that the silkworm protein 30Kc6 significantly enhanced the cell viability in HUVEC cells treated with Ox-LDL, decreased the degree of DNA fragmentation and markedly reduced the level of 8-isoprostane. This could be indicative of the silkworm protein 30Kc6 antagonizing the Ox-LDL-induced cell apoptosis by inhibiting the intracellular reactive oxygen species (ROS) generation.
Furthermore, Ox-LDL activated the cell mitogen activated protein kinases (MAPK), especially JNK and p38. As demonstrated with Western analysis, 30Kc6 inhibited Ox-LDL-induced cell apoptosis in HUVEC cells by preventing the MAPK signaling pathways. In vivo data have demonstrated that oral feeding of the silkworm protein 30Kc6 dramatically improved the conditions of the atherosclerotic rabbits by decreasing serum levels of total triglyceride (TG), high density lipoprotein cholesterol (HDL-C), low density lipoprotein cholesterol (LDL-C) and total cholesterol (TC). Furthermore, 30Kc6 alleviated the extent of lesions in aorta and liver in the atherosclerotic rabbits. These data are not only helpful in understanding the anti-apoptotic mechanism of the 30K family proteins, but also provide important information on prevention and treatment of human cardiovascular diseases. PMID:23840859 Yu, Wei; Ying, Huihui; Tong, Fudan; Zhang, Chen; Quan, Yanping; Zhang, Yaozhou 2013-01-01 267 Sudden oak death (SOD), a disease induced by the fungus-like pathogen Phytophthora ramorum, threatens to seriously reduce or eliminate several oak species endemic to the west coast of North America. We investigated how the disappearance of one of these species, coast live oak (Quercus agrifolia), may affect populations of five resident oak-affiliated California birds: acorn woodpecker (Melanerpes formicivorus), Nuttall's William B. Monahan; Walter D. Koenig 2006-01-01 268 PubMed Central Insect gut immunity is the first line of defense against oral infection. Although a few immune-related molecules in the insect intestine have been identified by genomics or proteomics approaches with comparison to well-studied tissues, such as hemolymph or fat body, our knowledge about the molecular mechanism underlying the gut immunity, which would involve a variety of unidentified molecules, is still limited.
To uncover additional molecules that might take part in pathogen recognition, signal transduction or immune regulation in the insect intestine, a T7 phage display cDNA library of the silkworm midgut is constructed. By use of different ligands for biopanning, Translationally Controlled Tumor Protein (TCTP) has been selected. BmTCTP is produced in intestinal epithelial cells and released into the gut lumen. The protein level of BmTCTP increases at the early time points during oral microbial infection and declines afterwards. In vitro binding assay confirms its activity as a multi-ligand binding molecule and it can further function as an opsonin that promotes the phagocytosis of microorganisms. Moreover, it can induce the production of anti-microbial peptide via a signaling pathway in which ERK is required, together with dynamic tyrosine phosphorylation of a certain cytoplasmic membrane protein. Taken together, our results characterize BmTCTP as a dual-functional protein involved in both the cellular and the humoral immune response of the silkworm, Bombyx mori. PMID:23894441 Hua, Xiaoting; Song, Liang; Xia, Qingyou 2013-01-01 269 PubMed We have developed a system for stable germline transformation in the silkworm Bombyx mori L. using piggyBac, a transposon discovered in the lepidopteran Trichoplusia ni. The transformation constructs consist of the piggyBac inverted terminal repeats flanking a fusion of the B. mori cytoplasmic actin gene BmA3 promoter and the green fluorescent protein (GFP). A nonautonomous helper plasmid encodes the piggyBac transposase. The reporter gene construct was coinjected into preblastoderm eggs of two strains of B. mori. Approximately 2% of the individuals in the G1 broods expressed GFP. DNA analyses of GFP-positive G1 silkworms revealed that multiple independent insertions occurred frequently. The transgene was stably transferred to the next generation through normal Mendelian inheritance.
The presence of the inverted terminal repeats of piggyBac and the characteristic TTAA sequence at the borders of all the analyzed inserts confirmed that transformation resulted from precise transposition events. This efficient method of stable gene transfer in a lepidopteran insect opens the way for promising basic research and biotechnological applications. PMID:10625397 Tamura, T; Thibert, C; Royer, C; Kanda, T; Abraham, E; Kamba, M; Komoto, N; Thomas, J L; Mauchamp, B; Chavancy, G; Shirk, P; Fraser, M; Prudhomme, J C; Couble, P; Toshiki, T; Chantal, T; Corinne, R; Toshio, K; Eappen, A; Mari, K; Natuo, K; Jean-Luc, T; Bernard, M; Gérard, C; Paul, S; Malcolm, F; Jean-Claude, P; Pierre, C 2000-01-01 270 SciTech Connect In order to investigate the functional signal peptide of silkworm fibroin heavy chain (FibH) and the effect of N- and C-terminal parts of FibH on the secretion of FibH in vivo, N- and C-terminal segments of the fibh gene were fused with the enhanced green fluorescent protein (EGFP) gene. The fused gene was then introduced into silkworm larvae and expressed in silk gland using recombinant AcMNPV (Autographa californica multiple nuclear polyhedrosis virus) as vector. The fluorescence of EGFP was observed with a fluorescence microscope. FibH-EGFP fusion proteins extracted from silk gland were analyzed by Western blot. Results showed that the two alpha helices within the N-terminal 163 amino acid residues and the C-terminal 61 amino acid residues were not necessary for cleavage of the signal peptide and secretion of the fusion protein into silk gland. Then the C-terminal 61 amino acid residues were substituted with a His-tag in the fusion protein to facilitate the purification. N-terminal sequencing of the purified protein showed that the signal cleavage site is between positions 21 and 22 amino acid residues.
Wang Shengpeng [Institute of Biochemistry and Cell Biology, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai (China); Graduate School, Chinese Academy of Sciences, Shanghai (China); Sericultural Research Institute, Chinese Academy of Agricultural Sciences, Zhenjiang (China); Guo Tingqing [Institute of Biochemistry and Cell Biology, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai (China); Graduate School, Chinese Academy of Sciences, Shanghai (China); Guo Xiuyang [Institute of Biochemistry and Cell Biology, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai (China); Graduate School, Chinese Academy of Sciences, Shanghai (China); Huang Junting [Sericultural Research Institute, Chinese Academy of Agricultural Sciences, Zhenjiang (China); Lu Changde [Institute of Biochemistry and Cell Biology, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai (China)]. E-mail: [email protected] 2006-03-24 271 PubMed The Glutathione S-transferases (GSTs) are a large family of multifunctional enzymes, many of which play an important role in the detoxification of endogenous and exogenous toxic substances. In this research, firstly, we measured the rutin-induced transcriptional level of the BmGSTd1 gene by using a real-time quantitative RT-PCR method and a dual spike-in strategy. The activities of the BmGSTd1 promoter in various tissues of silkworm were measured by firefly luciferase activity and normalized by the Renilla luciferase activity. Results showed that the activity of the BmGSTd1 promoter was highest in Malpighian tubule, followed by fat body, silk gland, hemocyte, epidermis, and midgut. The essential region for basal and rutin-induced transcriptional activity was -1573 to -931bp in Malpighian tubule and fat body of silkworm.
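The dual-luciferase readout used in this study normalizes the promoter-driven firefly signal by a co-transfected Renilla control before comparing conditions. A minimal sketch of that arithmetic; the luminescence counts are made-up placeholders, not data from the study:

```python
# Dual-luciferase normalization: report promoter activity as the
# firefly/Renilla ratio, and induction as treated/control of that ratio.
# All luminescence counts below are hypothetical.

def normalized_activity(firefly, renilla):
    """Correct the promoter (firefly) signal for transfection efficiency."""
    return firefly / renilla

def fold_induction(treated, control):
    """Each argument is a (firefly, renilla) pair of luminescence counts."""
    return normalized_activity(*treated) / normalized_activity(*control)

fold = fold_induction(treated=(5400.0, 120.0), control=(1800.0, 130.0))
print(f"rutin induction: {fold:.2f}-fold")  # 3.25-fold with these numbers
```

Normalizing by Renilla cancels well-to-well differences in transfection efficiency and cell number, which is why the ratio, not the raw firefly count, is the reported quantity.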
Promoter truncation analysis using a dual-luciferase reporter assay in BmN cells showed that the region -1288 to -1202bp of the BmGSTd1 gene was essential for basal and rutin-induced transcriptional activity. Sequence analysis of this region revealed several potential transcriptional regulatory elements such as Bcd and Kr. The mutation of the core base of the Kr site demonstrated that Kr functioned positively in rutin-mediated BmGSTd1 transcription. PMID:25172212 Zhao, Guodong; Wang, Binbin; Liu, Yunlei; Du, Jie; Li, Bing; Chen, Yuhua; Xu, Yaxiang; Shen, Weide; Xia, Qingyou; Wei, Zhengguo 2014-11-10 272 PubMed In this study we identified a potential pro-apoptotic caspase gene, Bombyx mori (B. mori) ICE-2 (BmICE-2), which encoded a polypeptide of 284 amino acid residues, including a (169)QACRG(173) sequence which surrounded the catalytic site and contained a p20 and a p10 domain. BmICE-2 expressed in Escherichia coli (E. coli) exhibited high proteolytic activity for the synthetic human initiator caspase-9 substrate Ac-LEHD-pNA, but little activity towards the effector caspase-3 substrate Ac-DEVD-pNA. When BmICE-2 was transiently expressed in BmN-SWU1 silkworm B. mori cells, we found that the high proteolytic activity for Ac-LEHD-pNA triggered caspase-3-like protease activity, resulting in spontaneous cleavage and apoptosis in these cells. This effect was not replicated in Spodoptera frugiperda 9 cells. In addition, spontaneous cleavage of endogenous BmICE-2 in BmN-SWU1 cells could be induced by actinomycin D. These results suggest that BmICE-2 may be a novel pro-apoptotic gene with caspase-9 activity which is involved in apoptotic processes in BmN-SWU1 silkworm B. mori cells.
PMID:24491540 Yi, Hua-Shan; Pan, Cai-Xia; Pan, Chun; Song, Juan; Hu, Yan-Fen; Wang, La; Pan, Min-Hui; Lu, Cheng 2014-02-28 273 PubMed The Fanconi anemia (FA) pathway is required for activation and operation of the DNA interstrand cross-link (ICL) repair pathway, although the precise mechanism of the FA pathway remains largely unknown. A critical step in the FA pathway is the monoubiquitination of FANCD2 catalyzed by a FA core complex. This modification appears to allow FANCD2 to coordinate ICL repair with other DNA repair proteins on chromatin. Silkworm, Bombyx mori, lacks apparent homologues of the FA core complex. However, BmFancD2 and BmFancI, the putative substrates of the complex, and BmFancL, the putative catalytic E3 ubiquitin ligase, are conserved. Here, we report that the silkworm FancD2 is monoubiquitinated depending on FancI and FancL, and stabilized on chromatin, following MMC treatment. A substitution of BmFancD2 at lysine 519 to arginine abolishes the monoubiquitination, but not the interaction between the FancD2 and FancI. In addition, we demonstrated that depletion of BmFancD2, BmFancI or BmFancL had effects on cell proliferation in the presence of MMC. These results suggest that the FA pathway in B. mori works in the same manner as that in vertebrates. PMID:22513077 Sugahara, Ryohei; Mon, Hiroaki; Lee, Jae Man; Kusakabe, Takahiro 2012-06-15 274 The formation of dense understories in eastern forests has created low light environments that hinder the development of advance oak reproduction. Studies have shown that a midstory removal can enhance these light conditions and promote the development of competitive oak seedlings. 
Previous studies have been primarily focused on oaks found on productive sites, and there is little knowledge of this David Lee Parrott 2011-01-01 275 E-print Network Oak Death Pathogen, Bark and Ambrosia Beetles, and Fungi Colonizing Coast Live Oaks1 Nadir Erbilgin,2 rates of bark and ambrosia beetles (Coleoptera: Scolytidae) on mechanically inoculated coast live oaks the role of bark and ambrosia beetle infestation in the introduction and/or stimulation of decay fungi Standiford, Richard B. 276 Attacks by the oak ambrosia beetle (Monarthrum scutellare) accelerated and increased the amount of wood decay in stems of downed coast live oak (Quercus agrifolia) trees. When permethrin insecticide was sprayed on the oak bark surface, the ambrosia beetles produced only one-fourth as many galleries in the sapwood as compared to sapwood beneath the unsprayed bark surface. Although decay fungi initiated Maggi Kelly 2004-01-01 277 From 1999 to 2001, a survey on the occurrence of Phytophthora spp. in the rhizosphere soil of healthy and declining oak trees was conducted in 51 oak stands in Turkey. Seven Phytophthora spp. were recovered from six out of the nine oak species sampled: P. cinnamomi, P. citricola, P. cryptogea, P. gonapodyides, P. quercina, Phytophthora Y. Balci; E. Halmschlager 2003-01-01 278 E-print Network important oak (Quercus sp.) and tanoak (Lithocarpus densiflorus) trees with little indication of slowing tree that exhibited spectral characteristics of trees killed by sudden oak death within each host that the remote mapping systematically underestimated the actual number of trees killed by sudden oak death. Tree Standiford, Richard B. 279 Aim The objectives of this study were: (1) to compare radial growth patterns between white oak (Quercus alba L.) and northern red oak (Quercus rubra L.)
growing at the northern distribution limit of white oak; and (2) to assess if the radial growth of white oak at its northern distribution limit is controlled by cold temperature. Location The study was J. C. Tardif; F. Conciatori; P. Nantel; D. Gagnon 2006-01-01 280 E-print Network The Oak Ridge Reservation (ORR) is a 13,574-ha (33,542-acre) federally owned site located in the counties of Anderson and Roane in eastern Tennessee. The ORR is home to two major U.S. Department of Energy (DOE) operating components, the Oak Ridge National Laboratory (ORNL) and the Y-12 National Security Complex (Y-12 Complex). Also located on unknown authors 281 PubMed Silkworm larvae plasma (SLP) reagent is activated by peptidoglycan (PG), a fragment of both the gram-positive and gram-negative bacterial cell wall, as well as beta-glucan (BG), a component of fungi. It is possible to measure contamination of gram-positive bacteria quantitatively by combining the conventional limulus amebocyte lysate (LAL) and PG measurement methods. Therefore, a more highly accurate analysis of dialysate can be made using both SLP and LAL methods to detect endotoxin (ET) and/or PG contamination. We studied the effects of contaminated dialysate on human peripheral blood mononuclear cells (PBMC) by producing various cytokines in vitro. Muramyl dipeptide (MDP) was used as the biologically active minimum constituent of PG. A total of 54 dialysate samples were obtained under sterile conditions from 4 sites: (1) reverse osmosis water unit; (2) proportioning unit; (3) multiple dialysate preparation console, and (4) personal dialysate preparation console, at 9 dialysis facilities. To detect bacterial contamination, the samples were measured with LAL(C), LAL(G) and SLP methods. PBMC were collected from 10 healthy controls and from 10 hemodialysis patients and cultured for 24 h with ET, MDP, ET + MDP and contaminated dialysate. 
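The ET/MDP co-stimulation design in the dialysate study probes whether the combined cytokine response exceeds the sum of the single-agent responses. A minimal sketch of one common way to express that; the cytokine levels are made-up placeholders, not data from the study:

```python
# Synergy index for co-stimulation: combined response divided by the sum
# of the individual responses; a value > 1 indicates more-than-additive
# cytokine induction. The levels below (pg/mL) are hypothetical.

def synergy_index(combined, alone_a, alone_b):
    return combined / (alone_a + alone_b)

idx = synergy_index(combined=4200.0, alone_a=300.0, alone_b=260.0)
print(f"synergy index = {idx:.1f}")  # 7.5 with these placeholder numbers
```

An index in this range would correspond to the 5-10x enhancement the study describes for ET and MDP acting together.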
IL-1 receptor antagonist (IL-1Ra), IL-1 beta and TNF-alpha in the culture medium supernatants were measured using the ELISA method. PG was not detected in dialysate from sites 1 or 2. However, dialysate from the inlet of the dialyzer at the bedside monitor of the central supply and personal console showed 4.1 +/- 6.1 ng/ml for site 3 (in 7 of 18 samples) and 3.3 +/- 4.6 ng/ml for site 4 (in 3 of 18 samples). Contamination by PG alone and complex contamination by PG and ET were also detected. Furthermore, IL-1Ra, IL-1 beta and TNF-alpha production by PBMC increased in accordance with the concentrations of MDP. Cytokine production was enhanced 5-10 times more where MDP and ET coexisted than where either MDP or ET existed alone, showing the synergic effects of MDP and ET. Based on these results, there is a high possibility that PG may also be a pyrogen in dialysate. Prior to this study, ET had been considered the only pyrogen in dialysate. Therefore, it is essential to recognize the existence of both ET and PG in investigating dialysate contamination. PMID:9127331 Tsuchida, K; Takemoto, Y; Yamagami, S; Edney, H; Niwa, M; Tsuchiya, M; Kishimoto, T; Shaldon, S 1997-01-01 282 PubMed Four strains of an asexual arthroconidial yeast species were isolated from Drosophila flies in two Atlantic rain forest sites in Brazil and two strains from oak tasar silkworm larvae (Antheraea proylei) in India. Analysis of the sequences of the D1/D2 large subunit rRNA gene showed that this yeast represented a novel species of the genus Geotrichum, described as Geotrichum silvicola sp. nov. The novel species was related to the ascogenous genus Galactomyces. The closest relatives of Geotrichum silvicola were Galactomyces sp. strain NRRL Y-6418 and Galactomyces geotrichum. The type culture of Geotrichum silvicola is UFMG-354-2T (=CBS 9194T=NRRL Y-27641T).
PMID:15657028 Pimenta, Raphael S; Alves, Priscila D D; Corrêa, Ary; Lachance, Marc-André; Prasad, G S; Rajaram; Sinha, B R R P; Rosa, Carlos A 2005-01-01 283 PubMed Use of Juvenile Hormone Analogues (JHA) in sericulture practices has been shown to boost good cocoon yield; their effect has been determined to be dose-dependent. We studied the impact of low doses of JHA compounds such as methoprene and fenoxycarb on selected key enzymatic activities of the silkworm Bombyx mori. Methoprene and fenoxycarb at doses of 1.0 µg and 3.0 fg/larvae/48 hours showed enhancement of the 5th instar B. mori larval muscle and silkgland protease, aspartate aminotransaminase (AAT) and alanine aminotransaminase (ALAT), adenosine triphosphate synthase (ATPase) and cytochrome-c-oxidase (CCO) activity levels, indicating an upsurge in the overall oxidative metabolism of the B. mori larval tissues. PMID:18678927 Mamatha, Devi M; Kanji, Vijaya K; Cohly, Hari H P; Rao, M Rajeswara 2008-06-01 284 PubMed Central Use of Juvenile Hormone Analogues (JHA) in sericulture practices has been shown to boost good cocoon yield; their effect has been determined to be dose-dependent. We studied the impact of low doses of JHA compounds such as methoprene and fenoxycarb on selected key enzymatic activities of the silkworm Bombyx mori. Methoprene and fenoxycarb at doses of 1.0 µg and 3.0 fg/larvae/48 hours showed enhancement of the 5th instar B. mori larval muscle and silkgland protease, aspartate aminotransaminase (AAT) and alanine aminotransaminase (ALAT), adenosine triphosphate synthase (ATPase) and cytochrome-c-oxidase (CCO) activity levels, indicating an upsurge in the overall oxidative metabolism of the B. mori larval tissues. PMID:18678927 Mamatha, Devi M.; Kanji, Vijaya K.; Cohly, Hari H.P.; Rao, M. Rajeswara 2008-01-01 285 SciTech Connect In trials in Rhode Island, logs of Quercus velutina and Q.
alba were cut into 18-inch lengths, split if the diameter was greater than 5 inches, and stacked in racks with plywood sides to simulate a continuous stack. Racks were shaded or unshaded, and with or without weather protection. Trials were started on six dates during September 1978 - April 1980. Storage racks were weighed monthly and apparent percentage moisture was calculated assuming that all weight changes resulted from water loss. From the results it was concluded that weather protection with good air circulation is desirable for seasoning mixed-oak fuelwood. Cutting in spring or early summer gives faster initial drying than cutting in autumn or winter, but is unlikely to result in 20% moisture content by the following heating season. Without protection, moisture contents of less than 30% are unlikely. Shaded locations resulted in slower drying rates. 3 references. McKiel, C.G.; Husband, T.P. 1986-01-01 286 PubMed Central Sawa-J is a polyphagous silkworm (Bombyx mori L.) strain that eats various plant leaves that normal silkworms do not. The feeding preference behavior of Sawa-J is controlled by one major recessive gene(s) on the polyphagous (pph) locus, and several minor genes; moreover, its deterrent cells possess low sensitivity to some bitter substances including salicin. To clarify whether taste sensitivity is controlled by the pph locus, we conducted a genetic analysis of the electrophysiological characteristics of the taste response using the polyphagous strain Sawa-Jlem, in which pph is linked to the visible larval marker lemon (lem) on the third chromosome, and the normal strain Daiankyo, in which the wild-type gene of pph (+pph) is marked with Zebra (Ze). Maxillary taste neurons of the two strains had similar dose-response relationships for sucrose, inositol, and strychnine nitrate, but the deterrent cell of Sawa-Jlem showed a remarkably low sensitivity to salicin.
The F1 generation of the two strains had characteristics similar to the Daiankyo strain, consistent with the idea that pph is recessive. In the BF1 progeny between F1 females and Sawa-Jlem males where no crossing-over occurs, the lem and Ze phenotypes corresponded to different electrophysiological reactions to 25 mM salicin, indicating that the gene responsible for taste sensitivity to salicin is located on the same chromosome as the lem and Ze genes. The normal and weak reactions to 25 mM salicin were segregated in crossover-type larvae of the BF1 progeny produced by a reciprocal cross, and the recombination frequency agreed well with the theoretical ratio for the loci of lem, pph, and Ze on the standard linkage map. These results indicate that taste sensitivity to salicin is controlled by the gene(s) on the pph locus. PMID:22649537 Iizuka, Tetsuya; Tamura, Toshiki; Sezutsu, Hideki; Mase, Keisuke; Okada, Eiji; Asaoka, Kiyoshi 2012-01-01 287 PubMed Central In most insect species, a variety of serine protease inhibitors (SPIs) have been found in multiple tissues, including integument, gonad, salivary gland, and hemolymph, and are required for preventing unwanted proteolysis. These SPIs belong to different families and have distinct inhibitory mechanisms. Herein, we predicted and characterized potential SPI genes based on the genome sequences of silkworm, Bombyx mori. As a result, a total of eighty SPI genes were identified in B. mori. These SPI genes contain 10 kinds of SPI domains, including serpin, Kunitz_BPTI, Kazal, TIL, amfpi, Bowman-Birk, Antistasin, WAP, Pacifastin, and alpha-macroglobulin. Sixty-three SPIs contain single SPI domain while the others have at least two inhibitor units. Some SPIs also contain non-inhibitor domains for protein-protein interactions, including EGF, ADAM_spacer, spondin_N, reeler, TSP_1 and other modules. 
Microarray analysis showed that fourteen SPI genes from lineage-specific TIL family and Group F of serpin family had enriched expression in the silk gland. The roles of SPIs in resisting pathogens were investigated in silkworms when they were infected by four pathogens. Microarray and qRT-PCR experiments revealed obvious up-regulation of 8, 4, 3 and 3 SPI genes after infection with Escherichia coli, Bacillus bombysepticus, Beauveria bassiana or B. mori nuclear polyhedrosis virus (BmNPV), respectively. On the contrary, 4, 11, 7 and 9 SPI genes were down-regulated after infection with E. coli, B. bombysepticus, B. bassiana or BmNPV, respectively. These results suggested that these SPI genes may be involved in resistance to pathogenic microorganisms. These findings may provide valuable information for further clarifying the roles of SPIs in the development, immune defence, and efficient synthesis of silk gland protein. PMID:22348050 Duan, Jun; Wang, Genhong; Wang, Lingyan; Li, Youshan; Xiang, Zhonghuai; Xia, Qingyou 2012-01-01 288 PubMed When the juvenile hormone analog fenoxycarb was topically applied to the silkworm Bombyx mori at the beginning of the 3rd or 4th (penultimate) instar, an extra larval molt was induced. The 5th instar period was shortened to about 5 days and the extra 6th instar ranged from 8 to more than 20 days, depending on the dose applied. Starvation before fenoxycarb treatment strongly enhanced the incidence of extra molting up to 100%. When 1 ng was applied in the 4th instar after a 2-day starvation, most larvae underwent an extra molt, metamorphosed to pupae, then to fertile adults. Combining starvation and fenoxycarb application thus induces a perfect extra molt efficiently. In perfect extra molting larvae, profiles of total ecdysteroid titer during the 4th and 5th instars were similar to that during the 4th instar in the control, and the ecdysteroid profile during the extra 6th instar was similar to that during the control 5th (last) instar. 
At ecdysteroid peaks, 20-hydroxyecdysone (20E) and ecdysone (E), generally regarded as the active molting hormone and its precursor, had similar titers in the 6th instar, whereas E was much less than 20E in the 4th and 5th instars in the extra molting larvae. E was also abundant only in the last larval instar in the control. These results suggest that both 20E and E contents are important for regulation of larval molt and metamorphosis in silkworms and that fenoxycarb triggers the extra molt by inducing an additional larval molt type of ecdysteroid surge before the last larval instar. PMID:12392697 Kamimura, Manabu; Kiuchi, Makoto 2002-10-01 289 E-print Network The goldspotted oak borer (GSOB), Agrilus auroguttatus (Coleoptera: Buprestidae), is a flatheaded ... GSOB prefers mature oak trees but occasionally attacks smaller oaks ... -conifer forests in San Diego County ... at one site in Riverside County in 2012. It was likely brought into the state on oak firewood collected ... Ishida, Yuko 290 E-print Network NH Big Tree of the Month: Chestnut Oak, Quercus prinus. By Anne Krantz, UNH Extension Big Tree Team ... of white oak. It has taken years to figure out this tree species, as oaks can hybridize or cross-pollinate ... on the Hillsborough Big Tree Team. Recently, I made arrangements to re-measure a chestnut oak growing nearby ... New Hampshire, University of 291 E-print Network Landscape Dynamics of the Spread of Sudden Oak Death. Maggi Kelly and Ross K. Meentemeyer ... We present second-order spatial point-pattern analysis techniques describing the landscape-scale spatio-temporal dynamics of Sudden Oak Death (SOD) in native oak trees and tanoaks in the Coast Ranges of California. Kelly, Maggi 292 E-print Network OAK RIDGE NATIONAL LABORATORY U.S. DEPARTMENT OF ENERGY BPWorkshop-2005 - LRB presented by L.R.
Baylor in collaboration with P.B. Parks*, S 293 Oaks depend on hidden diversity belowground. Oregon white oaks (Quercus garryana) form ectomycorrhizas with more than 40 species of fungi at a 25-ha site. Several of the most common oak mycorrhizal fungi form hypogeous fruiting bodies or truffles in the upper layer of mineral soil. We collected 18 species of truffles associated with Oregon white oak. Truffles do not release Jonathan Frank; Seth Barry; Joseph Madden 294 The first objective of this thesis was to determine if differences existed in the composition of the small mammal community in oak savannas relative to the community found in adjacent oak woodland. Specifically, from June to August 2009, I estimated and compared abundance, density, and micro-habitat affiliations of small mammals in two oak savanna and four oak woodland sites at Valerie J Clarkston 2011-01-01 295 E-print Network NOT ALL OAK GALL WASPS GALL OAKS: THE DESCRIPTION OF DRYOCOSMUS RILEYPOKEI, A NEW, APOSTATE SPECIES. Abstract.--Cynipini gall wasps (Hymenoptera: Cynipidae) are commonly known as oak gall wasps for their almost exclusive use of oak (Quercus spp.; Fagaceae) as their host plant. Previously Hammerton, James 296 PubMed Silkworms can produce strong and tough fibers at room temperature and from an aqueous solution. Therefore, it seems useful to study the mechanism of fiber formation by silkworms for development of synthetic polymers with excellent mechanical properties. The rheological behaviors of native silk dopes stored in the silk glands of Bombyx mori and Samia cynthia ricini were clarified, and flow simulations of the dopes in each spinneret were performed with a Finite Element Method. Dynamic viscoelastic measurements revealed that silk fibroin stored in silk glands forms a transient network at room temperature, and that the molecular weight for the network node corresponds to the molecular weight of a heterodimer of H-chain and L-chain (B.
mori) and a homodimer of H-chains (S. c. ricini), respectively. Also, each dope exhibited zero-shear viscosity and then shear thinning like polymer melts. In addition, shear thickening due to flow-induced crystallization was observed. The critical shear rate for crystallization of B. mori dopes was smaller than that of S. c. ricini dopes. From the flow simulation, it is suggested that domestic and wild silkworms are able to crystallize the dopes in the stiff plate region by controlling shear rate using the same magnitude of extrusion pressure despite differences in rheological properties. PMID:19317399 Moriya, Motoaki; Roschzttardtz, Frederico; Nakahara, Yusuke; Saito, Hitoshi; Masubuchi, Yuichi; Asakura, Tetsuo 2009-04-13 297 PubMed Central This study used the larval tissues and colored cocoons of silkworms, Bombyx mori L. (Lepidoptera: Bombycidae), that were fed leaves of cultivated mulberry, Husang 32, as experimental material. The pigment composition and content in colored cocoons and tissues of the 5th instar larvae and the mulberry leaves were rapidly detected using organic solvent extraction and reverse phase high-performance liquid chromatography with diode array detection. It was found that the mulberry leaf mainly contained four types of pigment: lutein (30.86%), β-carotene (26.3%), chlorophyll a (24.62%), and chlorophyll b (18.21%). The silk glands, blood, and cocoon shells of six yellow-red cocoons were used as the experimental materials. The results showed that there were generally two kinds of carotenoids (lutein and β-carotene) in the silk gland and cocoon shell, a little violaxanthin was detected in silk gland, and the pigment found in the blood was mainly lutein in all varieties of silkworm tested. Chlorophyll a and b had not been digested and utilized in the yellow-red series of silkworm. The method used to detect visible pigments reported here could be used to breed new colors of cocoons and to develop and utilize the pigments found in mulberry.
Zhu, Lin; Zhang, Yu-Qing 2014-01-01 298 PubMed The burst of expression from the polyhedrin (polh) promoter during the very late phase of baculovirus infection requires a sequence located between TAAG and the translation initiation site, typically referred to as the burst sequence (BS). The expression of the polh promoter is stimulated by specific binding of very late transcriptional factor 1 (VLF-1) to the BS. In order to enhance the production of recombinant proteins, the polh promoter was modified via a multiple-BS bacmid system in which the number of BSs was increased. Compared to an expression from a normal polh promoter, β-glucuronidase (GUS) activity in High Five insect cells was three times higher with a modified polh promoter containing two BSs. Using a modified polh promoter that contains nine BSs in the silkworm expression system, β1-3-N-acetylglucosaminyltransferase 2 (β3GnT2) activity per larva was 6.8-fold higher than the control. Furthermore, the co-expression of modified promoters along with VLF-1 enhanced β3GnT activity. Thus, an increased optimal number of BSs and their co-expression with VLF-1 lead to higher levels of gene expression in insect cells and silkworm larvae. This new modified promoter engineered in the current study is the strongest promoter for overexpressing foreign proteins in a eukaryotic cell and system, thus representing progress in baculovirus-insect cell and silkworm biotechnology. PMID:20717974 Manohar, Suganthi Lavender; Kanamasa, Shin; Nishina, Takuya; Kato, Tatsuya; Park, Enoch Y 2010-12-15 299 PubMed The Fanconi anaemia (FA) pathway is responsible for interstrand crosslink (ICL) repair. Among the FA core complex components, FANCM is believed to act as a damage sensor for the ICL-blocked replication fork and also as a molecular platform for FA core complex assembly and interaction with the Bloom's syndrome (BS) complex that is thought to play an important role in the processing of DNA structures such as stalled replication forks.
In the present study, we found that in silkworms, Bombyx mori, a species lacking the major FA core complex components (FANCA, B, C, E, F, and G), FancM is required for FancD2 monoubiquitination and cell proliferation in the presence of mitomycin C (MMC). Silkworm FancM (BmFancM) was phosphorylated in the middle regions, and the modification was associated with its subcellular localization. In addition, BmFancM interacted with Mhf1, a histone-fold protein, and Rmi1, a subunit of the BS complex, in the different regions. The interaction region containing at least these two protein-binding domains played an essential role in FancM-dependent resistance to MMC. Our results suggest that BmFancM also acts as a platform for recruitment of both the FA protein and the BS protein, although the silkworm genome seems to have lost FAAP24, a FancM-binding partner protein in mammals. PMID:24286570 Sugahara, R; Mon, H; Lee, J M; Kusakabe, T 2014-04-01 300 PubMed This study used the larval tissues and colored cocoons of silkworms, Bombyx mori L. (Lepidoptera: Bombycidae), that were fed leaves of cultivated mulberry, Husang 32, as experimental material. The pigment composition and content in colored cocoons and tissues of the 5th instar larvae and the mulberry leaves were rapidly detected using organic solvent extraction and reverse phase high-performance liquid chromatography with diode array detection. It was found that the mulberry leaf mainly contained four types of pigment: lutein (30.86%), β-carotene (26.3%), chlorophyll a (24.62%), and chlorophyll b (18.21%). The silk glands, blood, and cocoon shells of six yellow-red cocoons were used as the experimental materials. The results showed that there were generally two kinds of carotenoids (lutein and β-carotene) in the silk gland and cocoon shell, a little violaxanthin was detected in silk gland, and the pigment found in the blood was mainly lutein in all varieties of silkworm tested.
Chlorophyll a and b had not been digested and utilized in the yellow-red series of silkworm. The method used to detect visible pigments reported here could be used to breed new colors of cocoons and to develop and utilize the pigments found in mulberry. PMID:25373178 Zhu, Lin; Zhang, Yu-Qing 2014-01-01 301 E-print Network at two sites in west London*. An intensive programme of monitoring and control was quickly established ... land owners and managers. *Up-to-date distribution maps of the oak processionary moth can be found 302 E-print Network Geographic Information Systems Oak Ridge National Laboratory managed by UT-Battelle, LLC for the U ... to carry out activities in intermodal freight planning and policy making. It uses a routing model to assign 303 E-print Network ... the eastern United States were dominated by oak species. Among these species, white oak (Quercus alba) reigned supreme (Abrams 1992, Whitney 1994) ... oaks, such as red oak (Quercus rubra) and chestnut oak (Quercus prinus), often exhibited higher ... Abrams, Marc David 304 SciTech Connect This two-volume report, the Oak Ridge Reservation Environmental Report for 1989, is the nineteenth in an annual series that began in 1971. It reports the results of a comprehensive, year-round program to monitor the impact of operations at the three major US Department of Energy (DOE) production and research installations in Oak Ridge on the immediate areas' and surrounding region's groundwater and surface waters, soil, air quality, vegetation and wildlife, and through these multiple and varied pathways, the resident human population. Information is presented for the environmental monitoring Quality Assurance (QA) Program, audits and reviews, waste management activities, and special environmental studies. Data are included for the Oak Ridge Y-12 Plant, Oak Ridge National Laboratory (ORNL), and Oak Ridge Gaseous Diffusion Plant (ORGDP).
Volume 1 presents narratives, summaries, and conclusions based on environmental monitoring at the three DOE installations and in the surrounding environs during calendar year (CY) 1989. Volume 1 is intended to be a "stand-alone" report about the Oak Ridge Reservation (ORR) for the reader who does not want an in-depth review of 1989 data. Volume 2 presents the detailed data from which these conclusions have been drawn and should be used in conjunction with Volume 1. Jacobs, V.A.; Wilson, A.R. (eds.) 1990-10-01 305 PubMed Central The ability to treat osteochondral defects is a major clinical need. Existing polymer systems cannot address the simultaneous requirements of regenerating bone and cartilage tissues together. The challenge still lies in how to improve the integration of newly formed tissue with the surrounding tissues and the cartilage-bone interface. This study investigated the potential use of different silk fibroin scaffolds: mulberry (Bombyx mori) and non-mulberry (Antheraea mylitta) for osteochondral regeneration in vitro and in vivo. After 4 to 8 weeks of in vitro culture in chondro- or osteo-inductive media, non-mulberry constructs pre-seeded with human bone marrow stromal cells exhibited prominent areas of the neo tissue containing chondrocyte-like cells, whereas mulberry constructs pre-seeded with human bone marrow stromal cells formed bone-like nodules. In vivo investigation demonstrated neo-osteochondral tissue formed on cell-free multi-layer silk scaffolds absorbed with transforming growth factor beta 3 or recombinant human bone morphogenetic protein-2. Good bio-integration was observed between native and neo-tissue within the osteochondral defect in patellar grooves of Wistar rats. The neo-matrix formed in vivo comprised a mixture of collagen and glycosaminoglycans, except in mulberry silk without growth factors, where a predominantly collagenous matrix was observed.
Immunohistochemical assay showed stronger staining of type I and type II collagen in the constructs of mulberry and non-mulberry scaffolds with growth factors. The study opens up a new avenue of using inter-species silk fibroin blended or multi-layered scaffolds of a combination of mulberry and non-mulberry origin for the regeneration of osteochondral defects. PMID:24260335 Saha, Sushmita; Kundu, Banani; Kirkham, Jennifer; Wood, David; Kundu, Subhas C.; Yang, Xuebin B. 2013-01-01 306 SciTech Connect Bombyxin (bx) and prophenoloxidase-activating enzyme (ppae) signal peptides from Bombyx mori, their modified signal peptides, and synthetic signal peptides were investigated for the secretion of GFPuv-β1,3-N-acetylglucosaminyltransferase 2 (GGT2) fusion protein in B. mori Bm5 cells and silkworm larvae using cysteine protease deficient B. mori multiple nucleopolyhedrovirus (BmMNPV-CP−) and its bacmid. The secretion efficiencies of all signal peptides were 15-30% in Bm5 cells and 24-30% in silkworm larvae, while that of the +16 signal peptide was 0% in Bm5 cells and 1% in silkworm larvae. The fusion protein that contained the +16 signal peptide was expressed specifically in the endoplasmic reticulum (ER) and in the fractions of cell precipitations. Ninety-four percent of total intracellular β1,3-N-acetylglucosaminyltransferase (β3GnT) activity was detected in cell precipitations following the 600, 8000, and 114,000g centrifugations. In the case of the +38 signal peptide, 60% of total intracellular activity was detected in the supernatant from the 114,000g spin, and only 1% was found in the precipitate. Our results suggest that the +16 signal peptide might be situated in the transmembrane region and not cleaved by signal peptidase in silkworm or B. mori cells. Therefore, the fusion protein connected to the +16 signal peptide stayed in the fat body of silkworm larvae with biological function, and was not secreted extracellularly.
Kato, Tatsuya [Laboratory of Biotechnology, Faculty of Agriculture, Shizuoka University, 836 Ohya Suruga-ku, Shizuoka 422-8529 (Japan); Park, Enoch Y. [Laboratory of Biotechnology, Faculty of Agriculture, Shizuoka University, 836 Ohya Suruga-ku, Shizuoka 422-8529 (Japan) and Laboratory of Biotechnology, Integrated Bioscience Section, Graduate School of Science and Technology, Shizuoka University, 836 Ohya Suruga-ku, Shizuoka 422-8529 (Japan)]. E-mail: [email protected] 2007-08-03 307 E-print Network insecticide permethrin in prolonging the life of infected coast live oaks and the closely related Shreve oaks ... for spraying with permethrin twice each year, in August and February, prior to beetle flight periods ... and in the total area affected per tree. Permethrin treatment prevented colonization through March and April 2003 ... Standiford, Richard B. 308 E-print Network and 1960s, and with the creation of DOE in the 1970s, ORNL became an international center for the study ... Oak Ridge National Laboratory: ORNL is the largest science and energy national laboratory in the DOE system. ORNL's scientific programs focus on materials, neutron Pennycook, Steve 309 PubMed Calcium stores were cytochemically demonstrated using a combined oxalate-pyroantimonate method in the neuromuscular junctions of the degenerating intersegmental muscles in the giant silkmoth Antheraea polyphemus. The elemental composition of punctate precipitates of the reaction product was determined by electron probe X-ray microanalysis of unstained thin sections by energy-dispersive spectrometry and wavelength-dispersive spectrometry. The wavelength-dispersive spectra collected over terminal axons demonstrate a significant calcium signal and a trace of antimony. During the rapid lytic phase of spontaneous muscle degeneration, the calcium punctate deposits were detected in presynaptic terminals in the following sites: the synaptic vesicles and the mitochondria.
Calcium precipitates were also found in the dense bodies and the mitochondria encountered in the glial convolutions. No calcium deposit was seen in the synaptic clefts and intercellular spaces of the subsynaptic reticulum of type I and type II. A comparison of calcium to antimony ratios between the terminal axons and the sarcoplasmic lysosomes revealed highly significant differences (P less than 0.001). Such a variability of the calcium to antimony ratio may be related to different conditions of precipitation or antimony diffusion in the different cell compartments. It was concluded that such synaptic terminals do not appear damaged in spite of the muscle degeneration and presumably continue to perform vital functions while the muscles are no longer contractile 20 h after adult ecdysis. PMID:3410737 Beaulaton, J 1988-03-01 310 PubMed In the circadian timing systems, input pathways transmit information on the diurnal environmental changes to a core oscillator that generates signals relayed to the body periphery by output pathways. Cryptochrome (CRY) protein participates in the light perception; period (PER), Cycle (CYC), and Doubletime (DBT) proteins drive the core oscillator; and arylalkylamines are crucial for the clock output in vertebrates. Using antibodies to CRY, PER, CYC, DBT, and arylalkylamine N-acetyltransferase (aaNAT), the authors examined neuronal architecture of the circadian system in the cephalic ganglia of adult silkworms. The antibodies reacted in the cytoplasm, never in the nuclei, of specific neurons. A cluster of 4 large Ia(1) neurons in each dorsolateral protocerebrum, a pair of cells in the frontal ganglion, and nerve fibers in the corpora cardiaca and corpora allata were stained with all antibodies. The intensity of PER staining in the Ia(1) cells and in 2 to 4 adjacent small cells oscillated, being maximal late in subjective day and minimal in early night. No other oscillations were detected in any cell and with any antibody. 
Six small cells in close vicinity to the Ia(1) neurons coexpressed CYC-like and DBT-like, and 4 to 5 of them also coexpressed aaNAT-like immunoreactivity; the PER- and CRY-like antigens were each present in separate groups of 4 cells. The CYC- and aaNAT-like antigens were further colocalized in small groups of neurons in the pars intercerebralis, at the venter of the optic tract, and in the subesophageal ganglion. Remaining antibodies reacted with similarly positioned cells in the pars intercerebralis, and the DBT antibody also reacted with the cells in the subesophageal ganglion, but antigen colocalizations were not proven. The results imply that key components of the silkworm circadian system reside in the Ia(1) neurons and that additional, hierarchically arranged oscillators contribute to overt pacemaking. The retrocerebral neurohemal organs seem to serve as outlets transmitting central neural oscillations to the hemolymph. The frontal ganglion may play an autonomous function in circadian regulations. The colocalization of aaNAT- and CYC-like antigens suggests that the enzyme is functionally linked to CYC as in vertebrates and that arylalkylamines are involved in the insect output pathway. PMID:15523109 Sehadová, Hana; Markova, Elitza P; Sehnal, František; Takeda, Makio 2004-12-01 311 NASA Astrophysics Data System (ADS) With tremendous support from collaborators and enthusiastic volunteers, "Learning Among the Oaks" at the historic Santa Margarita Ranch has become a favorite outdoor learning experience for hundreds of Santa Margarita School students, along with their teachers and families. Oaks are at the center of this unique and cost-effective public education program. From getting to know local oaks to exploring conservation issues within the context of a historic working cattle ranch, students take pride in expanding their awareness and knowledge of the local oak woodland community.
Santa Margarita School families representing the varied demographics of the community come together on the trail. For many, the program provides a first opportunity to get to know those who make a living on the land and to understand that this land around their school is more than a pretty view. "Learning Among the Oaks" also addresses the need for quality, hands-on science activities and opportunities to connect children with the outdoor world. Using a thematic approach and correlating lessons with State Science Standards, we've engaged students in a full spectrum of exciting outdoor learning adventures. As students progress through the grades, they find new challenges within the oak trail environment. We've succeeded in establishing an internship program that brings highly qualified, enthusiastic university students out to practice their science teaching skills while working with elementary school students. In the future, these university student interns may assist with the development of interpretive displays, after-school nature activities and monitoring projects. We've benefited from proximity to Cal Poly State University and its "learn-by-doing" philosophy. We've also succeeded in building a dedicated network of volunteers and collaborators, each with a special interest satisfied through participation in the oak trail program. While "Learning Among the Oaks" has focused on educating school children and their families, "Working Among the Oaks" has focused on connecting with the agricultural and environmental communities. For example, the Ranching Sustainability Self-Assessment Program is an ambitious, long-range project with tremendous potential to aid private landowners throughout California in implementing sustainable ranching practices. We've made great progress through the efforts of an impressive committee of local private landowners, ranch managers and resource professionals.
They believe that this can be a powerful non-regulatory tool to guide private landowners through everyday decision-making processes. Most importantly, this is a tool that could be adapted for use throughout California oak woodland. The Self Assessment Program, along with the supporting Workshops, have stimulated discussion and interest in sustainable ranching among people with diverse experiences and backgrounds. "Learning and Working Among the Oaks" together reach the full spectrum of oak conservation stakeholders, from kids to grandparents, town residents to ranching families, environmental groups to farm and vineyard managers, and more. The diversity of these stakeholders helps us identify collaborative education and research opportunities to support education and management of the 3 million ha of California oak woodlands. Tietje, B.; Gingg, B.; Zingo, J.; Huntsinger, L. 2009-04-01 312 SciTech Connect Waste Area Grouping 2 (WAG 2) of the Oak Ridge National Laboratory (ORNL) is located in the White Oak Creek Watershed and is composed of White Oak Creek Embayment, White Oak Lake and associated floodplain, and portions of White Oak Creek (WOC) and Melton Branch downstream of ORNL facilities. Contaminants leaving other ORNL WAGs in the WOC watershed pass through WAG 2 before entering the Clinch River. Health and ecological risk screening analyses were conducted on contaminants in WAG 2 to determine which contaminants were of concern and would require immediate consideration for remedial action and which contaminants could be assigned a low priority or further study. For screening purposes, WAG 2 was divided into four geographic reaches: Reach 1, a portion of WOC; Reach 2, Melton Branch; Reach 3, White Oak Lake and the floodplain area to the weirs on WOC and Melton Branch; and Reach 4, the White Oak Creek Embayment, for which an independent screening analysis has been completed. 
Screening analyses were conducted using data bases compiled from existing data on carcinogenic and noncarcinogenic contaminants, which included organics, inorganics, and radionuclides. Contaminants for which at least one sample had a concentration above the level of detection were placed in a detectable contaminants data base. Those contaminants for which all samples were below the level of detection were placed in a nondetectable contaminants data base. Blaylock, B.G.; Frank, M.L.; Hoffman, F.O.; Hook, L.A.; Suter, G.W.; Watts, J.A. 1992-07-01 313 PubMed Many plants emit isoprene, a hydrocarbon that has important influences on atmospheric chemistry. Pathogens may affect isoprene fluxes, both through damage to plant tissue and by changing the abundance of isoprene-emitting species. Live oaks (Quercus fusiformis (Small) Sarg. and Q. virginiana Mill.) are major emitters of isoprene in the southern United States, and oak populations in Texas are being dramatically reduced by oak wilt, a widespread fungal vascular disease. We investigated the effects of oak wilt on isoprene emissions from live oak leaves (Q. fusiformis) in the field, as a first step in exploring the physiological effects of oak wilt on isoprene production and the implications of these effects for larger-scale isoprene fluxes. Isoprene emission rates per unit dry leaf mass were 44% lower for actively symptomatic leaves than for leaves on healthy trees (P = 0.033). Isoprene fluxes were significantly negatively correlated with rankings of disease activity in the host tree (fluxes in leaves on healthy trees > healthy leaves on survivor trees > healthy leaves on the same branch as symptomatic leaves > symptomatic leaves; isoprene per unit dry mass: Spearman's rho = -0.781, P = 0.001; isoprene per unit leaf area: Spearman's rho = -0.652, P = 0.008).
Photosynthesis and stomatal conductance were reduced by 57 and 63%, respectively, in symptomatic relative to healthy leaves (P < 0.05); these reductions were proportionally greater than the reductions in isoprene emissions. Low isoprene emission rates in symptomatic leaves are most simply explained by physiological constraints on isoprene production, such as water stress as a result of xylem blockage, rather than direct effects of the oak wilt fungus on isoprene synthesis. The effects of oak wilt on leaf-level isoprene emission rates are probably less important for regional isoprene fluxes than the reduction in oak leaf area across landscapes. PMID:12651496 Anderson, Laurel J.; Harley, Peter C.; Monson, Russell K.; Jackson, Robert B. 2000-11-01 314 PubMed Central Oak woodlands of Mediterranean ecosystems, a major component of biodiversity hotspots in Europe and North America, have undergone significant land-use change in recent centuries, including an increase in grazing intensity due to the widespread presence of cattle. Simultaneously, a decrease in oak regeneration has been observed, suggesting a link between cattle grazing intensity and limited oak regeneration. In this study we examined the effect of cattle grazing on coast live oak (Quercus agrifolia Née) regeneration in the San Francisco Bay Area, California. We studied seedling, sapling and adult density of coast live oak as well as vertebrate herbivory at 8 independent sites under two grazing conditions: with cattle and wildlife presence (n = 4) and only with wildlife (n = 4). The specific questions we addressed are: i) to what extent cattle management practices affect oak density, and ii) what is the effect of rangeland management on herbivory and size of young oak plants. In areas with cattle present, we found a 50% reduction in young oak density, and plant size was smaller, suggesting that survival and growth of young plants in those areas are significantly limited.
In addition, the presence of cattle raised the probability and intensity of herbivory (a 1.5- and 1.8-fold difference, respectively). These results strongly suggest that the presence of cattle significantly reduced the success of young Q. agrifolia through elevated herbivory. Given the potential impact of reduced recruitment on adult populations, modifying rangeland management practices to reduce cattle grazing pressure seems to be an important intervention to maintain Mediterranean oak woodlands. PMID:25126939 Lopez-Sanchez, Aida; Schroeder, John; Roig, Sonia; Sobral, Mar; Dirzo, Rodolfo 2014-01-01 315 E-print Network Oak Ridge National Laboratory Science & Technology Highlights Published by Oak Ridge-core-aluminum conductor cables, these overloaded lines can sag into trees, causing short circuits. "Utilities may Pennycook, Steve 316 PubMed Central Yellow proteins form a large family in insects. In Drosophila melanogaster, there are 14 yellow genes in the genome. Previous studies have shown that the yellow gene is necessary for normal pigmentation; however, the roles of other yellow genes in body coloration are not known. Here, we provide the first evidence that yellow-e is required for normal body color pattern in insect larvae. In two mutant strains, bts and its allele bts2, of the silkworm Bombyx mori, the larval head cuticle and anal plates are reddish brown instead of the white color found in the wild type. Positional cloning revealed that deletions in the Bombyx homolog of the Drosophila yellow-e gene (Bmyellow-e) were responsible for the bts/bts2 phenotype. Bmyellow-e mRNA was strongly expressed in the trachea, testis, and integument, and expression markedly increased at the molting stages. This profile is quite similar to that of Bmyellow, a regulator of neonatal body color and body markings in Bombyx.
Quantitative reverse transcription-PCR analysis showed that Bmyellow-e mRNA was heavily expressed in the integument of the head and tail in which the bts phenotype is observed. The present results suggest that Yellow-e plays a crucial role in the pigmentation process of lepidopteran larvae. PMID:19996320 Ito, Katsuhiko; Katsuma, Susumu; Yamamoto, Kimiko; Kadono-Okuda, Keiko; Mita, Kazuei; Shimada, Toru 2010-01-01 317 PubMed A new member of the aldo-keto reductase (AKR) superfamily with 3-dehydroecdysone reductase activity was found in the silkworm Bombyx mori upon induction by the insecticide diazinon. The amino acid sequence showed that this enzyme belongs to the AKR2 family, and the protein was assigned the systematic name AKR2E4. In this study, recombinant AKR2E4 was expressed, purified to near homogeneity, and kinetically characterized. Additionally, its ternary structure in complex with NADP(+) and citrate was refined at 1.3 Å resolution to elucidate substrate binding and catalysis. The enzyme is a 33-kDa monomer and reduces dicarbonyl compounds such as isatin and 17α-hydroxyprogesterone using NADPH as a cosubstrate. No NADH-dependent activity was detected. Robust activity toward the substrate inhibitor 3-dehydroecdysone was observed, which suggests that this enzyme plays a role in regulation of the important molting hormone ecdysone. This structure constitutes the first insect AKR structure determined. Bound NADPH is located at the center of the TIM- or (β/α)8-barrel, and residues involved in catalysis are conserved. PMID:24012638 Yamamoto, Kohji; Wilson, David K 2013-10-15 318 PubMed Stage-dependent effects of RH-5992 on ecdysteroidogenesis of the prothoracic glands during the fourth larval instar of the silkworm, Bombyx mori, were studied in the present report. When larvae were treated with RH-5992 during the early stages of the fourth larval instar (between day 0 and day 1), initially ecdysteroid levels in the hemolymph were inhibited.
However, 24 h after RH-5992 application, ecdysteroid levels were greatly increased as compared with those treated with acetone. The examination of the in vitro prothoracic gland activity upon RH-5992 application during the early stages of the fourth larval instar confirmed a short-term inhibitory effect. When RH-5992 was applied to the later stages of the fourth larval instar, no effects on both hemolymph ecdysteroid levels and prothoracic gland activity were observed. Addition of RH-5992 to incubation medium strongly inhibited ecdysteroid secretion by the prothoracic glands from the early fourth instar, indicating direct action of RH-5992 on ecdysteroidogenesis by prothoracic glands. Four hours after application with RH-5992 on day 1.5, prothoracic glands still showed an activated response to PTTH in both PTTH-cAMP signaling and the extracellular signal-regulated kinase (ERK) signaling. Moreover, addition of RH-5992 to incubation medium did not interfere with the stimulatory effect of the glands to PTTH in ecdysteroidogenesis. These results indicated that both PTTH-cAMP signaling and PTTH-ERK signaling may not be involved in short-term inhibitory regulation by RH-5992. PMID:18618762 Gu, Shi-Hong; Lin, Ju-Ling; Lin, Pei-Ling; Kou, Rong; Smagghe, Guy 2008-08-01 319 PubMed Silk fibroin from a domesticated mulberry silkworm, Bombyx mori, is the most widely used in biomaterial design. We report for the first time the preparation of a relatively smooth (granule free) film of the nonmulberry Samia cynthia ricini fibroin for comparative evaluation of its cell-supporting properties against those of the B. mori fibroin film. The granule formation on the S. c. ricini fibroin film was successfully prevented by facilitating proper rearrangement of the protein molecules, as monitored by FT-IR, by dialysis through a stepwise decrease in the urea concentration in the dialysis media. The lower contact angle of the S. c. ricini fibroin film, compared to the B. 
mori fibroin film, corresponds well to its lower hydrophobic/hydrophilic amino-acid ratio and grand average of hydropathicity (GRAVY). L929 murine fibroblast cells on the granule-free S. c. ricini fibroin films exhibited greater proliferation and spreading rates than those on the B. mori fibroin films, possibly attributable to its higher content of hydrophilic and positively charged amino acids. It further suggests that fabrication, modification and/or engineering of S. c. ricini fibroin may provide a better biomaterial scaffold design than the more commonly used B. mori fibroin. PMID:21029516 Mai-ngam, Katanchalee; Boonkitpattarakul, Kanhokthorn; Jaipaew, Jirayut; Mai-ngam, Bunpot 2011-01-01 320 PubMed Central A pentanucleotide repetitive sequence, (TTAGG)n, has been isolated from a silkworm genomic library, using cross-hybridization with a (TTNGGG)5 sequence, which is conserved among most eukaryotic telomeres. Both fluorescent in situ hybridization and Bal 31 exonuclease experiments revealed major clusters of (TTAGG)n at the telomeres of all Bombyx chromosomes. To determine the evolutionary origin of this sequence, two types of telomeric sequence, (TTAGG)5 and a hexanucleotide repetitive sequence, (TTAGGG)4, which is conserved mainly among vertebrate and several invertebrate telomeres so far examined, were hybridized to DNAs from a wide variety of eukaryotic species under highly stringent hybridization conditions. The (TTAGGG)5 oligonucleotide hybridized to genomic DNAs from vertebrates and several nonvertebrate species, as has been reported so far, but not to any DNAs from insects. On the other hand, the Bombyx type of telomere sequence, (TTAGG)n, hybridized to DNAs from 8 of 11 orders of insect species tested but not to vertebrate DNAs, suggesting that this TTAGG repetitive sequence is conserved widely among insects. 
PMID:8441388 Okazaki, S; Tsuchida, K; Maekawa, H; Ishikawa, H; Fujiwara, H 1993-01-01 321 PubMed To clarify the property of casein kinase 2 (CK2) during early embryonic development in the silkworm, we compared the phosphorylation activities of CK2 in non-diapause and diapause eggs until 60 h after oviposition. In nondiapause eggs, the phosphorylated signals were found at each stage and became progressively stronger through each stage. On the other hand, in diapause eggs, the strongest phosphorylated signals were found at approximately 12 to 24 h after oviposition and became progressively weaker through each stage. To clarify the control mechanism of these enzyme activities, we cloned cDNAs encoding the alpha- and beta-subunits of CK2 and analyzed their expression. The deduced amino acid sequences of the isolated cDNAs comprised 342 and 220 residues, and these sequences showed 85-90% identities to the alpha- and beta-subunits of CK2 in Spodoptera frugiperda. RT-PCR indicated that these genes were expressed in nondiapause and diapause eggs. However, the expression of these genes was not parallel with the changes in CK2 activity. These results suggest that the changes in CK2 activity are regulated mainly at the post-transcriptional level during embryonic development in Bombyx mori. PMID:16287624 Yamamoto, Takayuki; Kanekatsu, Motoki; Nakagoshi, Motoko; Kato, Tomomi; Mase, Keisuke; Sawada, Hiroshi 2005-12-01 322 PubMed Central Many larval color mutants have been obtained in the silkworm Bombyx mori. Mapping of melanin-synthesis genes on the Bombyx linkage map revealed that yellow and ebony genes were located near the chocolate (ch) and sooty (so) loci, respectively. In the ch mutants, body color of neonate larvae and the body markings of elder instar larvae are reddish brown instead of normal black. Mutations at the so locus produce smoky larvae and black pupae.
F2 linkage analyses showed that sequence polymorphisms of yellow and ebony genes perfectly cosegregated with the ch and so mutant phenotypes, respectively. Both yellow and ebony were expressed in the epidermis during the molting period when cuticular pigmentation occurred. The spatial expression pattern of yellow transcripts coincided with the larval black markings. In the ch mutants, nonsense mutations of the yellow gene were detected, whereas large deletions of the ebony ORF were detected in the so mutants. These results indicate that yellow and ebony are the responsible genes for the ch and so loci, respectively. Our findings suggest that Yellow promotes melanization, whereas Ebony inhibits melanization in Lepidoptera and that melanin-synthesis enzymes play a critical role in the lepidopteran larval color pattern. PMID:18854583 Futahashi, Ryo; Sato, Jotaro; Meng, Yan; Okamoto, Shun; Daimon, Takaaki; Yamamoto, Kimiko; Suetsugu, Yoshitaka; Narukawa, Junko; Takahashi, Hirokazu; Banno, Yutaka; Katsuma, Susumu; Shimada, Toru; Mita, Kazuei; Fujiwara, Haruhiko 2008-01-01 323 PubMed Engineering sex-specific sterility is critical for developing transgene-based sterile insect technology. Targeted genome engineering achieved by customized zinc-finger nuclease, transcription activator-like effector nuclease (TALEN) or clustered, regularly interspaced, short palindromic repeats/Cas9 systems has been exploited extensively in a variety of model organisms; however, screening mutated individuals without a detectable phenotype is still challenging. In addition, genetically recessive mutations only detectable in homozygotes make the experiments time-consuming. In the present study, we model a novel genetic system in the silkworm, Bombyx mori, that results in female-specific sterility by combining transgenesis with TALEN technologies. 
This system induces sex-specific sterility at a high efficiency by targeting the female-specific exon of the B. mori doublesex (Bmdsx) gene, which has sex-specific splicing isoforms regulating somatic sexual development. Transgenic animals co-expressing TALEN left and right arms targeting the female-specific Bmdsx exon resulted in somatic mutations and female mutants lost fecundity because of lack of egg storage and abnormal external genitalia. The wild-type sexual dimorphism of the abdominal segment was not evident in mutant females. In contrast, there were no deleterious effects in mutant male moths. The current somatic TALEN technologies provide a promising approach for future insect functional genetics, thus providing the basis for the development of attractive genetic alternatives for insect population management. PMID:25125145 Xu, J; Wang, Y; Li, Z; Ling, L; Zeng, B; James, A A; Tan, A; Huang, Y 2014-12-01 324 PubMed Central We previously reported that a silkworm hemolymph protein, apolipophorin (ApoLp), binds to the cell surface of Staphylococcus aureus and inhibits expression of the saePQRS operon encoding a two-component system, SaeRS, and hemolysin genes. In this study, we investigated the inhibitory mechanism of ApoLp on S. aureus hemolysin gene expression. ApoLp bound to lipoteichoic acids (LTA), an S. aureus cell surface component. The addition of purified LTA to liquid medium abolished the inhibitory effect of ApoLp against S. aureus hemolysin production. In an S. aureus knockdown mutant of ltaS encoding LTA synthetase, the inhibitory effects of ApoLp on saeQ expression and hemolysin production were attenuated. Furthermore, the addition of anti-LTA monoclonal antibody to liquid medium decreased the expression of S. aureus saeQ and hemolysin genes. In S. aureus strains expressing SaeS mutant proteins with a shortened extracellular domain, ApoLp did not decrease saeQ expression. These findings suggest that ApoLp binds to LTA on the S.
aureus cell surface and inhibits S. aureus hemolysin gene expression via a two-component regulatory system, SaeRS. PMID:23873929 Omae, Yosuke; Hanada, Yuichi; Sekimizu, Kazuhisa; Kaito, Chikara 2013-01-01 325 E-print Network [Figure 1 residue: oak wilt disease cycle, showing live oak and red oak life cycles, leaf symptoms, and local transmission by root grafts.] Camilli, Kim Suzanne 2012-06-07 326 PubMed Dietary specialization is thought to be rare in mammalian herbivores as a result of either a limitation in their detoxification system to metabolize higher doses of plant secondary compounds or deficiencies in nutrients present in a diet composed of a single species of plant. Neotoma macrotis is an oak specialist, whereas Neotoma lepida is a dietary generalist when sympatric with N. macrotis. We hypothesized that N. macrotis would have a higher tolerance for and digestibility of oak. We determined the two species' tolerances for oak by feeding them increasing concentrations of ground oak leaves until they could no longer maintain body mass. The highest concentration on which both species maintained body mass was 75% oak. There were no differences between the species in their abilities to digest dry matter, nitrogen, or fiber in the oak diets. The species' similar tolerances for oak were probably due to their similar abilities to digest and potentially assimilate the ground oak leaves.
PMID:18544017 Skopec, Michele M; Haley, Shannon; Torregrossa, Ann-Marie; Dearing, M Denise 2008-01-01 327 Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey View of New Big Oak Flat Road seen from Old Wawona Road near location of photograph HAER CA-148-17. Note road cuts, alignment, and tunnels. Devils Dance Floor at left distance. Looking northwest - Big Oak Flat Road, Between Big Oak Flat Entrance & Merced River, Yosemite Village, Mariposa County, CA 328 E-print Network visibility of deteriorating oak health in Britain and media reports on 'Sudden Oak Death' have led to growing of ill health and the names that people use to describe it. Over the past century oaks in diminishing health have been said to be suffering from dieback or decline. In Britain, periodic episodes 329 E-print Network Review article Tree improvement programs for European oaks: goals and strategies PS Savill, PJ, Oxford OX1 3RB, UK Summary Most work concerned with the improvement of European oaks and wood science. oak / Quercus / breeding / genetic conservation / improvement Résumé Programmes d Paris-Sud XI, Université de 330 E-print Network Department of Energy (DOE) Oak Ridge Office (ORO) Project Life Cycle Reimbursable Funding http://www.ornl.gov/wfo/exthome.htm September 27, 2007 OAK RIDGE NATIONAL LABORATORY U. S. DEPARTMENT OF ENERGY Briefing Outline · Summary · Secretary of Energy Legal Authority 331 E-print Network Department of Energy (DOE) Oak Ridge Office (ORO) Project Life Cycle Reimbursable Funding Project Closeout Process Summary OAK RIDGE NATIONAL LABORATORY U. S. DEPARTMENT OF ENERGY Secretary provisions of law. OAK RIDGE NATIONAL LABORATORY U. S.
DEPARTMENT OF ENERGY How Federal Agencies Do 332 E-print Network Oak Ridge Reservation DOE/ORO/2379 Annual Site Environmental Report 2010 Cover Image and Design Annual Site Environmental Report 2010 DOE/ORO/2379 Oak Ridge Reservation Annual Site Environmental Report Oak Ridge National Laboratory East Tennessee Technology Park Electronic publisher Coordinating editors Pennycook, Steve 333 E-print Network Blue oak seedlings may be older than they look Ralph L. Phillips, Neil K. McDougald, Richard B. Standiford, William E. Frost A 4-year study indicates that native blue oak seedlings are probably much the year of above-average rainfall. Blue oak (Quercus douglasii) trees are a valuable economic and aesthetic Standiford, Richard B. 334 Sudden oak death is a new disease affecting tanoak (Lithocarpus densiflora) and oaks (Quercus spp) in California and Oregon, caused by the recently described pathogen Phytophthora ramorum. It has reached epidemic proportions in several counties in central California, leading to the death of tens of thousands of trees. In addition to oaks and tanoak, P. ramorum has been found in David M. Rizzo; Matteo Garbelotto 2003-01-01 335 PubMed Central Background In the silkworm, Bombyx mori, femaleness is strongly controlled by the female-specific W chromosome. Originally, it was presumed that the W chromosome encodes female-determining gene(s), accordingly called Fem. However, to date, neither Fem nor any protein-coding gene has been identified from the W chromosome. Instead, the W chromosome is occupied with numerous transposon-related sequences. Interestingly, the silkworm W chromosome is a source of female-enriched PIWI-interacting RNAs (piRNAs). piRNAs are small RNAs of 23-30 nucleotides in length, which are required for controlling transposon activity in animal gonads. A recent study has identified a novel mutant silkworm line called KG, whose mutation in the W chromosome causes severe female masculinization.
However, the molecular nature of the KG line has not been well characterized yet. Results Here we molecularly characterize the KG line. Genomic PCR analyses using currently available W chromosome-specific PCR markers indicated that no large deletion existed in the KG W chromosome. Genetic analyses demonstrated that sib-crosses within the KG line suppressed masculinization. Masculinization reactivated when crossing KG females with wild type males. Importantly, the KG ovaries exhibited a significantly abnormal transcriptome. First, the KG ovaries misexpressed testis-specific genes. Second, a set of female-enriched piRNAs was downregulated in the KG ovaries. Third, several transposons were overexpressed in the KG ovaries. Conclusions Collectively, the mutation in the KG W chromosome causes broadly altered expression of testis-specific genes, piRNAs, and transposons. To our knowledge, this is the first study that describes a W chromosome mutant with such an intriguing phenotype. PMID:22452797 2012-01-01 336 E-print Network by the biochemical defenses of black oak (Quercus velutina) and the feeding preference of the isopod Armadillidium and light and the danger of herbivory. Nutrient stressed plants invested more in defense than did fertilized Vallino, Joseph J. 337 E-print Network · Research we're doing · Preliminary results · Where we're heading Assessing the Timber Economic Impact of Shake Defects - Products Large diameter oak grown for: Most valuable Butt £5.50 - £10 338 This study examines an alternative to market valuation for determining worth through a case study of the Oak Openings region in Northwest Ohio, USA. Emergy analysis methods are explained, illustrated and used to tabulate environmental, cultural and economic subsystems of the region.
The emergy yield ratio for the region at 1.57 suggests sustainability but is less than the findings of Julie Brotje Higgins 2003-01-01 339 SciTech Connect This study establishes a basis for long-range land-use planning to accommodate both present and projected DOE program requirements in Oak Ridge. In addition to technological requirements, this land-use plan incorporates in-depth ecological concepts that recognize multiple uses of land as a viable option. Neither environmental research nor technological operations need to be mutually exclusive in all instances. Unique biological areas, as well as rare and endangered species, need to be protected, and human and environmental health and safety must be maintained. The plan is based on the concept that the primary use of DOE land resources must be to implement the overall DOE mission in Oak Ridge. This document, along with the base map and overlay maps, provides a reasonably detailed description of the DOE Oak Ridge land resources and of the current and potential uses of the land. A description of the land characteristics, including geomorphology, agricultural productivity and soils, water courses, vegetation, and terrestrial and aquatic animal habitats, is presented to serve as a resource document. Essentially all DOE land in the Oak Ridge area is being fully used for ongoing DOE programs or has been set aside as protected areas. Bibb, W. R.; Hardin, T. H.; Hawkins, C. C.; Johnson, W. A.; Peitzsch, F. C.; Scott, T. H.; Theisen, M. R.; Tuck, S. C. 1980-03-01 340 ERIC Educational Resources Information Center Oak Hill School served elementary students in the 10th district of Washington County, Tennessee, from 1886 to 1952. After extensive restoration and a move to Historic Jonesborough, the one-room school now functions as a living history museum. Fourth-grade students spend a day following the 1892 curriculum for grade 4. A teacher's resource and Clark, Amy D. 
2000-01-01 341 E-print Network Doing Business with Oak Ridge National Laboratory Presented at the WM10 Symposia Keith S. Joy Director ORNL Small Business Programs Phoenix, AZ March 3, 2010 · Generates $5.2 billion annually enterprise - Applying science and technology to real-world problems - Managing machinery of scientific 342 SciTech Connect This report presents the waste management plan for the Oak Ridge Reservation facilities. The primary purpose is to convey what facilities are being used to manage wastes, what forces are acting to change current waste management systems, and what plans are in store for the coming fiscal year. Turner, J.W. [ed.] 1995-02-01 343 SciTech Connect This report describes the proposed upgrades to Building 3025 and the Evaporator Area at Oak Ridge National Laboratory. Design assessments, specifications and drawings are provided. Building 3025 is a general purpose research facility utilized by the Materials and Ceramics Division to conduct research on irradiated materials. The Evaporator Area, building 2531, serves as the collection point for all low-level liquid wastes generated at the Oak Ridge National Laboratory. NONE 1995-09-01 344 Wind-pollinated forest trees usually have high outcrossing rates, but allogamy does not necessarily translate into high pollen movement. The goal of this study was to determine the outcrossing rates, pollen pool genetic structure, and the size of the effective pollination neighborhood in a population of black oak, Quercus velutina, in a Missouri oak-hickory forest. Based on 6 allozyme loci, 12 maternal trees, and 439 progenies sampled along a transect of 1300 m, we found complete outcrossing (tm = 1.000, P JUAN F. FERNANDEZ-MANJARRES; JACQUELYN IDOL; VICTORIA L. SORK 2006-01-01 345 Oak-dominated forests in northwestern Arkansas have recently experienced an oak mortality event associated with an unprecedented outbreak of a native insect, the red oak borer, Enaphalodes rufulus (Haldeman).
To determine whether prior drought was associated with increased E. rufulus infestation level of Quercus rubra L. trees, we employed a suite of dendrochronological measurements from Q. rubra in affected forest stands. L. J. Haavik; F. M. Stephen; M. K. Fierke; V. B. Salisbury; S. W. Leavitt; S. A. Billings 2008-01-01 346 E-print Network ominous than the invasive exotic plants: a dying coast live oak. The walk was organized by the Napa Sudden Oak Death Blitz: Native oaks need our help APRIL 19, 2013 6:57 PM · BILL PRAMUK On a recent plants crowding and displacing the more delicate native plants. Along the way, I spotted something more California at Berkeley, University of 347 E-print Network September 2000 to September 2008. Dead Pr = tree dead as a result of P. ramorum; Late Pr = live trees with P Proceedings of the Sudden Oak Death Fourth Science Symposium 207 Long-Term Trends in Coast Live Oak stands change over time due to disease. P. ramorum canker was prevalent in the sampled coast live oak Standiford, Richard B. 348 SciTech Connect This study presents the results of an investigation of seismic hazard at the Department of Energy Oak Ridge Reservations (K-25 Site, Oak Ridge National Laboratories, and Oak Ridge Y-12 Plant), located in Oak Ridge, Tennessee. Oak Ridge is located in eastern Tennessee, in an area of moderate to high historical seismicity. Results from two separate seismic hazard analyses are presented. The EPRI/SOG analysis uses the input data and methodology developed by the Electric Power Research Institute, under the sponsorship of several electric utilities, for the evaluation of seismic hazard in the central and eastern United States. The LLNL analysis uses the input data and methodology developed by the Lawrence Livermore National Laboratory for the Nuclear Regulatory Commission. Both the EPRI/SOG and LLNL studies characterize earth-science uncertainty on the causes and characteristics of earthquakes in the central and eastern United States.
This is accomplished by considering multiple hypotheses on the locations and parameters of seismic source zones and by considering multiple attenuation functions for the prediction of ground shaking given earthquake size and location. These hypotheses were generated by multiple expert teams and experts. Furthermore, each team and expert was asked to generate multiple hypotheses in order to characterize his own internal uncertainty. The seismic-hazard calculations are performed for all hypotheses. Combining the results from each hypothesis with the weight associated with that hypothesis, one obtains an overall representation of the seismic hazard at the Oak Ridge site and its uncertainty. McGuire, R.K.; Toro, G.F. [Risk Engineering, Inc., Golden, CO (United States); Hunt, R.J. [Martin Marietta Energy Systems, Inc., Oak Ridge, TN (United States). Center for Natural Phenomena Engineering 1992-09-30 349 The endocrine regulation of larval-pupal metamorphosis was studied in the silkworm, Bombyx mori, by measuring the following changes: hemolymph ecdysteroid titer, the secretory activity of prothoracic glands and the responsiveness of larvae to ecdysteroids and prothoracicotropic hormone (PTTH), with regard to developmental events such as the occurrence of spinneret pigmentation, initiation of cocoon spinning and onset of wandering stage as Sho Sakurai; Masae Kaya; Shin'ichiro Satake 1998-01-01 350 SciTech Connect This document contains Appendixes A Source Inventory Information for the Subbasins Evaluated for the White Oak Creek Watershed and B Human Health Risk Assessment for White Oak Creek / Melton Valley Area for the remedial investigation report for the White Oak Creek Watershed and Melton Valley Area. Appendix A identifies the waste types and contaminants for each subbasin in addition to the disposal methods.
Appendix B identifies potential human health risks and hazards that may result from contaminants present in the different media within Oak Ridge National Laboratory sites. NONE 1996-11-01 351 NASA Astrophysics Data System (ADS) Alignment of silkworms and fish, observed as seismic anomalous animal behavior (SAAB) prior to the Kobe earthquake, was duplicated in a laboratory by applying a pulsed electric field, assuming SAAB to be an electrophysiological response to the stimuli of seismic electric signals (SES). The animals became aligned perpendicularly to the field direction since their skeletal muscle had a higher resistivity perpendicular to the field direction than parallel to it. An electromagnetic model of a fault is proposed in which dipolar charges q are generated due to the change of seismic stress σ(t). From a mathematical model, dq/dt = -α(dσ/dt) - q/ερ, where α is the charge generation constant like a piezoelectric coefficient, ε the dielectric constant and ρ the resistivity of bedrock granite. A fault having a length 2a and a displacement or rock rupture time τ, during which the stress is changed, gives pulsed dipolar charge surface densities +q(t, x) and -q(t, x+2a), or an apparent electric dipole moment of P(t) = 2aQ(t) = 2aAq(t) = aM0[ατ/(τ-ερ)](e^(-t/τ) - e^(-t/ερ)) using the earthquake moment M0. The fault displacement D, its initial velocity Ḋ and the stress drop Δσ give the rupture time τ = D/Ḋ. The field intensity F and the seismic current density at a fault zone J were calculated as F = q/ε and J = F/ρw using the resistivity ρw of water, giving J = 0.1-1 A/m2, sufficient to cause SAAB experimentally. The near-field ultra-low-frequency (ULF) waves generated by P(t) give SES inversely proportional to the distance R. Ikeya, Motoji; Matsumoto, Hiroshi; Huang, Qing-Hua 1998-05-01 352 PubMed Central Pigmentation patterning has long interested biologists, integrating topics in ecology, development, genetics, and physiology.
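The double-exponential dipole moment in the Ikeya, Matsumoto and Huang abstract above follows from the stated charge-balance equation. A sketch of the derivation, under the illustrative assumption (not stated explicitly in the abstract) that the stress relaxes toward the stress drop as σ(t) = Δσ(1 − e^(−t/τ)) with q(0) = 0:

```latex
% Charge balance from the abstract, with an assumed stress history:
\frac{dq}{dt} = -\alpha \frac{d\sigma}{dt} - \frac{q}{\varepsilon\rho},
\qquad
\frac{d\sigma}{dt} = \frac{\Delta\sigma}{\tau}\, e^{-t/\tau}
% Solving this first-order linear ODE with q(0) = 0:
q(t) = \frac{\alpha\,\Delta\sigma\,\varepsilon\rho}{\varepsilon\rho - \tau}
       \left( e^{-t/\tau} - e^{-t/\varepsilon\rho} \right)
```

The two decay constants are the rupture time τ and the dielectric relaxation time ερ of the bedrock, reproducing the (e^(−t/τ) − e^(−t/ερ)) time dependence quoted in the abstract.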
Wild-type neonatal larvae of the silkworm, Bombyx mori, are completely black. By contrast, the epidermis and head of larvae of the homozygous recessive sex-linked chocolate (sch) mutant are reddish brown. When incubated at 30 °C, mutants with the sch allele fail to hatch; moreover, homozygous mutants carrying the allele sch lethal (schl) do not hatch even at room temperature (25 °C). By positional cloning, we narrowed a region containing sch to 239,622 bp on chromosome 1 using 4,501 backcross (BC1) individuals. Based on expression analyses, the best sch candidate gene was shown to be tyrosine hydroxylase (BmTh). BmTh coding sequences were identical among sch, schl, and wild-type. However, in sch the ~70-kb sequence was replaced with ~4.6 kb of a Tc1-mariner type transposon located ~6 kb upstream of BmTh, and in schl, a large fragment of an L1Bm retrotransposon was inserted just in front of the transcription start site of BmTh. In both cases, we observed a drastic reduction of BmTh expression. Use of RNAi with BmTh prevented pigmentation and hatching, and feeding of a tyrosine hydroxylase inhibitor also suppressed larval pigmentation in the wild-type strain, pnd+ and in a pS (black-striped) heterozygote. Feeding L-dopa to sch neonate larvae rescued the mutant phenotype from chocolate to black. Our results indicate the BmTh gene is responsible for the sch mutation, which plays an important role in melanin synthesis producing neonatal larval color. PMID:20615980 Liu, Chun; Yamamoto, Kimiko; Cheng, Ting-Cai; Kadono-Okuda, Keiko; Narukawa, Junko; Liu, Shi-Ping; Han, Yu; Futahashi, Ryo; Kidokoro, Kurako; Noda, Hiroaki; Kobayashi, Isao; Tamura, Toshiki; Ohnuma, Akio; Banno, Yutaka; Dai, Fang-Ying; Xiang, Zhong-Huai; Goldsmith, Marian R.; Mita, Kazuei; Xia, Qing-You 2010-01-01 353 PubMed Silkworm (Bombyx mori), a model Lepidoptera insect, is economically important. Its growth and development are regulated by endogenous hormones.
During the process of transition from larvae to pupae, 20-hydroxyecdysone (20E) plays an important role. The recent surge in consumer products and applications using metallic nanoparticles has increased the possibility of human or ecosystem exposure due to their unintentional release into the environment. We investigated the effects of exposure to titanium dioxide nanoparticles (TiO2 NPs) on the action of 20E in B. mori. Titanium dioxide nanoparticle treatment shortened the molting duration by 8 hr and prolonged the molting peak period by 10 %. Solexa sequencing profiled the changes in gene expression in the brain of fifth-instar B. mori in response to TiO2 NPs exposure for 72 hr, to address the effects on hormone metabolism and regulation. Thirty-one genes were differentially expressed. The transcriptional levels of pi3k and P70S6K, which are involved in the target of rapamycin (TOR) signaling pathway, were up-regulated. Transcriptional levels of four cytochrome P450 genes involved in 20E biosynthesis all displayed increasing trends at different developmental stages (48, 96, 144, and 192 hr) of the 5th instar. Simultaneously, the ecdysterone receptors also displayed increasing trends. The 20E titers at four developmental stages during the 5th instar were 1.26-, 1.23-, 1.72-, and 2.16-fold higher, respectively, than in the control group. These results indicate that feeding B. mori with TiO2 NPs stimulates 20E biosynthesis, shortens the developmental progression, and reduces the duration of molting. Thus, application of TiO2 NPs is of high significance for saving the labor force in sericulture, and our research provides a reference for the ecological problems arising from exposure of Lepidoptera to titanium dioxide nanoparticles. PMID:25139758 Li, Fanchi; Gu, Zhiya; Wang, Binbin; Xie, Yi; Ma, Lie; Xu, Kaizun; Ni, Min; Zhang, Hua; Shen, Weide; Li, Bing 2014-08-01 354 PubMed Silkworm (Bombyx mori) is an economically important insect.
However, failure to form cocoons caused by chemical insecticide poisoning has largely hindered the development of sericulture. To explore the roles of detoxification enzymes in B. mori after insecticide poisoning, we monitored the activity changes of cytochrome P450 monooxygenase, glutathione-S-transferase, and carboxylesterase in B. mori midgut and fat body after phoxim feeding. At the same time, the expression levels of detoxification enzyme-related genes were also determined by real-time quantitative PCR. Compared with control levels, the activity of P450 increased 1.72-fold in the midgut and 6.72-fold in the fat body; GST activity was unchanged in the midgut and increased 1.11-fold in the fat body; carboxylesterase activity decreased to 0.69-fold in the midgut and increased 1.13-fold in the fat body. Correspondingly, the expression levels of the detoxifying enzyme genes CYP6ae22, CYP9a21, GSTo1 and Bmcce increased 15.99-, 3.32-, 1.86- and 2.30-fold in the midgut and 3.58-, 1.84-, 2.14- and 4.21-fold in the fat body after phoxim treatment. These results demonstrated the important roles of detoxification enzymes in phoxim metabolism. In addition, the detected activities of such enzymes were generally lower than those in cotton bollworms (Helicoverpa armigera), which may contribute to the high susceptibility of B. mori to insecticides. Our findings laid the foundation for further investigations of the molecular mechanisms of organophosphorus pesticide metabolism in B. mori. PMID:24238284 Wang, Y H; Gu, Z Y; Wang, J M; Sun, S S; Wang, B B; Jin, Y Q; Shen, W D; Li, B 2013-01-01 355 SciTech Connect This report summarizes for the 15-month period of October 1990--December 1991 the available dynamic hydrologic data collected, primarily on the White Oak Creek (WOC) watershed, along with information collected on the surface flow systems that affect the quality or quantity of surface water.
The collection of hydrologic data is one component of numerous, ongoing Oak Ridge National Laboratory (ORNL) environmental studies and monitoring programs and is intended to: (1) characterize the quantity and quality of water in the flow systems; (2) assist with the planning and assessment of remedial action activities; and, (3) provide long-term availability of data and quality assurance. Characterization of the hydrology of the WOC watershed is critical for understanding the processes that drive contaminant transport in the watershed. Identification of spatial and temporal trends in hydrologic parameters and mechanisms that affect the movement of contaminants supports the development of interim corrective measures and remedial restoration alternatives. In addition, hydrologic monitoring supports long-term assessment of the effectiveness of remedial actions in limiting the transport of contaminants across Waste Area Grouping (WAG) boundaries and ultimately to the off-site environment. For these reasons, it is of paramount importance to the Environmental Restoration Program (ERP) to collect and report hydrologic data activities that contribute to the Site Investigations component of the ERP. (White Oak Creek is also referred to as Whiteoak Creek). Borders, D.M.; Gregory, S.M.; Clapp, R.B.; Frederick, B.J.; Watts, J.A. 1992-06-01 
357 PubMed Bt toxins derived from the arthropod bacterial pathogen Bacillus thuringiensis are widely used for insect control as insecticides or in transgenic crops. Bt resistance has been found in field populations of several lepidopteran pests and in laboratory strains selected with Bt toxin. Widespread planting of crops expressing Bt toxins has raised concerns about the potential increase of resistance mutations in targeted insects. By using Bombyx mori as a model, we identified a candidate gene for a recessive form of resistance to Cry1Ab toxin on chromosome 15 by positional cloning.
BGIBMGA007792-93, which encodes an ATP-binding cassette transporter similar to human multidrug resistance protein 4 and orthologous to genes associated with recessive resistance to Cry1Ac in Heliothis virescens and two other lepidopteran species, was expressed in the midgut. Sequences of 10 susceptible and seven resistant silkworm strains revealed a common tyrosine insertion in an outer loop of the predicted transmembrane structure of resistant alleles. We confirmed the role of this ATP-binding cassette transporter gene in Bt resistance by converting a resistant silkworm strain into a susceptible one by using germline transformation. This study represents a direct demonstration of Bt resistance gene function in insects with the use of transgenesis. PMID:22635270 Atsumi, Shogo; Miyamoto, Kazuhisa; Yamamoto, Kimiko; Narukawa, Junko; Kawai, Sawako; Sezutsu, Hideki; Kobayashi, Isao; Uchino, Keiro; Tamura, Toshiki; Mita, Kazuei; Kadono-Okuda, Keiko; Wada, Sanae; Kanda, Kohzo; Goldsmith, Marian R; Noda, Hiroaki 2012-06-19 358 PubMed Human papillomavirus (HPV) 6b L1 capsid protein was expressed using the Bombyx mori nucleopolyhedrovirus (BmNPV) bacmid expression system in silkworm larvae. Two constructs, full-length L1 (500 aa) and C-terminal-deleted short L1 (479 aa), and three PCR-manipulated antigenic loops at the amino acid 55-56, 174-175, and 348-349 regions were incorporated with whole enhanced green fluorescent protein (EGFP). Expressed full-length and short L1 proteins and variants were purified by heparin affinity column chromatography and confirmed by SDS-PAGE and western blot. The presence of self-assembled virus-like particles (VLPs) and EGFP incorporation on the surface of VLPs were confirmed by the observation of transmission electron and immunoelectron microscopies, respectively.
HPV 6b L1 major capsid protein was successfully expressed in silkworm, and effective manipulation of the antigenic regions showed the path to versatile vaccine development based on HPV L1-VLPs. PMID:23961359 Palaniyandi, Muthukutty; Kato, Tatsuya; Park, Enoch Y 2012-01-01 359 SciTech Connect Oak Ridge National Laboratory is running the world's largest Cray X1, the world's largest unclassified Cray XT3, and a Cray XD1. In this report we provide an overview of the applications requiring leadership computing and the performance characteristics of the various platforms at ORNL. We then discuss ways in which we are working with Cray to establish a roadmap that will provide 100s of teraflops of sustained performance while integrating a balance of vector and scalar processors. Kuehn, Jeffery A [ORNL]; Studham, Scott [ORNL]; White III, James B [ORNL]; Fahey, Mark R [ORNL]; Carter, Steven M [ORNL]; Nichols, Jeffrey A [ORNL] 2005-05-01 360 E-print Network lasts 1-2 weeks; trees go from yellow to brown. [table of control measures by tree condition omitted] Lichens on small post oak... rainfall. On severely infested trees, a second application may be required in 12 months. When using Kocide avoid drift to nearby sensitive plants and buildings. Lichens (combined fungal and algal growth) assume several different shapes and col... Johnson, Jerral D.; Appel, David N. 1984-01-01 361 SciTech Connect The objective of the Oak Ridge National Laboratory Waste Management Plan is to compile and to consolidate information annually on how the ORNL Waste Management Program is conducted, which waste management facilities are being used to manage wastes, what forces are acting to change current waste management systems, what activities are planned for the forthcoming fiscal year (FY), and how all of the activities are documented.
Not Available 1992-12-01 362 SciTech Connect This is the inaugural issue of an annual publication about the Oak Ridge National Laboratory. Here you will find a brief overview of ORNL, a sampling of our recent research achievements, and a glimpse of the directions we want to take over the next 15 years. A major purpose of ornl 89 is to provide the staff with a sketch of the character and dynamics of the Laboratory. Anderson, T.D.; Appleton, B.R.; Jefferson, J.W.; Merriman, J.R.; Mynatt, F.R.; Richmond, C.R.; Rosenthal, M.W. 1989-01-01 363 SciTech Connect White Oak Dam is located in the White Oak Creek watershed which provides the primary surface drainage for Oak Ridge National Laboratory. A stability analysis was made on the dam by Syed Ahmed in January 1994 which included an evaluation of the liquefaction potential of the embankment and foundation. This report evaluates the stability of the dam and includes comments on the report prepared by Ahmed. Slope stability analyses were performed on the dam and included cases for sudden drawdown, steady seepage, partial pool and earthquake. Results of the stability analyses indicate that the dam is stable and failure of the structure would not occur for the cases considered. The report prepared by Ahmed leads to the same conclusions as stated above. Review of the report finds that it is complete, well documented and conservative in its selection of soil parameters. The evaluation of the liquefaction potential is also complete and this report is in agreement with the findings that the dam and foundation are not susceptible to liquefaction. NONE 1995-04-11 364 SciTech Connect The Oak Ridge National Laboratory (ORNL), located in Oak Ridge, Tennessee, is a multipurpose research facility managed by Martin Marietta Energy Systems, Inc. (Energy Systems) for the US Department of Energy-Oak Ridge Operations (DOE).
The operation of ORNL has resulted in a legacy of contaminated and potentially contaminated facilities, research areas, and waste management areas that may require remediation. The most recent inventory of remediation sites has identified approximately 400 individual sites that will require investigation and possibly remediation. The Remedial Action program (RAP) was established at ORNL in 1985 to conduct the investigations, studies, and remediation necessary to prevent unacceptable risks to the environment and to the public from this legacy of contaminated sites. Then, in 1989 a central Environmental Restoration program (ERP) was established that consolidates the previous RAPs at all five sites managed by Energy Systems for DOE. This paper describes how a program was developed to solve the large and diverse problems associated with the environmental restoration of the ORNL. 3 figs., 1 tab. Garland, S.B. II. 1991-01-01 365 Ponderosa pine (Pinus ponderosa) forests with Gambel oak (Quercus gambelii) are associated with higher bird abundance and diversity than are ponderosa pine forests lacking Gambel oak. Little is known, however, about specific structural characteristics of Gambel oak trees, clumps, and stands that may be important to birds in ponderosa pine-Gambel oak (hereafter pine-oak) forests. We examined associations among breeding birds Stephanie Jentsch; R. William Mannan; Brett G. Dickson; William M. Block 2008-01-01 366 E-print Network of black oak sprouts peaked in the 20-50-cm classes, and trees in the 70-80-cm class produced the fewest oak than white oak. Keywords: stump sprout, oak, clearcut, regeneration Oak (Quercus) trees dominate FIELD NOTE Effects of Stump Diameter on Sprout Number and Size for Three Oak Species Abrams, Marc David 367 E-print Network Oak Tree Planting Project1 Sherryl L. Nives William D. Tietje William H. Weitkamp2 Abstract: An Oak and oak planting techniques.
Outreach efforts resulted in participation in the Oak Tree Planting Project-association groups: over 3,500 acorns were planted at about 1,200 sites (three acorns per site). The Oak Tree Standiford, Richard B. 368 A study of the effects of fertilizer and herbaceous competition on the survival and growth of spot-seeded oaks and walnut was established in 1979. Species included bur oak (Quercus macrocarpa), pin oak (Q. palustris), white oak (Q. alba), chestnut oak (Q. prinus), and black walnut (Juglans nigra). The treatments consisted of fertilizing with a fertilizer tablet and broadcast seeding a Walter H. Davidson; Donald H. Graves; James M. Ringe; Thomas R. Cunningham 1991-01-01 369 Abstract Oak gall wasps (Hymenoptera: Cynipidae, Cynipini) are characterized by possession of complex, cyclically parthenogenetic life cycles and the ability to induce a wide diversity of highly complex, species- and generation-specific galls on oaks and other Fagaceae. The galls support species-rich, closed communities of inquilines and parasitoids that have become a model system in community ecology. We review recent advances in the ecology of oak cynipids, with Graham N. Stone; Karsten Schönrogge; Rachel J. Atkinson; David Bellido; Juli Pujade-Villar 2002-01-01 370 The agro-silvopastoral system "montado" dominates the landscape of the south-western Iberian Peninsula and occupies approximately 3.1 million hectares of woodland in Spain and 1.2 million hectares in Portugal. The forest system "montado" is mostly dominated by Mediterranean evergreen oaks such as cork oak (Quercus suber L.) and holm oak (Quercus rotundifolia). The "montado" production system management aims at the maintenance of a António Cipriano Pinheiro; Nuno Almeida Ribeiro; Peter Surový; Alfredo Gonçalves 2008-01-01 371 PubMed Relationships between advance regeneration of four tree species (red maple (Acer rubrum L.), white oak (Quercus alba L.), chestnut oak (Q. montana Willd.) and northern red oak (Q.
rubra L.)) and biotic (non-tree vegetation and canopy composition) and abiotic (soil series and topographic variables) factors were investigated in 52 mature mixed-oak stands in the central Appalachians. Aggregate height was used as a composite measure of regeneration abundance. Analyses were carried out separately for two physiographic provinces. Associations with tree regeneration were found for all biotic and abiotic factors both in partial models and full models. Red maple was abundant on most of the sites, but high red maple abundance was commonly associated with wet north-facing slopes with little or no cover of mountain-laurel (Kalmia latifolia L.) and hay-scented fern (Dennstaedtia punctilobula (Michx.) Moore). Regeneration of the three oak species was greatly favored by the abundance of overstory trees of their own kind. White oak regeneration was most abundant on south-facing, gentle, lower slopes with soils in the Buchanan series. Chestnut oak regeneration was more common on south-facing, steep upper slopes with stony soils. There was a positive association between chestnut oak and huckleberry (Gaylussacia baccata (Wangh.) Koch) cover classes. Northern red oak was more abundant on north-facing wet sites with Hazleton soil, and was associated with low occurrence of mountain-laurel and hay-scented fern. PMID:18450575 Fei, Songlin; Steiner, Kim C 2008-07-01 372 PubMed Central Background Small non-coding RNAs (ncRNAs) are important regulators of gene expression in eukaryotes. Previously, only microRNAs (miRNAs) and piRNAs have been identified in the silkworm, Bombyx mori. Furthermore, only ncRNAs (50-500 nt) of intermediate size have been systematically identified in the silkworm. Results Here, we performed a systematic identification and analysis of small RNAs (18-50 nt) associated with the Bombyx mori argonaute2 (BmAgo2) protein. Using RIP-seq, we identified various types of small ncRNAs associated with BmAgo2.
These ncRNAs showed a multimodal length distribution, with three peaks at ~20 nt, ~27 nt and ~33 nt, which included tRNA-, transposable element (TE)-, rRNA-, snoRNA- and snRNA-derived small RNAs as well as miRNAs and piRNAs. The tRNA-derived fragments (tRFs) were found at an extremely high abundance and accounted for 69.90% of the BmAgo2-associated small RNAs. Northern blotting confirmed that many tRFs were expressed or up-regulated only in the BmNPV-infected cells, implying that the tRFs play a prominent role by binding to BmAgo2 during BmNPV infection. Additional evidence suggested that there are potential cleavage sites on the D, anti-codon and TψC loops of the tRNAs. TE-derived small RNAs and piRNAs also accounted for a significant proportion of the BmAgo2-associated small RNAs, suggesting that BmAgo2 could be involved in the maintenance of genome stability by suppressing the activities of transposons guided by these small RNAs. Finally, Northern blotting was also used to confirm the Bombyx 5.8S rRNA-derived small RNAs, demonstrating that various novel small RNAs exist in the silkworm. Conclusions Using an RIP-seq method in combination with Northern blotting, we identified various types of small RNAs associated with the BmAgo2 protein, including tRNA-, TE-, rRNA-, snoRNA- and snRNA-derived small RNAs as well as miRNAs and piRNAs. Our findings provide new clues for future functional studies of the role of small RNAs in insect development and evolution. PMID:24074203 2013-01-01 373 PubMed Central Background Silkworm fecal matter is considered one of the richest sources of antimicrobial and antiviral protein (substances) and such economically feasible and eco-friendly proteins acting as secondary metabolites from the insect system can be explored for their practical utility in conferring broad spectrum disease resistance against pathogenic microbial specimens.
Methodology/Principal Findings Silkworm fecal matter extracts prepared in 0.02 M phosphate buffer saline (pH 7.4) at a temperature of 60 °C were subjected to 40% saturated ammonium sulphate precipitation and purified by gel-filtration chromatography (GFC). SDS-PAGE under denaturing conditions showed a single band at about 21.5 kDa. The peak fraction thus obtained by GFC was tested for homogeneity using C18 reverse-phase high performance liquid chromatography (HPLC). The activity of the purified protein was tested against selected Gram +/- bacteria and phytopathogenic Fusarium species with a concentration-dependent inhibition relationship. The purified bioactive protein was subjected to matrix-assisted laser desorption and ionization-time of flight mass spectrometry (MALDI-TOF-MS) and N-terminal sequencing by Edman degradation towards its identification. The N-terminal first 18 amino acid sequence following the predicted signal peptide showed homology to plant germin-like proteins (Glp). In order to characterize the full-length gene sequence in detail, the partial cDNA was cloned and sequenced using degenerate primers, followed by 5'- and 3'-rapid amplification of cDNA ends (RACE-PCR). The full-length cDNA sequence was composed of 630 bp encoding 209 amino acids and corresponded to germin-like proteins (Glps) involved in plant development and defense. Conclusions/Significance The study reports characterization of a novel Glp belonging to subfamily 3 from M. alba by the purification of mature active protein from silkworm fecal matter. The N-terminal amino acid sequence of the purified protein was found similar to the deduced amino acid sequence (without the transit peptide sequence) of the full length cDNA from M. alba.
PMID:23284650 Patnaik, Bharat Bhusan; Kim, Dong Hyun; Oh, Seung Han; Song, Yong-Su; Chanh, Nguyen Dang Minh; Kim, Jong Sun; Jung, Woo-jin; Saha, Atul Kumar; Bindroo, Bharat Bhushan; Han, Yeon Soo 2012-01-01 374 E-print Network Sudden oak death 'here to stay' Jeanne Wirka, resident biologist at the Bouverie Preserve in Glen Ellen, stands next to a live oak that fell after it developed sudden oak death in the heavily wooded:10 p.m. As sudden oak death continues to ravage Sonoma County woodlands, a secluded creek near Glen California at Berkeley, University of 375 E-print Network National School on Neutron and X-ray Scattering Oak Ridge National Laboratory June 12-26, 2010 Oak in at the Comfort Inn. Dinner hosted by Oak Ridge National Laboratory Scattering Science Div. Oak Ridge National Laboratory Opening Remarks Dr. Bryan C. Chakoumakos Geoscientist Pennycook, Steve 376 E-print Network 1997. Tree shelters: An alternative for oak regeneration. oak stands, with considerable numbers of seedlings and trees tree shelter types on microclimate and seedling performance of Oregon white oak and 2011-01-01 377 E-print Network Forests: Vegetation Dynamics in the Wake of Tanoak Decline1 Benjamin Ramage2 and Kevin O'Hara2 to experience drastic population declines and may even disappear entirely from redwood (Sequoia sempervirens) forests as a result of the exotic disease sudden oak death (SOD) (Maloney and others 2005, Mc Silver, Whendee 378 E-print Network Tree pathogens can affect community composition and structure over wide areas. Phytophthora ramorum, cause of sudden oak death (SOD), occurs in the wild in California from Humboldt County to southernmost Monterey County. P. ramorum has killed many trees at some sites and may spread to affect near and distant forests. The pathogen has not yet been detected in San Luis Obispo County outside of nurseries, but threatens coast live oak (Quercus agrifolia) woodlands there.
SOD-induced changes in vegetation structure and tree community composition may cascade to affect vertebrate communities. From 2002 to 2004 we counted breeding birds and measured habitat characteristics at 78 points distributed among four sites in coastal oak woodlands at high risk from SOD in San Luis Obispo County. Each point was visited three times each year to conduct 10-minute counts of all adult birds detected within 50 m. In 2004 we surveyed trees within 10 m of each point. We found 13 tree species; 63.8 percent of the individuals recorded were coast live oak and 19.6 percent were California bay laurel (Umbellularia californica). We recorded 75 bird species at the census points. The most abundant species were Steller's jay (Cyanocitta stelleri, 8.9 percent of individuals), Donald E. Winslow; William D. Tietje 379 E-print Network be explained by local variation in tree species composition and forest structure. The degree to which patterns menziesii) (47 percent), black oak (Quercus kelloggii) (45 percent), madrone (Arbutus menziesii) (43 percent of symptomatic bay leaves. The results showed that bay laurel trees were infected more frequently than canker Standiford, Richard B. 380 NASA Astrophysics Data System (ADS) We investigated the effects of heavy ions on embryogenesis of the silkworm, Bombyx mori using a collimated heavy ion microbeam from the vertical beam line of an AVF-cyclotron. Eggs were exposed to carbon ions at the cellular blastoderm stage. Microbeams were found to be extremely useful for radio-microsurgical inactivation of nuclei or cells in the target site. Spot irradiation caused abnormal embryos, which showed localized defects such as deletion, duplication and fusion, depending on dose, beam size and site of irradiation. The location and frequency of defects on the resultant embryos were closely correlated to the irradiation site. Based on this correlation, a fate map was established for the Bombyx egg at the cellular blastoderm stage.
Kiguchi, Kenji; Shirai, Koji; Kanekatsu, Rensuke; Kobayashi, Yasuhiko; Tu, Zhen-Li; Funayama, Tomoo; Watanabe, Hiroshi 2003-09-01 381 PubMed The Toll family of transmembrane proteins mediates signaling during the innate immune response in most animals. Toll9 is widespread in insects and has a unique signature, QHR, in its Toll/interleukin-1 receptor (TIR) domain. The introns in the TIR region are highly conserved among insects, suggesting the antiquity of Toll9 genes. Toll9 of Bombyx mori (BmToll9) was analysed by quantitative real-time RT-PCR. BmToll9 is constitutively expressed in egg, larval and adult stages prior to microbial challenge. BmToll9 is strongly expressed in the different parts of the gut, but weakly expressed in haemocytes, trachea, fat body, Malpighian tubule and epidermis, and scarcely expressed in the silk glands. The injection of sterilized 0.85% NaCl solution inhibited BmToll9 expression in most tissues, especially during the early responses. Staphylococcus aureus had no or limited effect on the expression of BmToll9 in the silkworm gut and fat body. But in epidermis, trachea, Malpighian tubules and haemocytes, the expression of BmToll9 was significantly increased after S. aureus challenge. Infection of Escherichia coli significantly increased the BmToll9 expression in different parts of the gut as well as in epidermis, Malpighian tubule and haemocytes. At 48 h after feeding of the fungus, Beauveria bassiana, BmToll9 expression was significantly increased. Tissue responses to the injected and ingested bacteria showed that BmToll9 is probably involved in the local gut immune response in the silkworm.
PMID:19723534 Wu, Shan; Zhang, Xiaofeng; Chen, Xiaomei; Cao, Pingsheng; Beerntsen, Brenda T; Ling, Erjun 2010-02-01 382 E-print Network Note Osmotic adjustment in sessile oak seedlings in response to drought C Collet JM Guehl 1 Équipe-year-old sessile oak seedlings were submitted to drought developed at two different rates (0.050 and 0.013 MPa·day-1). Drought was controlled by combining levels of irrigation and grass competition. At the end Boyer, Edmond 383 E-print Network Dendrochronology of oak (Quercus spp.) in Slovenia - an interim report K. Cufar1 , M. Zupancic1 , L and dating historical buildings or archaeological wood. Oak - mainly represented by pedunculate (Quercus (Abies alba) in SE Slovenia. A dendroclimatic analysis showed that tree-ring width variations Cufar, Katarina 384 With the exception of small mammals, little research has been conducted in eastern oak forests on the influence of fire on mammals. Several studies have documented little or no change in relative abundance or community measures for non-volant small mammals in eastern oak (Quercus spp.) forests following fires despite reductions in leaf litter, small woody debris, and changes in understory Patrick D. Keyser; W. Mark Ford 385 Galls are commonly found on urban trees. Induced by oviposition of insects and other arthropods, galls develop from woody tree tissues, forming shelters for developing larvae. Few galls are physiologically harmful to the tree. Some, like the mealy-oak galls on live oak, are not only harmless but may harbor beneficial arthropods long after the gall-maker has departed. Because David L. Morgan; Gordon W. Frankie 386 E-print Network . Budburst phenology, Cynipidae, gypsy moth, horned oak gall, insect-plant interactions, phytochemistry, pin oak, Quercus. Introduction Plant galls are complex entities that develop under the influence of both) or mutualistic (Cockerell, 1890; Bronner, 1983).
Some have taken a plant-centric view, perceiving gall formation Rieske-Kinney, Lynne K. 387 E-print Network Patterns of oak regeneration in a Midwestern savanna restoration experiment Lars A. Brudvig regeneration dynamics. We used a replicated large-scale restoration experiment with Midwestern oak savannas (USA) to understand spatial patterns of regeneration by the dominant overstory species, Quercus alba. Q 388 ERIC Educational Resources Information Center A lab in Eastern North America conducted a study to determine the taxonomic relationship between deciduous trees and several species of oaks by calculating the similarity index of all species to be studied. The study enabled students to classify the different species of oaks according to their distinct characteristics. McMaster, Robert T. 2004-01-01 389 E-print Network triacanthos (15%). Quercus macrocarpa was absent from this survey, which also revealed Q. alba saplings Oak Savanna Conference Paper ABSTRACT.--Unsuccessful oak (Quercus spp.) regeneration could result (Quercus spp.) regeneration has resulted in major changes in community structure (Loftis and McGee, 1993 390 E-print Network 1 OAK RIDGE NATIONAL LABORATORY U. S. DEPARTMENT OF ENERGY AAAS Symposium CO2 Fertilization: Boon U. S. DEPARTMENT OF ENERGY The boon vs. bust polarity applies especially to trees and forests Boon Amicus Journal Fall: 8 OAK RIDGE NATIONAL LABORATORY U. S. DEPARTMENT OF ENERGY CO2 fertilization 391 E-print Network OAK RIDGE NATIONAL LABORATORY U. S. DEPARTMENT OF ENERGY Tilt Option Discussion Issues Van Graves Phone Conference Sept 22, 2004 OAK RIDGE NATIONAL LABORATORY U. S. DEPARTMENT OF ENERGY RIDGE NATIONAL LABORATORY U. S. DEPARTMENT OF ENERGY · Hg drainage provided either by tilting magnet McDonald, Kirk 392 E-print Network 1 OAK RIDGE NATIONAL LABORATORY U. S. DEPARTMENT OF ENERGY Tri Cities Town Hall Forum August 9, 2006 OAK RIDGE NATIONAL LABORATORY U. S. DEPARTMENT OF ENERGY Doing Business with ORNL.
DEPARTMENT OF ENERGY CI Rapid Purchasing Techniques · AVID - Just In Time Agreements - 1.5 million items 393 SciTech Connect This document presents a summary of the information collected for the Oak Ridge Reservation 1994 site environmental report. Topics discussed include: Oak Ridge Reservation mission; ecology; environmental laws; community participation; environmental restoration; waste management; radiation effects; chemical effects; risk to public; environmental monitoring; and radionuclide migration. NONE 1995-09-01 394 Efforts to restore and maintain oak savannas in North America, with emphasis on the use of prescribed fire, have become common. Little is known, however, about how restoration affects animal populations, especially those of birds. I compared the breeding densities, community structure, and reproductive success of birds in oak savannas maintained by prescribed fire (12 sites) with those in JEFFREY D. BRAWN 2006-01-01 395 Chloroplast DNA polymorphisms have been detected by the conventional Southern-blotting hybridization method in four species of European oaks (Quercus petraea, Q. robur, Q. pubescens and Q. pyrenaica). Three polymorphisms, shared by at least three of these species, can be scored directly in ethidium bromide-stained gels and were used in a broad survey of the level of differentiation of the oak R. J. Petit; A. Kremer; D. B. Wagner 1993-01-01 396 E-print Network DOE/ORO/2296 Oak Ridge Reservation Annual Site Environmental Report for 2008 on the World Ridge National Laboratory East Tennessee Technology Park Electronic publisher Coordinating editor Project manager, DOE-ORO David Page September 2009 Prepared by Oak Ridge National Laboratory P.O.
Box 2008 Pennycook, Steve 397 E-print Network DOE/ORO/2261 Oak Ridge Reservation Annual Site Environmental Report for 2007 on the World Ridge National Laboratory East Tennessee Technology Park Electronic publisher Coordinating editor, Jane Parrott Project manager, DOE-ORO David Page September 2008 Prepared by Oak Ridge National Pennycook, Steve 398 E-print Network Emergence of the sudden oak death pathogen Phytophthora ramorum Niklaus J Phytophthora ramorum is responsible for causing the sudden oak death epidemic. This review documents reproductively isolated populations and underwent at least four global migration events. This recent work sheds California at Berkeley, University of 399 E-print Network in the vascular system. We conclude that embolism plays little role in the drought tolerance of oaks since drought Review article Summer and winter embolism in oak: impact on water relations MT Tyree H Cochard 1-induced embolism occurs at more negative water potentials than are known to cause damage (eg, reduced growth Paris-Sud XI, Université de 400 E-print Network Review article Genetic improvement of oaks in North America KC Steiner School of Forest Resources contexts of oak tree improvement in North America are described briefly, and the methods, species orchard to operational plantations. Quercus / genetic improvement / North America / review Résumé - Paris-Sud XI, Université de 401 SciTech Connect The Oak Ridge Isochronous Cyclotron (ORIC) has been in operation for nearly fifty years at the Oak Ridge National Laboratory (ORNL). Presently, it serves as the driver accelerator for the ORNL Holifield Radioactive Ion Beam Facility (HRIBF), where radioactive ion beams are produced using the Isotope Separation Online (ISOL) technique for post-acceleration by the 25URC tandem electrostatic accelerator.
Operability and reliability of ORIC are critical issues for the success of HRIBF and have presented increasingly difficult operational challenges for the facility in recent years. In February 2010, a trim coil failure rendered ORIC inoperable for several months. This presented HRIBF with the opportunity to undertake various repairs and maintenance upgrades aimed at restoring the full functionality of ORIC and improving the reliability to a level better than what had been typical over the previous decade. In this paper, we present details of these efforts, including the replacement of the entire trim coil set and measurements of their radial field profile. Comparison of measurements and operating tune parameters with setup code predictions will also be presented. Mendez, II, Anthony J [ORNL; Ball, James B [ORNL; Dowling, Darryl T [ORNL; Mosko, Sigmund W [ORNL; Tatum, B Alan [ORNL 2011-01-01 402 PubMed Central Trace fossils of insect feeding have contributed substantially to our understanding of the evolution of insect-plant interactions. The most complex phenotypes of herbivory are galls, whose diagnostic morphologies often allow the identification of the gall inducer. Although fossil insect-induced galls over 300 Myr old are known, most are two-dimensional impressions lacking adequate morphological detail either for the precise identification of the causer or for detection of the communities of specialist parasitoids and inquilines inhabiting modern plant galls. Here, we describe the first evidence for such multitrophic associations in Pleistocene fossil galls from the Eemian interglacial (130,000-115,000 years ago) of The Netherlands. The exceptionally well-preserved fossils can be attributed to extant species of Andricus gallwasps (Hymenoptera: Cynipidae) galling oaks (Quercus), and provide the first fossil evidence of gall attack by herbivorous inquiline gallwasps.
Furthermore, phylogenetic placement of one fossil in a lineage showing obligate host plant alternation implies the presence of a second oak species, Quercus cerris, currently unknown from Eemian fossils in northwestern Europe. This contrasts with the southern European native range of Q. cerris in the current interglacial and suggests that gallwasp invasions following human planting of Q. cerris in northern Europe may represent a return to preglacial distribution limits. PMID:18559323 Stone, Graham N; van der Ham, Raymond W.J.M; Brewer, Jan G 2008-01-01 403 PubMed Trace fossils of insect feeding have contributed substantially to our understanding of the evolution of insect-plant interactions. The most complex phenotypes of herbivory are galls, whose diagnostic morphologies often allow the identification of the gall inducer. Although fossil insect-induced galls over 300 Myr old are known, most are two-dimensional impressions lacking adequate morphological detail either for the precise identification of the causer or for detection of the communities of specialist parasitoids and inquilines inhabiting modern plant galls. Here, we describe the first evidence for such multitrophic associations in Pleistocene fossil galls from the Eemian interglacial (130,000-115,000 years ago) of The Netherlands. The exceptionally well-preserved fossils can be attributed to extant species of Andricus gallwasps (Hymenoptera: Cynipidae) galling oaks (Quercus), and provide the first fossil evidence of gall attack by herbivorous inquiline gallwasps. Furthermore, phylogenetic placement of one fossil in a lineage showing obligate host plant alternation implies the presence of a second oak species, Quercus cerris, currently unknown from Eemian fossils in northwestern Europe. This contrasts with the southern European native range of Q. cerris in the current interglacial and suggests that gallwasp invasions following human planting of Q.
cerris in northern Europe may represent a return to preglacial distribution limits. PMID:18559323 Stone, Graham N; van der Ham, Raymond W J M; Brewer, Jan G 2008-10-01 404 SciTech Connect A parametric study was conducted to evaluate the stability of the White Oak Dam (WOD) embankment and foundation. Slope stability analyses were performed for the upper and lower bound soil properties at three sections of the dam using the PCSTABL4 computer program. Minimum safety factors were calculated for the applicable seismic and static loading conditions. Liquefaction potential of the dam embankment and foundation soil during the seismic event was assessed by using simplified procedures. The WOD is classified as a low hazard facility and the Evaluation Basis Earthquake (EBE) is defined as an earthquake with a magnitude of m{sub b} = 5.6 and a Peak Ground Acceleration (PGA) of 0.13 g. This event is approximately equivalent to a Modified Mercalli Intensity of VI-VIII. The EBE is used to perform the seismic evaluation for slope stability and liquefaction potential. Results of the stability analyses and the liquefaction assessment lead to the conclusion that the White Oak Dam is safe and stable for the static and the seismic events defined in this study. Ogden Environmental, at the request of MMES, has checked and verified the calculations for the critical loading conditions and performed a peer review of this report. Ogden has determined that the WOD is stable under the defined static and seismic loading conditions and the embankment materials are in general not susceptible to liquefaction. Ahmed, S.B.
1994-01-01 405 E-print Network plant-mediated interactions between a native pathogen and a community of gall-forming insects on oak Changes in Oak Gall Wasps Species Diversity (Hymenoptera: Cynipidae) in Relation to the Presence Ghosta1 1 Plant Protection Department- Sero Road- Agricultural Faculty, Urmia Univ., PO Box 165, Urmia 406 Peptidergic neurons, which serve as source of various endocrine neuropeptides, were identified in the suboesophageal ganglion (SG) and brain of insects. In the silkworm Bombyx mori, SG is known to secrete two neuropeptides, diapause hormone (DH) responsible for induction of embryonic diapause and pheromone biosynthesis-activating neuropeptide, which share a pentapeptide amide, Phe-Xaa-Pro-Arg-Leu-NH_2 (Xaa = Gly or Ser), at the C Yukihiro Sato; Masaaki Oguchi; Nobuo Menjo; Kunio Imai; Hiroyuki Saito; Motoko Ikeda; Minoru Isobe; Okitsugu Yamashita 1993-01-01 407 SciTech Connect The US Department of Energy (DOE) proposes to construct and maintain additional storage capacity at Oak Ridge National Laboratory (ORNL), Oak Ridge, Tennessee, for liquid low-level radioactive waste (LLLW). New capacity would be provided by a facility partitioned into six individual tank vaults containing one 100,000 gallon LLLW storage tank each. The storage tanks would be located within the existing Melton Valley Storage Tank (MVST) facility. This action would require the extension of a potable water line approximately one mile from the High Flux Isotope Reactor (HFIR) area to the proposed site to provide the necessary potable water for the facility including fire protection. Alternatives considered include no-action, cease generation, storage at other ORR storage facilities, source treatment, pretreatment, and storage at other DOE facilities. NONE 1995-04-01 408 SciTech Connect White Oak Creek is the major surface-water drainage through the Department of Energy (DOE) Oak Ridge National Laboratory (ORNL).
Samples taken from the lower portion of the creek revealed high levels of Cesium-137, and lower levels of Cobalt-60 in near-surface sediment. Other contaminants present in the sediment included: lead, mercury, chromium, and PCBs. In October 1990, DOE, US Environmental Protection Agency (EPA), and Tennessee Department of Environment and Conservation (DEC) agreed to initiate a time-critical removal action in accordance with Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) to prevent transport of the contaminated sediments into the Clinch River system. This paper discusses the environmental, regulatory, design, and construction issues that were encountered in conducting the remediation work. Van Hoesen, S.D.; Kimmel, B.L. (Oak Ridge National Lab., TN (United States)); Page, D.G.; Hudson, G.R. (USDOE Oak Ridge Field Office, TN (United States)); Wilkerson, R.B. (MK-Ferguson Co., Oak Ridge, TN (United States)); Zocolla, M. (Corps of Engineers, Nashville, TN (United States). Nashville District); Kauschinger, J.L. (Ground Engineering Services, Manchester, NH (United States)) 1992-01-01 409 SciTech Connect White Oak Creek is the major surface-water drainage through the Department of Energy (DOE) Oak Ridge National Laboratory (ORNL). Samples taken from the lower portion of the creek revealed high levels of Cesium-137, and lower levels of Cobalt-60 in near-surface sediment. Other contaminants present in the sediment included: lead, mercury, chromium, and PCBs. In October 1990, DOE, US Environmental Protection Agency (EPA), and Tennessee Department of Environment and Conservation (DEC) agreed to initiate a time-critical removal action in accordance with Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) to prevent transport of the contaminated sediments into the Clinch River system.
This paper discusses the environmental, regulatory, design, and construction issues that were encountered in conducting the remediation work. Van Hoesen, S.D.; Kimmel, B.L. [Oak Ridge National Lab., TN (United States); Page, D.G.; Hudson, G.R. [USDOE Oak Ridge Field Office, TN (United States); Wilkerson, R.B. [MK-Ferguson Co., Oak Ridge, TN (United States); Zocolla, M. [Corps of Engineers, Nashville, TN (United States). Nashville District; Kauschinger, J.L. [Ground Engineering Services, Manchester, NH (United States) 1992-12-01 410 SciTech Connect In July 1991, the State of Tennessee initiated the Health Studies Agreement with the United States Department of Energy to carry out independent studies of possible adverse health effects in people living in the vicinity of the Oak Ridge Reservation. The health studies focus on those effects that could have resulted or could result from exposures to chemicals and radioactivity released at the Reservation since 1942. The major focus of the first phase was to complete a Dose Reconstruction Feasibility Study. This study was designed to find out if enough data exist about chemical and radionuclide releases from the Oak Ridge Reservation to conduct a second phase. The second phase will lead to estimates of the actual amounts or the doses of various contaminants received by people as a result of off-site releases. Once the doses of various contaminants have been estimated, scientists and physicians will be better able to evaluate whether adverse health effects could have resulted from the releases. Yarbrough, M.I.; Van Cleave, M.L.; Turri, P.; Daniel, J. 1993-09-01 411 PubMed Central This study addresses the underlying spatial distribution of oak mistletoe, Phoradendron villosum, a hemi-parasitic plant that provides a continuous supply of berries for frugivorous birds overwintering the oak savanna habitat of California's outer coast range. 
As the winter community of birds consuming oak mistletoe varies from group-living territorial species to birds that roam in flocks, we asked if mistletoe volume was spatially autocorrelated at the scale of persistent territories or whether the patterns predicted by long-term territory use by western bluebirds are overcome by seed dispersal by more mobile bird species. The abundance of mistletoe was mapped on trees within a 700 ha study site in Carmel Valley, California. Spatial autocorrelation of mistletoe volume was analyzed using the variogram method and spatial distribution of oak mistletoe trees was analyzed using Ripley's K and O-ring statistics. On a separate set of 45 trees, mistletoe volume was highly correlated with the volume of female, fruit-bearing plants, indicating that overall mistletoe volume is a good predictor of fruit availability. Variogram analysis showed that mistletoe volume was spatially autocorrelated up to approximately 250 m, a distance consistent with persistent territoriality of western bluebirds and philopatry of sons, which often breed next door to their parents and are more likely to remain home when their parents have abundant mistletoe. Using Ripley's K and O-ring analyses, we showed that mistletoe trees were aggregated for distances up to 558 m, but for distances between 558 and 724 m the O-ring analysis deviated from Ripley's K in showing repulsion rather than aggregation. While trees with mistletoe were aggregated at larger distances, mistletoe was spatially correlated at a smaller distance, consistent with what is expected based on persistent group territoriality of western bluebirds in winter and the extreme philopatry of their sons. PMID:25389971 Wilson, Ethan A.; Sullivan, Patrick J.; Dickinson, Janis L. 2014-01-01 412 While latitudinal patterns of genetic diversity are well known for many taxa in Europe, there has been little analysis of
Here we analyze longitudinal patterns in two aspects of diversity\\u000a (species richness and intraspecific genetic diversity) for two trophically related groups of organisms oaks (Fagaceae, genus\\u000a Quercus) and their associated gallwasps (Hymenoptera: Cynipidae) Rachel J. Atkinson; Antonis Rokas; Graham N. Stone 413 SciTech Connect This report provides summary information on Oak Ridge National Laboratory (ORNL) Environmental Restoration (ER) sites as listed in the Oak Ridge Reservation Federal Facility Agreement (FFA), dated January 1, 1992, Appendix C. The Oak Ridge National Laboratory was built in 1943 as part of the World War II Manhattan Project. The original mission of ORNL was to produce and chemically separate the first gram-quantities of plutonium as part of the national effort to produce the atomic bomb. The current mission of ORNL is to provide applied research and development in support of the U.S. Department of Energy (DOE) programs in nuclear fusion and fission, energy conservation, fossil fuels, and other energy technologies and to perform basic scientific research in selected areas of the physical, life, and environmental sciences. ER is also tasked with clean up or mitigation of environmental impacts resulting from past waste management practices on portions of the approximately 37,000 acres within the Oak Ridge Reservation (ORR). Other installations located within the ORR are the Gaseous Diffusion Plant (K-25) and the Y-12 plant. The remedial action strategy currently integrates state and federal regulations for efficient compliance and approaches for both investigations and remediation efforts on a Waste Area Grouping (WAG) basis. As defined in the ORR FFA Quarterly Report July - September 1995, a WAG is a grouping of potentially contaminated sites based on drainage area and similar waste characteristics. 
These contaminated sites are further divided into four categories based on existing information concerning whether the data are generated for scoping or remedial investigation (RI) purposes. These areas are as follows: (1) Operable Units (OU); (2) Characterization Areas (CA); (3) Remedial Site Evaluation (RSE) Areas; and (4) Removal Site Evaluation (RmSE) Areas. Kuhaida, A.J. Jr.; Parker, A.F. 1997-02-01 414 There is great interest in understanding how rangeland management practices affect the long-term sustainability of California oak woodland ecosystems through their influence on nutrient cycling. This study examines the effects of oak trees and low to moderate intensity grazing on soil properties and nutrient pools in a blue oak (Quercus douglasii H.&A.) woodland in the Sierra Nevada foothills of northern California. Four combinations of vegetation and management were investigated: R. A. DAHLGREN; M. J. SINGER; X. HUANG 1997-01-01 415 SciTech Connect This document contains findings identified during the Tiger Team Compliance Assessment of the Department of Energy's (DOE's) Y-12 Plant in Oak Ridge, Tennessee. The Y-12 Plant Tiger Team Compliance Assessment is comprehensive in scope. It covers the Environmental, Safety, and Health (including Occupational Safety and Health Administration (OSHA) compliance), and Management areas and determines the plant's compliance with applicable federal (including DOE), state, and local regulations and requirements. 4 figs., 12 tabs. Not Available 1990-02-01 416 SciTech Connect This report, Site Descriptions of Environmental Restoration Units at the Oak Ridge K-25 Site, Oak Ridge, Tennessee, is being prepared to assimilate information on sites included in the Environmental Restoration (ER) Program of the K-25 Site, one of three major installations on the Oak Ridge Reservation (ORR) built during World War II as part of the Manhattan Project.
The information included in this report will be used to establish program priorities so that resources allotted to the K-25 ER Program can be best used to decrease any risk to humans or the environment, and to determine the sequence in which any remedial activities should be conducted. This document will be updated periodically in both paper and Internet versions. Units within this report are described in individual data sheets arranged alphanumerically. Each data sheet includes entries on project status, unit location, dimensions and capacity, dates operated, present function, lifecycle operation, waste characteristics, site status, media of concern, comments, and references. Each data sheet is accompanied by a photograph of the unit, and each unit is located on one of 13 area maps. These areas, along with the sub-area, unit, and sub-unit breakdowns within them, are outlined in Appendix A. Appendix B is a summary of information on remote aerial sensing and its applicability to the ER program. Goddard, P.L.; Legeay, A.J.; Pesce, D.S.; Stanley, A.M. 1995-11-01 417 SciTech Connect The first two volumes of this report are devoted to a presentation of environmental data and supporting narratives for the US Department of Energy's (DOE's) Oak Ridge Reservation (ORR) and surrounding environs during 1989. Volume 1 includes all narrative descriptions, summaries, and conclusions and is intended to be a stand-alone'' report for the ORR for the reader who does not want to review in detail all of the 1989 data. Volume 2 includes the detailed data summarized in a format to ensure that all environmental data are represented in the tables. Narratives are not included in Vol. 2. The tables in Vol. 2 are addressed in Vol. 1. For this reason, Vol. 2 cannot be considered a stand-alone report but is intended to be used in conjunction with Vol. 1. 16 figs., 194 tabs. Jacobs, V.A.; Wilson, A.R. (eds.) 
1990-10-01 418 SciTech Connect The first two volumes of this report are devoted to a presentation of environmental data and supporting narratives for the US Department of Energy's (DOE's) Oak Ridge Reservation (ORR) and surrounding environs during 1990. Volume 1 includes all narrative descriptions, summaries, and conclusions and is intended to be a stand-alone'' report for the ORR for the reader who does not want to review in detail all of the 1990 data. Volume 2 includes the detailed data summarized in a format to ensure that all environmental data are represented in the tables. Narratives are not included in Vol. 2. The tables in Vol. 2 are addressed in Vol. 1. For this reason, Vol. 2 cannot be considered a stand-alone report but is intended to be used in conjunction with Vol. 1. Wilson, A.R. (ed.) 1991-09-01 419 SciTech Connect This report has been prepared to provide background information on White Oak Lake for the Oak Ridge National Laboratory Environmental and Safety Report. The paper presents the history of White Oak Dam and Lake and describes the hydrological conditions of the White Oak Creek watershed. Past and present sediment and water data are included; pathway analyses are described in detail. Oakes, T.W.; Kelly, B.A.; Ohnesorge, W.F.; Eldridge, J.S.; Bird, J.C.; Shank, K.E.; Tsakeres, F.S. 1982-03-01 420 E-print Network SAMPLING Optimization of Sampling Methods for Within-Tree Populations of Red Oak Borer, Enaphalodes) mortality. Twenty-four northern red oak trees, Quercus rubra L., infested with red oak borer, were felled of 480 examined trees, and Donley and Rast (1984) examined the entire bole of 144 oaks in Pennsylvania Stephen, Frederick M. 421 E-print Network (Lithocarpus densiflorus) and Shreve's oak (Quercus parvula var. Shrevei). 
Native stands of mature trees were263 Effect of Phosphonate Treatments on Sudden Oak Death in Tanoak and Shreve's Oak1 Doug Schmidt2 to evaluate the effectiveness of phosphonate chemical treatments for control of sudden oak death in tanoak Standiford, Richard B. 422 E-print Network . In the oak savannas of California the various species of native oaks occur as individual large trees on steeper slopes and in draws. Young trees are notably few or absent in oak savannas. The predominance and the strange forms of oaks (and other trees) are the result of plant husbandry practices by the local Indians Standiford, Richard B. 423 E-print Network ................................................................................................ 11 Figure 3. Small Fastigiate Oaks at an infestation site in 2007. Tree centre right with feeding-mature trees, in both cases a form of Pedunculate Oak known as Cypress Oak (Quercus robur f. fastigiataReport on survey for Oak Processionary Moth Thaumetopoea processionea (Linnaeus) (Lepidoptera 424 E-print Network occurred in oak from the 1920s. A survey in 1987 in the UK has shown that 18% of oak trees had less than 10Note Drought susceptibility and xylem dysfunction in seedlings of 4 European oak species KH Higgs V June 1994; accepted 8 March 1995) Summary Seedlings of oak (Quercus robur, Q petraea, Q cerris Paris-Sud XI, Université de 425 E-print Network specific structural characteristics of Gambel oak trees, clumps, and stands that may be important to birds oak 7­ 15 cm in diameter at breast height. We also found evidence that large Gambel oak trees (!23 cm density of large pine trees !23 cm in diameter at breast height was low. Because large oak trees are rare 426 E-print Network morphotypes including Cenococcum geophilum. 
Infection rates on oak roots were lowest on trees growing Biodiversity of Mycorrhizas on Garry Oak (Quercus garryana) in a Southern Oregon Savanna1 Lori L Garry oak or Oregon white oak (Quercus garryana) is the dominant vegetation on the Whetstone Savanna Standiford, Richard B. 427 E-print Network interannual to multi-decadal growth variations of 555 oak trees from Central-West Germany. A network of 13 Complex climate controls on 20th century oak growth in Central-West Germany DAGMAR A. FRIEDRICHS,1 pedunculate oak (Quercus robur L.) and 33 sessile oak (Quercus petraea (Matt.) Liebl.) site chronologies Esper, Jan 428 E-print Network dollars. Anoka County had nearly 3 million oak trees and 990 active infection centers in 2008. If oak wilt 2009 USDA Research Forum on Invasive Species 31 AN ECONOMIC IMPACT ASSESSMENT FOR OAK WILT IN ANOKA, or discontinued. Few analyses have attempted to carefully quantify those damages, especially for forest pests. Oak Fried, Jeremy S. 429 E-print Network on oak trees in Brent, Ealing, Hounslow and Richmond boroughs. Its caterpillars feed on oak leaves after their habit of forming 'nose-to-tail' processions. A silken nest on the trunk of an oak tree Pest The oak processionary moth (Thaumetopoea processionea), a native of mainland Europe, is breeding 430 E-print Network mortality had tanoak seedlings, which could potentially grow to replace dead trees. Coast live oak seedlings were present in about 80 percent of all plots with coast live oak trees. About 6 to 8 percent of plots with coast live oak trees had mortality but no coast live oak seedlings. Less than half of all plots Standiford, Richard B. 431 E-print Network DECLINING AND DEAD TREES TYPE OF TREES DECLINING AND DYING (ALL THAT APPLY): WHITE OAKS RED OAKS OTHER OF WHITE OAK TREES DECLINING AND DEAD: FEW MANY ALL NUMBER OF OTHER TREES DECLINING AND DEAD: NONE FEW MANY White Oak Rapid Death Survey OBSERVER NAME AND PHONE / EMAIL DATE LOCATION OF DECLINING AND DEAD Noble, James S.
432 SciTech Connect Waste Area Grouping 2 (WAG 2) of the Oak Ridge National Laboratory (ORNL) is located in the White Oak Creek Watershed and is composed of White Oak Creek Embayment, White Oak Lake and associated floodplain, and portions of White Oak Creek (WOC) and Melton Branch downstream of ORNL facilities. Contaminants leaving other ORNL WAGs in the WOC watershed pass through WAG 2 before entering the Clinch River. Health and ecological risk screening analyses were conducted on contaminants in WAG 2 to determine which contaminants were of concern and would require immediate consideration for remedial action and which contaminants could be assigned a low priority or further study. For screening purposes, WAG 2 was divided into four geographic reaches: Reach 1, a portion of WOC; Reach 2, Melton Branch; Reach 3, White Oak Lake and the floodplain area to the weirs on WOC and Melton Branch; and Reach 4, the White Oak Creek Embayment, for which an independent screening analysis has been completed. Screening analyses were conducted using data bases compiled from existing data on carcinogenic and noncarcinogenic contaminants, which included organics, inorganics, and radionuclides. Contaminants for which at least one sample had a concentration above the level of detection were placed in a detectable contaminants data base. Those contaminants for which all samples were below the level of detection were placed in a nondetectable contaminants data base. Blaylock, B.G.; Frank, M.L.; Hoffman, F.O.; Hook, L.A.; Suter, G.W.; Watts, J.A. 1992-07-01 433 E-print Network % oak savanna (4-43 trees ha-1), 23% oak woodland (> 43 trees ha-1), 7% oak barrens (1-3 trees ha-1), and 27% wet prairie (tree ha-1; Brewer & Vankat 2001). Oak savannas in this region not converted FIFTEEN YEARS OF PLANT COMMUNITY DYNAMICS DURING A NORTHWEST OHIO OAK SAVANNA RESTORATION Scott R Abella, Scott R. 434 E-print Network s canpesteu live. and slip. . '. gastr -', ;. is. l'ho matuze oak . oliectad tr.
e lat, cer nart of'!azy ard d. led was ur;pala'- able, az!d the amount af zaocit nuppets had ! o be limited ir! ar;er to pet, tnem ta eat enough of the oak, The emacra! i... Dollahite, James Walton 2012-06-07 435 PubMed Poison ivy, poison oak, and poison sumac are now classified in the genus Toxicodendron which is readily distinguished from Rhus. In the United States, there are two species of poison oak, Toxicodendron diversilobum (western poison oak) and Toxicodendron toxicarium (eastern poison oak). There are also two species of poison ivy, Toxicodendron rydbergii, a nonclimbing subshrub, and Toxicodendron radicans, which may be either a shrub or a climbing vine. There are nine subspecies of T. radicans, six of which are found in the United States. One species of poison sumac, Toxicodendron vernix, occurs in the United States. Distinguishing features of these plants and characteristics that separate Toxicodendron from Rhus are outlined in the text and illustrated in color plates. PMID:6451640 Guin, J D; Gillis, W T; Beaman, J H 1981-01-01 436 E-print Network Transportation Decision Support Systems Oak Ridge National Laboratory managed by UT-Battelle, LLC Passenger Flows Supply Chain Efficiency Transportation: Energy Environment Safety Security Vehicle and implementation of automated transportation decision support models for the scheduling and routing of cargo 437 Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey 19. DETAIL VIEW OF SKIFF BOW WITH OAK STEM AND FRAMES PLANKED IN CEDAR USING COPPER CLINCH NAILS. TRANSOM OF SECOND SKIFF CAN BE SEEN BACKGROUND.
- Lowell's Boat Shop, 459 Main Street, Amesbury, Essex County, MA 438 E-print Network , and American Superconductor Corporation Partnerships:Partnerships: X. Li, M. Rupich, D. Verebelyi, C. Thieme, U. Schoop, T. Kodenkandath, W. Zhang, M. Teplitsky, and J. Scudiere (AMSC) #12;OAK RIDGE NATIONAL LABORATORY 439 E-print Network of a holm oak and up to complete vegetation closure. The best marker appeared to be Phillyrea latifolia author who noted the existence of chablis in southern France and regeneration in these natural openings Paris-Sud XI, Université de 440 E-print Network , and Marcus Selig1 1 Department of Forestry and Natural Resources, Purdue University 2 Division of Forestry root systems (Johnson et al. 2002). Therefore, clearcutting often results in the overtopping of oaks 441 Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey VIEW ALONG SEVENTEENTH STREET. NOTE THE MATURE SILK OAK TREES LINING THE STREET, WHICH DO NOT PROVIDE A CANOPY VIEW FACING NORTHWEST. - Hickam Field, Hickam Historic Housing, Honolulu, Honolulu County, HI 442 Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey OUTER RIM OF CIRCLE, WITH LIVE OAK TREE AT LEFT FOREGROUND AND CEMETERY SECTION 25 IN BACKGROUND. VIEW TO WEST. - Barrancas National Cemetery, Naval Air Station, 80 Hovey Road, Pensacola, Escambia County, FL 443 ScienceCinema ORNL Director Thom Mason explains the groundbreaking work in neutron sciences, supercomputing, clean energy, advanced materials, nuclear research, and global security taking place at the Department of Energy's Office of Science laboratory in Oak Ridge, Tenn. Mason, Thomas 2013-02-25 444 E-print Network , yellow indiangrass, gulfdune paspalum, and Dichanthelium species. Major forb species are western ragweed, plains wild- indigo, Texas croton, hairy bonamia, and Cassia specles. 
The major woody species is live oak, which occurs as scattered large trees... Scifres, C.J.; Kelly, D.M. 1979-01-01 445 Forest management to improve stand vigor and growth requires foresters to distinguish between trees that will respond favorably to treatment and trees that are likely to grow poorly or die. The Missouri Ozark Forest Ecosystem Project provided an opportunity to quantify factors associated with oak mortality in mature, fully-stocked, second-growth forests. We monitored more than 24,000 permanently-tagged oak trees from John M. Kabrick; Stephen R. Shifley; Randy G. Jensen; Zhaofei Fan; David R. Larsen 446 Leaf-mining Stilbosis quadricustatella larvae are distributed non-randomly within leaves of their host plants, sand live oak (Quercus geminata) and water oak (Q. nigra), in north Florida. Fewer mines are found together on the same side of the mid-vein than separated, on opposite sides of the mid-vein. Larvae do not normally cross the mid-vein but create small blotch-like mines along subsidiary P. D. Stiling; D. Simberloff; L. C. Anderson 1987-01-01 447 SciTech Connect The production of radioisotopes has been one of the basic activities at Oak Ridge since the end of World War II. The importance of this work was best described by Alvin Weinberg, former Laboratory Director, when he wrote ... If God has a golden book and writes down what it is that Oak Ridge National Laboratory did that had the biggest influence on science, I would guess that was the production and distribution of isotopes. Radioisotopes production continues to be an important aspect of Oak Ridge programs today and of those planned for the future.
Past activities, current projects, and future plans and potentials will be described briefly in this paper. Also, some of the major issues facing the continued production of radioisotopes will be described. The scope of the program has always been primarily that of process development, followed by special batch-type productions, where no other supply exists. The technology developed has been available for adoption by US commercial corporations, and in cases where this has occurred, Oak Ridge has withdrawn as a supplier of the particular isotopes involved. One method of production that will not be described is that of target bombardment with an accelerator. This method was used at Oak Ridge prior to 1978 in the 86-inch Cyclotron. However, this method has not been used at Oak Ridge since then for radioisotope production, except as a research tool. Collins, E.D.; Aaron, W.S.; Alexander, C.W.; Bigelow, J.E.; Parks, J.T.; Tracy, J.G.; Wham, R.M. 1994-09-01 448 SciTech Connect Geophysical data were acquired at a site on the Oak Ridge Reservation, Tennessee to determine the characteristics of a mud-filled void and to evaluate the effectiveness of a suite of geophysical methods at the site. Methods that were used included microgravity, electrical resistivity, and seismic refraction. Both microgravity and resistivity were able to detect the void as well as overlying structural features. The seismic data provide bedrock depth control for the other two methods, and show other effects that are caused by the void. Carpenter, P.J.; Carr, B.J.; Doll, W.E.; Kaufmann, R.D.; Nyquist, J.E. 1999-11-14 449 E-print Network ,r\\sc\\ such as bluestems, beaked panicum, the pas- III~~III~, Intliangrass, switchgrass, the dropseeds and iltllc~ yootl grasses has been reduced materially. In- tcrl~\\c I~rush clearing and other control measures cnlrl)lctl ~ith suitable grazing management are nec- O... 
of such associated woody species as winged elm, hickory and haw and by the presence of panicums and paspalums in the forage cover. CENTRAL BASIN Post and blackjack oaks characterize the sandy, noncalcareous soils of this heterogenous area derived from granites... Darrow, Robert A.; McCully, Wayne G. 1959-01-01 450 SciTech Connect The Background Soil characterization Project (BSCP) will provide background concentration levels of selected metals, organic compounds, and radionuclides in soils from uncontaminated on-site areas at the Oak Ridge Reservation (ORR), and off-site in the western part of Roane County and the eastern part of Anderson County. The BSCP will establish a database, recommend how to use the data for contaminated site assessment, and provide estimates of the potential human health and environmental risks associated with the background level concentrations of potentially hazardous constituents. Not Available 1992-08-01 451 E-print Network The proceedings of the sudden oak death second science symposium: the state of our knowledge 559 The Effects of Sudden Oak Death on Wildlife-Can Anything Be Learned From the American Chestnut Blight?1 area. In 1995, Sudden Oak Death (SOD) was identified near Mill Valley, California. Caused by the fungus Standiford, Richard B. 452 Little is known about environmental controls on vessel features in ring-porous tree species. Our objectives were to assess (i) the association between tree-ring descriptors (vessels and width) and climate in two oak species, white oak, Quercus alba L., and red oak, Quercus rubra L., and (ii) the utility of vessel series in climate reconstruc- tion. The study was conducted in J. C. Tardif; F. Conciatori 2006-01-01 453 San Luis Obispo County contains oak woodlands at varying levels of risk of sudden oak death (SOD), caused by a fungal pathogen (Phytophthora ramorum) that in the past decade has killed thousands of oak (Quercus spp.) and tanoak (Lithocarpus densiflorus) trees in California. 
SOD was most recently detected 16 km north of the San Luis Obispo County line. Low-risk woodlands Douglas J. Tempel; William D. Tietje; Donald E. Winslow 454 We overview the recent development of oak dendrochronology in Europe related to archaeology and art-history. Tree-ring series of European oaks (Quercus robur and Q. petraea) have provided a reliable framework for chronometric dating and reconstruction of past climate and environment. To date, long oak chronologies cover almost the entire Holocene, up to 8480 BC and the network over the entire Kristof Haneca; Katarina ?ufar; Hans Beeckman 2009-01-01 455 Sudden oak death, caused by Phytophthora ramorum, is widely established in mesic forests of coastal central and northern California. In 2000, we placed 18 plots in two Marin County sites to monitor disease progression in coast live oaks (Quercus agrifolia), California black oaks (Q. kelloggii), and tanoaks (Lithocarpus densiflorus), the species that are most consistently killed by the pathogen in Brice A. McPherson; Sylvia R. Mori; David L. Wood; Maggi Kelly; Andrew J. Storer; Pavel Svihra; Richard B. Standiford 2010-01-01 456 SciTech Connect The purpose of this report is to provide information to the public about the impact of the US Department of Energy's (DOE's) facilities located on the Oak Ridge Reservation (ORR) on the public and the environment. It describes the environmental surveillance and monitoring activities conducted at and around the DOE facilities operated by Martin Marietta Energy Systems, Inc. Preparation and publication of this report is in accordance with DOE Order 5400.1. The order specifies a publication deadline of June of the following year for each calendar year of data. The primary objective of this report is to summarize all information collected for the previous calendar year regarding effluent monitoring, environmental surveillance, and estimates of radiation and chemical dose to the surrounding population. 
When multiple years of information are available for a program, trends are also evaluated. The first seven sections of Volume 1 of this report address this objective. The last three sections of Volume 1 provide information on solid waste management, special environmental studies, and quality assurance programs. Wilson, A.R. (ed.) 1991-09-01 457 PubMed Leaf phenology is important to herbivores, but the timing and extent of leaf drop has not played an important role in our understanding of herbivore interactions with deciduous plants. Using phylogenetic general least squares regression, we compared the phenology of leaves of 55 oak species in a common garden with the abundance of leaf miners on those trees. Mine abundance was highest on trees with an intermediate leaf retention index, i.e. trees that lost most, but not all, of their leaves for 2-3 months. The leaves of more evergreen species were more heavily sclerotized, and sclerotized leaves accumulated fewer mines in the summer. Leaves of more deciduous species also accumulated fewer mines in the summer, and this was consistent with the idea that trees reduce overwintering herbivores by shedding leaves. Trees with a later leaf set and slower leaf maturation accumulated fewer herbivores. We propose that both leaf drop and early leaf phenology strongly affect herbivore abundance and select for differences in plant defense. Leaf drop may allow trees to dispose of their herbivores so that the herbivores must recolonize in spring, but trees with the longest leaf retention also have the greatest direct defenses against herbivores. PMID:23774946 Pearse, Ian S; Karban, Richard 2013-11-01 458 SciTech Connect This Quality Assurance Plan (QAP) is concerned with design and construction (Sect. 2) and characterization and monitoring (Sect. 3). The basis for Sect. 
2 is the Quality Assurance Plan for the Design and Construction of Waste Area Grouping 6 Closure at Oak Ridge National Laboratory, Oak Ridge, Tennessee, and the basis for Sect. 3 is the Environmental Restoration Quality Program Plan. Combining the two areas into one plan gives a single, overall document that explains the requirements and from which the individual QAPs and quality assurance project plans can be written. The Waste Area Grouping (WAG) 6 QAP establishes the procedures and requirements to be implemented for control of quality-related activities for the WAG 6 project. Quality Assurance (QA) activities are subject to requirements detailed in the Martin Marietta Energy Systems, Inc. (Energy Systems), QA Program and the Environmental Restoration (ER) QA Program, as well as to other quality requirements. These activities may be performed by Energy Systems organizations, subcontractors to Energy Systems, and architect-engineer (A-E) under prime contract to the US Department of Energy (DOE), or a construction manager under prime contract to DOE. This plan specifies the overall Energy Systems quality requirements for the project. The WAG 6 QAP will be supplemented by subproject QAPs that will identify additional requirements pertaining to each subproject. Not Available 1994-01-01 459 PubMed In this paper, silkworm exuviae (SE) waste, an agricultural waste available in large quantity in China, was utilized as low-cost adsorbent to remove basic dye (methylene blue, MB) from aqueous solution by adsorption. Kinetic data and sorption equilibrium isotherms were carried out in batch process. The adsorption kinetic experiments revealed that MB adsorption onto SE for different initial dye concentrations all followed pseudo-second order kinetics and were mainly controlled by the film diffusion mechanism. 
Batch equilibrium results at different temperatures suggest that MB adsorption onto SE can be described perfectly with Freundlich isotherm model compared with Langmuir and D-R isotherm models, and the characteristic parameters for each adsorption isotherm were also determined. Thermodynamic parameters calculated show the adsorption process has been found to be endothermic in nature. The analysis for the values of the mean free energies of adsorption (E(a)), the Gibbs free energy (?G(0)) and the effect of ionic strength all demonstrate that the whole adsorption process is mainly dominated by ion-exchange mechanism, which has also been verified by variations in FT-IR spectra and pH value before and after adsorption and desorption studies. The results reveal that SE can be employed as a low-cost alternative to other adsorbents for MB adsorption. PMID:21185648 Chen, Hao; Zhao, Jie; Dai, Guoliang 2011-02-28 460 PubMed Central Lipid transfer particle (LTP) is a high-molecular-weight, very high-density lipoprotein known to catalyze the transfer of lipids between a variety of lipoproteins, including both insects and vertebrates. Studying the biosynthesis and regulation pathways of LTP in detail has not been possible due to a lack of information regarding the apoproteins. Here, we sequenced the cDNA and deduced amino acid sequences for three apoproteins of LTP from the silkworm (Bombyx mori). The three subunit proteins of the LTP are coded by two genes, apoLTP-II/I and apoLTP-III. ApoLTP-I and apoLTP-II are predicted to be generated by posttranslational cleavage of the precursor pr
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5357569456100464, "perplexity": 16385.32727631445}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936462141.68/warc/CC-MAIN-20150226074102-00128-ip-10-28-5-156.ec2.internal.warc.gz"}
https://wiki.mahara.org/index.php/System_Administrator's_Guide/Installing_Mahara
This is a guide to installing Mahara. It's mainly targeted at the main platform on which people will install it (Linux with Apache), though Mahara can be installed on other platforms. Mahara is a web application. This means it's not just an executable file you can download and run. You need a server to put it on (you can use your desktop as a server if you're just trialling it). You'll need to install other software for it to run too - such as the Apache web server and PostgreSQL database system. Another alternative is to use shared hosting (see below for more information). If you're running Debian/Ubuntu, a lot of the required software (and even Mahara itself) can be installed using apt-get. If you're using some other Linux distribution, you may be able to use your distribution-specific install tool also.

## Dependencies

Mahara is mainly designed for use on Linux, using the Apache web server, PostgreSQL database server and PHP. We also support its use with the MySQL database server. In addition, members of the community have successfully got Mahara running on Mac OSX and Windows, and have also managed to use nginx instead of Apache. The Mahara developers fix bugs found that prevent running on these platforms, but don't explicitly test them from day to day, so you may have less luck than using a Linux box with Apache.

## Get the code, and put on your webserver

In order for your Mahara to run, you need to put a copy of the code on your own server. You can either: Once you have the code, copy the contents of the htdocs/ directory to your webserver. It is best if you either:

• Copy the entire directory, then rename it to something like 'mahara' - so you will see your site at example.org/mahara/; or
• Copy the contents of the htdocs/ directory to the top level public directory (often called htdocs or public_html) of your webserver, so you can see your site at example.org.
If you are unfamiliar with Ubuntu and need a more detailed explanation about how to install the files and configure Apache, go to the step-by-step guide.

## Create the Database

You need to create a database for Mahara and ensure that the webserver is able to connect to it. Instructions for the command line follow; if you have access to a CPanel or similar software you can create the database using it instead. Your database must be UTF8 encoded. The commands below create a UTF8 encoded database; if you use CPanel etc., then you will have to make sure it creates you a UTF8 database.

Command line instructions for Postgres:

# Run these commands while su'd to the 'postgres' user
# If you need to create a database user:
# Actual database creation
createdb -O (username who will be connecting) -EUTF8 (databasename)

Command line instructions for MySQL:

mysql --user=root -p
create database (databasename) character set UTF8;

By the way, if possible, try to use PostgreSQL for your database if you can.

## Create the Dataroot Directory

This is a directory where Mahara will write files that are uploaded to it, as well as some other files it needs to run. The main point about this directory is that it is one that the web server can write to. The other main point is that it cannot be inside the directory where your Mahara code is! In fact, if you have a public_html directory, it should not be inside that at all. On your webserver, you will need to make this directory. If you have a public_html directory, make the directory alongside it. You can give it a name like 'maharadata'. Once you are done, you should have this directory structure (ignoring files):

.../yourusername/public_html
.../yourusername/maharadata

You will need to make the maharadata directory writable by the web server user. You can either change its owner to be the web server user, or you can chmod it to 777. The latter isn't recommended, but if you're on shared hosting it's what you'll probably have to do.
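As a concrete sketch of the dataroot setup above on a Linux host (the paths, and defaulting to the account's home directory, are illustrative assumptions; on a real server you would normally chown the directory to the web server user, often www-data):

```shell
#!/bin/sh
# Sketch: create the dataroot alongside public_html, OUTSIDE the web root.
# BASE stands in for the account's home directory (an assumption for this example).
BASE="${BASE:-$HOME}"
mkdir -p "$BASE/public_html" "$BASE/maharadata"
# Make the dataroot writable by the web server. Preferred (needs root):
#   chown www-data:www-data "$BASE/maharadata" && chmod 700 "$BASE/maharadata"
# Shared-hosting fallback (less secure):
chmod 777 "$BASE/maharadata"
ls -ld "$BASE/maharadata"
```

The key design point, as the guide says, is that maharadata sits next to public_html rather than inside it, so uploaded files are never directly web-accessible.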
FTP programs will allow you to chmod the directory.

## Create Mahara's config.php

In the Mahara htdocs directory is config-dist.php. You need to make a copy of this called config.php. You then need to go through the file and make changes where appropriate. The file is commented, and there are not many settings to change. However, take special note of the following settings:

• database connection details - ensure you include the database connection details by inserting the proper values for $cfg->dbname, $cfg->dbuser, $cfg->dbpass, $cfg->dbprefix.
• Note: You do not need to set dbprefix, unlike Moodle. Try not using it first, and see if it works. If you find you do need it, you may find it best to leave dbprefix empty and add the prefix directly to the $cfg->dbname and $cfg->dbuser values.
• dataroot - set this to the filesystem path to the directory you made that the web server user can write to. Please note this is a filesystem path, which on Linux will look like /path/to/your/directory and on Windows will look like C:\path\to\your\directory. This is NOT a web address such as http://example.org/maharadata!
• directorypermissions - if you're on a server where you do not have root access, you should change this from 0700 to 0777 or similar, so that you can download your dataroot later for backup purposes, and install language packs.

## Apache Configuration

Note: If you're on shared hosting, you probably are not able to change this, so you can ignore this section.

The simplest of Apache VirtualHost configurations will be sufficient:

<VirtualHost *:80>
    ServerName example.org
    DocumentRoot /path/to/mahara/htdocs
    ErrorLog /var/log/apache2/mahara.error.log
    CustomLog /var/log/apache2/mahara.access.log combined
    <Directory /path/to/mahara/htdocs>
        AllowOverride All
    </Directory>
</VirtualHost>

Please note that your apache configuration should contain no ServerAliases. Mahara expects to be accessed through ONE URL.
This gives certainty that cookies are set for the right domain, and also means that SSL certificates for the networking functionality can be generated for this one URL. If you use a server alias, you should expect to see problems like having to log in twice, and potentially SSO between your site and others breaking. However, you can still make both example.org and www.example.org work for your site, if you use a second VirtualHost directive as well as the one above:

<VirtualHost *:80>
    ServerName www.example.org
    # You _can_ add ServerAliases here if you want more than one URL to
    # redirect to your main site
    # ServerAlias foo.example.org
    Redirect Permanent / http://example.org/
</VirtualHost>

In your php.ini file (often found in /etc/php5/apache2/php.ini), make sure you have these:

register_globals off
magic_quotes_runtime off
magic_quotes_sybase off
magic_quotes_gpc off
log_errors on
allow_call_time_pass_reference off
post_max_size 50M

## Run the Installer

As of version 1.5.0 of Mahara, there are now two options for installing Mahara. Most users will want to use the web-based installer but more advanced users may wish to use the Command-Line Interface installer as an alternative.

### Run the Command-Line Interface Installer

Since version 1.5.0, Mahara has had a command-line installation and upgrade toolset which provides an excellent alternative to the web-based installer. The Command Line Interface (CLI) installer needs to be run as the same user that your web server will run as because it creates a number of directories and sets file ownerships within the document root. Navigate to the mahara/htdocs directory and run:

php admin/cli/install.php --adminpassword='examplepassword' --adminemail=youremailaddress

You can optionally specify the --verbose option to see exactly what the installation process does or --help for additional help on the options to the installer.

### What if the installation process breaks with an error?

Sometimes this can happen.
Normally this is because of some misconfiguration in your system - for example, you haven't granted your database user permission to create tables, or you aren't using a high enough version of your database (see the dependencies). Sometimes, it's because of a bug in the Mahara installer. In order to find out, you will need to check the error log for your webserver. Mahara dumps detailed information in there when a bug occurs. The message might show you that the problem is with your configuration, but if it looks like a problem with Mahara, you should make a bug report with the information from your logs, so we can fix it. If you were able to fix the problem, you might need to drop and re-create your database again, so you're starting the installation again without any of the previously installed tables in the way.

## Final Tasks (don't forget!): Email & Cron Job

1. You may wish to check that Mahara can send email by trying to register on your site. If you receive a registration email, all is well. If you get a message saying "Sorry, your registration attempt was unsuccessful. This is our fault, not yours...", then you will need to install a mail transport agent (such as sendmail, exim, postfix, nullmailer) on your server, or specify an outgoing (SMTP) mail server in your config.php (from version 1.4 mail settings can be configured in Configure Site -> Site options -> Email). See the troubleshooting page, or this thread for more details.

2. You will need to set up a cron job to hit htdocs/lib/cron.php every minute. Mahara implements its own cron internally, so hitting cron every minute is sufficient to make everything work. If you don't set up the cron job, you will find that RSS feeds will not update, and some email notifications won't be sent out, such as forum post notifications. You can set it up using the command line 'php' command to run the cron script, or by using a command line web browser such as lynx or w3m.
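For the command-line 'php' route, a crontab entry could look like the following (the php binary path, the Mahara install path, and the log file are illustrative assumptions to adjust for your server, not Mahara defaults):

```crontab
# Run Mahara's cron every minute via the PHP CLI; adjust paths for your server
* * * * * /usr/bin/php /path/to/mahara/htdocs/lib/cron.php >> /var/log/mahara-cron.log 2>&1
```

Redirecting the output like this also keeps cron chatter out of your Apache logs.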
Something like the following in a crontab file will be sufficient:

* * * * * curl http://your-mahara-site.org/lib/cron.php

If you run cron using curl, the cron output will be logged in your apache error log. If you wish to separate where it's logged to away from your apache logs (which is a particularly good idea, though slightly harder to set up), read the separate article about Mahara's cron.

There are a few things that can be done to increase the security of your installation by enabling virus scanning and extra spam protections. Have a look under Site options, Security settings.

## My Mahara is installed - now what?

Congratulations on getting your Mahara installed! Now read the Next Steps article to see what you can do now - for example, installing a new language pack and theme. And if you haven't already, now might be a good time to join the Mahara community - you can get help and ideas for your Mahara from a world-wide community of enthusiastic users. We hope to see you on the forums soon!

## Troubleshooting

If you are having problems installing Mahara, please check out the Installation Instructions Troubleshooting Page for answers to some common problems.

## Mahara and Shared Hosting

Shared hosting isn't as good as having a machine where you can have administrator access. If you would still like to install Mahara on shared hosting, a detailed guide with screen shots and video tutorials is available on http://mygreatlearningsite.com/. This guide will show you how to install Mahara via cPanel and is designed for users who don't have any previous experience with creating websites or setting up databases. You could also try Softaculous and let us know if that works for you as we have not tried it and don't know how well the installation process there works.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21709653735160828, "perplexity": 2384.2052659607934}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246636104.0/warc/CC-MAIN-20150417045716-00081-ip-10-235-10-82.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/167185-integral.html
# Math Help - integral

1. ## integral

find the value of ( b ) so that the line y=b divides the region bound by the graphs of the two functions f(x) = 9-x^2 , g(x) = 0 into regions of equal area

2. > find the value of ( b ) so that the line y=b divides the region bound by the graphs of the two functions f(x) = 9-x^2 , g(x) = 0 into regions of equal area

What have you tried to solve this?

3. sorry, I have no idea how to solve it

4. > sorry, I have no idea how to solve it

start by calculating the area of the region between $y = 9-x^2$ and the x-axis ... you'll need that piece of information to find the "half" area. what do you get?

5. the area of the region between f(x) = 9-x^2 and the x-axis equals 36

6. good, then let $y = b$ be the horizontal line that cuts the region's area in half. using symmetry, note that the area of the region in quadrant I is

$\displaystyle \int_0^3 9 - x^2 \, dx = 18$

since $y = 9-x^2$ , $x = \sqrt{9-y}$ , and ...

$\displaystyle \int_0^b \sqrt{9-y} \, dy = 9$

evaluate the integral and solve for $b$
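For completeness, evaluating that last integral and solving for $b$ (this final step is worked out here; the thread deliberately leaves it to the reader):

```latex
\int_0^b \sqrt{9-y}\,dy
  = \left[-\tfrac{2}{3}(9-y)^{3/2}\right]_0^b
  = 18 - \tfrac{2}{3}(9-b)^{3/2} = 9
\quad\Longrightarrow\quad
(9-b)^{3/2} = \tfrac{27}{2}
\quad\Longrightarrow\quad
b = 9 - \left(\tfrac{27}{2}\right)^{2/3}
  = 9\left(1 - 2^{-2/3}\right) \approx 3.33
```

As a sanity check, $0 < b < 9$, as it must be for a horizontal line cutting the region between the parabola and the x-axis.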
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.979309618473053, "perplexity": 647.7801808502564}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860112231.78/warc/CC-MAIN-20160428161512-00202-ip-10-239-7-51.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/311593/total-variation-of-a-fourier-series
# Total variation of a Fourier series

Let $f(x) = f(x+2\pi)$ be a bounded real function given by the Fourier series of the form $$f(x) = \sum_{k=1}^N a_k \sin(kx + \phi_k).$$ What is the total variation $V(f)$ of this function over one period? In this case, one should be able to use that $V(f) = \int |f'(x)|dx$ and that $$f'(x) = \sum_{k=1}^{N} k a_k \cos(kx + \phi_k),$$ but how? If instead the function is given by an infinite Fourier series, then what are the conditions on the $a_k$ terms for the total variation to be finite?

-

A periodic function is of bounded variation if and only if it is the antiderivative of a finite signed measure on $[0,2\pi)$ (or, better, on the circle $\mathbb T$) with total mass $0$. Therefore, $\sum_{n\in \mathbb Z} c_n e^{inx}$ is the Fourier series of a function of bounded variation if and only if $\sum_{n\in\mathbb Z} in c_n e^{inx}$ is the Fourier series of a finite signed measure. Let $b_n=i n c_n$ to simplify notation. The following result can be found, for example, in An Introduction to Harmonic Analysis by Katznelson. Theorem (Herglotz). $\sum_{n\in\mathbb Z} b_n e^{inx}$ is the Fourier series of a positive measure if and only if the sequence $(b_n)$ is positive definite. The latter means that $$\sum_{n,m}b_{n-m}z_n\overline{z_m}\ge 0\quad \text{ for all sequences }\ z_n\in \mathbb C \tag1$$ where only finitely many $z_n$ are nonzero. Hence, $\sum_{n\in \mathbb Z} c_n e^{inx}$ is the Fourier series of a function of bounded variation if and only if the sequence $(nc_n)$ is the difference of two positive definite sequences. I don't think this is a practical condition, but then, neither is (1).
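For the finite case, $V(f) = \int_0^{2\pi} |f'(x)|\,dx$ can be evaluated numerically as a sanity check. The following Python sketch is mine, not from the post; the function name and test coefficients are illustrative.

```python
import math

def total_variation(a, phi, n=200_000):
    """V(f) = ∫_0^{2π} |f'(x)| dx for f(x) = Σ_{k=1}^N a_k sin(kx + φ_k),
    with f'(x) = Σ k a_k cos(kx + φ_k), via a midpoint Riemann sum.
    a and phi list the coefficients and phases for k = 1..N."""
    def df(x):
        return sum(k * ak * math.cos(k * x + pk)
                   for k, (ak, pk) in enumerate(zip(a, phi), start=1))
    h = 2 * math.pi / n
    return sum(abs(df((i + 0.5) * h)) for i in range(n)) * h

# Sanity check: for a single harmonic a*sin(kx + φ), the variation over one
# period is 4*k*|a|, so f(x) = sin(x) gives V(f) = 4.
print(total_variation([1.0], [0.0]))
```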
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9884429574012756, "perplexity": 36.22163376198776}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010628283/warc/CC-MAIN-20140305091028-00068-ip-10-183-142-35.ec2.internal.warc.gz"}
https://www.mediatebc.com/torani-move-drjr/c96c66-statistically-significant-psychology
The test has been running for two months. What this means is that there is less than a 5% probability that the results happened just by random chance, and therefore a 95% probability that the results reflect a meaningful pattern in human psychology. In research, results can be statistically significant but not meaningful. The point of doing research and running statistical analyses on data is to find truth. This is closely related to Janet Shibley Hyde’s argument about sex differences (Hyde 2007). If this were just a chance event, it would only happen roughly one in 150 times, but the fact that it happened in your experiment makes you feel pretty confident that your result is significant. by Tabitha M. Powledge, Public Library of Science: Broadly speaking, statistical significance is assigned to a result when an event is found to be unlikely to have occurred by chance. If a test of significance gives a p-value lower than the α-level, the null hypothesis is rejected. Statistical significance comes from the bell curve. For any given statistical experiment – including A/B testing – statistical significance is based on several parameters: the confidence level (i.e. how sure you can be that the results are statistically relevant, e.g. 95%); your sample size (small effects in small samples tend to be unreliable); and your minimum detectable effect (i.e. the minimum effect that you want to observe with that experiment). With 93 studies in psychology and 16 in experimental economics (after excluding initial studies with P > 0.05), these numbers are suggestive of the potential gains in reproducibility that would accrue from the new threshold of P < 0.005 in these fields.
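The A/B-testing decision rule sketched above can be made concrete with a pooled two-proportion z-test. This Python sketch is illustrative only (the conversion counts are made up) and uses the normal approximation:

```python
import math

def two_proportion_p_value(x1, n1, x2, n2):
    """Two-tailed p-value for H0: p1 == p2, via the pooled two-proportion
    z-test under the normal approximation."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return math.erfc(abs(z) / math.sqrt(2))  # = 2 * P(Z > |z|)

# Hypothetical A/B test: 120/1000 conversions (control) vs 160/1000 (variant).
p = two_proportion_p_value(120, 1000, 160, 1000)
print(p < 0.05)  # significant at the usual 5% level
```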
You then run statistical tests on your observations. You use the standard in psychology for statistical testing that allows a 5 percent chance of getting a false positive result. A psychologist runs a study with three conditions and displays the resulting condition means in a line graph. The readers of the psychologist's article will want to know which condition means are statistically significantly different from one another. Smaller α-levels give greater confidence in the determination of significance, but run greater risks of failing to reject a false null hypothesis (a Type II error, or "false negative determination"), and so have less statistical power. If the CI for the odds ratio excludes 1, then your results are statistically significant. It is important to understand that statistical significance reflects the chance probability, not the magnitude or effect size of a difference or result. Yet another common pitfall often happens when a researcher writes the ambiguous statement "we found no statistically significant difference," which is then misquoted by others as "they found that there was no difference." Statistically significant results are those that are understood as not likely to have occurred purely by chance and thereby have other underlying causes for their occurrence - hopefully, the underlying causes you are trying to investigate! These statistical results indicate that an effect exists. Popular levels of significance are 5%, 1% and 0.1%. A result is statistically significant if it satisfies certain statistical criteria. Statistically Significant Definition: A result in a study can be viewed as statistically significant if the probability of achieving the result, or a result more extreme, by chance alone is less than the chosen significance level. We call that degree of confidence our confidence level, which demonstrates how sure we are that our data was not skewed by random chance.
Armstrong suggests authors should avoid tests of statistical significance; instead, they should report on effect sizes, confidence intervals, replications/extensions, and meta-analyses. More precisely, a study's defined significance level, denoted by α, is the probability of the study rejecting the null hypothesis, given that the null hypothesis was assumed to be true; and the p-value of a result, p, is the probability of obtaining a result at least as extreme, given that the null hypothesis is true. The confidence of a result (and its associated confidence interval) is not dependent on effect size alone. Statistical significance is a determination that a relationship between two or more variables is caused by something other than chance. Thus, it is safe to assume that the difference is due to the experimental manipulation or treatment. The threshold is set at 5% to ensure that there is a high probability that we make a correct decision and that our determination of statistical significance is an accurate reflection of reality. The situation occurs at the end of a study when the statistical figures relating to certain topics of study are calculated in the absence of qualitative aspects and other details that can be … We can call a result statistically significant when P < alpha. Technical note: In general, the more predictor variables you have in the model, the higher the likelihood that the F-statistic and corresponding p-value will be statistically significant. A common misconception is that a statistically significant result is always of practical significance, or demonstrates a large effect in the population. Talk about how your findings contrast with existing theories and previous research and emphasize that more research may be needed to reconcile these differences.
Technically, statistical significance is the probability of some result from a statistical test occurring by chance. We may only have a “snapshot” of observations from a more long-term process, or only a small subset of individuals from the population of interest. Most researchers work with samples rather than entire populations. A statistically significant result would be one where, after rigorous testing, you reach a certain degree of confidence in the results. A statistical significance expressed in units of σ can be converted into a value of α via the error function; the use of σ is motivated by the ubiquitous emergence of the Gaussian distribution in measurement uncertainties. Plain language should be used to describe effects based on the size of the effect and the quality of the evidence. The significance level is usually represented by the Greek symbol α (alpha). Significance comes down to the relationship between two crucial quantities, the p-value and the significance level (alpha). Most often, psychologists look for a probability of 5% or less that the results are due to chance, which means a 95% chance the results are not due to chance. And hopefully, when we conclude that an effect is not statistically significant, there really is no effect, and if we tested the entire population we would find no effect. "Not statistically significant" (for a relationship, difference in means, or difference in proportion) is one of the two possible outcomes of any study. A Priori Sample Size Estimation: Researchers should do a power analysis before they conduct their study to determine how many subjects to enroll.
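The a-priori power analysis mentioned above can be sketched for the simplest case, a two-sided two-sample comparison of means under the normal approximation. This is an illustration of the standard formula, not a procedure from the article:

```python
import math
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Sample size per group for a two-sided, two-sample z-test:
    n = 2 * ((z_{1-α/2} + z_{power}) * σ / Δ)², rounded up."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)  # ≈ 1.96 for α = 0.05
    z_beta = nd.inv_cdf(power)           # ≈ 0.84 for 80% power
    return math.ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# Detecting a medium effect (Δ = 0.5σ) with α = .05 and 80% power:
print(n_per_group(0.5, 1.0))  # about 63 per group under this approximation
```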
If the p value is less than 5% (p < 0.05), we identify the result as statistically significant. A significant difference between two groups or two points in time means that there is a measurable difference between the groups and that, statistically, the probability of obtaining that difference by chance is very small (usually less than 5%). In one study, 60% of a sample of professional researchers thought that a p value of .01—for an independent-samples t-test with 20 participants in each sample—meant there was a 99% chance of replicating the statistically significant result (Oakes, 1986) [4]. And, importantly, the p-value should be quoted whether or not it is judged to be significant. They point out that "insignificance" does not mean unimportant, and propose that the scientific community should abandon usage of the test altogether, as it can cause false hypotheses to be accepted and true hypotheses to be rejected.[6][1] However, modern statistical advice is that, where the outcome of a test is essentially the final outcome of an experiment or other study, the p-value should be quoted explicitly. It indicates strong evidence against the null hypothesis, as there is less than a 5% probability the null is correct (and the results are random). Therefore, we reject the null hypothesis and accept the alternative hypothesis. Summary: Beware of placing too much weight on traditional values of α, such as α = 0.05. The most commonly agreed border in significance testing is at the P value 0.05. None were significant, but after including tree age as an independent variable, suddenly elevation and slope become statistically significant. As a marketer, you want to be certain about the results you get… However, both t-values are equally unlikely under H0. The first two, .03 and .001, would be statistically significant. In psychology, nonparametric tests are more usual than parametric tests.
In more complicated, but practically important cases, the significance level of a test is a probability such that the probability of making a decision to reject the null hypothesis when the null hypothesis is actually true is no more than the stated probability. Similarly, if the P value is more than 5% (p > 0.05), we identify the result as statistically insignificant. Two-tailed statistical significance is the probability of finding a given absolute deviation from the null hypothesis - or a larger one - in a sample. For a t test, very small as well as very large t-values are unlikely under H0. When you hear that the results of an experiment were statistically significant, it means that you can be 95% sure the results are not due to chance... this is a good thing. "A statistically significant difference" simply means there is statistical evidence that there is a difference; it does not mean the difference is necessarily large, important, or significant in the common meaning of the word. Fixed significance levels such as those mentioned above may be regarded as useful in exploratory data analyses. This allows for those applications where the probability of deciding to reject may be much smaller than the significance level for some sets of assumptions encompassed within the null hypothesis. Statistical significance means that a result from testing or experimenting is not likely to occur randomly or by chance, but is instead likely to be attributable to a specific cause. Even a very weak result can be statistically significant if it is based on a large enough sample. If we continue the test, and if we assume that the data keeps coming in the same proportions… It’s 50 shades of gray all over again. However, you’ll need to use subject area expertise to determine whether this effect is important in the real world to determine practical significance.
It’s hard to say and harder to understand. Yet it’s one of the most common phrases heard when dealing with quantitative methods. It’s possible that each predictor variable is not significant and yet the F-test says that all of the predictor variables combined are jointly significant. Significance testing is fundamental in identifying whether a relationship exists between two or more variables in psychology research. The selection of an α-level inevitably involves a compromise between significance and power, and consequently between the Type I error and the Type II error. In order to do this, you have to take lots of steps to make sure you set up good experiments, use good measures, measure the correct variables, etc., and you have to determine if the findings you get occurred because you ran a good study or by some fluke. In terms of α, this statement is equivalent to saying that "assuming the theory is true, the likelihood of obtaining the experimental result by coincidence is 0.27%" (since 1 − erf(3/√2) = 0.0027). Failing to find evidence that there is a difference does not constitute evidence that there is no difference. The decision is often made using the p-value: if the p-value is less than the significance level, then the null hypothesis is rejected. For example, there may be potential for measurement errors (even your own body temperature can fluctuate by almost 1°F over the course of the day). In such cases, how can we determine whether patterns we see in our small set of data are convincing evidence of a systematic effect? Popular levels of significance are 5%, 1% and 0.1%.
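The σ-to-α conversion mentioned above (e.g. the 0.27% figure for 3σ) is just the two-sided tail probability of a Gaussian; a short Python illustration:

```python
import math

def sigma_to_alpha(n_sigma):
    """Two-sided tail probability of an n-sigma Gaussian deviation:
    α = 1 − erf(n/√2) = erfc(n/√2)."""
    return math.erfc(n_sigma / math.sqrt(2))

print(round(sigma_to_alpha(3), 4))  # 0.0027, the "0.27%" quoted above
```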
In statistics, a result is called statistically significant if it is unlikely to have occurred by chance. You will also want to discuss the implications of your non-significant findings to your area of research. A number of attempts failed to find empirical evidence supporting the use of significance tests. In these cases p-values are adjusted in order to control either the false discovery rate or the familywise error rate. [2][3] See Bayes factor for details. Significance is assessed by comparing the probability that the data demonstrated the effect by chance against the probability of a real connection. If the sample size is large and the noise is low, a small effect size can be measured with great confidence. Such results are informally referred to as 'statistically significant'. In biomedical research, 96% of a sample of recent papers claim statistically significant results. The smaller the p-value, the more significant the result is said to be. Probability refers to the likelihood of an event occurring. Online marketers seek more accurate, proven methods of running online experiments; with the rise of A/B testing, digital marketers have started using statistics more and more. Statistical significance can be considered to be the confidence one has in a given result. In medicine, small effect sizes (reflected by small increases of risk) are often considered clinically relevant and are frequently used to guide treatment decisions (if there is great confidence in them). A p-value at or below 0.05 is conventionally called statistically significant and indicates strong evidence against the null hypothesis; still, it doesn't make sense to treat α = 0.05 as a universal rule for what is significant. This is to allow maximum information to be transferred from a summary of the study into meta-analyses. The p-value in significance testing indicates the probability that the observed effect is caused by chance. In a comparison study, significance is dependent on the relative difference between the groups compared, the number of measurements, and the noise associated with the measurement. When interpreting results, it is important to note what, precisely, is being tested statistically. Expressions such as "borderline significant" should not be used in EPOC reviews. A related misinterpretation is that 1 − p equals the probability of replicating a statistically significant result. There are two flavours of "significant": statistical versus clinical. This principle is sometimes described by the maxim "Absence of evidence is not evidence of absence."
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.826396644115448, "perplexity": 1023.7902889274379}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662564830.55/warc/CC-MAIN-20220524045003-20220524075003-00506.warc.gz"}
http://annals.math.princeton.edu/2014/180-2/p07
# On self-similar sets with overlaps and inverse theorems for entropy

### Abstract

We study the dimension of self-similar sets and measures on the line. We show that if the dimension is less than the generic bound of $\min\{1,s\}$, where $s$ is the similarity dimension, then there are superexponentially close cylinders at all small enough scales. This is a step towards the conjecture that such a dimension drop implies exact overlaps, and confirms it when the generating similarities have algebraic coefficients. As applications we prove Furstenberg's conjecture on projections of the one-dimensional Sierpinski gasket and achieve some progress on the Bernoulli convolutions problem and, more generally, on problems about parametric families of self-similar measures. The key tool is an inverse theorem on the structure of pairs of probability measures whose mean entropy at scale $2^{-n}$ has only a small amount of growth under convolution.

Note: To view the article, click on the URL link for the DOI number.

## Authors

Michael Hochman, Einstein Institute of Mathematics, The Hebrew University of Jerusalem, Givat Ram, Jerusalem, Israel
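The abstract's key tool concerns pairs of measures whose entropy at scale $2^{-n}$ barely grows under convolution. The following Monte Carlo sketch (my own illustration, not from the paper; the choices $\lambda = 0.7$, 40 terms, and all other parameters are arbitrary) estimates the normalized scale-$n$ entropy of a Bernoulli convolution and of its self-convolution, making the "small growth under convolution" phenomenon concrete:

```python
import numpy as np

def entropy_at_scale(samples, n):
    # Shannon entropy (in bits) of the empirical measure binned into
    # dyadic intervals of length 2**-n.
    bins = np.floor(samples * 2**n).astype(np.int64)
    _, counts = np.unique(bins, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)

# Sample the Bernoulli convolution nu_lambda: the law of sum_k (+-1) * lambda**k.
lam, terms, n_samples = 0.7, 40, 100_000
signs = rng.choice([-1.0, 1.0], size=(n_samples, terms))
x = signs @ lam ** np.arange(terms)

n = 10
h_mu = entropy_at_scale(x, n) / n                         # H_n(nu) / n
h_conv = entropy_at_scale(x + rng.permutation(x), n) / n  # H_n(nu * nu) / n
# For a measure of full entropy dimension the two normalized entropies are
# close: convolving nu with itself adds little entropy per scale.
```

Pairing each sample with an independently permuted copy gives (marginally) samples from the convolution $\nu * \nu$, which is enough for a histogram-based entropy estimate.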
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6856692433357239, "perplexity": 23961.548508913587}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578532882.36/warc/CC-MAIN-20190421195929-20190421221929-00533.warc.gz"}
http://orbi.ulg.ac.be/browse?type=author&value=Th%C3%B6ne%2C+C.+C
References of "Thöne, C. C"      in Complete repository Arts & humanities   Archaeology   Art & art history   Classical & oriental studies   History   Languages & linguistics   Literature   Performing arts   Philosophy & ethics   Religion & theology   Multidisciplinary, general & others Business & economic sciences   Accounting & auditing   Production, distribution & supply chain management   Finance   General management & organizational theory   Human resources management   Management information systems   Marketing   Strategy & innovation   Quantitative methods in economics & management   General economics & history of economic thought   International economics   Macroeconomics & monetary economics   Microeconomics   Economic systems & public economics   Social economics   Special economic topics (health, labor, transportation…)   Multidisciplinary, general & others Engineering, computing & technology   Aerospace & aeronautics engineering   Architecture   Chemical engineering   Civil engineering   Computer science   Electrical & electronics engineering   Energy   Geological, petroleum & mining engineering   Materials science & engineering   Mechanical engineering   Multidisciplinary, general & others Human health sciences   Alternative medicine   Anesthesia & intensive care   Cardiovascular & respiratory systems   Dentistry & oral medicine   Dermatology   Endocrinology, metabolism & nutrition   Forensic medicine   Gastroenterology & hepatology   General & internal medicine   Geriatrics   Hematology   Immunology & infectious disease   Laboratory medicine & medical technology   Neurology   Oncology   Ophthalmology   Orthopedics, rehabilitation & sports medicine   Otolaryngology   Pediatrics   Pharmacy, pharmacology & toxicology   Psychiatry   Public health, health care sciences & services   Radiology, nuclear medicine & imaging   Reproductive medicine (gynecology, andrology, obstetrics)   Rheumatology   Surgery   Urology & nephrology   Multidisciplinary, general & 
others Law, criminology & political science   Civil law   Criminal law & procedure   Criminology   Economic & commercial law   European & international law   Judicial law   Metalaw, Roman law, history of law & comparative law   Political science, public administration & international relations   Public law   Social law   Tax law   Multidisciplinary, general & others Life sciences   Agriculture & agronomy   Anatomy (cytology, histology, embryology...) & physiology   Animal production & animal husbandry   Aquatic sciences & oceanology   Biochemistry, biophysics & molecular biology   Biotechnology   Entomology & pest control   Environmental sciences & ecology   Food science   Genetics & genetic processes   Microbiology   Phytobiology (plant sciences, forestry, mycology...)   Veterinary medicine & animal health   Zoology   Multidisciplinary, general & others Physical, chemical, mathematical & earth Sciences   Chemistry   Earth sciences & physical geography   Mathematics   Physics   Space science, astronomy & astrophysics   Multidisciplinary, general & others Social & behavioral sciences, psychology   Animal psychology, ethology & psychobiology   Anthropology   Communication & mass media   Education & instruction   Human geography & demography   Library & information sciences   Neurosciences & behavior   Regional & inter-regional studies   Social work & social policy   Sociology & social sciences   Social, industrial & organizational psychology   Theoretical & cognitive psychology   Treatment & clinical psychology   Multidisciplinary, general & others     Showing results 1 to 9 of 9 1 Flux and color variations of the doubly imaged quasar UM673Ricci, Davide ; Elyiv, Andrii ; Finet, François et alin Astronomy and Astrophysics (2013), 551With the aim of characterizing the flux and color variations of the multiple components of the gravitationally lensed quasar UM673 as a function of time, we have performed multi-epoch and multi-band ... 
[more ▼]With the aim of characterizing the flux and color variations of the multiple components of the gravitationally lensed quasar UM673 as a function of time, we have performed multi-epoch and multi-band photometric observations with the Danish 1.54m telescope at the La Silla Observatory. The observations were carried out in the VRi spectral bands during four seasons (2008--2011). We reduced the data using the PSF (Point Spread Function) photometric technique as well as aperture photometry. Our results show for the brightest lensed component some significant decrease in flux between the first two seasons (+0.09/+0.11/+0.05 mag) and a subsequent increase during the following ones (-0.11/-0.11/-0.10 mag) in the V/R/i spectral bands, respectively. Comparing our results with previous studies, we find smaller color variations between these seasons as compared with previous ones. We also separate the contribution of the lensing galaxy from that of the fainter and close lensed component. [less ▲]Detailed reference viewed: 40 (13 ULg) OGLE-2008-BLG-510: first automated real-time detection of a weak microlensing anomaly - brown dwarf or stellar binary?Bozza, V.; Dominik, M.; Rattenbury, N. J. et alin Monthly Notices of the Royal Astronomical Society (2012), 424The microlensing event OGLE-2008-BLG-510 is characterized by an evident asymmetric shape of the peak, promptly detected by the Automated Robotic Terrestrial Exoplanet Microlensing Search (ARTEMiS) system ... [more ▼]The microlensing event OGLE-2008-BLG-510 is characterized by an evident asymmetric shape of the peak, promptly detected by the Automated Robotic Terrestrial Exoplanet Microlensing Search (ARTEMiS) system in real time. The skewness of the light curve appears to be compatible both with binary-lens and binary-source models, including the possibility that the lens system consists of an M dwarf orbited by a brown dwarf. 
The detection of this microlensing anomaly and our analysis demonstrate that: (1) automated real-time detection of weak microlensing anomalies with immediate feedback is feasible, efficient and sensitive, (2) rather common weak features intrinsically come with ambiguities that are not easily resolved from photometric light curves, (3) a modelling approach that finds all features of parameter space rather than just the 'favourite model' is required and (4) the data quality is most crucial, where systematics can be confused with real features, in particular small higher order effects such as orbital motion signatures. It moreover becomes apparent that events with weak signatures are a silver mine for statistical studies, although not easy to exploit. Clues about the apparent paucity of both brown-dwarf companions and binary-source microlensing events might hide here. Based in part on data collected by MiNDSTEp with the Danish 1.54m telescope at the ESO La Silla Observatory. [less ▲]Detailed reference viewed: 66 (3 ULg) HE 0435-1223 lensed QSO VRi light curves (Ricci+, 2011)Ricci, Davide ; Poels, Joël ; Elyiv, Andrii et alTextual, factual or bibliographical database (2011)We present VRi photometric observations of the quadruply imaged quasar HE0435-1223, carried out with the Danish 1.54m telescope at the La Silla Observatory. Our aim was to monitor and study the magnitudes ... [more ▼]We present VRi photometric observations of the quadruply imaged quasar HE0435-1223, carried out with the Danish 1.54m telescope at the La Silla Observatory. Our aim was to monitor and study the magnitudes and colors of each lensed component as a function of time. [less ▲]Detailed reference viewed: 86 (29 ULg) Frequency of Solar-like Systems and of Ice and Gas Giants Beyond the Snow Line from High-magnification Microlensing Events in 2005-2008Gould, A.; Dong, Subo; Gaudi, B. S. 
et alin Astrophysical Journal (2010), 720We present the first measurement of the planet frequency beyond the "snow line," for the planet-to-star mass-ratio interval –4.5 < log q < –2, corresponding to the range of ice giants to gas giants. We ... [more ▼]We present the first measurement of the planet frequency beyond the "snow line," for the planet-to-star mass-ratio interval –4.5 < log q < –2, corresponding to the range of ice giants to gas giants. We find \endgraf\vbox{\begin{center}$\displaystyle{d^2 N{_{\rm pl}}\over d\log q\, d\log s} = (0.36\pm 0.15)\;{\rm dex}^{-2}$\end{center}}\noindentat the mean mass ratio q = 5 × 10 –4 with no discernible deviation from a flat (Öpik's law) distribution in log-projected separation s. The determination is based on a sample of six planets detected from intensive follow-up observations of high-magnification ( A>200) microlensing events during 2005-2008. The sampled host stars have a typical mass M host ~ 0.5 M sun [less ▲]Detailed reference viewed: 115 (14 ULg) OGLE 2008-BLG-290: an accurate measurement of the limb darkening of a galactic bulge K Giant spatially resolved by microlensingFouqué, P.; Heyrovský, D.; Dong, S. et alin Astronomy and Astrophysics (2010), 518Context. Not only is gravitational microlensing a successful tool for discovering distant exoplanets, but it also enables characterization of the lens and source stars involved in the lensing event.
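As a back-of-the-envelope illustration of the Gould et al. (2010) result above (my own arithmetic, not a computation from the paper), the quoted flat density can be integrated over the 2.5-dex mass-ratio interval to give an expected number of such planets per star per dex of projected separation:

```python
# d^2 N_pl / (dlog q dlog s) = 0.36 +- 0.15 dex^-2 (Gould et al. 2010),
# taken as flat over the quoted range -4.5 < log q < -2.
density, density_err = 0.36, 0.15   # dex^-2
dlog_q = (-2.0) - (-4.5)            # 2.5 dex of mass ratio

planets_per_dex_s = density * dlog_q   # expected planets / star / dex of log s
planets_err = density_err * dlog_q     # propagated (scale) uncertainty
# i.e. roughly 0.9 +- 0.4 ice-to-gas giants per star per dex of separation
```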
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7062709927558899, "perplexity": 18819.426292706015}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120101.11/warc/CC-MAIN-20170423031200-00608-ip-10-145-167-34.ec2.internal.warc.gz"}
https://nyuscholars.nyu.edu/en/publications/sticky-brownian-motion-and-its-numerical-solution
# Sticky Brownian Motion and Its Numerical Solution

Nawaf Bou-Rabee, Miranda C. Holmes-Cerfon

Research output: Contribution to journal › Article › peer-review

## Abstract

Sticky Brownian motion is the simplest example of a diffusion process that can spend finite time both in the interior of a domain and on its boundary. It arises in various applications in fields such as biology, materials science, and finance. This article spotlights the unusual behavior of sticky Brownian motions from the perspective of applied mathematics, and provides tools to efficiently simulate them. We show that a sticky Brownian motion arises naturally for a particle diffusing on the half-line R+ with a strong, short-ranged potential energy near the origin. This is a limit that accurately models mesoscale particles, those with diameters of roughly 100 nm to 10 μm, which form the building blocks for many common materials. We introduce a simple and intuitive sticky random walk to simulate sticky Brownian motion, which also gives insight into its unusual properties. In parameter regimes of practical interest, we show that this sticky random walk is two to five orders of magnitude faster than alternative methods to simulate a sticky Brownian motion. We outline possible steps to extend this method toward simulating multidimensional sticky diffusions.

Original language: English (US)
Pages: 164-195 (32 pages)
Journal: SIAM Review, Volume 62, Issue 1
DOI: https://doi.org/10.1137/19M1268446
State: Published - 2020

## Keywords

• Feller boundary condition • Finite difference methods • Fokker-Planck equation • Generalized Wentzell boundary condition • Kolmogorov equation • Markov chain approximation method • Markov jump process • Sticky Brownian motion • Sticky random walk

## ASJC Scopus subject areas

• Theoretical Computer Science • Computational Mathematics • Applied Mathematics
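The abstract describes a sticky random walk that spends a positive fraction of time on the boundary. The sketch below is a minimal illustration of that idea, not the authors' scheme: the boundary rule $h/(h+\kappa)$ and the parameter name `kappa` are my own choices, made so that a larger `kappa` produces longer holding times at the origin.

```python
import numpy as np

def sticky_walk(kappa, h=0.01, steps=100_000, seed=1):
    """Illustrative sticky random walk on the grid {0, h, 2h, ...}.

    Interior sites: step +-h with probability 1/2 each.
    Boundary site 0: jump to h with probability h / (h + kappa),
    otherwise hold. Larger kappa means a "stickier" boundary.
    Returns the fraction of steps spent at the boundary.
    """
    rng = np.random.default_rng(seed)
    site = 0           # integer grid index; the position is site * h
    time_at_zero = 0
    for _ in range(steps):
        if site == 0:
            time_at_zero += 1
            if rng.random() < h / (h + kappa):
                site = 1
        else:
            site += 1 if rng.random() < 0.5 else -1
    return time_at_zero / steps

# A stickier boundary (larger kappa) traps a larger fraction of the time:
frac_sticky = sticky_walk(kappa=1.0)
frac_weak = sticky_walk(kappa=0.01)
```

Using an integer grid index (rather than accumulating floating-point steps of size `h`) keeps the boundary test `site == 0` exact, which matters because the sticky behavior lives entirely at that one site.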
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.869292140007019, "perplexity": 1730.4203480763936}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039491784.79/warc/CC-MAIN-20210420214346-20210421004346-00017.warc.gz"}
http://mathoverflow.net/users/23232/elisabeth-fink
Elisabeth Fink

I'm a postdoc at Paris Sud in Orsay.
Website: math.ens.fr/~fink · Location: Orsay, France · Member for 2 years, 11 months

9 Questions (top scores shown):
8 Infinite finitely generated non-amenable groups
7 Are residually finite, perfect groups residually alternating?
5 Infinite sequence avoiding a countable set of words
5 Commutator Width of a direct limit of hyperbolic groups
5 $\left[x,y\right]$ as a product of palindromes of even length?

Reputation: 235. This user has not answered any questions.
Top tags: gr.group-theory (×8), co.combinatorics, geometric-group-theory (×3)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6071356534957886, "perplexity": 4217.528086080017}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131299114.73/warc/CC-MAIN-20150323172139-00129-ip-10-168-14-71.ec2.internal.warc.gz"}
https://labs.tib.eu/arxiv/?author=V.%20Cirigliano
• ### Constraining the top-Higgs sector of the Standard Model Effective Field Theory(1605.04311) Dec. 5, 2018 hep-ph Working in the framework of the Standard Model Effective Field Theory, we study chirality-flipping couplings of the top quark to Higgs and gauge bosons. We discuss in detail the renormalization group evolution to lower energies and investigate direct and indirect contributions to high- and low-energy CP-conserving and CP-violating observables. Our analysis includes constraints from collider observables, precision electroweak tests, flavor physics, and electric dipole moments. We find that indirect probes are competitive or dominant for both CP-even and CP-odd observables, even after accounting for uncertainties associated with hadronic and nuclear matrix elements, illustrating the importance of including operator mixing in constraining the Standard Model Effective Field Theory. We also study scenarios where multiple anomalous top couplings are generated at the high scale, showing that while the bounds on individual couplings relax, strong correlations among couplings survive. Finally, we find that enforcing minimal flavor violation does not significantly affect the bounds on the top couplings. • ### A new leading contribution to neutrinoless double-beta decay(1802.10097) Feb. 27, 2018 hep-ph, nucl-th, hep-lat Within the framework of chiral effective field theory we discuss the leading contributions to the neutrinoless double-beta decay transition operator induced by light Majorana neutrinos. Based on renormalization arguments in both dimensional regularization with minimal subtraction and a coordinate-space cutoff scheme, we show the need to introduce a leading-order short-range operator, missing in all current calculations. We discuss strategies to determine the finite part of the short-range coupling by matching to lattice QCD or by relating it via chiral symmetry to isospin-breaking observables in the two-nucleon sector. 
Finally, we speculate on the impact of this new contribution on nuclear matrix elements of relevance to experiment. • ### Interpreting top-quark LHC measurements in the standard-model effective field theory(1802.07237) Feb. 20, 2018 hep-ph, hep-ex This note proposes common standards and prescriptions for the effective-field-theory interpretation of top-quark measurements at the LHC. • ### Neutrinoless double beta decay in chiral effective field theory: lepton number violation at dimension seven(1708.09390) Dec. 27, 2017 hep-ph, nucl-th We analyze neutrinoless double beta decay ($0\nu\beta\beta$) within the framework of the Standard Model Effective Field Theory. Apart from the dimension-five Weinberg operator, the first contributions appear at dimension seven. We classify the operators and evolve them to the electroweak scale, where we match them to effective dimension-six, -seven, and -nine operators. In the next step, after renormalization group evolution to the QCD scale, we construct the chiral Lagrangian arising from these operators. We develop a power-counting scheme and derive the two-nucleon $0\nu\beta\beta$ currents up to leading order in the power counting for each lepton-number-violating operator. We argue that the leading-order contribution to the decay rate depends on a relatively small number of nuclear matrix elements. We test our power counting by comparing nuclear matrix elements obtained by various methods and by different groups. We find that the power counting works well for nuclear matrix elements calculated from a specific method, while, as in the case of light Majorana neutrino exchange, the overall magnitude of the matrix elements can differ by factors of two to three between methods. 
We calculate the constraints that can be set on dimension-seven lepton-number-violating operators from $0\nu\beta\beta$ experiments and study the interplay between dimension-five and -seven operators, discussing how dimension-seven contributions affect the interpretation of $0\nu\beta\beta$ in terms of the effective Majorana mass $m_{\beta \beta}$. • ### Neutrinoless double beta decay matrix elements in light nuclei(1710.05026) Oct. 13, 2017 hep-ph, nucl-th, hep-lat We present the first ab initio calculations of neutrinoless double beta decay matrix elements in $A=6$-$12$ nuclei using Variational Monte Carlo wave functions obtained from the Argonne $v_{18}$ two-nucleon potential and Illinois-7 three-nucleon interaction. We study both light Majorana neutrino exchange and potentials arising from a large class of multi-TeV mechanisms of lepton number violation. Our results provide benchmarks to be used in testing many-body methods that can be extended to the heavy nuclei of experimental interest. In light nuclei we have also studied the impact of two-body short range correlations and the use of different forms for the transition operators, such as those corresponding to different orders in chiral effective theory. • ### Neutrinoless double beta decay in effective field theory: the light Majorana neutrino exchange mechanism(1710.01729) May 21, 2019 hep-ph, nucl-th, hep-lat We present the first chiral effective theory derivation of the neutrinoless double beta-decay $nn\rightarrow pp$ potential induced by light Majorana neutrino exchange. The effective-field-theory framework has allowed us to identify and parameterize short- and long-range contributions previously missed in the literature. These contributions can not be absorbed into parameterizations of the single nucleon form factors. Starting from the quark and gluon level, we perform the matching onto chiral effective field theory and subsequently onto the nuclear potential. 
To derive the nuclear potential mediating neutrinoless double beta-decay, the hard, soft and potential neutrino modes must be integrated out. This is performed through next-to-next-to-leading order in the chiral power counting, in both the Weinberg and pionless schemes. At next-to-next-to-leading order, the amplitude receives additional contributions from the exchange of ultrasoft neutrinos, which can be expressed in terms of nuclear matrix elements of the weak current and excitation energies of the intermediate nucleus. These quantities also control the two-neutrino double beta-decay amplitude. Finally, we outline strategies to determine the low-energy constants that appear in the potentials, by relating them to electromagnetic couplings and/or by matching to lattice QCD calculations. • ### Right-handed charged currents in the era of the Large Hadron Collider(1703.04751) May 22, 2017 hep-ph We discuss the phenomenology of right-handed charged currents in the framework of the Standard Model Effective Field Theory, in which they arise due to a single gauge-invariant dimension-six operator. We study the manifestations of the nine complex couplings of the $W$ to right-handed quarks in collider physics, flavor physics, and low-energy precision measurements. We first obtain constraints on the couplings under the assumption that the right-handed operator is the dominant correction to the Standard Model at observable energies. We subsequently study the impact of degeneracies with other Beyond-the-Standard-Model effective interactions and identify observables, both at colliders and low-energy experiments, that would uniquely point to right-handed charged currents. • ### Neutrinoless double beta decay and chiral $SU(3)$(1701.01443) Jan. 5, 2017 hep-ph, nucl-th, hep-lat TeV-scale lepton number violation can affect neutrinoless double beta decay through dimension-9 $\Delta L= \Delta I = 2$ operators involving two electrons and four quarks. 
Since the dominant effects within a nucleus are expected to arise from pion exchange, the $\pi^- \to \pi^+ e e$ matrix elements of the dimension-9 operators are a key hadronic input. In this letter we provide estimates for the $\pi^- \to \pi^+$ matrix elements of all Lorentz scalar $\Delta I = 2$ four-quark operators relevant to the study of TeV-scale lepton number violation. The analysis is based on chiral $SU(3)$ symmetry, which relates the $\pi^- \to \pi^+$ matrix elements of the $\Delta I = 2$ operators to the $K^0 \to \bar{K}^0$ and $K \to \pi \pi$ matrix elements of their $\Delta S = 2$ and $\Delta S = 1$ chiral partners, for which lattice QCD input is available. The inclusion of next-to-leading order chiral loop corrections to all symmetry relations used in the analysis makes our results robust at the $30\%$ level or better, depending on the operator. • ### An $\epsilon'$ improvement from right-handed currents(1612.03914) Dec. 12, 2016 hep-ph, nucl-th, hep-lat Recent lattice QCD calculations of direct CP violation in $K_L \to \pi \pi$ decays indicate tension with the experimental results. Assuming this tension to be real, we investigate a possible beyond-the-Standard Model explanation via right-handed charged currents. By using chiral perturbation theory in combination with lattice QCD results, we accurately calculate the modification of $\epsilon'/\epsilon$ induced by right-handed charged currents and extract values of the couplings that are necessary to explain the discrepancy, pointing to a scale around $10^2$ TeV. We find that couplings of this size are not in conflict with constraints from other precision experiments, but next-generation hadronic electric dipole moment searches (such as neutron and ${}^{225}$Ra) can falsify this scenario. 
We work out in detail a direct link, based on chiral perturbation theory, between CP violation in the kaon sector and electric dipole moments induced by right-handed currents which can be used in future analyses of left-right symmetric models. • ### Is there room for CP violation in the top-Higgs sector?(1603.03049) Aug. 2, 2016 hep-ph We discuss direct and indirect probes of chirality-flipping couplings of the top quark to Higgs and gauge bosons, considering both CP-conserving and CP-violating observables, in the framework of the Standard Model effective field theory. In our analysis we include current and prospective constraints from collider physics, precision electroweak tests, flavor physics, and electric dipole moments (EDMs). We find that low-energy indirect probes are very competitive, even after accounting for long-distance uncertainties. In particular, EDMs put constraints on the electroweak CP-violating dipole moments of the top that are two to three orders of magnitude stronger than existing limits. The new indirect constraint on the top EDM is given by $|d_t| < 5 \cdot 10^{-20}$ e cm at $90\%$ C.L. • ### Dimension-5 CP-odd operators: QCD mixing and renormalization(1502.07325) Aug. 1, 2016 hep-ph, nucl-th, hep-lat We study the off-shell mixing and renormalization of flavor-diagonal dimension-5 T- and P-odd operators involving quarks, gluons, and photons, including quark electric dipole and chromo-electric dipole operators. We present the renormalization matrix to one-loop in the $\bar{\rm MS}$ scheme. We also provide a definition of the quark chromo-electric dipole operator in a regularization-independent momentum-subtraction scheme suitable for non-perturbative lattice calculations and present the matching coefficients with the $\bar{\rm MS}$ scheme to one-loop in perturbation theory, using both the naive dimensional regularization and 't Hooft-Veltman prescriptions for $\gamma_5$. 
• ### Direct and indirect constraints on CP-violating Higgs-quark and Higgs-gluon interactions(1510.00725) Jan. 4, 2016 hep-ph, nucl-ex, nucl-th, hep-lat We investigate direct and indirect constraints on the complete set of anomalous CP-violating Higgs couplings to quarks and gluons originating from dimension-6 operators, by studying their signatures at the LHC and in electric dipole moments (EDMs). We show that existing uncertainties in hadronic and nuclear matrix elements have a significant impact on the interpretation of EDM experiments, and we quantify the improvements needed to fully exploit the power of EDM searches. Currently, the best bounds on the anomalous CP-violating Higgs interactions come from a combination of EDM measurements and the data from LHC Run 1. We argue that Higgs production cross section and branching ratios measurements at the LHC Run 2 will not improve the constraints significantly. On the other hand, the bounds on the couplings scale roughly linearly with EDM limits, so that future theoretical and experimental EDM developments can have a major impact in pinning down interactions of the Higgs. • ### Report of the Quark Flavor Physics Working Group(1311.1076) Dec. 9, 2013 hep-ph, hep-ex, hep-lat This report represents the response of the Intensity Frontier Quark Flavor Physics Working Group to the Snowmass charge. We summarize the current status of quark flavor physics and identify many exciting future opportunities for studying the properties of strange, charm, and bottom quarks. The ability of these studies to reveal the effects of new physics at high mass scales make them an essential ingredient in a well-balanced experimental particle physics program. • ### Charged Leptons(1311.5278) Nov. 
24, 2013 hep-ph, hep-ex This is the report of the Intensity Frontier Charged Lepton Working Group of the 2013 Community Summer Study "Snowmass on the Mississippi", summarizing the current status and future experimental opportunities in muon and tau lepton studies and their sensitivity to new physics. These include searches for charged lepton flavor violation, measurements of magnetic and electric dipole moments, and precision measurements of the decay spectrum and parity-violating asymmetries. • ### Discovering the New Standard Model: Fundamental Symmetries and Neutrinos(1212.5190) Dec. 20, 2012 hep-ph, hep-ex, nucl-ex, nucl-th This White Paper describes recent progress and future opportunities in the area of fundamental symmetries and neutrinos. • ### Kaon Decays in the Standard Model(1107.6001) April 14, 2012 hep-ph, hep-ex A comprehensive overview of kaon decays is presented. The Standard Model predictions are discussed in detail, covering both the underlying short-distance electroweak dynamics and the important interplay of QCD at long distances. Chiral perturbation theory provides a universal framework for treating leptonic, semileptonic and nonleptonic decays including rare and radiative modes. All allowed decay modes with branching ratios of at least 10^(-11) are analyzed. Some decays with even smaller rates are also included. Decays that are strictly forbidden in the Standard Model are not considered in this review. The present experimental status and the prospects for future improvements are reviewed. • ### An evaluation of |Vus| and precise tests of the Standard Model from world data on leptonic and semileptonic kaon decays(1005.2323) July 18, 2010 hep-ph, hep-ex We present a global analysis of leptonic and semileptonic kaon decay data, including all recent results published by the BNL-E865, KLOE, KTeV, ISTRA+ and NA48 experiments. 
This analysis, in conjunction with precise lattice calculations of the hadronic matrix elements now available, leads to a very precise determination of |Vus| and allows us to perform several stringent tests of the Standard Model. • One of the major challenges of particle physics has been to gain an in-depth understanding of the role of quark flavor, and measurements and theoretical interpretations of their results have advanced tremendously: apart from the masses and quantum numbers of flavor particles, there now exist detailed measurements of the characteristics of their interactions, allowing stringent tests of Standard Model predictions. Among the most interesting phenomena of flavor physics is the violation of CP symmetry, which has been subtle and difficult to explore. Until the early 1990s, observations of CP violation were confined to neutral $K$ mesons, but since then a large number of CP-violating processes have been studied in detail in neutral $B$ mesons. In parallel, measurements of the couplings of the heavy quarks and the dynamics of their decays in large samples of $K$, $D$, and $B$ mesons have been greatly improved in accuracy, and the results are being used as probes in the search for deviations from the Standard Model. In the near future there will be a transition from the current to a new generation of experiments; a review of the status of quark flavor physics is therefore timely. This report summarizes the results of the current generation of experiments, which is about to be completed, and confronts these results with the theoretical understanding of the field. • ### Reanalysis of pion pion phase shifts from K -> pi pi decays(0907.1451) July 9, 2009 hep-ph We re-investigate the impact of isospin violation for extracting the s-wave pion pion scattering phase shift difference delta_0(M_K) - delta_2(M_K) from K -> pi pi decays. Compared to our previous analysis in 2003, more precise experimental data and improved knowledge of low-energy constants are used. 
In addition, we employ a more robust data-driven method to obtain the phase shift difference delta_0(M_K) - delta_2(M_K) = (52.5 \pm 0.8_{exp} \pm 2.8_{theor}) degrees. • ### pi pi Phase shifts from K to 2 pi(0807.5128) July 31, 2008 hep-ph We update the numerical results for the s-wave pi pi scattering phase-shift difference delta_0^0 - delta_0^2 at s = m_K^2 from a previous study of isospin breaking in K to 2 pi amplitudes in chiral perturbation theory. We include recent data for the K_S to pi pi and K^+ to pi^+ pi^0 decay widths and include experimental correlations. • ### Precision tests of the Standard Model with leptonic and semileptonic kaon decays(0801.1817) Jan. 11, 2008 hep-ph We present a global analysis of leptonic and semileptonic kaon decays data, including all recent results by BNL-E865, KLOE, KTeV, ISTRA+, and NA48. Experimental results are critically reviewed and combined, taking into account theoretical (both analytical and numerical) constraints on the semileptonic kaon form factors. This analysis leads to a very accurate determination of Vus and allows us to perform several stringent tests of the Standard Model. • ### Towards a consistent estimate of the chiral low-energy constants(hep-ph/0603205) July 17, 2006 hep-ph Guided by the large-Nc limit of QCD, we construct the most general chiral resonance Lagrangian that can generate chiral low-energy constants up to O(p^6). By integrating out the resonance fields, the low-energy constants are parametrized in terms of resonance masses and couplings. Information on those couplings and on the low-energy constants can be extracted by analysing QCD Green functions of currents both for large and small momenta. The chiral resonance theory generates Green functions that interpolate between QCD and chiral perturbation theory. As specific examples we consider the VAP and SPP Green functions. • ### Magnetic Moments of Dirac Neutrinos(hep-ph/0601005) Jan. 
1, 2006 hep-ph The existence of a neutrino magnetic moment implies contributions to the neutrino mass via radiative corrections. We derive model-independent "naturalness" upper bounds on the magnetic moments of Dirac neutrinos, generated by physics above the electroweak scale. The neutrino mass receives a contribution from higher order operators, which are renormalized by operators responsible for the neutrino magnetic moment. This contribution can be calculated in a model independent way. In the absence of fine-tuning, we find that current neutrino mass limits imply that $|\mu_\nu| < 10^{-14}$ Bohr magnetons. This bound is several orders of magnitude stronger than those obtained from solar and reactor neutrino data and astrophysical observations. • ### Status of the Cabibbo Angle(hep-ph/0512039) Dec. 2, 2005 hep-ph We review the recent experimental and theoretical progress in the determination of |V_{ud}| and |V_{us}|, and the status of the most stringent test of CKM unitarity. Future prospects on |V_{cd}| and |V_{cs}| are also briefly discussed. • ### The < S P P > Green function and SU(3) breaking in K_{l3} decays(hep-ph/0503108) April 28, 2005 hep-ph Using the 1/N_C expansion scheme and truncating the hadronic spectrum to the lowest-lying resonances, we match a meromorphic approximation to the < S P P > Green function onto QCD by imposing the correct large-momentum falloff, both off-shell and on the relevant hadron mass shells. In this way we determine a number of chiral low-energy constants of O(p^6), in particular the ones governing SU(3) breaking in the K_{l3} vector form factor at zero momentum transfer. The main result of our matching procedure is that the known loop contributions largely dominate the corrections of O(p^6) to f_{+}(0). We discuss the implications of our final value f_{+}^{K^0 \pi^-}(0)=0.984 \pm 0.012 for the extraction of V_{us} from K_{l3} decays.
https://paramountranchtrailruns.com/sltoy/f1bd06-resistance-of-1000w-heating-element
# resistance of 1000w heating element However, of the total energy dissipated by the circuit, the portion of the energy dissipated by each part is proportional to its resistance, so the resistance of the heating element has to be high enough so that most of the energy is dissipated by the heating element itself instead of, for example, the wiring in the walls. Podcast 291: Why developers are demanding more ethics in tech, “Question closed” notifications experiment results and graduation, MAINTENANCE WARNING: Possible downtime early morning Dec 2, 4, and 9 UTC…. If you want heat you want power and power is, $$P = I \cdot V = I^2 \cdot R = \dfrac{V^2}{R}$$. It's amazing what people will do after reading something on the internet. 开一个生日会 explanation as to why 开 is used here? spark_FM calculation was ohms law, Voltage squared/Power=Resistance, however domestic elements are usually rated at 240V and that should therefore be used in any calculation. Save Saved Removed 0. They are temperature, atmosphere, life and power or heat load required. If you have problems with resistance of the supply (eg. Which game is this six-sided die with two sets of runic-looking plus, minus and empty sides from? EASY TO INSTALL! It should be intuitive that the more parallel resistors we apply to the circuit of Figure 1 the lower the resistance becomes. In fact it is a maximum at \$R_L = R_S\$, where the load resistance equals the source resistance. Note that usually that's way lower resistance than what you would use (say) on the mains. Need tip on element, Choice of heating element (power resistor), Best configuration for 400W resistive heating device, Calculating Power and Setup using resistors as heating elements. Don't know if this helps but I just put my multimeter on a 220-240V 1850-2200W kettle element and got ~27 ohms. Metallic heating element alloys. 
In order to find the resistance of the heater element from the above described section, we must algebraically modify the formula to read (r = v/a). How is the Q and Q' determined the first time in JK flip flop? We often refer to electrical heating—what heating elements do—as "Joule heating" or "resistance heating," as though resistance is the only factor that matters. An Introduction to Resistive Heating Definitions of Related Terms Ohm - Unit of electrical resistance Sieman - Unit of electrical conductance. Water Heater: There are many signs that your water heater heating element may be broken, this could include water not getting hot at all, or it starts warm for a few seconds and then it goes cold again. How do I respond as Black to 1. e4 e6 2.e5? If your water seems to be too hot to the point of boiling, then it means your thermostat is b… It depends on the power source. If not, why not? But then I realized that when resistance of the heating element is too low, power drawn will be too high and can cause excessive heating on the element. there is not enough room for thick insulation or the heater cannot be well insulated from potential users touching it) then you go for low resistance, low-voltage, high-current setup. Electric current through the element encounters resistance, resulting in heating of the element.Unlike the Peltier effect, this process is independent of the direction of current. It depends on you power source. A heating element has neither "very high" nor "very low" resistance. Currently available. Top 10 Isotherm Heating Element. For a given power, a 240 volt heating element would have a thinner resistance wire (for higher resistance) than a 120 volt element. older trams use heaters connected directly to line voltage, be it 600V, 800V, or any other voltage the rest of the tram runs on. The heat proportional to I 2 R losses produced in heating element delivered to the charge either by radiation or by convection. 
This element features a stunning chrome finish and convenient temperature display located at the bottom. A heating element designed to deliver maximum heat (in a kettle, for instance) will draw as much current as it can while staying safely below that limit. Kanthal ® and Alkrothal ® FeCrAl alloys are characterized by high resistivity and capability to withstand high surface load. 2. Does adding more resistors increase or decrease the total heat produced? Use MathJax to format equations. Don’t spend hundreds of dollars replacing your range when all you need is a replacement oven bake heating element. This fact is called the Maximum Power Transfer Theorem. Given a constant voltage as specified in your question it should also be intuitive that the current through each branch will be the same no matter how many branches.*. So decreasing the resistance increases the heat output. 610000, Office: +86-0574-87154347 Mobile: +86-18158202116 Fax: +86-0574-87154347, Copyright © 2020 Joulemax Heating Solutions Co.,Ltd. Same thing applies to the switch - if after setting the switch to the "on" position the resistance is infinite then it means that the swtich doesn't close the circuit and it's broken. More generally, maximum power transfer is when source impedance equals load impedance. HEATING ELEMENT RESISTANCE Wire 230V 1000W - $3.89. Should my class be more rigorous, and how? 1,338 heating element 220v 1000w products are offered for sale by suppliers on Alibaba.com, of which industrial heater accounts for 22%, electric heater parts accounts for 1%, and halogen bulbs accounts for 1%. Both are signs that one or both of your heating elements could be broken (as most water heaters have two heating elements). FOR SALE! If you have problems with insulation (eg. A heating element has neither "very high" nor "very low" resistance. Would a heating element have a very high resistance, or a very low resistance? Can I add a breaker to my main disconnect panel? View Product. 
FITS PERFECTLY! Alibaba.com offers 715 resistance to 1000w heater products. If the resistance is higher than calculated or it's infinite (so the circuit is open) you can assume that the heating element has gone bad. What the heating element is made from and how it's designed directly affect how well the heater works, and how long it will continue to work. To learn more, see our tips on writing great answers. Fast and free shipping free returns cash on delivery available on eligible purchase. Best value. Thanks for contributing an answer to Electrical Engineering Stack Exchange! Asking for help, clarification, or responding to other answers. vash. If the power source is AC remember to use the RMS figure for the current or voltage as appropriate. What resistor type is best for use as a heater? Product Name. However, most heaters are supplied with a constant voltage so would require a lower resistance. If that offers reasonably constant voltage, as most do, then lower resistance increases the current, which increases the power dissipation and thus the heat. Metal resistance heating elements for furnaces are normally in the form of wire, strip or tube. Item Information. I'm new to chess-what should be done here to win the game? 9.4.1 Heating-element construction for ovens and furnaces. Why do most Christians eat pork when Deuteronomy says not to? The NiCr-based alloys are characterized by very good mechanical properties in the hot state as well as good oxidation and corrosion properties. Oct 17, 2014. 1. \$p=\frac{v^2}{r}\$. Ants-Store - 3pcs AC 220V 1000W Kiln Furnace Heating Element Coils High Resistance Wire Silver Tone - - Amazon.com How can a hard drive provide a host device with file/directory listings when the drive isn't spinning? Brinkmann 116-7000-0 Replacement Part – Electric Heating Element By Brinkmann. Figure 1. What's the best way for EU citizens to enter the UK if they're worried they might be refused entry at the UK border? 
**Why does a heating element with lower resistance release more heat?**

I would have thought that a higher resistance would result in more heat loss, but I've been taught that the higher the current, the more energy is lost to heat. (All comments in this post are based on the voltage being the same in each situation.) For example, an electric kettle uses a heating element with low resistance. Why does a lower resistance value result in more power dissipation?

**Answer:** Heat output is defined by the power \$P\$, which is itself defined by the voltage drop \$V\$ across the element and the current \$I\$ through it: \$P = V*I\$. Combining this with Ohm's law gives \$P = \frac{V^2}{R}\$, so for a given voltage the power dissipated is inversely proportional to the resistance. As the source voltage is constant, the lower the \$R\$ value, the more heat is released. Therefore, a lower resistance releases more heat.

As a concrete figure: if a hot-water electrical circuit feeds 220 VAC to a 16 ampere heating element, the resistance of that element is \$220/16 = 13.75\$ ohms. We can round that number up to 14 ohms.

To think about it in practical, intuitive terms, imagine placing a very low-resistance metal tool such as a wrench across the terminals of your car battery: much heat is released. Now place a dry piece of wood (high resistance) across the terminals: very little heat is released. (Needless to say, don't actually try this; as one commenter joked, you would at least want to perform the experiment in reverse order.)
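The arithmetic above can be checked with a short script. This is a minimal sketch using the ratings quoted in the thread (the function names are mine, not from any library):

```python
def element_resistance(power_w, volts):
    """Resistance implied by a power rating at a given voltage: R = V^2 / P."""
    return volts ** 2 / power_w

def power_at_voltage(resistance_ohm, volts):
    """Heat dissipated by a resistance across a fixed voltage: P = V^2 / R."""
    return volts ** 2 / resistance_ohm

# A 1000 W, 220 V heater implies R = 220^2 / 1000 = 48.4 ohms.
r_heater = element_resistance(1000, 220)

# The 16 A hot-water element on 220 VAC: R = V / I = 13.75 ohms.
r_water = 220 / 16

# On the same 220 V supply, the lower resistance dissipates more heat.
p_heater = power_at_voltage(r_heater, 220)  # back to 1000 W
p_water = power_at_voltage(r_water, 220)    # 3520 W

print(r_heater, r_water, p_heater, p_water)
```

Note how the 13.75 ohm element pulls more than three times the power of the 48.4 ohm one from the same supply, which is the whole point of the answer.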
If you had a good constant *current* source, the opposite would hold: increasing the resistance would increase the voltage across it, and that would increase the power, since \$P = I^2R\$. So if you have a constant current source you want high resistance; with a constant voltage source you want low resistance.

To further confuse things (perhaps shedding more heat than light): if you have a nominally constant voltage source with a fixed source resistance, there is a load resistance that gives maximum power. With a source \$V_1\$ in series with an internal resistance \$R_S\$ feeding a load \$R_L\$, the current is \$V_1/(R_S+R_L)\$, so the power in the load is \$P_L = \left(\frac{V_1}{R_S+R_L}\right)^2 R_L\$. You can see intuitively, by inspecting the numerator and denominator, that if \$R_L\$ is very low or very high the power approaches zero. Generally, maximum power transfer occurs when the load impedance equals the source impedance, and in that case half the power is lost in the source resistance. In practice, however, most power sources can be treated like an ideal constant voltage source with a rather low internal series resistance, so an effective heating element has neither a very high nor a very low resistance.

Some practical points:

- If the power source is AC, remember to use the RMS figure for the current or voltage.
- A heating element would draw half as much current at 120 volts as at 240 volts, and consume one quarter of the power.
- If you are connecting a heating element to the wall mains, there is a circuit breaker involved that limits the current so that your wiring doesn't get too hot.
- As heating usually takes a lot of power (compared to electronics), it usually needs a pretty good power supply, like a large lead-acid or Li-ion battery if it's portable, and those are reasonably good voltage sources.
- It depends where your biggest problems are in powering that heater. If they are in power delivery (long or thin wires, high internal resistance), then you go for the high-resistance, high-voltage, low-current option. In reality, you go for the voltage that you have at hand; the pretty much only exception is when you need to safeguard against touching, in which case you drop the voltage down to a safe level and work with that. (Today it's often cheaper to design a voltage converter than to design a new heater, so off-the-shelf 220 V heaters are common.)
- If you have some means of control, like PWM or a thermostatic on-off switch, err slightly on the low side of resistance to get slightly more power than you need, and regulate that power to get the right temperature.
- As a sanity check, a multimeter on a 220-240 V, 1850-2200 W kettle element reads about 27 ohms. In a domestic situation you are unlikely to have an element greater than 3 kW, for which approximately 20 ohms would be correct.
**Heating element materials and construction**

A heating element converts electrical energy into heat through the process of Joule heating. To perform as a heating element, the tape or wire must resist the flow of electricity; this resistance converts the electrical energy into heat, which is related to the electrical resistivity of the metal, defined as the resistance of a unit length of unit cross-sectional area. Heating elements for furnaces are normally in the form of wire, strip or tube. The materials used for resistance heating elements are listed in Table 9.3, and forms of construction of furnace heating elements are shown in Figure 9.5.

NiCr-based alloys are characterized by very good mechanical properties in the hot state as well as good oxidation and corrosion properties. FeCrAl alloys are characterized by high resistivity and the capability to withstand high surface loads; they can be used at a maximum element temperature of 1425°C (2600°F). As a class, resistance alloys cover element operating temperatures from 50 to 1850°C (120-3360°F); the temperature referred to is the actual element operating temperature. PTC ceramic material is semi-conductive; when voltage is applied, its resistance rises steeply with temperature, which makes the heater largely self-limiting.

In the design of a custom open-coil heating element, several factors need to be considered when selecting the optimum coil(s) for an application: temperature, atmosphere, element life, and the power or heat load required. First, the watts, volts, and resistance must be determined for each coil in the heater.

**Worked examples**

An electric heater is rated 1000 W, 220 V. The resistance of its heating element is \$R = V^2/P = 220^2/1000 = 48.4\$ ohms.

An electric oven is marked 1000 W, 200 V. Calculate: (a) the resistance of its element: \$200^2/1000 = 40\$ ohms; (b) the energy consumed by the oven in 1/2 hour, in joules: \$1000 \times 1800 = 1.8 \times 10^6\$ J; (c) Time, in …
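The worked examples can be verified in a few lines; the figures are taken straight from the problems as stated above:

```python
# Heater rated 1000 W at 220 V: R = V^2 / P
r_heater = 220 ** 2 / 1000       # 48.4 ohms

# Oven marked 1000 W at 200 V
r_oven = 200 ** 2 / 1000         # 40.0 ohms

# Energy over half an hour: E = P * t, with t in seconds
energy_j = 1000 * 30 * 60        # 1.8e6 J

print(r_heater, r_oven, energy_j)
```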
http://autocad2012.mufasu.com/2011/11/used-commands-in-autocad-2012.html
## Used Commands in AutoCAD 2012

OK, so you have had a first look at the basics and, ideally, a good understanding of what you see on the AutoCAD screen. We are ready to try out the basic commands and finally do some drawing. So how do we communicate with AutoCAD and tell it what we want? There are four methods, listed below more or less in the order they appeared over the years. (There are also two older methods, called the tablet and the screen sidebar, which are not covered here.)

1. Type the commands at the command line.
2. Select commands from the drop-down menus.
3. Use the icons on the toolbars to activate the commands.
4. Use the ribbon's tabs, panels and icons.

Most commands are available in all four ways, and you can experiment with each method to see which you prefer. Over time you will settle into a particular way of working with AutoCAD, or a combination of several.

Typing commands at the command line

This was the original method of interacting with AutoCAD and remains the surest way to enter a command: the approach of the old school. AutoCAD is one of the few CAD programs that has kept this method as a first-class option, while most others have moved entirely to graphical icons, toolbars and ribbons. If you do not like to type, it is probably not your preferred option. To use this method, simply type the desired command (watch the spelling!) at the command line and press Enter. The command sequence starts and off you go. This method is still favored by many "legacy" users (a nice way of saying they have been using AutoCAD forever).

Selecting commands from the drop-down menus

This method has also been present from the beginning. It gives access to almost all AutoCAD commands, and in fact many students browse the menus when starting out as a crash course in what is available: a fun, if not very efficient, way to learn AutoCAD.

Using the toolbar icons to activate the commands

This method arrived when AutoCAD moved from DOS to Windows in the 1990s and is a favorite of a whole generation of users, as toolbars were a familiar sight in almost all software of that era.
Toolbars contain sets of icons organized into categories (the Draw toolbar, the Modify toolbar, and so on). Click on the desired icon, and a command is issued. A disadvantage of toolbars, and a reason the ribbon was developed, is that they take up a lot of screen space and are not the most efficient way to organize commands on the screen.

Use the ribbon, menus and icons.
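As an illustration of the command-line method, a session for drawing a simple line might look like the transcript below. The coordinates are made-up sample values, and the exact prompt wording can vary slightly between AutoCAD releases:

```
Command: LINE
Specify first point: 0,0
Specify next point or [Undo]: 100,50
Specify next point or [Undo]:        (press Enter to end the command)
```

Many commands also have short aliases (for example, L for LINE), which is what makes typing fast enough to compete with clicking icons.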