Flue-gas desulfurization
FGD chemistry
Another important design consideration associated with wet FGD systems is that the flue gas exiting the absorber is saturated with water and still contains some SO2. These gases are highly corrosive to any downstream equipment such as fans, ducts, and stacks. Two methods that may minimize corrosion are: (1) reheating the gases to above their dew point, or (2) using materials of construction and designs that allow equipment to withstand the corrosive conditions. Both alternatives are expensive. Engineers determine which method to use on a site-by-site basis.
Scrubbing with an alkali solid or solution

SO2 is an acid gas, and therefore the typical sorbent slurries or other materials used to remove the SO2 from the flue gases are alkaline. The reaction taking place in wet scrubbing using a CaCO3 (limestone) slurry produces calcium sulfite (CaSO3) and may be expressed in the simplified dry form as:

CaCO3(s) + SO2(g) → CaSO3(s) + CO2(g)

When wet scrubbing with a Ca(OH)2 (hydrated lime) slurry, the reaction also produces CaSO3 (calcium sulfite) and may be expressed in the simplified dry form as:

Ca(OH)2(s) + SO2(g) → CaSO3(s) + H2O(l)

When wet scrubbing with a Mg(OH)2 (magnesium hydroxide) slurry, the reaction produces MgSO3 (magnesium sulfite) and may be expressed in the simplified dry form as:

Mg(OH)2(s) + SO2(g) → MgSO3(s) + H2O(l)

To partially offset the cost of the FGD installation, some designs, particularly dry sorbent injection systems, further oxidize the CaSO3 (calcium sulfite) to produce marketable CaSO4·2H2O (gypsum) that can be of high enough quality to use in wallboard and other products. The process by which this synthetic gypsum is created is also known as forced oxidation:

CaSO3(aq) + 2 H2O(l) + 1/2 O2(g) → CaSO4·2H2O(s)

A natural alkali that can absorb SO2 is seawater. The SO2 is absorbed in the water, and when oxygen is added it reacts to form sulfate ions (SO42−) and free H+. The surplus of H+ is offset by the carbonates in seawater, pushing the carbonate equilibrium to release CO2 gas:

SO2(g) + H2O(l) + 1/2 O2(g) → SO42−(aq) + 2 H+
HCO3− + H+ → H2O(l) + CO2(g)

In industry, caustic soda (NaOH) is often used to scrub SO2, producing sodium sulfite:

2 NaOH(aq) + SO2(g) → Na2SO3(aq) + H2O(l)

Types of wet scrubbers used in FGD

To promote maximum gas–liquid surface area and residence time, a number of wet scrubber designs have been used, including spray towers, venturis, plate towers, and mobile packed beds. Because of scale buildup, plugging, or erosion, which affect FGD dependability and absorber efficiency, the trend is to use simple scrubbers such as spray towers instead of more complicated ones. The configuration of the tower may be vertical or horizontal, and flue gas can flow cocurrently, countercurrently, or crosscurrently with respect to the liquid. The chief drawback of spray towers is that they require a higher liquid-to-gas ratio for equivalent SO2 removal than other absorber designs.
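The limestone reaction above fixes the sorbent demand by simple molar-mass arithmetic. The sketch below is a minimal illustration (not from the source article): the molar masses are standard values, and the assumed 100% sorbent utilization and complete oxidation to gypsum are simplifying assumptions.

```python
# Back-of-envelope stoichiometry for limestone FGD, per the simplified
# reactions above:
#   CaCO3 + SO2 -> CaSO3 + CO2              (absorption)
#   CaSO3 + 1/2 O2 + 2 H2O -> CaSO4.2H2O    (forced oxidation to gypsum)
# Assumes ideal 100% sorbent utilization and complete oxidation.

M_SO2 = 64.07      # g/mol
M_CACO3 = 100.09   # g/mol
M_GYPSUM = 172.17  # g/mol (CaSO4.2H2O)

def limestone_needed(tonnes_so2_removed: float) -> float:
    """Tonnes of CaCO3 consumed per tonnes of SO2 captured (1:1 molar)."""
    return tonnes_so2_removed * M_CACO3 / M_SO2

def gypsum_produced(tonnes_so2_removed: float) -> float:
    """Tonnes of gypsum formed if all sulfite is force-oxidized (1:1 molar)."""
    return tonnes_so2_removed * M_GYPSUM / M_SO2

if __name__ == "__main__":
    so2 = 1000.0  # tonnes of SO2 removed
    print(f"Limestone required: ~{limestone_needed(so2):.0f} t")  # ~1562 t
    print(f"Gypsum by-product:  ~{gypsum_produced(so2):.0f} t")   # ~2687 t
```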
FGD scrubbers produce a scaling wastewater that requires treatment to meet U.S. federal discharge regulations. However, technological advancements in ion-exchange membranes and electrodialysis systems have enabled high-efficiency treatment of FGD wastewater to meet recent EPA discharge limits. The treatment approach is similar for other highly scaling industrial wastewaters.
Venturi-rod scrubbers

A venturi scrubber is a converging/diverging section of duct. The converging section accelerates the gas stream to high velocity. When the liquid stream is injected at the throat, which is the point of maximum velocity, the turbulence caused by the high gas velocity atomizes the liquid into small droplets, which creates the surface area necessary for mass transfer to take place. The higher the pressure drop in the venturi, the smaller the droplets and the higher the surface area. The penalty is in power consumption.
For simultaneous removal of SO2 and fly ash, venturi scrubbers can be used. In fact, many of the industrial sodium-based throwaway systems are venturi scrubbers originally designed to remove particulate matter. These units were slightly modified to inject a sodium-based scrubbing liquor. Although removal of both particles and SO2 in one vessel can be economic, the problems of high pressure drops and finding a scrubbing medium to remove heavy loadings of fly ash must be considered. However, in cases where the particle concentration is low, such as from oil-fired units, it can be more effective to remove particulate and SO2 simultaneously.
Packed bed scrubbers

A packed scrubber consists of a tower with packing material inside. This packing material can be in the shape of saddles, rings, or some highly specialized shapes designed to maximize the contact area between the dirty gas and liquid. Packed towers typically operate at much lower pressure drops than venturi scrubbers and are therefore cheaper to operate. They also typically offer higher SO2 removal efficiency. The drawback is that they have a greater tendency to plug up if particles are present in excess in the exhaust air stream.
Spray towers

A spray tower is the simplest type of scrubber. It consists of a tower with spray nozzles, which generate the droplets for surface contact. Spray towers are typically used when circulating a slurry (see below). The high speed of a venturi would cause erosion problems, while a packed tower would plug up if it tried to circulate a slurry.
Counter-current packed towers are infrequently used because they have a tendency to become plugged by collected particles or to scale when lime or limestone scrubbing slurries are used.
Scrubbing reagent

As explained above, alkaline sorbents are used for scrubbing flue gases to remove SO2. Depending on the application, the two most important are lime and sodium hydroxide (also known as caustic soda). Lime is typically used on large coal- or oil-fired boilers as found in power plants, as it is very much less expensive than caustic soda. The problem is that it results in a slurry being circulated through the scrubber instead of a solution. This makes it harder on the equipment. A spray tower is typically used for this application. The use of lime results in a slurry of calcium sulfite (CaSO3) that must be disposed of. Fortunately, calcium sulfite can be oxidized to produce by-product gypsum (CaSO4·2H2O) which is marketable for use in the building products industry.
Caustic soda is limited to smaller combustion units because it is more expensive than lime, but it has the advantage that it forms a solution rather than a slurry. This makes it easier to operate. It produces a "spent caustic" solution of sodium sulfite/bisulfite (depending on the pH), or sodium sulfate that must be disposed of. This is not a problem in a kraft pulp mill for example, where this can be a source of makeup chemicals to the recovery cycle.
Scrubbing with sodium sulfite solution

It is possible to scrub sulfur dioxide by using a cold solution of sodium sulfite; this forms a sodium hydrogen sulfite solution. By heating this solution it is possible to reverse the reaction to form sulfur dioxide and the sodium sulfite solution. Since the sodium sulfite solution is not consumed, it is called a regenerative treatment. The application of this reaction is also known as the Wellman–Lord process.
In some ways this can be thought of as being similar to the reversible liquid–liquid extraction of an inert gas such as xenon or radon (or some other solute which does not undergo a chemical change during the extraction) from water to another phase. While a chemical change does occur during the extraction of the sulfur dioxide from the gas mixture, it is the case that the extraction equilibrium is shifted by changing the temperature rather than by the use of a chemical reagent.
Gas-phase oxidation followed by reaction with ammonia

A new, emerging flue gas desulfurization technology has been described by the IAEA. It is a radiation technology in which an intense beam of electrons is fired into the flue gas at the same time as ammonia is added to the gas. The Chengdu power plant in China started up such a flue gas desulfurization unit on a 100 MW scale in 1998. The Pomorzany power plant in Poland also started up a similarly sized unit in 2003, and that plant removes both sulfur and nitrogen oxides. Both plants are reported to be operating successfully. However, the accelerator design principles and manufacturing quality need further improvement for continuous operation in industrial conditions.

No radioactivity is required or created in the process. The electron beam is generated by a device similar to the electron gun in a TV set. This device is called an accelerator. This is an example of a radiation chemistry process where the physical effects of radiation are used to process a substance.
The action of the electron beam is to promote the oxidation of sulfur dioxide to sulfur(VI) compounds. The ammonia reacts with the sulfur compounds thus formed to produce ammonium sulfate, which can be used as a nitrogenous fertilizer. In addition, it can be used to lower the nitrogen oxide content of the flue gas. This method has attained industrial plant scale.
Facts and statistics
The information in this section was obtained from a US EPA published fact sheet. Flue gas desulfurization scrubbers have been applied to combustion units firing coal and oil that range in size from 5 MW to 1,500 MW. Scottish Power are spending £400 million installing FGD at Longannet power station, which has a capacity of over 2,000 MW. Dry scrubbers and spray scrubbers have generally been applied to units smaller than 300 MW.
FGD has been fitted by RWE npower at Aberthaw Power Station in south Wales using the seawater process and works successfully on the 1,580 MW plant. Approximately 85% of the flue gas desulfurization units installed in the US are wet scrubbers, 12% are spray dry systems, and 3% are dry injection systems. The highest SO2 removal efficiencies (greater than 90%) are achieved by wet scrubbers and the lowest (less than 80%) by dry scrubbers. However, the newer designs for dry scrubbers are capable of achieving efficiencies in the order of 90%. In spray drying and dry injection systems, the flue gas must first be cooled to about 10–20 °C above adiabatic saturation to avoid wet solids deposition on downstream equipment and plugging of baghouses.
The capital, operating and maintenance costs per short ton of SO2 removed (in 2001 US dollars) are:

For wet scrubbers larger than 400 MW: $200 to $500 per ton
For wet scrubbers smaller than 400 MW: $500 to $5,000 per ton
For spray dry scrubbers larger than 200 MW: $150 to $300 per ton
For spray dry scrubbers smaller than 200 MW: $500 to $4,000 per ton
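As a rough illustration of how these per-ton figures translate into an annual cost, the sketch below combines them with a basic sulfur mass balance. The plant size, coal tonnage, sulfur content, removal efficiency, and unit cost used here are hypothetical example inputs, not values from the source.

```python
# Illustrative annual FGD cost estimate (hypothetical inputs).
# Burning sulfur yields SO2 at roughly twice the sulfur mass
# (molar masses: S = 32.07, SO2 = 64.07).

def annual_fgd_cost(coal_tons_per_year: float,
                    sulfur_fraction: float,
                    removal_efficiency: float,
                    cost_per_ton_so2: float) -> float:
    so2_produced = coal_tons_per_year * sulfur_fraction * (64.07 / 32.07)
    so2_removed = so2_produced * removal_efficiency
    return so2_removed * cost_per_ton_so2

if __name__ == "__main__":
    # Hypothetical unit burning 1.5 million short tons of 2%-sulfur coal per
    # year, with a wet scrubber at 95% removal and $300 per short ton removed.
    cost = annual_fgd_cost(1_500_000, 0.02, 0.95, 300.0)
    print(f"Estimated annual cost: ${cost:,.0f}")  # roughly $17 million
```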
Alternative methods of reducing sulfur dioxide emissions
An alternative to removing sulfur from the flue gases after burning is to remove the sulfur from the fuel before or during combustion. Hydrodesulfurization of fuel has been used for treating fuel oils before use. Fluidized bed combustion adds lime to the fuel during combustion. The lime reacts with the SO2 to form sulfates which become part of the ash.
In a related approach developed by Paqell, a joint venture between Shell Global Solutions and Paques, sulfur compounds are converted to elemental sulfur, which is then separated and finally recovered at the end of the process for further use in, for example, agricultural products. Safety is one of the greatest benefits of this method, as the whole process takes place at atmospheric pressure and ambient temperature.
Breast cancer management
Breast cancer management takes different approaches depending on physical and biological characteristics of the disease, as well as the age, overall health and personal preferences of the patient. Treatment types can be classified into local therapy (surgery and radiotherapy) and systemic treatment (chemo-, endocrine, and targeted therapies). Local therapy is most efficacious in early-stage breast cancer, while systemic therapy is generally justified in advanced and metastatic disease, or in diseases with specific phenotypes.
Historically, breast cancer was treated with radical surgery alone. Advances in the understanding of the natural course of breast cancer, as well as the development of systemic therapies, allowed for the use of breast-conserving surgeries. Because non-surgical management is described from the viewpoint of the definitive surgery, two adjectives are connected with treatment timelines: adjuvant (after surgery) and neoadjuvant (before surgery).
The mainstay of breast cancer management is surgery for the local and regional tumor, followed (or preceded) by a combination of chemotherapy, radiotherapy, endocrine (hormone) therapy, and targeted therapy. Research is ongoing for the use of immunotherapy in breast cancer management. Management of breast cancer is undertaken by a multidisciplinary team, including medical, radiation, and surgical oncologists, and is guided by national and international guidelines. Factors such as the treatment chosen, the oncologist, the hospital, and the stage of the breast cancer determine the cost of treatment.
Staging
Staging breast cancer is the initial step to help physicians determine the most appropriate course of treatment. As of 2016, guidelines incorporated biologic factors, such as tumor grade, cellular proliferation rate, estrogen and progesterone receptor expression, human epidermal growth factor 2 (HER2) expression, and gene expression profiling into the staging system. Cancer that has spread beyond the breast and the lymph nodes is classified as Stage IV, or metastatic cancer, and requires mostly systemic treatment.
The TNM staging system of a cancer is a measurement of the physical extent of the tumor and its spread, where:

T stands for the main (primary) tumor (range of T0–T4)
N stands for spread to nearby lymph nodes (range of N0–N3)
M stands for metastasis (spread to distant parts of the body; either M0 or M1)

If the stage is based on removal of the cancer with surgery and review by the pathologist, the letter p (for pathologic) or yp (pathologic after neoadjuvant therapy) may appear before the T and N letters. If the stage is based on clinical assessment using physical exam and imaging, the letter c (for clinical) may appear. The TNM information is then combined to give the cancer an overall stage. Stages are expressed in Roman numerals from stage I (the least advanced stage) to stage IV (the most advanced stage). Non-invasive cancer (carcinoma in situ) is listed as stage 0. TNM staging, in combination with histopathology, grade and genomic profiling, is used for the purpose of prognosis, and to determine whether additional treatment is warranted.
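To make the notation concrete, here is a minimal sketch that splits a TNM string such as "pT2N1M0" into the components described above. The function name and the use of a regular expression are illustrative choices, not part of any clinical standard or the source article, and real TNM codes include additional categories (such as Tis or NX) that this sketch ignores.

```python
import re

# Illustrative parser for simplified TNM strings like "cT1N0M0" or "ypT2N1M1".
# Covers only the prefixes (c, p, yp) and ranges (T0-T4, N0-N3, M0/M1)
# mentioned in the text; real-world codes have more categories.
TNM_PATTERN = re.compile(r"^(?P<prefix>yp|p|c)?T(?P<T>[0-4])N(?P<N>[0-3])M(?P<M>[01])$")

def parse_tnm(code: str) -> dict:
    match = TNM_PATTERN.match(code)
    if match is None:
        raise ValueError(f"Unrecognized TNM code: {code!r}")
    prefix = {"c": "clinical", "p": "pathologic",
              "yp": "pathologic after neoadjuvant therapy", None: "unspecified"}
    parts = match.groupdict()
    return {
        "assessment": prefix[parts["prefix"]],
        "primary_tumor": f"T{parts['T']}",
        "lymph_nodes": f"N{parts['N']}",
        "metastasis": "present (M1)" if parts["M"] == "1" else "absent (M0)",
    }

if __name__ == "__main__":
    print(parse_tnm("pT2N1M0"))
    print(parse_tnm("ypT1N0M0"))
```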
Classification
Breast cancer is classified into three major subtypes for the purpose of predicting response to treatment. These are determined by the presence or absence of receptors on the cells of the tumor. The three major subgroups are:

Luminal-type, which are tumors positive for hormone receptors (estrogen or progesterone receptor). This subtype suggests a response to endocrine therapy.
HER2-type, which are positive for over-expression of the HER2 receptor. ER and PR can be positive or negative. This subtype receives targeted therapy.
Basal-type, or Triple Negative (TN), which are negative for all three major receptor types.

Additional classification schema are used for prognosis and include histopathology, grade, stage, and genomic profiling.
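The receptor-based grouping above can be read as a simple decision rule. The sketch below is one simplified reading (HER2 over-expression takes precedence, then hormone-receptor status); it is an illustration only, not a clinical algorithm, and the function and parameter names are invented for the example.

```python
# Simplified illustration of the receptor-based subtype grouping described
# above. Not a clinical decision tool.

def breast_cancer_subtype(er_positive: bool, pr_positive: bool,
                          her2_overexpressed: bool) -> str:
    if her2_overexpressed:
        # HER2-type: ER and PR may be either positive or negative.
        return "HER2-type (candidate for HER2-targeted therapy)"
    if er_positive or pr_positive:
        # Luminal-type: hormone-receptor positive.
        return "Luminal-type (candidate for endocrine therapy)"
    # Negative for ER, PR, and HER2.
    return "Basal-type / Triple Negative"

if __name__ == "__main__":
    print(breast_cancer_subtype(er_positive=True, pr_positive=True, her2_overexpressed=False))
    print(breast_cancer_subtype(er_positive=False, pr_positive=False, her2_overexpressed=False))
```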
Surgery
Surgery is the primary management for breast cancer. Depending on staging and biologic characteristics of the tumor, surgery can be a lumpectomy (removal of the lump only), a mastectomy, or a modified radical mastectomy. Lymph nodes are often included in the scope of breast tumor removal. Surgery can be performed before or after receiving systemic therapy. Women who test positive for faulty BRCA1 or BRCA2 genes can choose to have risk-reducing surgery before the cancer appears.

Lumpectomy techniques are increasingly utilized for breast-conserving cancer surgery. Studies indicate that for patients with a single tumor smaller than 4 cm, a lumpectomy with negative surgical margins may be as effective as a mastectomy. Prior to a lumpectomy, needle localization of the lesion with placement of a guidewire may be performed, sometimes by an interventional radiologist if the area being removed was detected by mammography or ultrasound, and sometimes by the surgeon if the lesion can be directly palpated.
However, mastectomy may be the preferred treatment in certain instances:

Two or more tumors exist in different areas of the breast (a "multifocal" cancer)
The breast has previously received radiotherapy
The tumor is large relative to the size of the breast
The patient has had scleroderma or another disease of the connective tissue, which can complicate radiotherapy
The patient lives in an area where radiotherapy is inaccessible
The patient wishes to avoid systemic therapy
The patient is apprehensive about the risk of local recurrence after lumpectomy

Specific types of mastectomy can also include: skin-sparing, nipple-sparing, subcutaneous, and prophylactic.
Standard practice requires the surgeon to establish that the tissue removed in the operation has margins clear of cancer, indicating that the cancer has been completely excised. Additional surgery may be necessary if the removed tissue does not have clear margins, sometimes requiring removal of part of the pectoralis major muscle, which is the main muscle of the anterior chest wall.
During the operation, the lymph nodes in the axilla are also considered for removal. In the past, large axillary operations took out 10 to 40 nodes to establish whether cancer had spread. This had the unfortunate side effect of frequently causing lymphedema of the arm on the same side, as the removal of this many lymph nodes affected lymphatic drainage. More recently, the technique of sentinel lymph node (SLN) dissection has become popular, as it requires the removal of far fewer lymph nodes, resulting in fewer side effects while achieving the same 10-year survival as its predecessor. The sentinel lymph node is the first node that drains the tumor, and subsequent SLN mapping can save 65–70% of patients with breast cancer from having a complete lymph node dissection for what could turn out to be a negative nodal basin. Advances in SLN mapping over the past decade have increased the accuracy of detecting the sentinel lymph node from 80% using blue dye alone to between 92% and 98% using combined modalities. SLN biopsy is indicated for patients with T1 and T2 lesions (<5 cm) and carries a number of recommendations for use on patient subgroups. Recent trends continue to favor less radical axillary node resection even in the presence of some metastases in the sentinel node.

A meta-analysis has found that in people with operable primary breast cancer, compared to being treated with axillary lymph node dissection, being treated with lesser axillary surgery (such as axillary sampling or sentinel lymph node biopsy) does not lessen the chance of survival. Overall survival is slightly reduced by receiving radiotherapy alone when compared to axillary lymph node dissection. In the management of primary breast cancer, having no axillary lymph nodes removed is linked to an increased risk of regrowth of cancer. Treatment with axillary lymph node dissection has been found to give an increased risk of lymphoedema, pain, reduced arm movement and numbness when compared to those treated with sentinel lymph node dissection or no axillary surgery.
Ovary removal

Prophylactic oophorectomy may be prudent in women who are at a high risk for recurrence or are seeking an alternative to endocrine therapy, as it removes the primary source of estrogen production in pre-menopausal women. Women who are carriers of a BRCA mutation have an increased risk of both breast and ovarian cancers and may choose to have their ovaries removed prophylactically as well.
Breast reconstruction

Breast reconstruction surgery is the rebuilding of the breast after breast cancer surgery, and is included in holistic approaches to cancer management to address identity and emotional aspects of the disease. Reconstruction can take place at the same time as cancer-removing surgery, or months to years later. Some women decide not to have reconstruction or opt for a prosthesis instead.
Investigational surgical management

Cryoablation is an experimental therapy available for women with small or early-stage breast cancer. The treatment freezes, then defrosts tumors using small needles so that only the harmful tissue is damaged and ultimately dies. This technique may provide an alternative to more invasive surgeries, potentially limiting side effects.
Radiation therapy
Radiation therapy is an adjuvant treatment for most women who have undergone lumpectomy and for some women who have mastectomy surgery. In these cases the purpose of radiation is to reduce the chance that the cancer will recur locally (within the breast or axilla). Radiation therapy involves using high-energy X-rays or gamma rays that target a tumor or post surgery tumor site. This radiation is very effective in killing cancer cells that may remain after surgery or recur where the tumor was removed.
Radiation therapy can be delivered by external beam radiotherapy, brachytherapy (internal radiotherapy), or by intra-operative radiotherapy (IORT). In the case of external beam radiotherapy, X-rays are delivered from outside the body by a machine called a Linear Accelerator or Linac. In contrast, brachytherapy involves the precise placement of radiation source(s) directly at the treatment site. IORT includes a one-time dose of radiation administered with breast surgery. Radiation therapy is important in the use of breast-conserving therapy because it reduces the risk of local recurrence.
Radiation therapy eliminates the microscopic cancer cells that may remain near the area where the tumor was surgically removed. The dose of radiation must be strong enough to ensure the elimination of cancer cells. However, radiation affects normal cells and cancer cells alike, causing some damage to the normal tissue around where the tumor was. Healthy tissue can repair itself, while cancer cells do not repair themselves as well as normal cells. For this reason, radiation treatments are given over an extended period, enabling the healthy tissue to heal. Treatments using external beam radiotherapy are typically given over a period of five to seven weeks, performed five days a week. Recent large trials (UK START and Canadian) have confirmed that shorter treatment courses, typically over three to four weeks, result in equivalent cancer control and side effects as the more protracted treatment schedules. Each treatment takes about 15 minutes. A newer approach, called 'accelerated partial breast irradiation' (APBI), uses brachytherapy to deliver the radiation in a much shorter period of time. APBI delivers radiation to only the immediate region surrounding the original tumor and can typically be completed over the course of one week.
Indications for radiation

Radiation treatment is mainly effective in reducing the risk of local relapse in the affected breast. Therefore, it is recommended in most cases of breast-conserving surgery and less frequently after mastectomy. Indications for radiation treatment are constantly evolving. Patients treated in Europe have in the past been more likely to be recommended adjuvant radiation after breast cancer surgery than patients in North America. Radiation therapy is usually recommended for all patients who have had lumpectomy or quadrantectomy. Radiation therapy is usually not indicated in patients with advanced (stage IV) disease, except for palliation of symptoms like bone pain or fungating lesions.
In general, recommendations would include radiation:

As part of breast-conserving therapy.
After mastectomy for patients with higher risk of recurrence because of conditions such as a large primary tumor or substantial involvement of the lymph nodes.

Other factors which may influence adding adjuvant radiation therapy:

Tumor close to or involving the margins on the pathology specimen
Multiple areas of tumor (multicentric disease)
Microscopic invasion of lymphatic or vascular tissues
Microscopic invasion of the skin, nipple/areola, or underlying pectoralis major muscle
Extension of tumor out of the substance of a lymph node
Inadequate numbers of axillary lymph nodes sampled

Types of radiotherapy

Radiotherapy can be delivered in many ways but is most commonly produced by a linear accelerator.
This usually involves treating the whole breast in the case of breast lumpectomy or the whole chest wall in the case of mastectomy. Lumpectomy patients with early-stage breast cancer may be eligible for a newer, shorter form of treatment called "breast brachytherapy". This approach allows physicians to treat only part of the breast in order to spare healthy tissue from unnecessary radiation.
Improvements in computers and treatment delivery technology have led to more complex radiotherapy treatment options. One such technology is IMRT (intensity-modulated radiation therapy), which can change the shape and intensity of the radiation beam, making "beamlets" at different points across and inside the breast. This allows for better dose distribution within the breast while minimizing dose to healthy organs such as the lung or heart. However, there is yet to be a demonstrated difference in treatment outcomes (both tumor recurrence and level of side effects) for IMRT in breast cancer when compared to conventional radiotherapy treatment. In addition, conventional radiotherapy can also deliver similar dose distributions utilizing modern computer dosimetry planning and equipment. External beam radiation therapy treatments for breast cancer are typically given every day, five days a week, for five to 10 weeks.

Within the past decade, a new approach called accelerated partial breast irradiation (APBI) has gained popularity. APBI is used to deliver radiation as part of breast conservation therapy. It treats only the area where the tumor was surgically removed, plus adjacent tissue. APBI reduces the length of treatment to just five days, compared to the typical six or seven weeks for whole breast irradiation.
APBI treatments can be given as brachytherapy or external beam with a linear accelerator. These treatments are usually limited to women with well-defined tumors that have not spread. A meta-analysis of randomised trials of partial breast irradiation (PBI) vs. whole breast irradiation (WBI) as part of breast conserving therapy demonstrated a reduction in non-breast-cancer and overall mortality.

In breast brachytherapy, the radiation source is placed inside the breast, treating the cavity from the inside out. There are several different devices that deliver breast brachytherapy. Some use a single catheter and balloon to deliver the radiation. Other devices utilize multiple catheters to deliver radiation.
A study is currently underway by the National Surgical Breast and Bowel Project (NSABP) to determine whether limiting radiation therapy to only the tumor site following lumpectomy is as effective as radiating the whole breast. New technology has also allowed more precise delivery of radiotherapy in a portable fashion – for example in the operating theatre. Targeted intraoperative radiotherapy (TARGIT) is a method of delivering therapeutic radiation from within the breast using a portable X-ray generator called Intrabeam.
The TARGIT-A trial was an international randomised controlled non-inferiority phase III clinical trial led from University College London. 28 centres in 9 countries accrued 2,232 patients to test whether TARGIT can replace the whole course of radiotherapy in selected patients. The TARGIT-A trial results found that the difference between the two treatments was 0.25% (95% CI −1.0 to 1.5), i.e., at most 1.5% worse or at best 1.0% better with single-dose TARGIT than with the standard course of several weeks of external beam radiotherapy. In the TARGIT-B trial, as the TARGIT technique is precisely aimed and given immediately after surgery, in theory it could provide a better boost dose to the tumor bed, as suggested in phase II studies.
Systemic therapy
Systemic therapy uses medications to treat cancer cells throughout the body. Any combination of systemic treatments may be used to treat breast cancer. Standard of care systemic treatments include chemotherapy, endocrine therapy and targeted therapy.

Chemotherapy

Chemotherapy (drug treatment for cancer) may be used before surgery, after surgery, or instead of surgery for those cases in which surgery is considered unsuitable. Chemotherapy is justified for cancers whose prognosis after surgery is poor without additional intervention.
Hormonal therapy

Patients with estrogen receptor-positive tumors are candidates for receiving endocrine therapy to slow the progression of breast tumors or to reduce the chance of relapse. Endocrine therapy is usually administered after surgery, chemotherapy, and radiotherapy have been given, but can also occur in the neoadjuvant or non-surgical setting. Hormonal treatments include antiestrogen therapy and, to a lesser extent and more commonly in the past, estrogen therapy and androgen therapy.
Antiestrogen therapy

Antiestrogen therapy is used in the treatment of breast cancer in women with estrogen receptor-positive breast tumors. Antiestrogen therapy includes medications like the following:

Selective estrogen receptor modulators (SERMs) like tamoxifen and toremifene
Estrogen receptor antagonists and selective estrogen receptor degraders (SERDs) like fulvestrant and elacestrant
Aromatase inhibitors like anastrozole and letrozole
Gonadotropin-releasing hormone modulators (GnRH modulators) like leuprorelin

Estrogen receptor-positive breast tumors are stimulated by estrogens and estrogen receptor activation, and thus are dependent on these processes for growth. SERMs, estrogen receptor antagonists, and SERDs reduce estrogen receptor signaling and thereby slow breast cancer progression. Aromatase inhibitors work by inhibiting the enzyme aromatase and thereby inhibiting the production of estrogens. GnRH modulators work by suppressing the hypothalamic–pituitary–gonadal axis (HPG axis) and thereby suppressing gonadal estrogen production. GnRH modulators are only useful in premenopausal women and in men, as postmenopausal women no longer have significant gonadal estrogen production. Conversely, SERMs, estrogen receptor antagonists, and aromatase inhibitors are effective in postmenopausal women as well.
Estrogen therapy

Estrogen therapy for treatment of breast cancer was first reported to be effective in the early 1940s and was the first hormonal therapy to be used for breast cancer. Estrogen therapy for breast cancer has been described as paradoxical and has been referred to as the "estrogen paradox", as estrogens stimulate breast cancer and antiestrogen therapy is effective in the treatment of breast cancer. However, in high doses, as in high-dose estrogen therapy, a biphasic effect occurs in which breast cancer cells are induced to undergo apoptosis (programmed cell death) and breast cancer progression is slowed. High-dose estrogen therapy is similarly effective to antiestrogen therapy in the treatment of breast cancer. However, antiestrogen therapy showed fewer side effects and less toxicity than high-dose estrogen therapy, and thus almost completely replaced high-dose estrogen therapy in the endocrine management of breast cancer following its introduction in the 1970s. In any case, estrogen therapy for breast cancer continues to be researched and explored in modern times.

High-dose estrogen therapy is only effective for breast cancer in postmenopausal women who are at least 5 years into the postmenopause. This relates to the menopausal gap hypothesis, in which the effects of estrogens change depending on the presence of prolonged estrogen deprivation. Although an "estrogen gap" is necessary for high-dose estrogen therapy, for instance with 15 mg/day diethylstilbestrol, to be effective for breast cancer, much higher doses of estrogens can also be effective without prior estrogen deprivation; small studies have found that massive doses of estrogens, such as 400 to 1,000 mg diethylstilbestrol, are effective in the treatment of breast cancer in premenopausal women. The sensitivity of breast cancer cells to estrogens appears to shift by several orders of magnitude with extended estrogen deprivation, which sensitizes breast cancer cells to the apoptotic effects of estrogen therapy. In women with strong estrogen deprivation due to extended antiestrogen therapy, for instance with aromatase inhibitors, even low doses of estrogens, such as 2 mg/day estradiol valerate, can become effective. The preceding processes may also underlie the near-significantly decreased breast cancer risk seen with 0.625 mg/day conjugated estrogens in long-postmenopausal women in the Women's Health Initiative (WHI) estrogen-only randomized controlled trial.

Estrogen cycling, in which treatment is cycled between estrogen therapy and antiestrogen therapy, was reported at the 31st annual San Antonio Breast Cancer Symposium in 2013. In about a third of the 66 participants (women with metastatic breast cancer that had developed resistance to standard estrogen-lowering therapy), a daily dose of estrogen could stop the growth of their tumors or even cause them to shrink. If study participants experienced disease progression on estrogen, they could go back to an aromatase inhibitor that they were previously resistant to and see a benefit: their tumors were once again inhibited by estrogen deprivation. That effect sometimes wore off after several months, but then the tumors might again be sensitive to estrogen therapy. In fact, some patients have cycled back and forth between estrogen and an aromatase inhibitor for several years. PET scans before starting estrogen and again 24 hours later predicted which tumors would respond to estrogen therapy: the responsive tumors showed an increased glucose uptake, called a PET flare.
The mechanism of action is uncertain, although estrogen reduces the amount of a tumor-promoting hormone called insulin-like growth factor-1 (IGF1).
Androgen therapy

Androgens and anabolic steroids such as testosterone, fluoxymesterone, drostanolone propionate, epitiostanol, and mepitiostane have historically been used to treat breast cancer because of their antiestrogenic effects in the breasts. However, they are now rarely if ever used due to their virilizing side effects, such as voice deepening, hirsutism, masculine muscle and fat changes, increased libido, and others, as well as the availability of better-tolerated agents.
Targeted therapy

In patients whose cancer expresses an over-abundance of the HER2 protein, a monoclonal antibody known as trastuzumab (Herceptin) is used to block the activity of the HER2 protein in breast cancer cells, slowing their growth. In the advanced cancer setting, trastuzumab use in combination with chemotherapy can both delay cancer growth as well as improve the recipient's survival. Pertuzumab may work synergistically with trastuzumab on the expanded EGFR family of receptors, although it is currently only standard of care for metastatic disease. Neratinib has been approved by the FDA for extended adjuvant treatment of early-stage HER2-positive breast cancer.

PARP inhibitors are used in the metastatic setting, and are being investigated for use in the non-metastatic setting through clinical trials. Approved antibody-drug conjugates include trastuzumab emtansine (2013), trastuzumab deruxtecan (2019), and sacituzumab govitecan (2020).
Managing side effects
Drugs and radiotherapy given for cancer can cause unpleasant side effects such as nausea and vomiting, mouth sores, dermatitis, and menopausal symptoms. Around a third of patients with cancer use complementary therapies, including homeopathic medicines, to try to reduce these side effects.
Insomnia

It was believed that there would be a bi-directional relationship between insomnia and pain, but instead it was found that trouble sleeping was more likely a cause, rather than a consequence, of pain in patients with cancer. Early intervention to manage sleep can therefore help relieve these side effects overall. Approximately 40 percent of menopausal women experience sleep disruption, often in the form of difficulty with sleep initiation and frequent nighttime awakenings. One study was the first to show sustained benefits in sleep quality from gabapentin, which Rochester researchers had already demonstrated alleviates hot flashes.
Hot flushes

Lifestyle adjustments are usually suggested first to manage hot flushes (or flashes) due to endocrine therapy. This can include avoiding triggers such as alcohol, caffeine and smoking. If hot flushes continue, and depending on their frequency and severity, several drugs can be effective in some patients, in particular SNRIs such as venlafaxine, as well as oxybutynin and others. Complementary medicines that contain phytoestrogens are not recommended for breast cancer patients as they may stimulate oestrogen receptor-positive tumours.
Lymphedema

Some patients develop lymphedema as a result of axillary node dissection or of radiation treatment to the lymph nodes. Although traditional recommendations limited exercise, a new study shows that participating in a safe, structured weight-lifting routine can help women with lymphedema take control of their symptoms and reap the many rewards that resistance training has on their overall health as they begin life as a cancer survivor. It recommends that women start with a slowly progressive program, supervised by a certified fitness professional, in order to learn how to do these types of exercises properly. Women with lymphedema should also wear a well-fitting compression garment during all exercise sessions.
Upper-limb dysfunction

Upper-limb dysfunction is a common side effect of breast cancer treatment. Shoulder range of motion can be impaired after surgery. Exercise can meaningfully improve shoulder range of motion in women with breast cancer. An exercise programme can be started early after surgery, if it does not negatively affect wound drainage.
Side effects of radiation therapy

External beam radiation therapy is a non-invasive treatment with some short-term and some longer-term side effects. Patients undergoing several weeks of treatment usually experience fatigue caused by the healthy tissue repairing itself, and beyond this there may be no other side effects at all. However, many breast cancer patients develop a suntan-like change in skin color in the exact area being treated. As with a suntan, this darkening of the skin usually returns to normal in the one to two months after treatment. In some cases permanent changes in the color and texture of the skin are experienced. Other side effects sometimes experienced with radiation can include:

Muscle stiffness
Mild swelling
Tenderness in the area
Lymphedema

After surgery, radiation and other treatments have been completed, many patients notice the affected breast seems smaller or seems to have shrunk. This is basically due to the removal of tissue during the lumpectomy operation.
The use of adjuvant radiation has significant potential effects if the patient has to later undergo breast reconstruction surgery. Fibrosis of chest wall skin from radiation negatively affects skin elasticity and makes tissue expansion techniques difficult. Traditionally most patients are advised to defer immediate breast reconstruction when adjuvant radiation is planned and are most often recommended surgery involving autologous tissue reconstruction rather than breast implants.
Studies suggest APBI may reduce the side effects associated with radiation therapy, because it treats only the tumor cavity and the surrounding tissue. In particular, a device that uses multiple catheters and allows modulation of the radiation dose delivered by each of these catheters has been shown to reduce harm to nearby, healthy tissue.
Adobe PageMaker
Adobe PageMaker (formerly Aldus PageMaker) is a discontinued desktop publishing computer program introduced in 1985 by the Aldus Corporation on the Apple Macintosh. The combination of the Macintosh's graphical user interface, PageMaker publishing software, and the Apple LaserWriter laser printer marked the beginning of the desktop publishing revolution. Ported to PCs running Windows 1.0 in 1987, PageMaker helped to popularize both the Macintosh platform and the Windows environment.A key component that led to PageMaker's success was its native support for Adobe Systems' PostScript page description language. After Adobe purchased the majority of Aldus's assets (including FreeHand, PressWise, PageMaker, etc.) in 1994 and subsequently phased out the Aldus name, version 6 was released. The program remained a major force in the high-end DTP market through the early 1990s, but new features were slow in coming. By the mid-1990s, it faced increasing competition from QuarkXPress on the Mac, and to a lesser degree, Ventura on the PC, and by the end of the decade it was no longer a major force. Quark proposed buying the product and canceling it, but instead, in 1999 Adobe released their "Quark Killer", Adobe InDesign. The last major release of PageMaker came in 2001, and customers were offered InDesign licenses at a lower cost.
Release history
Aldus PageMaker 1.0 was released in July 1985 for the Macintosh and in December 1986 for the IBM PC.
Aldus PageMaker 1.2 for Macintosh was released in 1986 and added support for PostScript fonts built into the LaserWriter Plus or downloaded to the memory of other output devices. PageMaker was awarded a Codie award for Best New Use of a Computer in 1986. In October 1986, a version of PageMaker was made available for Hewlett-Packard's HP Vectra computers. In 1987, PageMaker was available on Digital Equipment's VAXstation computers.
Aldus PageMaker 2.0 was released in 1987. Until May 1987, the initial Windows release was bundled with a full version of Windows 1.0.3; after that date, a "Windows-runtime" without task-switching capabilities was included. Thus, users who did not have Windows could run the application from MS-DOS.
Aldus PageMaker 3.0 for Macintosh was shipped in April 1988. PageMaker 3.0 for the PC was shipped in May 1988 and required Windows 2.0, which was bundled as a run-time version. Version 3.01 was available for OS/2 and took extensive advantage of multithreading for improved user responsiveness.
Aldus PageMaker 4.0 for Macintosh was released in 1990 and offered new word-processing capabilities, expanded typographic controls, and enhanced features for handling long documents. A version for the PC was available by 1991.
Aldus PageMaker 5.0 was released in January 1993.
Adobe PageMaker 6.0 was released in 1995, a year after Adobe Systems acquired Aldus Corporation.
Adobe PageMaker 6.5 was released in 1996.
Support for versions 4.0, 5.0, 6.0, and 6.5 is no longer offered through the official Adobe support system. Due to Aldus' use of closed, proprietary data formats, this poses substantial problems for users who have works authored in these legacy versions.
Adobe PageMaker 7.0 was the final version made available. It was released 9 July 2001, though updates have been released for the two supported platforms since. The Macintosh version runs only in Mac OS 9 or earlier; there is no native support for Mac OS X, and it does not run on Intel-based Macs without SheepShaver. It does not run well under Classic, and Adobe recommends that customers use an older Macintosh capable of booting into Mac OS 9. The Windows version supports Windows XP, but according to Adobe, "PageMaker 7.x does not install or run on Windows Vista."
End of development
Development of PageMaker had flagged in the later years at Aldus and, by 1998, PageMaker had lost almost the entire professional market to the comparatively feature-rich QuarkXPress 3.3, released in 1992, and 4.0, released in 1996. Quark stated its intention to buy out Adobe and to divest the combined company of PageMaker to avoid anti-trust issues. Adobe rebuffed the offer and instead continued to work on a new page layout application code-named "Shuksan" (later "K2"), originally started by Aldus, openly planned and positioned as a "Quark killer". This was released as Adobe InDesign 1.0 in 1999.The last major release of PageMaker was 7.0 in 2001, after which the product was seen as "languishing on life support". Adobe ceased all development of PageMaker in 2004 and "strongly encouraged" users to migrate to InDesign, initially through special "InDesign PageMaker Edition" and "PageMaker Plug-in" versions, which added PageMaker's data merge, bullet, and numbering features to InDesign, and provided PageMaker-oriented help topics, complimentary Myriad Pro fonts, and templates. From 2005, these features were bundled into InDesign CS2, which was offered at half-price to existing PageMaker customers.No new major versions of Adobe PageMaker have been released since, and it does not ship alongside Adobe InDesign.
Reception
BYTE in 1989 listed PageMaker 3.0 as among the "Distinction" winners of the BYTE Awards, stating that it "is the program that showed many of us how to use the Macintosh to its full potential".
File formats
Adobe PageMaker file formats use various filename extensions, including PMD, PM3, PM4, PM5, PM6 and P65. These files should be able to be opened in Collabora Online, LibreOffice or Apache OpenOffice, and can then be saved into the OpenDocument format or other file formats.
Farm water
Farm water, also known as agricultural water, is water committed for use in the production of food and fibre, and in collecting further resources. In the US, some 80% of the fresh water withdrawn from rivers and groundwater is used to produce food and other agricultural products. Farm water may include water used in the irrigation of crops or the watering of livestock.
Its study is called agricultural hydrology.
Water is one of the most fundamental parts of the global economy. In areas without healthy water resources or sanitation services, economic growth cannot be sustained. Without access to clean water, nearly every industry would suffer, most notably agriculture. As water scarcity grows as a global concern, food security is also brought into consideration. A recent example of this could be the drought in California; for every $100 spent on foods from this state, a consumer is projected to pay up to $15 additionally.
Livestock water use
Livestock and meat production have some of the largest water footprints in the agricultural industry, taking nearly 1,800 gallons of water to produce one pound of beef and 576 gallons for one pound of pork. About 108 gallons of water are needed to harvest one pound of corn. Livestock production is also one of the most resource-intensive agricultural outputs. This is largely due to livestock's large feed conversion ratio. Livestock's large water consumption may also be attributed to the amount of time needed to raise an animal to slaughter. By comparison with corn, which grows to maturity in about 100 days, about 995 days are needed to raise cattle. The global "food animal" population is just over 20 billion creatures; with 7+ billion humans, this equates to about 2.85 animals per human.
Cattle

The beef and dairy industries are the most lucrative branches of the U.S. agricultural industry, but they are also the most resource-intensive. To date, beef is the most popular of the meats; the U.S. alone produced 25.8 billion pounds in 2013. In this same year, 201.2 billion pounds of milk were produced. These cattle are mostly raised in concentrated animal feeding operations, or CAFOs. Typically, a mature cow consumes 7 to 24 gallons of water a day; lactating cows require about twice as much water. The amount of water that cattle drink in a day also depends upon the temperature. Cattle have a feed conversion ratio of 6:1: for every six pounds of feed consumed, the animal should gain one pound. Thus, there is also a substantial "indirect" need for water in order to grow the feed for the livestock. Growing the feed grains necessary for raising livestock accounts for 56 percent of U.S. water consumption. Of a 1,000-pound cow, only about 430 pounds make it to the retail markets. This 57 percent loss creates an even greater demand for cattle, since CAFOs must make up for this lost profitable weight by increasing the number of cows that they raise.
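The figures quoted above can be combined into a rough back-of-envelope estimate of direct drinking water per retail pound of beef. This is purely illustrative arithmetic using the numbers in this section (daily intake, days to slaughter, retail yield); it excludes the much larger indirect water use for feed, and the choice of the daily-intake midpoint is an assumption.

```python
# Back-of-envelope estimate of *direct* drinking water per retail pound of
# beef, using the figures quoted in this section. Indirect water for growing
# feed (the bulk of beef's water footprint) is deliberately excluded.

DAYS_TO_SLAUGHTER = 995         # days to raise cattle (from the text)
GALLONS_PER_DAY = (7 + 24) / 2  # midpoint of the 7-24 gal/day range (assumption)
RETAIL_POUNDS_PER_ANIMAL = 430  # retail yield of a 1,000 lb cow (from the text)

drinking_water_total = DAYS_TO_SLAUGHTER * GALLONS_PER_DAY
per_retail_pound = drinking_water_total / RETAIL_POUNDS_PER_ANIMAL

print(f"Total drinking water per animal: ~{drinking_water_total:,.0f} gallons")
print(f"Drinking water per retail pound: ~{per_retail_pound:.0f} gallons")
# ~15,400 gallons per animal, ~36 gallons per retail pound -- a small share
# of the ~1,800 gallons/lb total footprint, most of which is feed water.
```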
Water scarcity is not necessarily a new issue; cattle ranchers in America have been cutting herd sizes since the 1950s in efforts to curb water and manufacturing costs. This shift has led to more efficient feeding and health methods, allowing ranchers to harvest more beef per animal. The rising popularity of these CAFOs is creating a larger demand for water, however. Grass-fed or grazing cows consume about twelve percent more water, through the ingestion of live plants, than cows that are fed dried grains.
Poultry and fowl

Water is one of the most crucial aspects of poultry raising, as, like all animals, birds use it to carry food through their system, assist in digestion, and regulate body temperature. Farmers monitor flock water consumption to measure the overall health of their birds. As birds grow older, they consume more feed and about three times as much water, because they are three times larger. In just three weeks, a 1,000-bird flock's water consumption should increase by about 10 gallons a day. Water consumption is also influenced by temperature. In hot weather, birds pant to keep cool, thus losing much of their water. A study based in Ohio showed that 67% of water sampled near poultry farms contained antibiotics.
Horticulture water use
With modern advancements, crops are being cultivated year round in countries all around the world. As water usage becomes a more pervasive global issue, irrigation practices for crops are being refined and becoming more sustainable. While several irrigation systems are used, these may be grouped into two types: high flow and low flow. These systems must be managed precisely to prevent runoff, overspray, or low-head drainage.
Scarcity of water in agriculture
About 60 years ago, the common perception was that water was an infinite resource. At that time, fewer than half the current number of people were on the planet. Standard of living was not as high, so individuals consumed fewer calories, and ate less meat, so less water was needed to produce their food. They required a third of the volume of water presently taken from rivers. Today, the competition for water resources is much more intense, because nearly eight billion people are now on the planet, and their consumption of meat and vegetables is rising. Competition for water from industry, urbanisation, and biofuel crops is rising congruently. To avoid a global water crisis, farmers will have to make strides to increase productivity to meet growing demands for food, while industry and cities find ways to use water more efficiently.Successful agriculture is dependent upon farmers having sufficient access to water, but water scarcity is already a critical constraint to farming in many parts of the world. Physical water scarcity is where not enough water is available to meet all demands, including that needed for ecosystems to function effectively. Arid regions frequently suffer from physical water scarcity. It also occurs where water seems abundant, but where resources are over-committed. This can happen where hydraulic infrastructure is over-developed, usually for irrigation. Symptoms of physical water scarcity include environmental degradation and declining groundwater. Economic scarcity, meanwhile, is caused by a lack of investment in water or insufficient human capacity to satisfy the demand for water. Symptoms of economic water scarcity include a lack of infrastructure, with people often having to fetch water from rivers for domestic and agricultural uses. Some 2.8 billion people currently live in water-scarce areas. In developed countries, environmental regulations restrict water availability by redirecting water to aid endangered species, such as snail darters.
Sustainable water use
While water use affects environmental degradation and economic growth, it is also sparking innovation regarding new irrigation methods. In 2006, the USDA predicted that if the agricultural sector improved water efficiency by just 10%, farms could save upwards of $200 million per year. Many of the practices that cut water use are cost-effective. Farmers who use straw, compost, or mulch around their crops can reduce evaporation by about 75%, though the inputs are neither inexpensive nor readily available in some areas. This also reduces the number of weeds and saves a farmer from using herbicides. Mulches or ground covers also allow the soils to absorb more water by reducing compaction. The use of white or pale gravel is also practiced, as it reduces evaporation and keeps soil temperatures low by reflecting sunlight.

In addition to reducing water loss at the sink, more sustainable ways to harvest water can also be used. Many modern small (nonindustrial) farmers use rain barrels to collect the water needed for their crops and livestock. On average, rainwater harvesting where rain is frequent cuts the cost of water in half. This also greatly reduces the stress on local aquifers and wells. Because farmers use the roofs of their buildings to gather this water, rainwater runoff and soil erosion on and around their farms are also reduced.
Reduced product
In model theory, a branch of mathematical logic, and in algebra, the reduced product is a construction that generalizes both direct product and ultraproduct.
Reduced product
Reduced product
Let $\{S_i \mid i \in I\}$ be a family of structures of the same signature $\sigma$ indexed by a set $I$, and let $U$ be a filter on $I$. The domain of the reduced product is the quotient of the Cartesian product $\prod_{i \in I} S_i$ by a certain equivalence relation $\sim$: two elements $(a_i)$ and $(b_i)$ of the Cartesian product are equivalent if

$$\{\, i \in I : a_i = b_i \,\} \in U.$$

If $U$ only contains $I$ as an element, the equivalence relation is trivial, and the reduced product is just the original Cartesian product. If $U$ is an ultrafilter, the reduced product is an ultraproduct.
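As a minimal illustration (a standard special case, given here only as an example): take $I = \mathbb{N}$, every $S_i = \mathbb{R}$, and let $U$ be the Fréchet filter of cofinite subsets of $\mathbb{N}$. Then $(a_i) \sim (b_i)$ exactly when $a_i = b_i$ for all but finitely many $i$, so the reduced power $\prod_{i \in \mathbb{N}} \mathbb{R} \,/\, U$ consists of real sequences identified up to eventual equality.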
Reduced product
Reduced product
Operations from $\sigma$ are interpreted on the reduced product by applying the operation pointwise. Relations are interpreted by

$$R\big((a^1_i)/{\sim}, \dots, (a^n_i)/{\sim}\big) \iff \{\, i \in I \mid R^{S_i}(a^1_i, \dots, a^n_i) \,\} \in U.$$

For example, if each structure is a vector space, then the reduced product is a vector space with addition defined as $(a + b)_i = a_i + b_i$ and multiplication by a scalar $c$ as $(ca)_i = c\,a_i$.
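A brief check of why the pointwise interpretation is well defined on equivalence classes (a routine verification, sketched here for a binary operation $+$): if $(a_i) \sim (a'_i)$ and $(b_i) \sim (b'_i)$, then $\{\, i : a_i + b_i = a'_i + b'_i \,\} \supseteq \{\, i : a_i = a'_i \,\} \cap \{\, i : b_i = b'_i \,\}$; since a filter is closed under finite intersections and supersets, this set belongs to $U$, and hence $(a_i + b_i) \sim (a'_i + b'_i)$.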
Journal of General Virology
Journal of General Virology
Journal of General Virology is a not-for-profit peer-reviewed scientific journal published by the Microbiology Society. The journal was established in 1967 and covers research into viruses of animals, insects, plants, fungi and prokaryotes, as well as TSE agents. Antiviral compounds and clinical aspects of virus infection are also covered.

Since 2020, the editor-in-chief has been Paul Duprex (Centre for Vaccine Research, University of Pittsburgh), who took over from Professor Mark Harris (University of Leeds), who had served as editor-in-chief since 2015.
Journal of General Virology
Journal
Article types
Journal of General Virology publishes primary research articles, Reviews, Short Communications, Personal Views, and Editorials. Since 2017 the journal has partnered with the International Committee on Taxonomy of Viruses to publish Open Access ICTV Virus Taxonomy Profiles, which summarise chapters of the ICTV's 10th Report on Virus Taxonomy. All ICTV Virus Taxonomy Profiles are published under a Creative Commons Attribution license (CC-BY).

Metrics
The Microbiology Society journals are a signatory to DORA (the San Francisco Declaration on Research Assessment) and use a range of Article-Level Metrics (ALMs) as well as a range of journal-level metrics to assess quality and impact. An Altmetric score and Dimensions citation data are available for all articles published by the Microbiology Society journals.

Abstracting and indexing
Journal of General Virology is indexed in Biological Abstracts, BIOSIS Previews, CAB Abstracts, Chemical Abstracts Service, Current Awareness in Biological Sciences, Current Contents – Life Sciences, Current Opinion series, EMBASE, MEDLINE/Index Medicus/PubMed, Russian Academy of Science, Science Citation Index, SciSearch, SCOPUS, and Google Scholar.
Journal of General Virology
Open access policy
Journal of General Virology is a hybrid title and allows authors to publish subscription articles free of charge. Authors can also publish Open Access articles under a Creative Commons Attribution license (CC-BY), either by paying an article processing charge (APC) or fee-free as part of a Publish and Read model.
Geometry E
Geometry E
The Geometry E is a battery-powered subcompact crossover produced by Chinese auto manufacturer Geely under the Geometry brand.
Geometry E
Overview
The Geometry E is officially the third all-new model of the Geometry brand, replacing the short-lived Geometry EX3, which was sold only in 2021. It was developed on the same platform as the Geely Vision X3 and its rebadged variant, the Geometry EX3, and comes in three trims: Cute Tiger, Linglong Tiger, and Thunder Tiger. Pricing of the Geometry E starts at $12,947 (86,800 yuan) for the base model, while the Linglong Tiger and Thunder Tiger cost around $14,588 and $15,483 respectively. The Geometry E is offered with either a base 33.5 kWh or a longer-range 39.4 kWh lithium iron phosphate battery, providing a NEDC range of 320 km (199 mi) or 401 km (249 mi) respectively. The electric motor is a TZ160XS601 drive motor produced by GLB Intelligent Power Technologies, producing 60 kW of power and 130 N⋅m of torque and giving the car a top speed of 121 km/h. Charging the Geometry E from 0–80% takes 30 minutes.

The interior of the Geometry E features two 10.25-inch infotainment screens and a central control screen as standard.
Evolution of the eye
Evolution of the eye
Many scientists have found the evolution of the eye attractive to study because the eye is a distinctive example of an analogous organ found in many animal forms. Simple light detection is found in bacteria, single-celled organisms, plants and animals. Complex, image-forming eyes have evolved independently several times.

Diverse eyes are known from the Burgess Shale of the Middle Cambrian, and from the slightly older Emu Bay Shale.
Evolution of the eye
Evolution of the eye
Eyes vary in their visual acuity, the range of wavelengths they can detect, their sensitivity in low light, their ability to detect motion or to resolve objects, and whether they can discriminate colours.
Evolution of the eye
History of research
In 1802, philosopher William Paley called the eye a miracle of "design". In 1859, Charles Darwin himself wrote in his Origin of Species that the evolution of the eye by natural selection seemed at first glance "absurd in the highest possible degree".
Evolution of the eye
History of research
However, he went on to explain that despite the difficulty in imagining it, its evolution was perfectly feasible: ... if numerous gradations from a simple and imperfect eye to one complex and perfect can be shown to exist, each grade being useful to its possessor, as is certainly the case; if further, the eye ever varies and the variations be inherited, as is likewise certainly the case; and if such variations should be useful to any animal under changing conditions of life, then the difficulty of believing that a perfect and complex eye could be formed by natural selection, though insuperable by our imagination, should not be considered as subversive of the theory.
Evolution of the eye
History of research
He suggested a stepwise evolution from "an optic nerve merely coated with pigment, and without any other mechanism" to "a moderately high stage of perfection", and gave examples of existing intermediate steps. Current research is investigating the genetic mechanisms underlying eye development and evolution.

Biologist D.E. Nilsson has independently theorized about four general stages in the evolution of a vertebrate eye from a patch of photoreceptors. Nilsson and S. Pelger estimated in a classic paper that only a few hundred thousand generations are needed to evolve a complex eye in vertebrates. Another researcher, G.C. Young, has used the fossil record to infer evolutionary conclusions, based on the structure of eye orbits and the openings in fossilized skulls through which blood vessels and nerves passed. All this adds to the growing body of evidence that supports Darwin's theory.
Evolution of the eye
Rate of evolution
The first fossils of eyes found to date are from the Ediacaran period (about 555 million years ago). The lower Cambrian had a burst of apparently rapid evolution, called the "Cambrian explosion". One of the many hypotheses for "causes" of the Cambrian explosion is the "Light Switch" theory of Andrew Parker: it holds that the evolution of advanced eyes started an arms race that accelerated evolution. Before the Cambrian explosion, animals may have sensed light, but did not use it for fast locomotion or navigation by vision.
Evolution of the eye
Rate of evolution
The rate of eye evolution is difficult to estimate because the fossil record, particularly of the lower Cambrian, is poor. How fast a circular patch of photoreceptor cells can evolve into a fully functional vertebrate eye has been estimated based on rates of mutation, relative advantage to the organism, and natural selection. In this estimate, the time needed for each stage was consistently overestimated and the generation time was set to one year, as is common in small animals. Even with these pessimistic values, the vertebrate eye could still evolve from a patch of photoreceptor cells in less than 364,000 years.
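For orientation, the arithmetic behind a figure of this order can be sketched, assuming the values commonly cited from Nilsson and Pelger's 1994 model: each generation changes the relevant dimensions by about 0.005%, and the full transformation from a flat photoreceptor patch to a camera-type eye corresponds to an overall change by a factor of roughly 80,129,540. The number of generations $n$ then satisfies $1.00005^{\,n} \approx 80{,}129{,}540$, giving $n \approx \ln(80{,}129{,}540)/\ln(1.00005) \approx 3.6 \times 10^{5}$; with a one-year generation time this yields the quoted figure of under 364,000 years.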
Evolution of the eye
Origins of the eye
Whether the eye evolved once or many times depends on the definition of an eye. All eyed animals share much of the genetic machinery for eye development. This suggests that the ancestor of eyed animals had some form of light-sensitive machinery, even if it was not a dedicated optical organ. However, even photoreceptor cells may have evolved more than once from molecularly similar chemoreceptor cells. Photoreceptor cells probably existed long before the Cambrian explosion. Higher-level similarities – such as the use of the protein crystallin in the independently derived cephalopod and vertebrate lenses – reflect the co-option of a more fundamental protein to a new function within the eye.

A trait shared by all light-sensitive organs is the use of opsins. Opsins belong to a family of photo-sensitive proteins and fall into nine groups, which already existed in the urbilaterian, the last common ancestor of all bilaterally symmetrical animals. Additionally, the genetic toolkit for positioning eyes is shared by all animals: the PAX6 gene controls where eyes develop in animals ranging from octopuses to mice and fruit flies. Such high-level genes are, by implication, much older than many of the structures that they control today; they must originally have served a different purpose before they were co-opted for eye development.

Eyes and other sensory organs probably evolved before the brain: there is no need for an information-processing organ (brain) before there is information to process. Living examples are the cubozoan jellyfish, which possess eyes comparable to vertebrate and cephalopod camera eyes despite lacking a brain.
Evolution of the eye
Stages of evolution
The earliest predecessors of the eye were photoreceptor proteins that sense light, found even in unicellular organisms, called "eyespots". Eyespots can sense only ambient brightness: they can distinguish light from dark, which is sufficient for photoperiodism and daily synchronization of circadian rhythms. They are insufficient for vision, as they cannot distinguish shapes or determine the direction light is coming from. Eyespots are found in nearly all major animal groups, and are common among unicellular organisms, including euglena. The euglena's eyespot, called a stigma, is located at its anterior end. It is a small splotch of red pigment which shades a collection of light-sensitive crystals. Together with the leading flagellum, the eyespot allows the organism to move in response to light, often toward the light to assist in photosynthesis, and to predict day and night, the primary function of circadian rhythms. Visual pigments are located in the brains of more complex organisms, and are thought to have a role in synchronising spawning with lunar cycles. By detecting the subtle changes in night-time illumination, organisms could synchronise the release of sperm and eggs to maximise the probability of fertilisation.

Vision itself relies on a basic biochemistry which is common to all eyes. However, how this biochemical toolkit is used to interpret an organism's environment varies widely: eyes have a wide range of structures and forms, all of which evolved quite late relative to the underlying proteins and molecules.

At a cellular level, there appear to be two main types of eyes, one possessed by the protostomes (molluscs, annelid worms and arthropods), the other by the deuterostomes (chordates and echinoderms).

The functional unit of the eye is the photoreceptor cell, which contains the opsin proteins and responds to light by initiating a nerve impulse. The light-sensitive opsins are borne on a hairy layer, to maximise the surface area. The nature of these "hairs" differs, with two basic forms underlying photoreceptor structure: microvilli and cilia. In the eyes of protostomes, they are microvilli: extensions or protrusions of the cellular membrane. In the eyes of deuterostomes, however, they are derived from cilia, which are separate structures. Outside the eyes, an organism may use the other type of photoreceptor cell; for instance, the clamworm Platynereis dumerilii uses microvillar cells in its eyes but additionally has deep-brain ciliary photoreceptor cells. The actual derivation may be more complicated, as some microvilli contain traces of cilia – but other observations appear to support a fundamental difference between protostomes and deuterostomes. These considerations centre on the response of the cells to light – some use sodium to cause the electric signal that will form a nerve impulse, and others use potassium; further, protostomes on the whole construct a signal by allowing more sodium to pass through their cell membranes, whereas deuterostomes allow less through.

This suggests that when the two lineages diverged in the Precambrian, they had only very primitive light receptors, which developed into more complex eyes independently.
Evolution of the eye
Stages of evolution
Early eyes
The basic light-processing unit of eyes is the photoreceptor cell, a specialized cell containing two types of molecules bound to each other and located in a membrane: the opsin, a light-sensitive protein; and a chromophore, the pigment that absorbs light. Groups of such cells are termed "eyespots", and have evolved independently somewhere between 40 and 65 times. These eyespots permit animals to gain only a basic sense of the direction and intensity of light, but not enough to discriminate an object from its surroundings.

Developing an optical system that can discriminate the direction of light to within a few degrees is apparently much more difficult, and only six of the thirty-some phyla possess such a system. However, these phyla account for 96% of living species.
Evolution of the eye
Stages of evolution
These complex optical systems started out as a multicellular eyepatch that gradually depressed into a cup, which first granted the ability to discriminate brightness by direction, then in finer and finer directions as the pit deepened. While flat eyepatches were ineffective at determining the direction of light, as a beam of light would activate exactly the same patch of photo-sensitive cells regardless of its direction, the "cup" shape of the pit eyes allowed limited directional differentiation by changing which cells the light would hit depending upon its angle. Pit eyes, which had arisen by the Cambrian period, were seen in ancient snails, and are found in some snails and other invertebrates living today, such as planaria. Planaria can slightly differentiate the direction and intensity of light because of their cup-shaped, heavily pigmented retina cells, which shield the light-sensitive cells from exposure in all directions except for the single opening for the light. However, this proto-eye is still much more useful for detecting the absence or presence of light than its direction; this gradually changes as the eye's pit deepens and the number of photoreceptive cells grows, allowing for increasingly precise visual information.

When a photon is absorbed by the chromophore, a chemical reaction causes the photon's energy to be transduced into electrical energy and relayed, in higher animals, to the nervous system. These photoreceptor cells form part of the retina, a thin layer of cells that relays visual information, including the light and day-length information needed by the circadian rhythm system, to the brain. However, some jellyfish, such as Cladonema (Cladonematidae), have elaborate eyes but no brain. Their eyes transmit a message directly to the muscles without the intermediate processing provided by a brain.

During the Cambrian explosion, the development of the eye accelerated rapidly, with radical improvements in image-processing and detection of light direction.
Evolution of the eye
Stages of evolution
After the photosensitive cell region invaginated, there came a point when reducing the width of the light opening became more efficient at increasing visual resolution than continued deepening of the cup. By reducing the size of the opening, organisms achieved true imaging, allowing for fine directional sensing and even some shape-sensing. Eyes of this nature are currently found in the nautilus. Lacking a cornea or lens, they provide poor resolution and dim imaging, but are still, for the purpose of vision, a major improvement over the early eyepatches.

Overgrowths of transparent cells prevented contamination and parasitic infestation. The chamber contents, now segregated, could slowly specialize into a transparent humour, allowing optimizations such as colour filtering, a higher refractive index, blocking of ultraviolet radiation, or the ability to operate in and out of water. The layer may, in certain classes, be related to the moulting of the organism's shell or skin. An example of this can be observed in onychophorans, where the cuticle of the shell continues into the cornea. The cornea is composed of either one or two cuticular layers depending on how recently the animal has moulted. Along with the lens and two humours, the cornea is responsible for converging light and helping to focus it on the retina at the back of the eye. The cornea protects the eyeball while at the same time accounting for approximately two-thirds of the eye's total refractive power.

It is likely that a key reason eyes specialize in detecting a specific, narrow range of wavelengths on the electromagnetic spectrum – the visible spectrum – is that the earliest species to develop photosensitivity were aquatic, and water filters out electromagnetic radiation except for a range of wavelengths, the shorter of which we refer to as blue and the longer of which we identify as red. This same light-filtering property of water also influenced the photosensitivity of plants.
Evolution of the eye
Stages of evolution
Lens formation and diversification
In a lensless eye, the light emanating from a distant point hits the back of the eye at about the same size as the eye's aperture. With the addition of a lens this incoming light is concentrated on a smaller surface area, without reducing the overall intensity of the stimulus. In an early lobopod with lens-containing simple eyes, the focal length of the lens placed the image behind the retina, so while no part of the image could be brought into focus, the concentration of light allowed the organism to see in deeper (and therefore darker) waters. A subsequent increase of the lens's refractive index probably resulted in an in-focus image being formed.

The development of the lens in camera-type eyes probably followed a different trajectory. The transparent cells over a pinhole eye's aperture split into two layers, with liquid in between. The liquid originally served as a circulatory fluid for oxygen, nutrients, wastes, and immune functions, allowing greater total thickness and higher mechanical protection. In addition, multiple interfaces between solids and liquids increase optical power, allowing wider viewing angles and greater imaging resolution. Again, the division of layers may have originated with the shedding of skin; intracellular fluid may infill naturally depending on layer depth.

Note that this optical layout has not been found, nor is it expected to be found. Fossilization rarely preserves soft tissues, and even if it did, the new humour would almost certainly close as the remains desiccated, or as sediment overburden forced the layers together, making the fossilized eye resemble the previous layout.
Evolution of the eye
Stages of evolution
Crystallins
Vertebrate lenses are composed of adapted epithelial cells which have high concentrations of the protein crystallin. These crystallins belong to two major families, the α-crystallins and the βγ-crystallins. Both categories of proteins were originally used for other functions in organisms, but eventually became adapted for vision in animal eyes. In the embryo, the lens is living tissue, but the cellular machinery is not transparent, so it must be removed before the organism can see. Removing the machinery means the lens is composed of dead cells, packed with crystallins. These crystallins are special because they have the unique characteristics required for transparency and function in the lens, such as tight packing, resistance to crystallization, and extreme longevity, as they must survive for the entirety of the organism's life. The refractive index gradient which makes the lens useful is caused by the radial shift in crystallin concentration in different parts of the lens, rather than by the specific type of protein: it is not the presence of crystallin, but the relative distribution of it, that renders the lens useful.

It is biologically difficult to maintain a transparent layer of cells. Deposition of transparent, nonliving material eased the need for nutrient supply and waste removal. It is a common assumption that trilobites used calcite, a mineral which today is known to be used for vision only in a single species of brittle star. However, studies of eyes from 55-million-year-old crane fly fossils from the Fur Formation indicate that the calcite in the eyes of trilobites is a result of taphonomic and diagenetic processes and not an original feature. In other compound eyes and camera eyes, the material is crystallin. A gap between tissue layers naturally forms a biconvex shape, which is optically and mechanically ideal for substances of normal refractive index. A biconvex lens confers not only optical resolution, but also aperture and low-light ability, as resolution is now decoupled from hole size – which slowly increases again, free from circulatory constraints.
Evolution of the eye
Stages of evolution
Aqueous humor, iris, and cornea
Independently, a transparent layer and a nontransparent layer may split forward from the lens: a separate cornea and iris. (These may happen before or after crystal deposition, or not at all.) Separation of the forward layer again forms a humour, the aqueous humour. This increases refractive power and again eases circulatory problems. Formation of a nontransparent ring allows more blood vessels, more circulation, and larger eye sizes. This flap around the perimeter of the lens also masks optical imperfections, which are more common at lens edges. The need to mask lens imperfections gradually increases with lens curvature and power, overall lens and eye size, and the resolution and aperture needs of the organism, driven by hunting or survival requirements. This type is now functionally identical to the eye of most vertebrates, including humans. Indeed, "the basic pattern of all vertebrate eyes is similar."

Other developments

Color vision
Five classes of visual opsins are found in vertebrates. All but one of these developed prior to the divergence of Cyclostomata and fish. The five opsin classes are variously adapted depending on the light spectrum encountered. As light travels through water, longer wavelengths, such as reds and yellows, are absorbed more quickly than the shorter wavelengths of the greens and blues. This creates a gradient in the spectral power density, with the average wavelength becoming shorter as water depth increases. The visual opsins in fish are more sensitive to the range of light in their habitat and depth. However, land environments do not vary in wavelength composition, so the opsin sensitivities among land vertebrates do not vary much. This directly contributes to the significant presence of communication colors. Color vision gives distinct selective advantages, such as better recognition of predators, food, and mates. Indeed, it is possible that simple sensory-neural mechanisms may selectively control general behavior patterns, such as escape, foraging, and hiding. Many examples of wavelength-specific behaviors have been identified, in two primary groups: below 450 nm, associated with direct light, and above 450 nm, associated with reflected light. As opsin molecules were tuned to detect different wavelengths of light, at some point color vision developed when the photoreceptor cells used differently tuned opsins. This may have happened at any of the early stages of the eye's evolution, and may have disappeared and re-evolved as relative selective pressures on the lineage varied.
Evolution of the eye
Stages of evolution
Polarization vision
Polarization is the organization of disordered light into linear arrangements, which occurs when light passes through slit-like filters, as well as when it passes into a new medium. Sensitivity to polarized light is especially useful for organisms whose habitats are located more than a few meters under water. In this environment, color vision is less dependable, and therefore a weaker selective factor. While most photoreceptors have the ability to distinguish partially polarized light, terrestrial vertebrates' membranes are orientated perpendicularly, such that they are insensitive to polarized light. However, some fish can discern polarized light, demonstrating that they possess some linear photoreceptors. Additionally, cuttlefish are capable of perceiving the polarization of light with high visual fidelity, although they appear to lack any significant capacity for color differentiation. Like color vision, sensitivity to polarization can aid in an organism's ability to differentiate surrounding objects and individuals. Because of the marginal reflective interference of polarized light, it is often used for orientation and navigation, as well as for distinguishing concealed objects, such as disguised prey.
Evolution of the eye
Stages of evolution
Focusing mechanism
By utilizing the iris sphincter muscle and the ciliary body, some species move the lens back and forth, while others stretch the lens flatter. Another mechanism regulates focusing chemically and independently of these two, by controlling growth of the eye and maintaining focal length. In addition, the pupil shape can be used to predict the focal system being utilized. A slit pupil can indicate the common multifocal system, while a circular pupil usually specifies a monofocal system. When using a circular form, the pupil will constrict under bright light, increasing the f-number, and will dilate in dim light, decreasing the depth of focus in order to gather more light. Note that a focusing method is not a requirement. As photographers know, focal errors increase as aperture increases. Thus, countless organisms with small eyes are active in direct sunlight and survive with no focus mechanism at all. As a species grows larger, or transitions to dimmer environments, a means of focusing need only appear gradually.
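As a simple illustration of the underlying optics (standard photographic relations, not specific to any particular eye): the f-number is $N = f/D$, where $f$ is the focal length and $D$ the aperture (pupil) diameter. For a fixed focusing error, the diameter of the blur circle on the retina scales roughly in proportion to $D$, that is, inversely with $N$, while the amount of light admitted scales with $D^2$; so halving the pupil diameter roughly halves the blur and deepens the depth of focus, at the cost of admitting only a quarter as much light.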