NEW YORK -- Eric LeGrand, the Rutgers defensive tackle who was paralyzed from the neck down in a game against Army on Oct. 16, says he has regained some sensation in his entire body and movement in his shoulders.
LeGrand, 20, revealed the news in an exclusive first interview with ESPN's Tom Rinaldi, which will debut during the 9 a.m. ET edition of "SportsCenter" on Friday.
The 6-foot-2, 275-pound junior fractured his C-3 and C-4 vertebrae while making a tackle on a kickoff return and admitted in the interview that he feared for his life immediately after suffering the injury.
"Fear of death, that's the biggest fear that I got because I couldn't breathe the way I was breathing and I couldn't move," LeGrand said. "Laying out on the ground, motionless, not being able to breathe was the hardest part in thinking, can I die here?"
After nine hours of emergency surgery to stabilize LeGrand's spine, doctors initially gave him a 0-5 percent chance of regaining neurologic function. LeGrand's mother, Karen, said that she never gave him that information.
"I didn't want to hear that two percent of the people with this injury can walk, or five percent regain this, or -- I didn't want to hear about percentages because my son, in all honesty, is not a percentage," Karen said. "My son is my son. ... And nobody knows him, nobody knows the will that he has, nobody knows the faith that we have."
Six days after surgery, LeGrand first moved his shoulders. In early November, he was taken off a ventilator and was breathing on his own. He was transferred to the Kessler Institute in West Orange, N.J., to continue his rehabilitation.
In mid-December, LeGrand had another breakthrough. For the first time since the injury, he had sensation in his hands.
"I always rub his hands as he's laying there," Karen said, "and I think he said, 'I feel that. I think I feel that.' I'm like, what are you talking about? He's like, 'I think I feel something -- you rubbing my hands. I can feel my hands.'"
"As my mom, she placed her hand on me, I was like, 'Wow! I felt that.' And that was just a big shock and it was just like, 'Wow! It's coming back. It's coming back.'"
LeGrand remains at the Kessler Institute and still faces a long and difficult road in his rehabilitation. But he is brimming with optimism.
"I believe that I will walk one day," LeGrand said. "I believe it. God has a plan for me and I know it's not to be sitting here all the time. I know he has something planned better for me."
Kieran Darcy covers college sports for ESPNNewYork.com.
Faculty Recital at 3 p.m. Sunday
February 4, 2010
February 4, 2010, Greencastle, Ind. — "The first faculty recital for spring semester will take place on Sunday at 3 p.m. in Thompson Recital Hall at the Green Center for the Performing Arts on the DePauw University campus," reports Greencastle's Banner-Graphic. "The recital will feature Dr. Matthew Markham, baritone, Barbara Paré, soprano and John Clodfelter ('94), pianist."
The newspaper notes, "Markham will be performing the first half of the recital with works by Schubert, Eben and a group of English songs by American composers. Paré will perform the second half with works of Rachmaninoff and Dvorak. There will also be some surprises at the end of the program featuring all three artists."
Closed chains, chain nets, snow chains, protective tire chains, and other chain systems are not adjustable in terms of length and width (in the case of grid or ladder chains) unless the chain is opened and/or individual chain links are opened in order to remove or add chain links. The same applies for jewelry chains.
Many variants for connecting the last links in a finite chain by way of a special connecting element are also known in the art. Such a connection becomes necessary most frequently when chains break; for example, breaks on snow chains of motor vehicles or on protective tire chains of utility vehicles operating in stone quarries and similar environments. The problem that emerges when two ends of a broken chain must be connected is the fact that, most of the time, no suitable tools are available on site where the break occurred in order to install a regular chain link that was cut open for that purpose and must be welded closed again after having been inserted. Welding on the wheel is generally not possible because this may cause damage to the tire. Any removal of the chain armor from protective chains is extremely complex and creates, moreover, costs associated with the disruption of operations during the disassembly, repair work, and new installation.
Accordingly, repair chain links are already known in the prior art that can be connected to the links of the broken chain right on the vehicle. The repair chain links often consist of a ring with an overlapping thread that extends over a certain area in the manner of a helical line. Chain links of this type can be inserted directly on the wheel into the outermost-lying chain links and then pressed together with the assistance of a strong pair of pliers. Also known are repair links that feature an (open) C-form; the only way to guide them around the neighboring chain links is with a pair of pliers, after which they must be bent back closed. Reconnecting a broken chain without the use of tools is not possible with any of the known prior-art repair links. Moreover, the repair links are usually only a short-term solution: since they must be bendable for installation, they cannot be manufactured of a hardened material, and at a later time they must be replaced at a repair shop with other durable links. This requires additional expense.
---
title: Emitting dynamic methods and assemblies
description: Emit dynamic methods and assemblies using the System.Reflection.Emit namespace, which allows compilers or tools to emit metadata and MSIL code at run time.
ms.date: 08/30/2017
helpviewer_keywords:
- reflection emit
- dynamic assemblies
- metadata, emit interfaces
- reflection emit, overview
- assemblies [.NET Framework], emitting dynamic assemblies
---
# <a name="emitting-dynamic-methods-and-assemblies"></a>Emitting dynamic methods and assemblies
This section describes a set of managed types in the <xref:System.Reflection.Emit> namespace that allow a compiler or tool to emit metadata and Microsoft intermediate language (MSIL) at run time and, optionally, generate a portable executable (PE) file on disk. Script engines and compilers are the primary users of this namespace. In this section, the functionality provided by the <xref:System.Reflection.Emit> namespace is referred to as reflection emit.
Reflection emit provides the following capabilities:
- Define lightweight global methods at run time, using the <xref:System.Reflection.Emit.DynamicMethod> class, and execute them using delegates.
- Define assemblies at run time and then run them and/or save them to disk.
- Define assemblies at run time, run them, and then unload them and allow garbage collection to reclaim their resources.
- Define modules in new assemblies at run time and then run them and/or save them to disk.
- Define types in modules at run time, create instances of these types, and invoke their methods.
- Define symbolic information for defined modules that can be used by tools such as debuggers and code profilers.
In addition to the managed types in the <xref:System.Reflection.Emit> namespace, there are unmanaged metadata interfaces, which are described in the [Metadata Interfaces](../unmanaged-api/metadata/metadata-interfaces.md) reference documentation. Managed reflection emit provides stronger semantic error checking and a higher level of abstraction of metadata than the unmanaged metadata interfaces.
Another useful resource for metadata and MSIL is the Common Language Infrastructure (CLI) documentation, especially "Partition II: Metadata Definition and Semantics" and "Partition III: CIL Instruction Set". The documentation is available online at the [Ecma website](https://www.ecma-international.org/publications/standards/Ecma-335.htm).
## <a name="in-this-section"></a>In this section
[Security Issues in Reflection Emit](security-issues-in-reflection-emit.md)
Describes security issues related to creating dynamic assemblies by using reflection emit.
[How to: Define and Execute Dynamic Methods](how-to-define-and-execute-dynamic-methods.md)
Shows how to execute a simple dynamic method and a dynamic method bound to an instance of a class.
[How to: Define a Generic Type with Reflection Emit](how-to-define-a-generic-type-with-reflection-emit.md)
Shows how to create a simple generic type with two type parameters, how to apply class constraints, interface constraints, and special constraints to the type parameters, and how to create members that use the class's type parameters as parameter types and return types.
[How to: Define a Generic Method with Reflection Emit](how-to-define-a-generic-method-with-reflection-emit.md)
Shows how to create, emit, and invoke a simple generic method.
[Collectible Assemblies for Dynamic Type Generation](collectible-assemblies.md)
Describes collectible assemblies, which are dynamic assemblies that can be unloaded without unloading the application domain in which they were created.
## <a name="reference"></a>Reference
<xref:System.Reflection.Emit.OpCodes>
Catalogs the MSIL instruction codes that can be used to build method bodies.
<xref:System.Reflection.Emit>
Contains managed classes used to emit dynamic methods, assemblies, and types.
<xref:System.Type>
Describes the <xref:System.Type> class, which represents types in managed reflection and reflection emit, and which is key to the use of these technologies.
<xref:System.Reflection>
Contains managed classes used to explore metadata and managed code.
## <a name="related-sections"></a>Related sections
[Reflection](reflection.md)
Explains how to explore metadata and managed code.
[Assemblies in .NET](../../standard/assembly/index.md)
Provides an overview of assemblies in .NET implementations.
Q:
Find first duplicate element with lowest second occurrence index
I'm trying to solve a problem on CodeFights called firstDuplicate, that states -
Given an array a that contains only numbers in the range from 1 to
a.length, find the first duplicate number for which the second
occurrence has the minimal index. In other words, if there are more
than 1 duplicated numbers, return the number for which the second
occurrence has a smaller index than the second occurrence of the other
number does. If there are no such elements, return -1.
Example
For a = [2, 3, 3, 1, 5, 2], the output should be firstDuplicate(a) =
3.
There are 2 duplicates: numbers 2 and 3. The second occurrence of 3
has a smaller index than the second occurrence of 2 does, so the
answer is 3.
For a = [2, 4, 3, 5, 1], the output should be firstDuplicate(a) = -1.
My solution -
public class FirstDuplicate {
    private static HashMap<Integer, Integer> counts = new HashMap<>();

    private static void findSecondIndexFrom(int[] num, int n, int i) {
        // given an array, a starting index and a number, find the second
        // occurrence of that number beginning from the next index
        for (int x = i; x < num.length; x++) {
            if (num[x] == n) {
                // second occurrence found - place in map and terminate
                counts.put(n, x);
                return;
            }
        }
    }

    private static int firstDuplicate(int[] a) {
        // for each element in the loop, if it's not already in the hashmap,
        // find its second occurrence in the array and place number and index in the map
        for (int i = 0; i < a.length; i++) {
            if (!counts.containsKey(a[i])) {
                findSecondIndexFrom(a, a[i], i + 1);
            }
        }
        System.out.println(counts);
        // if map is empty - no duplicate elements, return -1
        if (counts.size() == 0) {
            return -1;
        }
        // else - get array of values from map, sort it, find lowest value and return corresponding key
        ArrayList<Integer> values = new ArrayList<>(counts.values());
        Collections.sort(values);
        int lowest = values.get(0);
        //System.out.println(lowest);
        for (Map.Entry<Integer, Integer> entries : counts.entrySet()) {
            if (entries.getValue() == lowest) {
                return entries.getKey();
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        // int[] a = new int[]{2, 3, 3, 1, 5, 2};
        // int[] a = new int[]{2, 4, 3, 5, 1};
        // int[] a = new int[]{8, 4, 6, 2, 6, 4, 7, 9, 5, 8};
        // int[] a = new int[]{1, 1, 2, 2, 1};
        int[] a = new int[]{10, 6, 8, 4, 9, 1, 7, 2, 5, 3};
        System.out.println(firstDuplicate(a));
    }
}
This solution passes only for about 4 of the 11 test cases on CodeFights. However, I manually executed each one of the test cases in my IDE, and each one produces the right result.
I can't figure out why this won't work in CodeFights. Does it have something to do with the use of the static HashMap?
A:
Edited: Since adding an element and checking whether it is already present in a Set can be done in one step, the code can be simplified to:
public static int findDuplicateWithLowestIndex(int... a) {
    Set<Integer> set = new HashSet<>();
    for (int num : a) {
        if (!set.add(num)) {
            return num;
        }
    }
    return -1;
}
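For reference, here is a small, self-contained harness (the class name is arbitrary) that runs the simplified method against both sample inputs from the problem statement:

```java
import java.util.HashSet;
import java.util.Set;

public class FirstDuplicateDemo {
    // One-pass approach: Set.add returns false on the second occurrence
    // of a value, which by construction is the duplicate whose second
    // occurrence has the lowest index.
    public static int findDuplicateWithLowestIndex(int... a) {
        Set<Integer> seen = new HashSet<>();
        for (int num : a) {
            if (!seen.add(num)) {
                return num;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        System.out.println(findDuplicateWithLowestIndex(2, 3, 3, 1, 5, 2)); // 3
        System.out.println(findDuplicateWithLowestIndex(2, 4, 3, 5, 1));    // -1
    }
}
```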
You're completely right, Patrick: the static HashMap keeps its entries between test cases, so results from earlier runs leak into later ones.
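To make the static-state problem concrete, here is a stripped-down sketch (not the original CodeFights harness; class and method names are invented for illustration) of logic like the question's, using a static map. The second call should return -1, but it returns 3 because the entries recorded during the first call are still in the map:

```java
import java.util.HashMap;
import java.util.Map;

public class StaticStateDemo {
    // Shared across calls, exactly like the static map in the question.
    private static final Map<Integer, Integer> counts = new HashMap<>();

    public static int firstDuplicateSimplified(int[] a) {
        // Record the index of each value's second occurrence.
        for (int i = 0; i < a.length; i++) {
            if (counts.containsKey(a[i])) {
                continue;
            }
            for (int j = i + 1; j < a.length; j++) {
                if (a[j] == a[i]) {
                    counts.put(a[i], j);
                    break;
                }
            }
        }
        // Return the key whose second-occurrence index is smallest, or -1.
        int best = -1;
        int bestIndex = Integer.MAX_VALUE;
        for (Map.Entry<Integer, Integer> e : counts.entrySet()) {
            if (e.getValue() < bestIndex) {
                bestIndex = e.getValue();
                best = e.getKey();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(firstDuplicateSimplified(new int[]{2, 3, 3, 1, 5, 2})); // 3
        // With a fresh map this would print -1; stale entries make it 3.
        System.out.println(firstDuplicateSimplified(new int[]{2, 4, 3, 5, 1}));
    }
}
```

Resetting (or simply not using) the static map between calls makes the second call return -1 as expected.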
Friday, November 24, 2017
The best meals are created when chefs get to play around and innovate with their ingredients. Chef Leo Asaro at Tico Restaurant in Boston's Back Bay gets to do just that every once in a while with a special dinner called Leo's Lab, which happens every 2-3 weeks.
For Leo's Lab, guests are seated at the lab kitchen counter, so you can see all the action of Chef Leo Asaro preparing everything. There are only eight seats and only one seating for the night, so it's quite an exclusive experience.
Each Leo's Lab dinner comes with five savory courses, a dessert, and a chef's special cocktail, plus a welcome glass of prosecco, for $75. Instead of a server, Chef Asaro serves and explains each dish to the guests.
I attended the most recent dinner, which started with Bay scallops with autumn berry, pine, and smoked oil.
It was near the end of the season for Nantucket bay scallops, and we savored them paired with the tart autumn berry sauce.
"Wrapper's Delight" - vegetables wrapped with jamon serrano and mole
The mole took chef Asaro 2-3 days to make and was very rich in flavors. The wrap filling was also accentuated with some rau ram (culantro).
Sunday, November 5, 2017
If you live in Boston, I'm sure you've visited the Boston Public Market, but did you know about The KITCHEN at Boston Public Market at the back of the market? The KITCHEN is managed by The Trustees, one of the largest owners and stewards of agricultural land in Massachusetts and a founding member of the Boston Public Market. The KITCHEN frequently holds hands-on cooking classes and educational experiences that highlight New England farmers, artisans, and chefs.
I recently got to attend a crepe making class with Saltbox Farm and the chefs from Saltbox Kitchen, their farm cafe in Concord, MA.
Since my mom was visiting me from Indonesia I took her along for the class.
Each table fits four people and is equipped with its own cutting boards, an induction stove, and the ingredients for our menu. The ingredients for The KITCHEN's cooking classes all come from the Boston Public Market, including some gorgeous oyster mushrooms for our crepes.
Wednesday, October 18, 2017
Wagamama is a chain of restaurants serving Asian food - primarily Japanese - which started in the UK. I wasn't familiar with it until I moved to Boston 3 years ago, and I actually only tried it for the first time recently. Wagamama had opened a new location in the Boston Seaport district and invited some bloggers and instagrammers to try it out.
We started with a plate of Chili squid (crispy fried squid, shichimi, chili cilantro dipping sauce, $9)
For calamari lovers, this squid dish is a great variation. It's crispy but tasted light and spiced just right. I couldn't stop eating them.
We also had some dumplings, both steamed and fried. Our favorite was the fried duck gyoza ($8)
The drinks at Wagamama are better than I would've expected from a chain restaurant. While a lot of them tend toward the sweet and fruity side, they're not overly sweet and are fairly well balanced. What I like most is that they use spirits from Asia as much as possible.
For example, the Wagamama Mai Tai is made with Tanduay rum which is from the Philippines. They also use Iwai Whisky from Japan
Friday, October 13, 2017
Farm to Post is the dinner series at Back Bay's Post 390 that showcases the local farmers and New England producers the restaurant works with. I attended a pork dinner last year featuring Dogpatch Farm and I had another opportunity to attend a special dinner recently. In September, Post 390 held a "Foraged and Wild" dinner featuring (you guessed it) foraged ingredients.
Each Farm to Post dinner always starts with a cocktail reception with passed hors d'oeuvres. This year's most popular appetizer was probably the fried Damariscotta oyster (from Maine) with creamed wild spinach and bacon.
The first cocktail is a sparkling Cocchi Americano drink with wild peppermint and sweet fern tea
First course: "Secret spot" mushroom vol-au-vent
Marsh greens, spiced black walnuts, blackberries, wild flower petals, Solomon's plume vinaigrette.
This was paired with Oxbow Brewing Farmhouse Pale Ale from Newcastle, ME.
The pastry for the vol-au-vent was perfectly flaky. The mushrooms, and other foraged items in the dinner were foraged by Nicholas Deutmeyer. The mushrooms came from his secret spot (hence the name). We had black trumpet mushrooms, lobster mushrooms, and chanterelles. The greens also featured foraged sea beans and sea arugula. Perfect start to a foraging dinner!
Monday, October 2, 2017
The Forbes Under 30 Summit in Boston has been underway since this weekend. The summit gathers 7000 "young leaders" and present motivation panels, pitch contests, as well as music and food festivals. Tomorrow (October 3) will be this year's food festival. The food festival features young notable chefs from the 30 Under 30 list and they will compete for the title of America's Best Young Chef (there are two categories: the Judges' Choice and Audience Choice).
I attended last year's Forbes Under 30 Food Festival, which again featured some great young chefs from all over the world. Here are some of my favorite bites from the Forbes 30 Under 30 Food Festival last year:
This beef and truffle donut - probably my favorite of the night - from Henry Herbert of Hobbs House Bakery a.k.a. The Fabulous Baker Brothers in United Kingdom
Seaweed pie from Toni Toivanen of the Scandinavian pop-up restaurant Sandladan. While they ran out of the seaweed tartlets, I still loved the custard - made of fermented egg yolk and lobster brains (yes), topped with poached lobster and ants.
Thursday, September 28, 2017
Earls Kitchen + Bar is a Canadian-based restaurant chain that has made its way to a number of US cities. I hadn't had the chance to check any of them out before, but I was recently invited to the opening party of the new location at the Prudential Center.
What surprised me, and one of the coolest parts of Earls at The Pru, is the Cocktail Lab downstairs, where they will feature guest bartenders from all around the city.
The Goddess Manhattan, created by one of the restaurant's regular bartenders, was one of the bloggers' favorites of the night.
Other rotating bartenders who were also behind the bar that night were Will Isaza and Melinda Johnson-Maddox. I'm definitely excited about coming back to check out the Cocktail Lab and see who's behind the bar!
We also tried some bites off the menu during the party, including the Spicy Tuna Sushi Press (spicy soy marinated tuna, chives, nori, avocado, pickled ginger pressed on sushi rice, topped with sriracha mayo)
The tuna was served aburi-style, a.k.a. seared.
Sunday, September 10, 2017
Waypoint is the seafood-focused restaurant from Chef Michael Scelfo from Alden & Harlow. Tucked between Central Square and Harvard Square, it has become one of my favorite restaurants to go to for seafood.
One time, I went for the chef's counter tasting menu using a Gilt City voucher (seriously, Boston's Gilt City has some great restaurant deals listed from time to time).
For the tasting menu, we started with some oysters with pickled fennel mignonette, and fish pepper cocktail sauce
this was paired with Ca Di Rajo's Le Moss Pet Nat
We loved the wine. Le Moss is an unfiltered sparkling Glera pet-nat (petillant naturel). Unlike champagne, these wines are bottled before fully completing their first fermentation.
The second course was the steak tartare (3-minute egg, smoked trout roe, toast).
This was one of the better steak tartares in the city, in my opinion. The flavor and texture of the meat were spot on, and the slightly runny egg yolk added a nice touch.
Saturday, August 19, 2017
The Taste of WGBH, the annual multi-event food and wine festival held by Boston's public radio station WGBH, is coming soon on October 5-8. There will be four different events that weekend, from a red-carpet Chef's Gala to three fun tasting events.
Last year I attended Food Fight, one of their events that pitted Boston restaurants in a competition for the best food-on-a-stick.
The winner of last year's Food Fight was Kaki Lima, the Indonesian pop-up that's currently doing a residency at Wink and Nod. They served their sate lilit, a Balinese spiced chicken satay served with turmeric rice.
I also got seconds of the lobster corn dog from Lincoln, because lobster!
Monday, August 7, 2017
I recently had a staycation at the Element Boston Seaport hotel and had a summer cookout with friends. No, really, the Element hotel rooms are equipped with a kitchen and their patio has a grill that guests are welcome to use, as well, making it easy for both short or longer-term guests to "eat in" while they're staying here.
Yes, we cooked this at a hotel
First off, the Element is a new hotel that opened up in Seaport last year, just across the street from Lawn on D.
Element has rooms and suites, both of which are equipped with kitchenettes.
Sunday, July 30, 2017
Boston honestly doesn't have too much regional Chinese cuisine, but this is changing with the arrival of Sumiao in Kendall Square. Sumiao brings Hunanese cuisine, which is hard to find in most places, and combines it with a chic decor and solid cocktails.
The regular menu already has a number of authentic Hunanese dishes, but on weekends they add even more authentic recipes as specials. I got to try the Homemade La Rou with Mushrooms ($28). La rou is like Hunanese bacon: pork belly that's been smoked and then hang-dried, and it is one of the quintessential Hunanese foods. In Hunan they like to smoke their meats, which are then stir-fried with chili and vegetables (because Hunanese also love their chilies!). They also have la rou in a different preparation on their lunch menu for $13, and I encourage you to try it.
Hunan tofu pot (pork belly, green chili, black bean chili sauce, $18)
As I mentioned, Hunanese love their chilies. Chairman Mao hailed from Hunan and reportedly once said "you can't be a revolutionary if you don't eat chilies!" The Hunan tofu pot was one of the spicier dishes. The spice isn't too bad but it does build up (but if you want more, you can ask for "authentic spicy"). This was one of our favorites that day, with the nicely fried tofu and the flavorful pork belly.
Wednesday, July 26, 2017
CourseHorse is a portal to discover local classes, varying from tech classes, languages, life skills, and of course cooking and wine tasting classes. Even fitness classes are listed. I was invited to experience one of CourseHorse's openings, so naturally I looked for the food-related ones. Browsing through the selection I found numerous cooking classes, a cocktail making class at No. 9 Park, and a number of wine tastings. You can look through the current culinary offerings in Boston here.
I decided to take one of the wine tastings offered at Dave's Fresh Pasta in Somerville. Dave's Fresh Pasta is a gourmet food and wine store, but they also hold events like this wine tasting on a few Thursdays, 7:30-9 PM, each season. The wine tastings at Dave's Fresh Pasta are typically $55 per person. I was there for their Local Cheese and Spring Wines tasting with Vineyard Road, a wine distributor based in Framingham. We tasted five different wines paired with food.
We started with 2015 Murgo Lapilli from Sicily, Italy ($11.95). This wine is 60% Chardonnay and 40% Sauvignon Blanc, which are typical grapes of Italy.
The wine's apple notes pair well with the Hudson Valley Camembert cheese, apple butter, and apple slices on cracker.
2016 Domaine Lelievre from Cotes de Toul, Lorraine, France ($16.95). This rose wine is a blend of Gamay and Pinot Noir.
Lorraine is in north-central France. The region used to make a lot of wine in the 1500s, but disease greatly diminished the planted area.
Monday, July 17, 2017
I have long wanted to try Toro. This tapas bar from duo Ken Oringer and Jamie Bissonnette has been around for many years (since 2005), but there's still always a long wait every night since they don't take reservations, even after they've opened other locations in New York and Bangkok. I finally went to try it when I saw a Gilt City voucher for it. The voucher for a tasting menu wasn't cheap at $100 but it was 8 courses including wine pairing, but the best part is that it allows you to make a reservation! If you've never used Gilt City, you can save $25 off your first order with my invite link.
The tapas tasting started with a Tortilla Espanola (egg, onion, potato, nettle, aioli). A nice rendition of the traditional Spanish dish. The ratio between egg and potato is just right.
Uni Bocadillo (pressed uni sandwich, miso butter, pickled mustard seeds)
This is similar to the uni sandwich at Coppa. Of course, I'm always happy to get uni on a tasting menu.
Since my friend is kosher, we got different third courses - I wanted at least one porky dish. I got the Jamon Blanco (Toast with lardo, marinated Jonah crab, black garlic, crispy shallots and avocado)
While this wasn't what I had in mind when I wanted a "pork" dish, I enjoyed the toast regardless.
Friday, June 9, 2017
Summer is finally coming to Boston, and when it does, it's time for harbor cruises and visits to the Boston Harbor Islands. Thompson Island is also gearing up for the season. Thompson Island is open for one of two purposes: 1) a private event on the island, or 2) Outward Bound, an experiential education program for students.
The two are connected, and I recently visited the island on their Open House to learn more about both and the great cause the island is supporting! While the education center provides educational programs for students of all ages (corporate team building included), the private events that are booked on the island provide the funds needed for Outward Bound to offer FREE summer programs to Boston public school students!
The wonderful cause isn't the only draw for having your event on this island, though. Let's take a look at what they offer. The island is a mile off of downtown Boston and takes about 30 minutes with the Boston Harbor Cruises ship.
It was nice to get off the boat and head to our own private island - at least for the evening.
Sunday, May 21, 2017
Thirst Boston has come and gone, leaving us satiated and hungover after a liquor-fueled whirlwind of a weekend. There were parties and booze tastings, and of course, a number of educational seminars. I attended a hands-on Gin Lab where I got to make my own gin!
The seminar/lab was led by William Codman from Diageo, and he started off with a history of gin. From how the British discovered Genever during the War in Holland, to various political situations and bans that led to the popularity of gin (often distilled at home back then) in England. Apparently it was so popular that it became a huge problem since people were drinking way too much. The craziness is depicted in this Gin Lane painting by William Hogarth.
Well, now we mostly drink gin in moderation, thankfully ;)
We also learned about the different classifications of gin. To be called gin, the liquor has to be a neutral spirit that has juniper in it. When it's not redistilled after juniper and flavors are added, that is called "compound gin". A "distilled gin" means that the spirit has been re-distilled after the juniper and botanicals are added, but other flavorings can still be added after the redistillation.
We mostly know of London Dry Gin these days. London Gin is redistilled using traditional still, and flavorings can only be added during the distillation, not after, and they must be natural flavorings. No coloring may be added, although sugar may be added.
With this classification, we basically made a compound gin that day!
After the history lesson, we took turns "foraging" for our botanicals. OK, not really foraging, we went to the back of the room and picked out our botanicals from what they've prepared. Too bad we were limited to five! But that's probably a good thing, as I might've gone crazy with the flavor combo otherwise.
Friday, May 19, 2017
Allston is home to some of the best restaurants in Boston, so wouldn't it be great to be able to try a couple dozen of them at the same time? You can at Taste of Allston, which is celebrating their 20th anniversary this year! Spend Saturday afternoon on June 24 tasting food from 20 restaurants, and enjoy the summer (which is finally here!) with live music, lawn games, and other activities! More details (plus a giveaway for a pair of tickets!) are below:
They will be rolling out the list of participating restaurants over the next few weeks. Current confirmed restaurants include White Horse Tavern, Roxy's Grilled Cheese, The Avenue, Loui Loui, and more. They're also promising cool treats from FoMu, FroYo World, and LimeRed Teahouse!
Watch their Facebook and Instagram pages for more restaurant announcements in the coming weeks.
There will be free parking, bike racks, and free bike valet.
Tickets are $25 each and can be purchased here. OR ... you can try your luck and ENTER TO WIN a pair of tickets below!
Friday, May 12, 2017
I was recently invited to try out a new Indian restaurant in Wakefield, MA - about 12 miles north of Boston. The reviews for the restaurant all seemed really good, so I accepted and drove up to try the lunch buffet at Maya Indian Grill one weekend.
As we sat, they delivered hot, fresh naan to the table. Bonus points!
The buffet items change daily, but there are about a dozen options - both vegetarian and non-vegetarian. They had the usual staples like chicken biryani, chicken tandoori, saag paneer, chana masala, and also a nice goat curry, which was probably my favorite of the savory dishes that afternoon.
Monday, April 10, 2017
I was first introduced to Community Servings last year, when they had a cooking class at the Boston Public Market. They prepare and deliver meals to people who are homebound, critically ill, and their families. It's not just mass produced food, but they really take care to prepare meals for each patient according to their dietary needs and illnesses.
They rely on volunteers to help prepare the meals, and of course, funds from donations and fundraising activities. Their biggest fundraising event is the LifeSavor gala, a glamorous evening that takes place each year and involves many of the local restaurants. This year's LifeSavor will take place on May 4 at The Langham.
LifeSavor starts with a luxurious cocktail party with silent auction and balloon pop raffle. After the reception, guests will be transported to one of 75 participating restaurants around the city for a wine-paired dinner.
Not that there isn't plenty of food to go around at the reception! Spread out over multiple rooms at The Langham, I had some delicious lamb chops ...
A spread of salads, pastas, cheeses, and more.
1 INTRODUCTION {#SEC1}
==============
RNA-seq techniques provide an efficient means for measuring transcriptome data with high resolution and deep coverage ([@btt216-B21]). Millions of short reads sequenced from cDNA provide unique insights into a transcriptome at the nucleotide-level and mitigate many of the limitations of microarray data. Although there are still many remaining unsolved problems, new discoveries based on RNA-seq analysis ranging from genomic imprinting ([@btt216-B8]) to differential expression ([@btt216-B1]; [@btt216-B25]) promise an exciting future.
Current RNA-seq analysis pipelines typically contain two major components: an aligner and an assembler. An RNA-seq aligner \[e.g. TopHat ([@btt216-B23]), SpliceMap ([@btt216-B2]) and MapSplice ([@btt216-B27])\] attempts to determine where in the genome a given sequence comes from. An assembler \[e.g. Cufflinks ([@btt216-B24]) and Scripture ([@btt216-B9])\] addresses the problems of which transcripts are present and estimating their abundances.
Existing RNA-seq pipelines can be divided into two major categories: align-first pipelines and assembly-first pipelines ([@btt216-B21]). Assembly-first pipelines attempt to assemble and quantify the complete transcriptome without a reference. Several algorithms, such as Trinity ([@btt216-B7]) and TransABySS ([@btt216-B22]), have been developed. However, alignment to a reference genome is still necessary to interpret the results from an assembly-first pipeline and to relate them to existing knowledge. The assembly-first pipeline is compute-intensive and may require several days to complete. In align-first pipelines, a high-quality reference genome serves as a scaffold for inferring the source of RNA-seq fragments. Current alignment approaches are both computationally efficient and easily parallelized. Thus, the align-first RNA-seq analysis can be finished within hours even on a normal desktop machine. Therefore, align-first pipelines such as TopHat/Cufflinks ([@btt216-B24], [@btt216-B25]) or MapSplice/Cufflinks ([@btt216-B27]) are generally preferred when a suitable reference genome is available.
1.1 Multiple-alignment problem {#SEC1.1}
------------------------------
In this article, we assume that the RNA-seq data are paired-end reads, which are widely used for transcriptome inference. Our approach can be used for single-end reads as well. In paired-end RNA-seq data, a fragment is a sub-sequence of an expressed transcript. High-throughput sequencing provides two reads corresponding to the two ends of the fragment. If a fragment can be mapped to more than one location in the genome, we say that this fragment has *multiple alignments*, as shown in [Figure 1](#btt216-F1){ref-type="fig"}. As each fragment originates from one location in the genome, multiple alignments must be processed and corrected before subsequent analysis can proceed. Inappropriate handling of fragments with multiple alignments affects the subsequent analysis and may lead to questionable conclusions. For example, the 'widespread RNA and DNA sequence differences' ([@btt216-B20]) are suspected to be (at least partially) due to systematic technical errors, including misalignments ([@btt216-B17]). Fig. 1. A fragment with paired-end reads that can be aligned to two locations in the genome
Current RNA-seq analysis pipelines handle the multiple-alignment problem in both the alignment and assembly steps. Most existing aligners \[e.g. TopHat ([@btt216-B23])\] use a scoring system in which only the alignments with the 'best score' are kept. However, a fragment may still have multiple alignments with equally good scores. In our experiments on real mouse RNA-seq data, we observe that at least 5% of fragments have multiple alignments. The assembler \[e.g. Cufflinks ([@btt216-B24])\] either assumes that such fragments contribute equally to each location or uses a probabilistic model to estimate their contributions based on the abundance of the corresponding transcripts ([@btt216-B19]).
1.2 Genomic factors causing multiple alignments {#SEC1.2}
-----------------------------------------------
In general, multiple alignments are caused by the existence of paralogous sequences within a genome. Duplicated and repetitive sequences need not be strictly identical. In this subsection, we discuss genomic factors that may lead to multiple alignments and their impact on RNA-seq analysis. Retrotransposition and gene duplication are two biological phenomena that generate sequences with high levels of nucleotide similarity. Interspersed highly repetitive sequences, such as LINEs and SINEs, can be expressed in an autonomous or nonautonomous manner, but they are not our focus. That leaves us with three major types of genomic factors: processed pseudogenes ([@btt216-B3]; [@btt216-B26]; [@btt216-B28]), nonprocessed pseudogenes ([@btt216-B13]) and repetitive sequences shared by gene families ([@btt216-B11]; [@btt216-B14]).
Pseudogenes ([@btt216-B10]; [@btt216-B16]) are typically nonfunctional, though some of them may be expressed ([@btt216-B12]). They can be further categorized into two groups, processed and nonprocessed pseudogenes, based on their origins. Both lead to repetitive genomic sequences. In general, pseudogenes are nonfunctional and under reduced selection pressure; thus, they typically exhibit a higher mutation rate than the expressed genes from which they originated.
### 1.2.1 Processed pseudogene {#SEC1.2.1}
A processed pseudogene ([@btt216-B26]) is generated when an mRNA is reverse transcribed and reintegrated back into the genome. The resulting DNA sequence of the processed pseudogene consists of the concatenated exon sequences of its original transcript. Because there are no splice junctions in the sequence of the processed pseudogene, it is easier for current RNA-seq aligners to map fragments to the processed pseudogene than to the actual gene from which they are expressed, especially fragments that cross a splice junction. Both the unexpressed pseudogene and its corresponding expressed gene may be reported by the assembler if the implementation of the assembler does not consider such cases. For example, [@btt216-B9] observed that a few highly expressed transcripts may not be fully reconstructed owing to alignment artifacts caused by processed pseudogenes.
### 1.2.2 Nonprocessed pseudogene {#SEC1.2.2}
Nonprocessed pseudogenes ([@btt216-B13]) typically result from a historical gene duplication event, followed by an accumulation of mutations and an eventual loss of function. Nonprocessed pseudogenes often share similar exon/intron structures with their originating gene. From the aligner's perspective, fragments can be mapped to the expressed original gene, its nonprocessed pseudogene, or both. Similar to processed pseudogenes, the assembler may report a nonprocessed pseudogene when its corresponding functional gene is expressed.
### 1.2.3 Repetitive shared sequences {#SEC1.2.3}
Besides pseudogenes, many functional gene families share subsequences that are almost identical to each other. One repetitive sequence shared by different genes in the human genome is *Alu* ([@btt216-B11]; [@btt216-B14]). Consider the case in which, among all genes that share the *Alu* sequence, only a subset is expressed. The aligner will map the fragments originating from the expressed subset to all similar sequences in the genome, and the assembler may report all genes sharing the repetitive sequence as being expressed.
Any of these three biological factors may lead to multiple alignments. Without proper post-processing, an assembler may report many unexpressed pseudogenes or even random regions as expressed genes, and it may also miss a few highly expressed genes.
Existing RNA-seq analysis pipelines provide heuristics for addressing the multiple-alignment problem; however, they do not explicitly consider its genomic causes. In our study, using mouse RNA-seq data, the transcripts reported by Cufflinks include ∼3.5% from known pseudogenes and ∼10% from unannotated regions. A quarter of these transcripts (13.5% in total) are likely to be false positives caused by multiple alignments.
[Figure 2](#btt216-F2){ref-type="fig"} shows the pile-up plots of two regions from a mouse genome reported by a current RNA-seq pipeline. The top one is a gene named *Caml3*, whereas the bottom one is unknown. The unknown gene's sequence is similar to the sequence of concatenated exons from *Caml3*. Fragments that are uniquely aligned to the unknown gene by the aligner can also be aligned to *Caml3*. However, the aligner fails to find the proper alignment because it does not consider all possible alignments crossing splice junctions owing to the search complexity. This collection of evidence indicates that the unknown gene is actually an unannotated processed pseudogene of *Caml3*. Fig. 2.Two transcripts reported by Cufflinks. The top one maps to a known gene named *Caml3*, and the bottom one does not map to any known gene. Two transcripts are aligned by their shared fragments in the plot. Owing to the space limitation, the top figure is truncated, and only shows the region containing shared fragments. The dashed line indicates the truncated boundary. The three vertical lines in purple represent three splice junctions in the top transcript
Therefore, the identification of expressed genes and unexpressed pseudogenes is a significant confounding factor in RNA-seq analysis. No existing analysis method explicitly attempts to identify and reassign fragments that are mapped to pseudogenes. A similar observation was made by ContextMap ([@btt216-B5]): multiple alignments from an RNA-seq aligner can be handled by removing incorrect alignments based on the 'context' of the alignments. However, ContextMap simply defines the 'context' as a fixed window around the alignment on the genome. It also does not try to rescue any missed alignments. In contrast, we introduce the concept of the fragment attractor, which leverages the results from both an aligner and an assembler to determine the appropriate 'context' for each individual alignment. Sharing maps between fragment attractors are built to help discover and restore missed alignments.
In this article, we introduce the GeneScissors pipeline, a comprehensive approach to address the problem of detecting and correcting those fragments errantly aligned to unexpressed genomic regions. When compared with the standard TopHat/Cufflinks pipeline, GeneScissors is able to remove 57% pseudogenes without using any annotation database. GeneScissors can reduce inference errors in existing analysis pipelines and aid in distinguishing truly unannotated genes from errors.
2 METHODS {#SEC2}
=========
In this section, we present GeneScissors, a general component that can be applied to any align-first RNA-seq pipeline to detect and correct errors in transcriptome inference owing to fragment misalignments. In a standard RNA-seq pipeline, the 'best' alignment for a fragment with multiple alignments is determined without considering the surrounding alignments of other fragments; such decisions may be premature. In the GeneScissors pipeline, we first collect all possible alignments for all fragments, then examine the genomic regions to which multiply aligned fragments map, together with the other fragments aligned to those regions. In this way, GeneScissors is able to leverage statistics of the fragment distribution and other features of the alignments.
[Figure 3](#btt216-F3){ref-type="fig"} describes the proposed workflow for RNA-seq analysis. It uses an existing aligner and assembler (with minor modifications to keep all discovered alignments; details in [Section 3.1](#SEC3.1){ref-type="sec"}) to identify regions to which fragments align. To distinguish these regions from expressed genes, we refer to each such region as a *fragment attractor*. Fragments with multiple alignments *link* the corresponding fragment attractors. We refer to these fragments and their alignments as *shared fragments* and *shared alignments*, respectively. The relationships among linked fragment attractors are defined by their *shared fragments*. GeneScissors uses *sharing graphs* to represent the linked fragment attractors and to discover new fragment alignments. We create training instances using simulated RNA-seq fragments from annotated genes in Ensembl to build a classification model. Then, on real data, the classification model predicts and removes the fragment attractors that are likely due to misalignments. Existing assembly methods can be applied on the remaining fragment alignments to re-estimate the abundance level of expressed fragment attractors. We introduce the sharing graph in [Section 2.1](#SEC2.1){ref-type="sec"}, a classification model to identify the unexpressed fragment attractors in [Section 2.2](#SEC2.2){ref-type="sec"} and the feature extraction method for the sharing graphs in [Section 2.3](#SEC2.3){ref-type="sec"}. Fig. 3. The workflow of the GeneScissors pipeline. The traditional RNA-seq analysis pipeline is the path on the left side. Its alignment and assembly results are used by GeneScissors to infer fragment attractors, build sharing graphs and identify all fragment alignments in the genome. GeneScissors then builds a classification model to detect and remove unexpressed genes
2.1 Sharing graph {#SEC2.1}
-----------------
We construct *sharing graphs* as follows. Each fragment attractor is represented by a node, and each pair of linked fragment attractors is connected by an edge. Each connected component is called a *sharing graph*. For each edge in a sharing graph, we build a position-by-position *sharing map* between the pair of linked fragment attractors through their shared fragments. For any fragment *f* aligned to a fragment attractor *g*, we first define a function *pos~f,g~*, which returns the aligned position in fragment attractor *g*, given a position in fragment *f*, and its inverse function *pos~f,g~*^−1^, which returns the corresponding position in *f* (if it exists), given a position in *g*. For a pair of linked fragment attractors *g~a~* and *g~b~* and one of their shared fragments *f*~1~, position *k* in *f*~1~ may be aligned to position *i* in fragment attractor *g~a~* and position *j* in *g~b~*. This provides a correspondence between position *i* in *g~a~* and position *j* in *g~b~*, given by *i* = *pos~f1,ga~*(*k*) and *j* = *pos~f1,gb~*(*k*). A sharing map can be built between *g~a~* and *g~b~* through this approach by using all their shared fragments. It is possible that two shared fragments *f*~1~ and *f*~2~ map the same position *i* in *g~a~* to two different positions in *g~b~*, i.e. *pos~f1,gb~*(*pos~f1,ga~*^−1^(*i*)) ≠ *pos~f2,gb~*(*pos~f2,ga~*^−1^(*i*)). Empirically, such cases are rare, and when they occur, we use the majority rule to resolve the conflict.
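As an illustration, the sharing-map construction with majority-rule conflict resolution can be sketched in Python. The input structure (a fragment id mapped to per-attractor position correspondences) and the function name are hypothetical, not the pipeline's actual data model, which is derived from BAM records:

```python
from collections import Counter, defaultdict
from itertools import combinations

def build_sharing_maps(alignments):
    """Build a position-by-position sharing map for each pair of linked
    fragment attractors. `alignments` maps a fragment id to
    {attractor: {fragment_pos: attractor_pos}}. Conflicting position
    correspondences are resolved by majority vote."""
    # (g_a, g_b, position in g_a) -> votes for each candidate position in g_b
    votes = defaultdict(Counter)
    for hits in alignments.values():
        for g_a, g_b in combinations(sorted(hits), 2):
            pos_a, pos_b = hits[g_a], hits[g_b]
            for k in pos_a.keys() & pos_b.keys():  # positions within the fragment
                votes[(g_a, g_b, pos_a[k])][pos_b[k]] += 1
    sharing_maps = defaultdict(dict)
    for (g_a, g_b, i), counts in votes.items():
        sharing_maps[(g_a, g_b)][i] = counts.most_common(1)[0][0]  # majority rule
    return dict(sharing_maps)
```

Each shared fragment casts one vote per covered position; the winning correspondence defines the map for that edge of the sharing graph.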
The region of a fragment attractor that is covered by the sharing map is called the *shared region*. In addition to the shared fragments, some other fragments uniquely aligned to the fragment attractor may align to the shared region. These fragments should have been aligned to the linked fragment attractor too, but the aligner might have failed to recognize the alignments owing to the reasons we discussed previously. Therefore, with the help of the sharing map, we can 'restore' these missed alignments from existing aligner's result. For example, in [Figure 4](#btt216-F4){ref-type="fig"}a, we show a sharing graph among three fragment attractors. The red regions in the bottom row of each fragment attractor are the shared regions. The red dashed boxes contain the fragments uniquely aligned to one fragment attractor by the aligner but should have been aligned to the linked fragment attractors too. In [Figure 4](#btt216-F4){ref-type="fig"}b, we show more details on how the new alignments of the fragments are established through the sharing map. This alignment discovery operation needs to be done in both directions for each pair of linked fragment attractors. In our previous example in [Figure 2](#btt216-F2){ref-type="fig"}, the uniquely aligned fragments (between the black curve and the red curve) in the shared regions should have been aligned to both fragment attractors. Restoring fragment alignments to multiple positions does not cause inflation in abundance level estimation because transcriptome inference methods such as Cufflinks already consider the shared alignments. This approach enables us to safely rescue fragment alignments missed by an aligner. Fig. 4.(**a**) A sharing graph of three fragment attractors A, B and C. Each solid box represents a pile-up of fragments of a fragment attractor. Each pair of connected hollow rectangles represents a fragment of paired end reads. 
The red fragments are the shared fragments that can be mapped by the aligner to all three fragment attractors. The bottom row in each box represents the transcript sequence. The red regions (except the splice junctions in the transcript sequences) are the regions to which the shared fragments align. (**b**) A sharing map between fragment attractors A and C and the discovered new alignments (shown in dashed rectangles). These new alignments are rescued from the uniquely aligned fragments in the shared region of one of the two fragment attractors
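The rescue step can be sketched as follows; `rescue_alignments` and its inputs are illustrative names, assuming the sharing map is a simple position dictionary as built above:

```python
def rescue_alignments(unique_alns, sharing_map):
    """Restore missed alignments: a fragment uniquely aligned inside the
    shared region of one attractor (all of its positions are covered by the
    sharing map) gains the corresponding alignment in the linked attractor.
    Fragments with positions outside the shared region are left alone."""
    rescued = {}
    for frag, positions in unique_alns.items():
        if all(p in sharing_map for p in positions):
            rescued[frag] = [sharing_map[p] for p in positions]
    return rescued
```

In the full pipeline this operation is applied in both directions for each pair of linked fragment attractors.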
2.2 Classification model {#SEC2.2}
------------------------
GeneScissors processes RNA-seq data at the granularity of linked fragment attractors. Because there is no easy way to determine whether a fragment attractor is expressed in real datasets, we build our training model from simulated data and apply it to real data. We first generate our training set from a simulated population, in which each sample is a set of fragments simulated from a set of selected transcripts in the annotation database (more details are in [Section 3.1](#SEC3.1){ref-type="sec"}). Then, we apply the aligner and the assembler to each sample of the simulated data, build the sharing graphs based on their results and generate training instances from the sharing graphs. The fragment attractors that cannot be mapped back to the selected transcripts are unexpressed. We use a classification model to infer whether a fragment attractor (hereafter the *target* fragment attractor *g~t~*) is expressed using features of *g~t~* and another fragment attractor (hereafter the *assistant* fragment attractor *g~a~*) linked to *g~t~* by an edge in the sharing graph. For every pair of linked fragment attractors, we build two instances, one with each attractor as the target. Each instance is labeled according to whether its target fragment attractor is expressed. Therefore, one fragment attractor may be the target in multiple instances. The intuition is that, for an unexpressed target fragment attractor, there should always be some instances in which the assistant fragment attractor is expressed. In such instances, the assistant fragment attractor should have fewer consistent mismatches, a longer sequence and a lower proportion of shared fragments than the target fragment attractor ([Section 2.3](#SEC2.3){ref-type="sec"} describes all features we use). Thus, we can train a binary classification model using these features to identify unexpressed target fragment attractors.
When we apply the model to test and real data, all target fragment attractors that are predicted as unexpressed at least once are removed from the result of the assembler, and the reads uniquely aligned to these fragment attractors are redistributed to the corresponding expressed fragment attractors. We experimented with support vector machines (SVMs), DecisionTrees and RandomForests as the learning method and found that RandomForests had the best overall performance. Once the classifier is built, we apply it to test data to evaluate the prediction accuracy and then to real data to predict *unexpressed* fragment attractors and remove their fragment alignments. Recall that, for all uniquely aligned fragments in the shared regions of these fragment attractors, we also discover new alignments to their linked fragment attractors using the sharing map.
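The construction of training instances, two per sharing-graph edge with the roles of target and assistant swapped, might look like this sketch (names and data layout are hypothetical):

```python
def make_instances(edges, features, expressed):
    """Build two training instances per edge of a sharing graph, one with
    each attractor as the target and the other as the assistant. `features`
    maps an attractor to its feature list; `expressed` is the set of
    attractors that map back to selected transcripts (known in simulation)."""
    instances = []
    for g1, g2 in edges:
        for target, assistant in ((g1, g2), (g2, g1)):
            x = features[target] + features[assistant]
            y = target in expressed  # label: is the target expressed?
            instances.append((x, y))
    return instances
```

The resulting (x, y) pairs can then be fed to any binary classifier, such as a random forest.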
2.3 Fragment attractor features {#SEC2.3}
-------------------------------
We extract features from both the target fragment attractor *g~t~* and the assistant fragment attractor *g~a~* in each instance. Each instance contains 14 features, listed in [Table 1](#btt216-T1){ref-type="table"}. All features except the number of consistent mismatch locations are straightforward to calculate: features *NE* and *NI* are directly collected from the assembler's output, and *NR*, *MF*, *MR* and *CM* are calculated by our sharing graph generator. The use of the consistent mismatch count *CM* as a feature is motivated by the observation that pseudogenes usually have a higher mutation rate. The concept of consistent mismatch and the method to find consistent mismatch locations across the genome are described in Appendix 1. The number of exons is helpful in distinguishing processed pseudogenes, which are single-exon (singleton) genes. All the other features are motivated by our observation that unexpressed fragment attractors tend to have fewer aligned fragments and shorter regions than their corresponding expressed ones. Table 1. The features used for detecting fragment attractors resulting from misalignments

| Features | Description |
|---|---|
| Exon counts | The observed numbers of exons of *g~a~* and *g~t~*; two derived Boolean features indicate whether each gene is a singleton of exons. |
| Fragment proportions | The proportions of the fragments that can be aligned to *g~a~* and *g~t~*, relative to the total fragments. |
| Shared-fragment proportions | The proportions of the shared fragments relative to the fragments aligned to *g~a~* and *g~t~*, respectively. |
| Shared-region coverage | The proportions of the entire regions of *g~a~* and *g~t~* that are covered by shared fragments. |
| Consistent mismatches | The numbers of base pairs that have consistent mismatches in the shared regions of *g~a~* and *g~t~*, respectively. |
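A rough sketch of the per-attractor feature computation follows. The formulas paraphrase Table 1, and the mapping of symbols to formulas is our reading of the text, not the published implementation:

```python
def attractor_features(n_exons, n_aligned, n_shared, region_len, shared_len,
                       n_cm, n_total):
    """One attractor's share of an instance's feature vector (sketch).
    Symbol-to-formula assignment is an assumption based on the prose."""
    return {
        "NE": n_exons,                  # observed number of exons
        "singleton": n_exons == 1,      # Boolean: single-exon gene?
        "NR": n_aligned / n_total,      # fraction of all fragments aligned here
        "MF": n_shared / n_aligned,     # fraction of aligned fragments shared
        "MR": shared_len / region_len,  # fraction of region covered by sharing map
        "CM": n_cm,                     # consistent-mismatch base pairs
    }
```

Concatenating the target's and the assistant's vectors yields one classification instance.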
3 RESULTS {#SEC3}
=========
We first describe a series of modifications made to open-source RNA-seq analysis tools to support GeneScissors. Then, we describe the various datasets used for evaluation. We evaluated two standard pipelines that do not use GeneScissors: one using TopHat and the second using MapSplice as an aligner. We then added GeneScissors to each pipeline, to improve the alignment results, and we refer to these as GeneScissors(TopHat) and GeneScissors(MapSplice) pipelines. All four pipelines use Cufflinks as the transcriptome assembler.
3.1 Software {#SEC3.1}
------------
GeneScissors uses modified versions of TopHat and Cufflinks and uses components written in C++, Python and the BamTools ([@btt216-B4]) library. Cuffcompare is used to map the reported genes back to Ensembl annotations and categorize them into three types: annotated normal genes/transcripts, annotated pseudogenes and unannotated regions.
### 3.1.1 Modifications to TopHat and Cufflinks {#SEC3.1.1}
We first present the algorithms used by TopHat and Cufflinks in ranking and reporting alignments and genes and then discuss our modifications to retain all fragment and partial fragment (unpaired reads) alignments.
In TopHat, if the fragment *f* has multiple alignments *x* and *y*, TopHat retains only alignment *y* and does not report alignment *x* when one of the following conditions is satisfied (tests are applied in order):

- *Mismatch rule:* *x* has more mismatches than *y*.
- *Splice junction rule:* *x* crosses more splice junctions than *y*.
- *Other rules:* Owing to space limitations, we omit the conditions that are not relevant to this article.
Only alignments with the best score are reported by TopHat. We observed that the splice-junction rule tends to favor processed pseudogenes; the correct alignment of a fragment with a splice junction is frequently discarded by TopHat if the fragment can be aligned to a processed pseudogene with the same number of mismatches.
In Cufflinks, a gene that meets the following criterion is suppressed:

- *75% rule:* More than 75% of the fragment alignments supporting the gene are mappable to multiple genomic loci.
Consider the example shown in [Figure 2](#btt216-F2){ref-type="fig"}. Cufflinks fails to remove the unannotated pseudogene, which is composed mostly of uniquely aligned fragments. This suggests that the 75% rule is insufficient.
Therefore, in the GeneScissors pipeline, we disabled the splice junction rule in TopHat and the 75% rule in Cufflinks.
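The behavior of these two rules can be sketched as follows; the tuple layout and function names are illustrative, and the real TopHat and Cufflinks implementations apply additional logic:

```python
def filter_alignments(alns, use_junction_rule=True):
    """TopHat-like best-alignment filtering (sketch). Each alignment is
    (locus, n_mismatches, n_junctions). With the splice-junction rule
    enabled, a junction-free pseudogene alignment beats the true spliced
    alignment at equal mismatch count; disabling it keeps both."""
    fewest_mm = min(a[1] for a in alns)
    kept = [a for a in alns if a[1] == fewest_mm]        # mismatch rule
    if use_junction_rule:
        fewest_junc = min(a[2] for a in kept)
        kept = [a for a in kept if a[2] == fewest_junc]  # splice junction rule
    return kept

def suppress_gene(n_multi, n_total, threshold=0.75):
    """Cufflinks' 75% rule (sketch): suppress a gene if more than 75% of its
    supporting fragment alignments map to multiple genomic loci."""
    return n_multi / n_total > threshold
```

The example below shows the failure mode described above: at equal mismatch counts, the junction rule keeps only the pseudogene alignment.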
### 3.1.2 Simulator {#SEC3.1.2}
To generate training data for our classification model and evaluate the effectiveness of GeneScissors for detecting and removing unexpressed fragment attractors, we built an RNA-seq simulator to provide a 'ground truth' model for fragment attractors. The simulator randomly chooses a (user-specified) number of genes, and for each gene, it samples a subset of its transcripts. Then, it uniformly samples paired-end fragments up to a certain abundance level for each selected transcript. For each fragment, it assigns a quality score to each base pair, drawing from an empirical distribution derived from real data, and generates base pair errors based on these quality scores.
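A minimal sketch of the fragment sampling, using a fixed error rate in place of the empirical per-base quality model (all names and defaults here are illustrative):

```python
import random

BASES = "ACGT"
COMP = str.maketrans("ACGT", "TGCA")

def mutate(seq, err):
    """Flip each base to a different one with probability `err` (a stand-in
    for sampling errors from per-base quality scores)."""
    return "".join(random.choice(BASES.replace(b, "")) if random.random() < err
                   else b for b in seq)

def revcomp(seq):
    return seq.translate(COMP)[::-1]

def simulate_fragments(transcript, n_frags, frag_len=200, read_len=100, err=0.01):
    """Uniformly sample paired-end fragments from a transcript sequence and
    emit the two end reads (second read reverse-complemented)."""
    frags = []
    for _ in range(n_frags):
        start = random.randrange(len(transcript) - frag_len + 1)
        frag = transcript[start:start + frag_len]
        frags.append((mutate(frag[:read_len], err),
                      revcomp(mutate(frag[-read_len:], err))))
    return frags
```

Repeating this over a sampled subset of transcripts, each at a chosen abundance, yields one simulated sample.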
3.2 Data {#SEC3.2}
--------
Our study used inbred and F1 crosses of three mouse strains: CAST/EiJ, PWK/PhJ and WSB/EiJ. To minimize the impact of unknown SNPs on the alignments, we generated strain-specific genomes by incorporating high-confidence SNPs detected in a recent DNA sequencing project of laboratory mouse strains conducted by the Wellcome Trust ([@btt216-B15]) into the *mm9* reference genome. We used the Ensembl database (build 63) ([@btt216-B6]) to annotate and evaluate the results from real and simulated data.
### 3.2.1 Simulated data {#SEC3.2.1}
An RNA-seq simulator was used to generate synthetic data for 60 RNA-seq samples, also derived from the three inbred mouse strains CAST/EiJ, PWK/PhJ and WSB/EiJ. In each sample, we selected annotated functional genes in Ensembl as the expressed genes and randomly set them to different levels of abundance. Many genes included multiple transcripts. We generated 10 million fragments with 100 base pair paired-end reads for each sample. We used TopHat and MapSplice as aligners and Cufflinks as the assembler to analyze the simulated data. More than 7.5% of the genes reported in the results were not among the selected genes in our simulation setting. From the results, we built sharing graphs and used cross-validation to train and test our model. A feature selection study using the simulated data can be found in the [supplementary material](http://bioinformatics.oxfordjournals.org/lookup/suppl/doi:10.1093/bioinformatics/btt216/-/DC1).
### 3.2.2 Real data {#SEC3.2.2}
We applied GeneScissors to RNA-seq data from nine inbred samples and 53 F1 samples derived from three inbred mouse strains CAST/EiJ, PWK/PhJ and WSB/EiJ. We sequenced cDNA from mRNA extracted from brain tissues of three to six replicates of both sexes and the six possible crosses (including the reciprocal). To mitigate misalignment errors owing to heterozygosity, for each F1 sample, we aligned each fragment to the genome of each parent separately (i.e. the mm9 reference sequence with annotated SNPs) and then merged the two alignments while retaining all distinct multiple alignments (a union of the set of all mapped fragments each identified by their mapping coordinate and read identifier). For comparison purposes, we also applied this alignment strategy in the TopHat and MapSplice pipelines.
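The union-merge of the two parental alignment sets can be sketched as follows; the record layout is illustrative, whereas the real pipeline works on BAM records:

```python
def merge_parental_alignments(maternal, paternal):
    """Merge per-parent alignment sets for an F1 sample, retaining all
    distinct alignments. Each alignment is keyed by (read id, chrom, pos),
    so the same mapping found against both parental genomes is kept once."""
    merged = {}
    for aln in list(maternal) + list(paternal):
        key = (aln["read_id"], aln["chrom"], aln["pos"])
        merged.setdefault(key, aln)  # first occurrence wins; duplicates drop
    return list(merged.values())
```

This preserves genuinely distinct multiple alignments (e.g. a read mapping to different loci in the two parental genomes) while collapsing duplicates.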
3.3 Results from simulated data {#SEC3.3}
-------------------------------
In [Table 2](#btt216-T2){ref-type="table"}, we first present the average precision, recall, F-scores and area under the curve (AUC) when LinearSVM, DecisionTree and RandomForests were used to build the classification models. All scores were measured by 10-fold cross-validation. The results demonstrate that our feature set is adequate and can help detect unexpressed genes efficiently. RandomForests is the best and most consistent among the three methods. The classification model trained by RandomForests can detect nearly 90% of spurious calls owing to misalignments. Though SVM has a slightly higher precision score, its recall is much lower than that of RandomForests. This is because RandomForests is more suitable than SVM for data with discrete features and is more powerful in handling correlations between features. Therefore, we chose RandomForests as the default classification method for our GeneScissors pipeline. Table 2. Summary of the results from different classification methods

| Statistics | LinearSVM | DecisionTree | RandomForests |
|---|---|---|---|
| Precision | 81.90% | 83.70% | 89.60% |
| Recall | 83.00% | 84.80% | 87.80% |
| F-measurement | 85.70% | 84.20% | 88.60% |
| Area under the curve | 0.843 | 0.837 | 0.91 |
Next, we investigated how much improvement GeneScissors could bring to overall transcriptome calling by correcting fragment misalignment. We compared the results of our improved GeneScissors pipelines with those from the TopHat and MapSplice pipelines. Both GeneScissors pipelines used the modified version of Cufflinks. The GeneScissors (TopHat) pipeline used the modified version of TopHat. The MapSplice and TopHat pipelines used the regular version of Cufflinks. We used three measurements to compare performance at the gene level: GenePrecision, GeneRecall and GeneF-measurement.
The results of different pipelines are summarized in [Table 3](#btt216-T3){ref-type="table"}. All statistics are averaged over a 10-fold cross-validation. We observe that Cufflinks tends to report a much higher number of genes in all four pipelines. There are only ∼13 000 expressed genes, but Cufflinks reports \>30 000 genes in the TopHat or MapSplice pipelines and \>26 000 genes in the GeneScissors pipelines. Table 3. Comparison of MapSplice, TopHat, GeneScissors (MapSplice) and GeneScissors (TopHat) pipelines

| Statistics | MapSplice pipeline | TopHat pipeline | GeneScissors (MapSplice) | GeneScissors (TopHat) |
|---|---|---|---|---|
| Number of reported genes | 36 516 | 30 622 | 26 556 | 26 473 |
| GenePrecision | 35.6% | 41.8% | 48.2% | **48.3%** |
| GeneRecall | **95.1%** | 93.2% | 93.0% | 93.2% |
| GeneF-measurement | 51.5% | 58.2% | 63.5% | **63.6%** |

[^1]
A significant percentage of these reported genes can be mapped back to the 'expressed' genes from which we generated synthetic reads. Several reported genes are often mapped back to the same expressed gene by Cuffcompare. Cufflinks failed to recognize them as (possibly different transcripts of) the same gene, perhaps owing to both the length and variable number of splice junctions and/or the low fragment coverage seen for some transcripts. In this case, when we computed GenePrecision and GeneRecall, only one of them was counted as the 'correct' gene, the remaining ones were counted as 'incorrect' genes. As all four pipelines used Cufflinks to infer transcriptome, all of them had relatively low GenePrecision. The GeneScissors (MapSplice) pipeline had a 12.6% improvement in GenePrecision over the original MapSplice pipeline, at the cost of a slight drop in GeneRecall. The GeneScissors (TopHat) pipeline had a 6.5% improvement in GenePrecision over the TopHat pipeline while retaining the same level of GeneRecall. GeneScissors was able to detect and remove \>4000 spurious (gene) calls by correcting fragment misalignments.
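The gene-level scoring described here, in which multiple reported genes matching the same expressed gene count as only one correct call, can be sketched as (names are illustrative):

```python
def gene_precision_recall(reported_to_annotated, expressed_genes):
    """Compute GenePrecision and GeneRecall as described in the text.
    `reported_to_annotated` maps each reported gene to the expressed gene it
    matches (None if unmatched); duplicate matches to the same expressed gene
    collapse to a single correct call, the rest count as incorrect."""
    matched = set()
    for reported, annotated in reported_to_annotated.items():
        if annotated is not None and annotated in expressed_genes:
            matched.add(annotated)  # duplicates collapse here
    correct = len(matched)
    precision = correct / len(reported_to_annotated)
    recall = correct / len(expressed_genes)
    return precision, recall
```

This makes explicit why redundant Cufflinks calls depress GenePrecision even when GeneRecall stays high.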
We also observed that the MapSplice pipeline has the highest GeneRecall score but a much lower GenePrecision score compared with the TopHat and GeneScissors pipelines. This is because MapSplice can find more possible alignments than TopHat but is not able to identify the correct alignment when a fragment has multiple alignments. Hence, the MapSplice pipeline reported more false positives than the TopHat pipeline.
Overall, the GeneScissors (TopHat) pipeline performed best among the four pipelines on this challenging test case. It is obvious that (i) detecting and correcting fragment misalignments can improve the accuracy in transcriptome inference under all circumstances and (ii) given the correct fragment alignments, better transcriptome inference algorithms are still needed. In addition, GeneScissors does not assume all pseudogenes are unexpressed. GeneScissors is able to distinguish expressed pseudogenes from the rest with a comparable accuracy, demonstrated by a simulation study in the [supplementary material](http://bioinformatics.oxfordjournals.org/lookup/suppl/doi:10.1093/bioinformatics/btt216/-/DC1).
3.4 Results from real RNA-seq data {#SEC3.4}
----------------------------------
We also applied both the TopHat pipeline and our GeneScissors (TopHat) pipeline to the real RNA-seq data. The running time for the TopHat pipeline was ∼24 h per sample, and the extra running time for the GeneScissors (TopHat) pipeline was ∼10 h per sample. Overall, the GeneScissors (TopHat) pipeline reported 4.25% fewer transcripts in real data than the TopHat pipeline ([Fig. 5](#btt216-F5){ref-type="fig"}a). Given that GeneScissors removed most of the false positives in our simulation study, this suggests that the transcripts reported by the TopHat pipeline include a significant number of false positives. Fig. 5. Comparisons between multiple samples run through both the GeneScissors pipeline and the TopHat pipeline. Results from the same sample are connected by an arrow. The three strains used were CAST/EiJ, PWK/PhJ and WSB/EiJ, and they are indicated by the initials C, P and W, respectively. The two-letter designations indicate the direction of the cross, with the initial of the maternal strain followed by the initial of the paternal strain. The samples are clustered according to replicates from the same sex and F1 cross, followed by the reciprocal cross. The sex is indicated by F (female) and M (male)
Despite reporting fewer transcripts overall, [Figure 5](#btt216-F5){ref-type="fig"}b shows that GeneScissors actually reported 0.97% more transcripts that exactly or partially match the splice junction annotations in the Ensembl database than the TopHat pipeline (the improvement is statistically significant under a paired Student's *t*-test). These transcripts are likely false negatives missed by the TopHat pipeline owing to misalignments. [Figure 5](#btt216-F5){ref-type="fig"}c shows that the TopHat pipeline reported \>800 transcripts that are annotated as pseudogenes in Ensembl. GeneScissors managed to remove \>53.6% of them, and the fraction of transcripts that overlap any pseudogenes decreased from 3.2% to 1.57%. [Figure 5](#btt216-F5){ref-type="fig"}d shows that GeneScissors reported 16% fewer unannotated transcripts than the TopHat pipeline. All these results indicate that GeneScissors is effective in detecting and correcting false positive and false negative transcript reports caused by fragment misalignments.
Furthermore, the number of pseudogenes reported by the original TopHat/Cufflinks pipeline in inbred samples is smaller than the number in F1 hybrids. Similarly, the fraction of pseudogenes () removed by GeneScissors in the inbred samples is smaller than the fraction () removed in the F1 hybrids. This indicates that the additional complications of F1 samples pose additional challenges to RNA-seq analysis pipelines and make them more prone to errors than inbred samples.
4 DISCUSSION AND CONCLUSION {#SEC4}
===========================
In this article, we presented GeneScissors, a general approach to detect and correct transcriptome inference errors caused by misalignments, which can be applied to any RNA-seq analysis pipeline. GeneScissors considers three underlying biological factors that lead to fragment misalignments and spurious transcript reporting. We proposed a classification model to detect false discoveries owing to misalignment, and the results show that it can provide significant improvement in overall accuracy.
Other heuristic approaches have been used to avoid reporting unexpressed genes in RNA-seq assembly results, such as discarding all known pseudogenes reported by the TopHat pipeline, masking repeated elements in the genome, or aligning fragments to the known transcriptome instead of the genome. The key difference is that our RNA-seq analysis does not require any additional annotations beyond adding SNPs, and it still supports novel transcript discovery.
Transcript discovery is important because current annotations are incomplete with regard to genes, isoforms and allele-specific variants. For example, in the real data, we observed ∼4000 unannotated transcripts clustered into ∼2300 unannotated genes on average. These transcripts persist after applying GeneScissors, which attempts to identify and correct misaligned fragments. This implies that current annotations are neither complete nor entirely accurate. For example, recent studies ([@btt216-B12]; [@btt216-B16]) found that some regions previously thought to be pseudogenes can actually be transcribed into mRNA. Hence, removing all annotated pseudogenes or highly repeated regions may lead to the removal of actually expressed transcripts. In contrast, GeneScissors might choose a pseudogene over the annotated paralog based on which better matches known genetic variants.
Furthermore, current pipelines using Cufflinks tend to over-report genes, especially when the genes share a high degree of sequence similarity with other expressed genes in the data. GeneScissors alleviates the problem to some extent by recovering missed multiple fragment alignments and discarding fragment alignments to unexpressed genes/regions. However, there is still room for improvement.
Although our precision and recall scores are near 90%, in future work we plan to exploit additional features and constraints to improve the classification accuracy. Example constraints include that each sharing graph must contain at least one expressed gene and that each shared fragment must belong to an expressed gene. In addition, we plan to investigate how to rescue fragment alignments that were discarded because they aligned to an unexpressed fragment attractor but that do not fall in the regions shared with any linked fragment attractor, because such fragments should belong to some expressed gene.
Supplementary Material
======================
###### Supplementary Data
The authors thank those center members who prepared and processed samples as well as those who commented on and encouraged the development of GeneScissors; in particular, Weibo Wang, Isa-Kemal Pakatci, Zhishan Guo, John Calloway, James J. Crowley and Patrick F. Sullivan. They also thank three anonymous reviewers for their thoughtful comments.
*Funding:*\[NIMH/NHGRI P50 MH090338\], \[NIH GM P50 GM076468\], \[NSF IIS-1313606\], \[NSF IIS-0812464\].
*Conflict of Interest*: none declared.
Appendix 1
==========
CONSISTENT MISMATCHES
---------------------
For a given base pair location in the genome, if the number of aligned fragments that carry an allele different from the reference genome is much higher than the number expected from random sequencing errors, we call it a *consistent mismatch* location. There are three possible reasons why a consistent mismatch occurs: (i) a missing SNP or heterozygous site in a diploid sample's genome (an inconsistency between the reference DNA sequence and the sample's DNA sequences), (ii) an RNA-editing site, and (iii) misaligned fragments (a difference between the sequences of a gene and its pseudogene). Consider the example shown in [Figure 2](#btt216-F2){ref-type="fig"}: there are two visible consistent mismatches on the expressed gene, *Caml3*, and they are due to either of the first two reasons (an unreported SNP, a heterozygous SNP, or an RNA-editing event). Because the fragments aligned to the unannotated region originated from *Caml3*, the pile-up plot of the unannotated region shows more than six visible consistent mismatches owing to the third reason (misaligned fragments).
It is important to separate consistent mismatches from mismatches owing to sequencing errors. We assume that the sequencing error rate of a given base pair *c* in a given fragment is reflected in its quality score *q~c~* and can be derived by the standard Phred transformation *e~c~* = 10^−*q~c~*/10^. Given a base pair location *l* in the genome, let *R*(*l*) be the set of base pairs aligned to that location. The number of mismatches *NM*(*l*) at this location should then follow a sum of Bernoulli distributions with different success probabilities, *NM*(*l*) ∼ Σ~*c*∈*R*(*l*)~ Bernoulli(*e~c~*). The *P*-value of the location is defined as *P*(*NM*(*l*) ≥ *nm*), where *nm* is the observed number of mismatches. A significant *P*-value indicates that this location may be a consistent mismatch location. To find all consistent mismatch locations, we first need to estimate the sequencing error rate, initially as the overall fraction of mismatched bases among all aligned bases.
In this calculation, we need to exclude the consistent mismatches that are not caused by sequencing errors. This can be done iteratively, starting from an initial estimate that uses all positions with at least ten fragments aligned. In each iteration, we mask positions in the genome whose mismatch rate is much higher than the currently estimated error rate and re-estimate the error rate, using the empirical distribution of *e* as the new estimate. For positions that contain fewer than three mismatches, we first calculate the two probabilities *P*(*NM*(*l*) = 0) and *P*(*NM*(*l*) = 1), each in *O*(\|*R*(*l*)\|) time; we then obtain the exact tail probability *P*(*NM*(*l*) ≥ *nm*) as the *P*-value.
The number of mismatches at a position is a sum of Bernoulli variables with different parameters, and the distribution of this sum can be approximated by a Poisson distribution with rate λ = Σ~*c*∈*R*(*l*)~ *e~c~*, based on [@btt216-B18].
Therefore, the *P*-value can be approximated by *P*(*NM*(*l*) ≥ *nm*) ≈ 1 − Σ~*k*=0~^*nm*−1^ e^−λ^λ^*k*^/*k*!.
Positions with *P*-values below the chosen significance threshold are classified as consistent mismatch locations. This process continues until no more consistent mismatch locations are found. The threshold is determined empirically, as it gives the best performance in identifying unexpressed genes.
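As a sketch of the computation above (our own naming, not code from GeneScissors; it assumes Phred-scaled quality scores and covers the paper's "fewer than three mismatches" exact case alongside the Poisson approximation):

```python
import math

def phred_error(q):
    """Phred quality score -> error probability: e = 10^(-q/10)."""
    return 10.0 ** (-q / 10.0)

def exact_pvalue(errors, nm):
    """Exact P(NM >= nm) for NM = a sum of independent Bernoulli(e_c) draws.
    Sketched only for nm <= 2, matching the paper's 'fewer than three
    mismatches' case; assumes every e_c < 1."""
    if nm > 2:
        raise ValueError("exact computation sketched only for nm <= 2")
    # P(NM = 0) = prod_c (1 - e_c), computed in O(|R(l)|) time
    p0 = 1.0
    for e in errors:
        p0 *= 1.0 - e
    # P(NM = 1) = sum_c e_c * prod_{c' != c} (1 - e_{c'})
    p1 = p0 * sum(e / (1.0 - e) for e in errors)
    if nm <= 0:
        return 1.0
    return 1.0 - p0 if nm == 1 else 1.0 - p0 - p1

def poisson_pvalue(errors, nm):
    """Poisson approximation: NM ~ Poisson(lambda) with lambda = sum_c e_c,
    so P(NM >= nm) ~= 1 - sum_{k < nm} exp(-lambda) * lambda^k / k!."""
    lam = sum(errors)
    return 1.0 - sum(
        math.exp(-lam) * lam ** k / math.factorial(k) for k in range(nm)
    )
```

For a pile-up of fifty Q30 bases (each *e~c~* = 0.001), the exact and Poisson tail probabilities for two observed mismatches agree to within about 2 × 10^−5^, illustrating why the approximation is adequate at typical coverage depths.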
[^1]: *Note*: The bold value of each row represents the best pipeline measured by the corresponding metric.
|
We'd firstly like to apologise to all the parties involved, and now I'd like to address each point one by one.
Prelude:
Our try-outs were very rushed, as we only began putting our team together 2 weeks before Open Division signups ended. There were a large number of players involved, and not much time available to see how they each worked together. Our initial roster selection from these try-outs was Lateks, Leodeddz, Matth, Invision, Rubikon and Vonethil; however, PoG got there first and Matth and Rubikon joined them. At this point, we asked iPN and Luddee to sub in some games, and seeing how well they worked with the team, we asked them if they would like to try out - iPN agreed.
Part 1: iPN
After a while of try-outs, iPN was indeed offered a spot in the team. In fact, we offered him a full spot as we felt he'd performed sufficiently in try-outs. Pleased with the decision, we started Open Division with iPN. After he confirmed he was happy for us to announce him as a player, the announcement was made - but he quickly came back to say he thought we were only announcing that he would be playing in Open Division, and that he had not decided if he wanted to join the team yet. After week 1, iPN unfortunately required some personal time off (no details will be offered here as I know he wants that kept private), which we were happy for him to take. MrBeck was available at this point and so we resumed with MrBeck. Upon iPN's return, he advised us that while he was interested, he still wanted to try out for another team. At the same time, MrBeck announced that he would not be available the following week due to try-outs with another team (that week being PIT). That's fine, we didn't remove either of them from our Open Division roster, but we could not risk missing scrim nights in the middle of both Open and PIT due to our flex supports being pledged to other teams for scrim nights. The logical thing here, since try-outs were continuing, was to bring in a 3rd try-out to ensure we were able to continue to play without having to cancel practice.
As it happens, the third one recently available at that time was Rubikon, whose team had just disbanded, and who confirmed he had no other try-outs and would be available for all of PIT and the rest of Open, making him a secure choice going into tournaments.
We still kept iPN and MrBeck on our roster. We heard nothing more from MrBeck after that point.
Part 2: Matth
The tweet discusses how Matth was brought in to replace Caspere by Mendzel without them knowing and being strongly against the decision. The irony here is that the decision to test Matth was made entirely based on the request of 2 players in the roster - Invision and Vonethil, so nobody here is sure how Invision did not know about these trials. During these try-outs, both Caspere and Matth would take different scrim nights, and after each scrim night, each player would be asked for their opinion.
At first, feedback showed that Caspere was the players' choice. This was passed on to the coaching team - that the players did not want to trial further. However, as the schedule was already arranged, and there were only 2 more nights left, with Matth having cancelled plans to attend, the decision was made to just play the 2 days out and carry on as we were.
Player feedback changed, however, and at the end of the final night of scrims, every single player gave feedback that if they were to pick a player for the flex DPS role at that point, it would be Matth. Both coaches agreed that Matth brought an extra edge to the games and they made a final decision. There was a suggestion made to have a 7-man roster, with Caspere as a sub. The coaches discussed it and decided it wasn't feasible, as there would be very limited occasions where they would need to bring in Caspere. I cannot comment on this decision, roster rotation is not something I personally have any experience in, but I trust the decision was correct. Again, this was discussed with Invision, with the reasons why it wasn't a good idea explained, and again Invision agreed he understood their point.
And yes, we have all these chats logged.
Part 3: The drop in atmosphere
This part is very true, players did begin to complain that scrims had become toxic, with the finger firmly pointed toward Matth. The coaches told the players that Matth would be spoken to and asked to no longer bring down the mood of the team. They were then advised that if they felt there had been no change within a few days, to go back to the coaches and let them know that they were still not happy, and the coaches would look for try-outs. There were actually some players contacted during this time to enquire if they would be interested in try-outs for this role, if this happened. No further mention was made of this toxicity for a while, until a week, maybe 2 later when...
Part 4: The cataclysm
Lateks, Leodeddz and Invision approached me to say that they still felt the mood was toxic and they wanted Matth removed and Caspere brought back. This was not said to the coaches, but to myself. I passed the message on to the coaching team, at which point I was told that the other guys had suggested to the coaches that we look into running try-outs for the off-tank role (Leodeddz's role) and the hitscan role (Invision's role). Mendzel, who was demonised in the original tweet, advised those players to give them some more time to show their ability within the roster, as they were still learning to adapt to a new flex support and a new flex DPS, and because there had been no time to practice together due to getting our legs broken on a daily basis during PIT, which started a few days after the recruits were added. Within a few days they had changed their minds and were happy with the improvements. On the other hand, the other players added to their demand that Rubikon also be removed, and that Caspere would not re-join unless we removed Mendzel, so remove him too.
At this point it's all become a bit like a children's club - he said, she said, I want, I want, and I want - so Mendzel, who was tired of everyone speaking to each other in private group chats and dark alleys, asked all of the players to join a voice conversation to work it out. The players could not see eye-to-eye and the roster split into 2 groups of 3:
- Vonethil, Rubikon and Matth, who were happy to continue with the current roster, but would prefer some more try-outs be run, and keep Lateks.
- Lateks, Leodeddz and Invision, who simply wanted Rubikon, Matth and Mendzel removed and keep Vonethil.
After the voice conversation, neither of the 2 groups wanted to play together. At this point the most reasonable and level-headed player was Vonethil, who was happy for try-outs, but didn't want to have anybody removed to the detriment of the team and was also trying to resolve the issue.
Unfortunately, at this point I had to inform Lateks and the staff team that I'd had to take a few days out due to late work shifts and recently having a miscarriage. This is not ghosting, it was made clear I wouldn't be around for a few days. The staff were asked to review the above facts and come to a decision on the best action to take in the meantime, to save the team from disband - and a member of staff informed Invision yesterday, before the tweet was made, that no decisions had been made on the best course of action, and to hold tight.
Part 5: The Core Robbed of playoff spots
First point here, the core is considered to be those players who played in the Open Division matches, all 10 of them from start to finish – Lateks, Leodeddz, Invision and Vonethil.
The idea that any of these guys have been robbed of their playoff spot is simply not true - until last night, when the tweet was made, no decision had been made by staff on which way the team would have to go. Sadly, no matter which option was taken, it would mean 3 players would likely leave, an impossible decision for the staff to make light-heartedly. On the one hand, if we chose to keep the same 6 and kick nobody, Lateks, Leodeddz and Invision would likely leave. On the other hand, if we chose to replace Matth and bring back Caspere, there was a chance Vonethil, Rubikon and Mendzel would leave. However, the overall view was that ideally, we want to keep the same 6 if we can, but considerations for who would be potential replacements if people left were made. Only 2 people have been removed from our Open Division roster on Battlefly, and that's iPN and Caspere, both of whom have not played with us for weeks, so we're not sure how this is classed as Play Off spots being robbed, but so be it if that's what is believed. Lateks, Leodeddz and Invision are all still there listed in the active roster, they have not even been moved to the inactive spots to make room for other players. Our decision on the best course had not been made until the tweet from iPN appeared last night, which made that decision for us.
If it's true that try-outs took place on an off day without people’s knowledge then I too am one of those who had no knowledge this was going down behind players backs; it will be investigated and handled accordingly, and I can only apologise if it's true.
Overall, we’re sorry that Lateks, Leodeddz and Invision feel the way they do. The three of them are great guys and were a vital part of Jigsaw until this point – and we apologise that this drama has come up in the EU OW scene just when things were looking good.
/ionztorm
|
Refactoring using statements with CodeRush/Refactor!
May 9th, 2012
As many of you know, the using statement is a good tool for managing types that access unmanaged resources. The using statement provides a simple and convenient syntax that ensures that objects implementing the IDisposable interface are correctly disposed.
To fix code that fails to dispose of such an object, you can apply the automatic fix – the Introduce Using Statement code provider:
The Introduce Using Statement code provider declares a using statement for the specified IDisposable implementer, removing the call to Dispose() if it exists. The code provider automatically detects the required code and wraps it in the using statement when applied to an undisposed variable, for example:
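Since the original screenshots are not reproduced here, a minimal before/after sketch (with a hypothetical `StreamReader` variable) might look like this:

```csharp
// Before: the reader is disposed manually (or not at all).
StreamReader reader = new StreamReader("data.txt");
string line = reader.ReadLine();
reader.Dispose();

// After applying Introduce Using Statement: the Dispose() call is removed
// and the code that uses the variable is wrapped in a using statement.
using (StreamReader reader = new StreamReader("data.txt"))
{
    string line = reader.ReadLine();
}
```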
The refactoring preview hint lets you see the resulting code before applying the refactoring. If you wish to extend the code that must be placed inside the using statement, manually select it (including all required code):
Instead of the using statement, you can call the object’s Dispose method to indicate that you want the object to clean up its resources. However, in this case, you must make sure that the Dispose() call will always be executed by wrapping the entire code into the try/finally block. To do that, you can apply another refactoring that converts the selected using statement into a try/finally block – the Using to Try/Finally refactoring:
The result of the refactoring:
The Dispose() method will be called in the finally block even if an exception has been thrown during processing. But note that the scope of the variable is now changed. When the variable is declared inside the using statement expression, the scope of the variable is the using statement, and it is not visible outside of it. But after the statement is converted into a try/finally block, the scope of the variable increases: it is visible to both the try and finally blocks and to the rest of the method (or parent block). This leads to the possibility that the object may be accidentally used again after control leaves the finally block, even though the object probably no longer has access to its unmanaged resources. In other words, the object will no longer be fully initialized and may cause an exception to be thrown if it is accessed. The Introduce Using Statement code provider can fix this by converting the try/finally block back into a using statement, so that the variable again remains in the scope of the using statement.
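A sketch of the two equivalent forms described above (again with a hypothetical `StreamReader`; the C# compiler expands a using statement into essentially this try/finally pattern):

```csharp
// The using statement form: reader is scoped to the statement itself.
using (StreamReader reader = new StreamReader("data.txt"))
{
    Console.WriteLine(reader.ReadLine());
}
// reader is not visible here.

// After "Using to Try/Finally": Dispose() still always runs, but the
// variable's scope now extends past the finally block.
StreamReader reader2 = new StreamReader("data.txt");
try
{
    Console.WriteLine(reader2.ReadLine());
}
finally
{
    if (reader2 != null)
        reader2.Dispose();
}
// reader2 is still in scope here, though its resources are released.
```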
Both refactoring and code provider are the opposite of each other. You can refactor and improve the old code that uses the try/finally block or do the opposite – create a try/finally block for your advanced requirements.
The using statement may include multiple instances of the same type separated by a comma:
In this case these objects will both have the same scope. If you want to change the scope of one of the variables and write additional code, you can apply the Split Using Statement refactoring. The refactoring breaks the multi-declaration using statement into two or more neighboring using statements:
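A sketch of the split (file names and variables are hypothetical); note the second form also lets you insert code between the two declarations:

```csharp
// One using statement declaring two readers of the same type...
using (StreamReader first = new StreamReader("a.txt"),
                    second = new StreamReader("b.txt"))
{
    // first and second share the same scope
}

// ...which Split Using Statement turns into separate statements, so the
// scope of each variable can be managed independently:
using (StreamReader first = new StreamReader("a.txt"))
using (StreamReader second = new StreamReader("b.txt"))
{
    // Consolidate Using Statements performs the reverse transformation.
}
```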
This refactoring has an opposite refactoring called the Consolidate Using Statements which combines several neighboring or nested using statements that cover variables of the same type into a single using statement:
All mentioned refactorings are available in both C# and Visual Basic languages if the specific language version supports the using statement. |
Epstein-Barr virus strain type and latent membrane protein 1 gene deletions in lymphomas in patients with rheumatic diseases.
Recent studies have shown that immunomodulatory therapy for the treatment of rheumatic diseases can be associated with the development of Epstein-Barr virus (EBV)-associated lymphoproliferative disorders. The present study was undertaken to determine the strain type of EBV in lymphoproliferative disorders that occur in patients with rheumatic disease and to investigate EBV latent membrane protein 1 (LMP-1) gene deletions that occur in these lymphoproliferative disorders. Ten EBV-associated lymphoid neoplasms in patients with rheumatoid arthritis or dermatomyositis were analyzed by polymerase chain reaction to determine EBV strain type and to investigate for the presence of a previously characterized 30-basepair deletion in the LMP-1 gene. The results indicated that lymphoproliferative disorders in these patients can harbor EBV strain type A or B, with a predominance of type A infection (80%). It was also shown that both wild-type and mutated LMP-1 genes can be found in these neoplasms, with the deleted form of the LMP-1 gene occurring in one-third of cases in this series. LMP-1 deletions associated with certain aggressive lymphoid neoplasms are not required for the genesis of lymphoproliferative disorders in patients with rheumatic disease. The relative frequencies of type A and type B EBV strains in these lymphoproliferative disorders show similarities to the frequencies in patients with post-solid organ transplantation immunosuppression-associated lymphoproliferative disorders. |
Hepatocytes from old rats retain responsiveness of c-myc expression to EGF in primary culture but do not enter S phase.
Responsiveness of rat hepatocytes from 22- to 24-month-old animals to growth stimulation was examined in primary culture by assay of replicative DNA synthesis and c-myc expression as parameters. Old rat hepatocytes showed very low DNA synthesis in response to EGF, rat sera, or hepatectomized rat sera, as compared to 2- to 3-month-old rat hepatocytes. However, c-myc expression was induced by EGF to a similar extent in cells from both old and young rats. These results suggest that the old cells can traverse from G0 to G1 phase of the cell cycle but they somehow are blocked from entering the S phase. |
PROGRAMS AND SERVICES
Our ECI program provides personalized therapy sessions to children from birth to 3 years old in their natural environment (home or daycare). Therapy services include but are not limited to therapy in communication, motor / coordination, and sensory processing.
Our Clinic Therapy Services program provides comprehensive evaluation and therapy services for children three to five years of age. Services include individual and group therapy in communication, social skills, motor/coordination, and sensory processing.
The Family Education and Support (FES) program equips families and caregivers through education, workshops, support meetings and providing referrals to resources in the community. A library of developmental journals and books is also available to families through this service.
The Recess Program provides parents with the opportunity to take a break. Parents are able to drop their child off and go out to dinner, run errands or just go take a nap – whatever it is that will help them recharge. This service is provided monthly at all three locations.
The Warren Center
Our Story
For children with developmental differences
OUR HISTORY
Fifty years ago, very few resources existed for parents of disabled children in Dallas County. With nowhere to turn for community-based services or help, families lacked support and their children had limited options in life. The Warren Center was created in 1968 and for decades, we have provided much-needed resources and care to children with disabilities and their families. Grassroots efforts and proven successes paved the way for an increase in community and parental awareness. As our organization experienced growth, we began adapting our programs to meet the changing needs of families.
Over the years, The Warren Center has given parents access to the resources their children needed, and more importantly, hope and support.
The Art of Music Gala is a one of a kind experience that takes 2018’s theme “Welcome to Wonderland” and creates a magical experience...
Testimonials
What parents & caregivers say about us
“My son Alex has been diagnosed with autism. At the beginning, my family and I didn’t know where to turn for help.
We searched online and found The Warren Center. Now he has confidence and interacts with people. He says more words every day, all thanks to his speech therapist. Thank you for having such an amazing staff.”
Alex’s mom
“Dylan was also diagnosed with two rare conditions: X-linked Chondrodysplasia Punctata 1 (CDPX1) and Beals.
When Dylan was a baby, I would cry quite a bit. The Warren Center was there on the good days and bad. Having a team coming in and giving me a path, helped me focus. Whatever his potential is, that’s what I want.”
Dylan’s mom
“Elias has improved a lot and now others besides his family can understand what he’s saying and he can even pronounce his tricky first and last names perfectly. We are so grateful for The Warren Center. It’s nice to have such a great resource right in our community and everyone at The Warren Center has been so helpful and so caring.”
Elias' mom
Featured Blogs
It all started with a couple of neighborhood kids wanting to do their part to help the people in Houston recover from Hurricane Harvey. After the kids had heard about the damage that Hurricane Harvey had in the Houston area, they decided to start a...
The Warren Center launched its new Recess Program to provide parents with the opportunity to take a break! Parents are able to drop their child off and go out to dinner, run errands or just go take a nap – whatever it is that will... |
Severe pulmonary hypertension associated with primary Sjögren's syndrome.
Severe pulmonary hypertension is one of the fatal complications in various connective tissue diseases. We report a case of severe pulmonary hypertension associated with primary Sjögren's syndrome. In a lung biopsy specimen, there were findings of intimal and medial hypertrophy with narrowed vessel lumina and plexiform lesions. Moreover, deposits of immunoglobulin M, immunoglobulin A and complement protein C1q were found in the pulmonary arterial walls. Although pulmonary hypertension was refractory to oral prostacyclin, steroid therapy improved the clinical and hemodynamic conditions. In the present case, the immunological etiology may be related to the mechanisms of pulmonary hypertension associated with Sjögren's syndrome. |
<resources>
<!-- Default screen margins, per the Android Design guidelines. -->
<dimen name="activity_horizontal_margin">16dp</dimen>
<dimen name="activity_vertical_margin">16dp</dimen>
<dimen name="water_button_image_size">80dp</dimen>
</resources>
|
#!/usr/bin/env perl
#
# test script for proving external command for fetching tags works
#
use strict;
use warnings;
use Getopt::Std;
my $opt = {};
getopts( 'Lqx', $opt );
my %tag_lookup = (
tag100 => [qw/ host100 /],
tag200 => [qw/ host200 host210 host205 /],
tag300 => [qw/ host300 host350 host325 /],
tag400 => [qw/ tag100 tag200 tag300 host400 host401 /],
);
# if we get '-q' option, force a fatal runtime error (method call on undef)
if ( $opt->{q} ) {
my $fail;
$fail->cause_death();
}
# if we get '-x' option, die with non-0 return code
if ( $opt->{x} ) {
warn 'Forced non-0 exit', $/;
exit 5;
}
# '-L' means list out available tags
if ( $opt->{L} ) {
print join(' ', sort keys %tag_lookup), $/;
exit 0;
}
my @lookup = @ARGV;
# expand tags into their member hosts; pushing onto @lookup while iterating
# lets nested tags (e.g. tag400) expand recursively
for (@lookup) {
if ( $tag_lookup{$_} ) {
push( @lookup, @{ $tag_lookup{$_} } );
$_ = '';
}
}
# drop the blanked-out tag entries and sort what remains
@lookup = grep { $_ !~ m/^$/ } sort @lookup;
if (@lookup) {
print "@lookup", $/;
}
|
You'll Probably Never Need Another Bass Cabinet
Ampeg enclosures have been setting the standard in bass tone for decades, and your Ampeg SVT-810E bass cabinet is no exception. Affectionately known as "The Fridge," this massive bottom-end box of boom brings the heavy with eight 10" speakers in a Baltic birch plywood enclosure, dual 1/4" and secure Neutrik Speakon inputs/outputs, a tilt-back handle bar, and protective skid rails for easy transport. This workhorse cab pushes out a maximum SPL of 130 dB and is built to take the abuse of the road, delivering on all expectations the Ampeg name elicits.
The Power of Eight 10" Speakers
There's a reason Ampeg decided to go with eight separate 10" speakers: sonic efficiency. Ampeg learned that 15" or 18" speakers don't dish out the same sonic effectiveness as 10" speakers do, and the eight 10" speakers in your SVT-810E respond to transient peaks more quickly than larger speakers. This cab has a maximum SPL of 130 dB, so you can really push the volume hard without worrying about your signal becoming distorted.
Unchanged Matchless Design Since the Late '60s
Your SVT-810E bass cab is virtually identical to the original SVT-810. The renowned Infinite Baffle design features a sealed enclosure which keeps your tone nice and tight, resulting in a rigid and focused sound. This baby has been used on stage by such artists as John McVie of Fleetwood Mac, Geezer Butler of Black Sabbath, and Chris Squire from Yes.
Durability and Mobility
Weighing in at 165 pounds, this beastly cab is built like a tree trunk, but that doesn't mean it has to be immobile like one. Your SVT-810E bass cab comes with a tilt-back handle bar and protective skid rails for easy transport. You'll be able to set this baby up in no time for sound check without any unnecessary heaving. Dual 1/4" and Speakon inputs/outputs means that your connecting cables will be secure and ready to project your signal reliably every time.
Product Description
The world-renowned Classic Series Cabinets are synonymous with bass enclosures and the sound they are expected to produce. From the undeniable tone of the SVT-810E to the unexpected low end of the extended range SVT-410HLF, Ampeg enclosures have set the standard for decades. All enclosures feature dual Neutrik Speak-On and 1/4" input/output jacks. Cabinets using high frequency horns contain a variable level attenuator. To prevent overpowering and damaging the horns, a resistive bulb is wired in series with the horn for protection.
A Workhorse of a Different Color
The Ampeg SVT-810E is the speaker enclosure people mean when they say SVT speaker cabinet. It's all about tone. Other than the color scheme, it is identical to the original SVT-810.
Why eight 10" speakers? Ampeg learned early on that 10" speakers work much more efficiently than fifteens or eighteens -- and if you put eight 10" speakers together, you can move a huge column of air. You'd need five 18" or six 15" speakers to move as much air as the SVT-810E. And they simply wouldn't be able to respond to transient peaks as quickly as Ampeg's tens.
The Ampeg SVT-810E is still manufactured using the same design dating all the way back to 1969. The Infinite Baffle design of these sealed enclosures produces vast amounts of tightly focused bass. That's why you'll find the SVT-810E on stage with such artists as Chris Squire (Yes), Geezer Butler (Black Sabbath), John McVie (Fleetwood Mac), and far too many others to list here. In the world of high performance bass cabs, the Ampeg SVT-810E stands tall and proud.
"Here Is Your Dream Cabinet!"
Sound:
I've only played my SVT-CL head out of Ampeg cabs, and the 8x10 is simply the best match for the amp. It gives me my dream sound by handling the highs quite nicely without sounding too sharp or caustic; the mids help my bass cut through my guitarist's screaming Marshall; and the lows are enough to handle the heaviest hitting drummers on the planet. Oh, those lows! Turn up as LOUD as you want and you will notice the lows are still clear--not soggy and undefined. This amp will handle whatever you've got, and it doesn't stop delivering. Its sound is BIG, BIG, BIG--you will not get lost amid the blissful chaos you and your bandmates create. They will know you're there--but the 4x10s and even the 6x10s won't let you make that claim. It also records quite well. You can simply put a mic in front of a single speaker and be amazed when you hear the recording. And you will know that you now sound like you think you should sound. With the SVT-CL, this cab reproduces that sound you dream about in your head. A wet dream come true!
Feature:
Simple. The grab-bar/handle-thing on the back, along with the built-in coasters, make for easy maneuverability.
Ease of Use:
Never met an Ampeg cab with a sound that sucked. As I mentioned, a big head like the SVT-CL may be too much for smaller cabs to handle; but this cab is designed to handle Ampeg's best heads. Plug in and play.
Quality:
We keep it in the studio, so the only time we'll ever move it is for gigs. It's solid, heavy, and durable.
Value:
Worth the price. The top-of-the-line 410 costs just as much as the 810. The 610 is fifty bucks cheaper. Go with the big daddy of them all and be happy.
Manufacturer Support:
Haven't had to deal with support, yet.
The Wow Factor:
It ain't ugly. Would it appear more appealing dressed in a metal grille with fuzzy carpet instead of cloth and tolex? LISTEN to it and then ask yourself if you care what it looks like.
Overall:
I have no need for another rig. To me, this one fits my playing perfectly, and it gives me and the band the sound that now plays a big role in defining us. It growls, it roars, it cannot be beat. If it were stolen (how can you lose something this big?), I would die. There's no way I'd play out of a different cabinet.
Ampeg SVT-810E Bass Cab Customer Review
Sound:
After getting this boom box, I asked myself why I didn't buy it sooner. The presence is so full sounding, and the response is so immediate. This cabinet will make your sound and playing better.
My bandmates definitely gave me smiles and nods of approval after playing just a few songs with it on the first day.
Feature:
The reason I avoided getting one was the weight and size of it.
I found it easy to handle and not as heavy to lift as other cabinets.
The reason: the tilt-back design with the high-clearance wheels, plus skid rails on the back if you encounter a few steps. This thing handles just like you're rolling around a two-wheeled dolly.
Lifting takes two people, which is good. I always had to lift my 4x10 cabinet alone, and it's heavy by yourself. With the 8x10, someone's always offering to help me lift it.
Ease of Use:
The SVT IV PRO allows me two different ways to connect the cabinet:
mono-bridge, or bi-amping between the 10's. I found I always went back to running mono-bridge mode. Having that additional power seemed to work better for us in the mix. The soundman is combining the backline and the front sound together; I couldn't do that before with my cabinets. Small or large venues, this cabinet has sounded great.
Quality:
I'd love to buy another one. The drummer said one on each side of him would be just awesome. It's built like an Abrams tank.
It's big and fast, and delivers a powerful punch -- very intimidating.
Value:
Anything good always has a good price.
But the value of being a bigger asset to the band is priceless.
Manufacturer Support:
Sad but true: when it was delivered, it had a ding in the handle.
It didn't happen in shipping; it was assembled that way.
Arrangements were made and it was replaced.
Overall:
Years of gear changing has come to an end.
The SVT IV PRO with the SVT 810E works for me.
Q:
mod_rewrite - Another simple one, I'm sure
I'm currently making a PHP template/framework. I have done as advised and put all normal files in a public folder, with libraries and config in others, and placed index.php into the public folder. But when I then try to do a mod_rewrite, nothing works. I'm using the Coral8 Server (for testing) and have configured it all correctly, but it doesn't seem to be working.
Here's what I've tried:
RewriteEngine on
RewriteRule ^public/?$ /public/index.php
RewriteRule ^public/([^/]+)/?$ /public/index.html
and this
<IfModule mod_rewrite.c>
RewriteEngine on
RewriteRule ^$ public/ [L]
RewriteRule (.*) public/$1 [L]
</IfModule>
and this
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ index.php?url=$1 [PT,L]
</IfModule>
But none seem to work :(
Thank you in advance but someone tell me what I'm doing wrong so that I can learn from the mistake and then how to correct it.
Thank you
A:
Is this what you're looking for?
RewriteEngine on
RewriteRule (.*)\.html$ /public/?action=$1&%{QUERY_STRING} [L]
RewriteRule ^$ /public [L]
So your URL will look like relution.co.uk/about.html
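As a side note, a common way to set this kind of thing up is a front-controller rule set in the site root that funnels every request into public/. This is only a sketch, assuming index.php lives in public/ and reads a url parameter (as in the third attempt in the question):

```apache
# Root .htaccess (sketch). Route requests into the public/ front controller
# unless they map to a real file or directory.
RewriteEngine On

# Don't re-route requests that already point into public/
RewriteCond %{REQUEST_URI} !^/public/

# Leave existing files and directories alone
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d

# QSA appends the original query string instead of discarding it
RewriteRule ^(.*)$ public/index.php?url=$1 [QSA,L]
```

If rules in an .htaccess file have no effect at all, it's also worth confirming that `AllowOverride All` is set for the directory and that mod_rewrite is actually loaded, since `<IfModule mod_rewrite.c>` silently skips the whole block when the module is missing.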
|
Q:
How to set version number and product information in .Net Core self-contained exe
I have a .Net Core v2.2 project and I set the Publish options as follows:
I also set the Package information:
Also modified the .csproj file as follows:
The details of the generated(?) exe look like this:
What else should I set to make this information appear not only in the dll file but in the exe file as well?
UPDATE
According to the possible answer, there is a fundamental change in the upcoming version of the .Net Core SDK starting with version 3.0. After verification, this question and answer will help solve the issue instead of providing an explanation of why it can't be done.
A:
What else should I set to make this information appear not only in the dll file but in the exe file as well?
I updated my Visual Studio 2019 to version 16.2.5 (latest is 16.3.1) and installed the new .Net Core 3.0 SDK. After that the generated exe contains all the version information and can be used as expected.
SDK download link.
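For reference (the screenshots above did not survive), the version and product fields normally come from MSBuild properties in the .csproj. A minimal sketch with placeholder values -- the property names are standard SDK-style project properties, but the values here are made up:

```xml
<PropertyGroup>
  <!-- Placeholder values; adjust to your product -->
  <Version>1.2.3</Version>
  <FileVersion>1.2.3.0</FileVersion>
  <AssemblyVersion>1.2.3.0</AssemblyVersion>
  <Product>My Product</Product>
  <Company>My Company</Company>
  <Copyright>Copyright © My Company</Copyright>
</PropertyGroup>
```

Under the 2.x SDK these end up only in the managed dll; as the answer notes, the .NET Core 3.0 SDK also stamps them onto the generated exe.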
|
Episode notes
Just in time for Halloween, we have the amazingly talented Alice Waddington (director of Paradise Hills) on the program to talk about Logan’s Run. She discusses creating specific fantasy worlds, being inspired by Logan’s Run, and the pleasures of working with Awkwafina. We also get Vulture associate editor Jordan Crucchiola on the line to talk about what horror films we should check out this Halloween.
Paradise Hills is in theaters and streaming on November 1st.
And if you haven’t seen Logan’s Run, you simply must!
With April Wolfe, Alice Waddington, and Jordan Crucchiola.
You can let us know what you think of Switchblade Sisters on Twitter or Facebook.
Or email us at [email protected].
Produced by Casey O’Brien and Laura Swisher for MaximumFun.org. |
Alaska – Outdoor Educator
Hidden in the pristine Alaskan backcountry is a faraway adventure through the least visited National Park in the United States.
Questions? Call NCOAE Support (910) 399-8090
At-a-Glance
Hidden in the pristine Alaskan backcountry is a faraway adventure through the least visited National Park in the United States. Join The National Center for Outdoor and Adventure Education (NCOAE) for a 50-mile route that is isolated, challenging, and absolutely untouched. Our 32-day Alaska expedition will give you the skills you need to work professionally as an Outdoor Educator with remote, backcountry experience. And, it’s breathtaking.
Course Information
Adventure First
Alaska is synonymous with adventure.
With NCOAE’s Outdoor Educator – Alaska course, you’ll explore remote territory on a rugged route that is distinctly Alaska. Our adventure begins and ends with Alaskan wilderness, and in the middle you’ll find Alaskan solitude. Our route takes you across remote glaciers, cold-mountain streams, high-mountain passes, and wide-open tundra. And, all of this provides the true backpacking adventure of a lifetime, with food and supplies airdropped by helicopter to remote locations.
In the wilds of Alaska, you’ll learn the skills to treat medical, environmental, and traumatic incidents in the backcountry. During your expedition, you will have the opportunity to sharpen your teaching abilities and technical wilderness skills.
During this 32-day Outdoor Educator Instructor Course, you’ll find yourself exploring the heart of the least visited National Park in the United States. While traveling more than 50-miles across mountains, glaciers, and tundra, you’ll be learning, sharpening, and practicing the skills to plan and lead adventure-based experiential education trips.
Itinerary Highlights
Backpack through the remote wilderness in Alaska to explore the region’s pure streams, lush tundra, and sapphire lakes.
According to the National Park Service, at 13.2 million acres, Wrangell-St. Elias could fit Yellowstone National Park, Yosemite National Park and a country the size of Switzerland all within its borders. The park stretches from one of the tallest peaks in North America — Mount St. Elias (18,008 feet) — to the ocean.
Leap across glacier creeks, scale steep and exposed sheep trails, and cross high-mountain passes during the days on the trail.
Whenever possible NCOAE attempts to follow our planned itineraries. However, there are multiple factors beyond our control, which some call the “Alaska Factor.” Weather conditions, group preferences, unforeseeable circumstances, and unpredictable variables can change on a moment’s notice. Join us, and you’ll see what we mean when we suggest you consider having an “open mind” as a prerequisite for an Alaskan adventure.
Why Take This Course
Advance your knowledge of a leading outdoor education curriculum
Discover which areas of outdoor and wilderness education are best suited to your strengths
Acquire career skills to teach and travel in pristine destinations around the country — and around the world!
Where You’ll Be
Wilderness of Wrangell — St. Elias National Park and Preserve — the largest unit in the U.S. National Park System
Beginning in McCarthy, Alaska, you’ll fly to a remote base camp in Wrangell – St. Elias National Park, relying solely on yourself and your fellow Outdoor Educators for everything.
Wrangell is the largest and least visited National Park in the United States. Spanning 13.2 million acres, it’s as big as Yosemite National Park, Yellowstone National Park, and Switzerland combined. The park stretches from sea level to the top of one of the tallest peaks in North America — Mount St. Elias (18,008 feet).
You will be traveling and living in an area that is so remote, the only way in or out is by bush plane. Food and supplies will be airdropped at remote locations by helicopter.
No prerequisites are required. Throughout the course, you will acquire the knowledge and skills required to complete the course and to teach others how to navigate and thrive in the wilderness and in life in general.
This course is best suited to those who want to work full-time as an outdoor educator; those who are currently enrolled in college or university programs in adventure education, recreation, wilderness leadership, and experiential education; and those who want to challenge themselves to be better educators.
You do not need to be an “Ironman” to participate in this course. That said, you will be required to carry your own pack, with your own gear and some group gear. Your pack will weigh between 40 and 50 pounds at any given time. You will be hiking over rugged terrain, crossing rivers and glaciers, making high altitude ascents, and negotiating deep valleys and steep ridges.
Yes, Wrangell – St. Elias is home to all three species of North American bears. The Wrangell is a land of harsh landscapes in which bears like to travel looking for food. Statistically, it’s unlikely that you will encounter a bear in this 13.2 million-acre park, but it’s possible.
That said, NCOAE follows recommended best practices to mitigate the risks associated with bear encounters. You will learn and practice these risk management techniques as part of the course.
NCOAE provides all necessary group gear and food, but you will want, or be required, to bring certain items for yourself. We encourage you to contact NCOAE with all of your gear questions as early as possible. Start planning now by viewing the NCOAE 32 Day Alaska Pack List.
Yes. If you are a college student, you may be eligible to receive up to six (6) credit hours for this course. If you are attending a college or university as part of a degree granting program, your advisor can determine whether your school will accept and transfer academic credits for participating in NCOAE courses from the University of North Carolina – Wilmington. For more information on the consortium agreement process, contact [email protected].
At this time, we do not offer financial aid. However, if you are a college student you may be able to use your existing federal financial aid to help pay for your course, and you may receive college credits through a consortium agreement with NCOAE. If you are attending a college or university as part of a degree granting program, your advisor can determine whether your school will accept and transfer academic credits for participating in NCOAE courses from the University of North Carolina – Wilmington. If it does, consult your financial aid office to determine if, how, and which funds may be used toward your NCOAE course costs. For more information on the consortium agreement process, contact [email protected].
Academic Credit
Your work during our Outdoor Educator Instructor training course is eligible for up to six college-level credits through The University of North Carolina – Wilmington:
EVS 485 — Issues in Sustainability: 6 credits
EVS 592 — Issues in Sustainability: 6 credits
Course Outcomes
This 32-day training is strenuous and will test your route-finding skills, stamina, and ability to excel as a professional outdoor educator. You will bushwhack, travel steep side-hills, navigate challenging terrain, cross multiple ice-cold streams and rivers, and encounter untouched flora, fauna, and landscapes.
Where do you want to go (physically, mentally, socially, academic, etc.)?
Traveling through Wrangell—St. Elias National Park for a month will allow you to rely solely on yourself and your fellow Outdoor Educators – for everything. Additionally, you’ll learn environmental and civic responsibility, backcountry navigation, wilderness risk management, outdoor ethics, leadership and communication styles, as well as decision-making and teamwork. You will return home confident in your ability to lead, guide, and instruct groups in the backcountry using your newfound skills.
How will NCOAE help you get there?
Alumni of The National Center for Outdoor & Adventure Education (NCOAE) have a distinct advantage when applying to enter the fields of outdoor education, adventure-based education, backcountry guiding, as well as city, college, or university outdoor recreation programs.
Our curriculum offers graduates guidance in the areas of outdoor or interpersonal and intrapersonal management skills, and teaches how to use the curriculum to provide students with the tools they need to make positive decisions about their lives and the world around them.
This Outdoor Educator course and remote adventure is only offered through NCOAE, which is why we’re excited to offer it in Alaska.
Upcoming Dates / Reserve Your Spot
Available Dates
Jun 14, 2019 - Jul 15, 2019
Jul 09, 2019 - Aug 09, 2019
Jun 13, 2020 - Jul 14, 2020
Jul 09, 2020 - Aug 09, 2020
1. Introduction {#sec1}
===============
Graves\' disease is the most common cause of hyperthyroidism in the pediatric population \[[@B1], [@B2]\]. Treatment options for Graves\' disease include antithyroid medications, radioactive iodine, and surgery \[[@B1], [@B2]\]. Antithyroid medications used in children and adults include propylthiouracil (PTU), methimazole (MMI), and carbimazole, which is metabolized to MMI and is available in Europe but not the United States \[[@B3]\].
Recently, a significant safety concern related to hepatotoxicity risk associated with PTU use in children was brought to attention \[[@B4], [@B5]\]. It is thus now recommended that PTU not be used in children, except in special circumstances \[[@B4], [@B5]\], and MMI should be used for antithyroid drug therapy in children.
To date, the majority of publications related to medical therapy for Graves\' disease have focused on children treated with PTU \[[@B6]--[@B14]\]. The use of MMI in children has been described in far fewer reports \[[@B15]\]. Likewise, descriptions of the adverse events associated with methimazole use in the pediatric population are modest.
At our center, we have routinely used MMI for Graves\' disease therapy for many years. To provide insights into adverse events that can be associated with MMI use, we reviewed the adverse events associated with MMI use in our last one hundred consecutive pediatric patients treated with this medication.
2. Methods and Materials {#sec2}
========================
This review of treatment practice outcomes was conducted with the approval of the Yale University Human Investigation Committee. All adverse events were reported to the United States Food and Drug Administration via the MedWatch program. Patients with the diagnosis of Graves\' disease were identified from ICD-9 coding (242.0 or 242.9).
The diagnosis of Graves\' disease was made if there were elevated total and/or free thyroxine \[T4\] and/or triiodothyronine \[T3\] concentrations, subnormal thyrotropin levels, and evidence of thyroid autoimmunity not thought to be consistent with Hashitoxicosis. The presence of goiter and ophthalmopathy supported the diagnosis of Graves\' disease; however eye disease was present in only 60% of the patients. In situations where there was a question of the diagnosis, a ^123^I uptake and scan was performed. Patients were diagnosed with Graves\' disease in this setting only if the ^123^I uptake was elevated.
The medical records of the last 100 consecutively treated patients with the diagnosis of Graves\' disease were reviewed. Data collected included the age, height, weight, ethnicity, and gender. Medication and dose information were also collected. Medical records were reviewed to determine if adverse events occurred, and the length of time from initiating therapy until when adverse events developed. For those patients receiving treatment with either surgery or radioactive iodine, this was noted. Initial thyroid function tests at diagnosis along with levels of thyroid stimulating immunoglobulin (TSI) or thyrotropin binding inhibitory immunoglobulin (TBII) were collected.
All data were recorded in an Excel data spread sheet. Data are presented as mean ± SEM. Statistical analysis among groups was performed by the Student\'s *t*-test.
3. Results {#sec3}
==========
One hundred consecutively treated patients with Graves\' disease were evaluated. The range in patient age was from 3.5 years to 18 years. The mean age was 13.2 ± 3.5 years. 72% of the patients were female; 28% were male. 70% of patients were from the New Haven CT area; 30% were from outside of the New Haven area. 62% of the individuals were Caucasian, 16% were Hispanic, 16% were Asian, and 6% were African American.
At diagnosis, TSH levels were 0.01 ± 0 mU/L. The initial total T4 levels were 18.3 ± 2.0 mcg/dL. The initial free T4 levels were 4.9 ± 2.4 ng/dL. Initial total T3 levels were 530 ± 175 ng/dL. TSI levels were available in 73% of the individuals and were 180 ± 70%. In all the patients in whom TSI was measured, levels were elevated.
The patients were treated with average daily dose of MMI of 0.3 ± 0.2 mg/kg/day. Medication was given once a day in 60% of patients and twice daily in 40% of patients.
Adverse events attributed to the use of MMI were seen in 19 patients ([Table 1](#tab1){ref-type="table"}). The most common side effects included pruritus and hives, which were seen in eight patients. Five patients developed diffuse arthralgia, muscle pain, and/or joint pain. One patient developed lymphopenia and eosinophilia. Two patients developed neutropenia with absolute neutrophil counts of 500 and 750 cells per cubic millimeter. This problem was detected upon evaluation of new onset fever. Three patients developed Stevens-Johnson syndrome with diffuse cutaneous eruption and mucous membrane involvement. One patient with Stevens-Johnson syndrome required hospitalization for three days. Mild liver injury was observed in one patient. In this individual, the aspartate aminotransferase (AST; SGOT) was 184 u/L, the alanine aminotransferase (ALT; SGPT) was 379 u/L, the alkaline phosphatase was 355 u/L; the bilirubin was 0.18 mg/dL; and the gamma-glutamyl transpeptidase (GGT) was 193 u/L. The age, methimazole dose, and length of time from initiation of treatment until development of adverse events for each patient are shown in [Table 1](#tab1){ref-type="table"}.
When clinical characteristics were compared in individuals who developed adverse events to methimazole with those who did not develop such reactions, no differences were detected as related to age, gender, dose, or ethnicity. There was no relationship to initial thyroid hormone levels, or circulating levels of immunoglobulins.
Of those individuals who developed adverse events to MMI, thyroidectomy was performed in three patients (ages 3--4 years). Radioactive iodine was administered to thirteen individuals (age range 8--18 years), and three patients were changed to PTU, as treatment with radioactive iodine or surgery was refused by the families.
Adverse events to MMI occurred within one month of therapy in 20% of the patients, within three months of therapy in 50% of patients, and within six months of therapy in 90% of patients. In three patients, adverse events occurred after one and a half years of treatment.
4. Discussion {#sec4}
=============
Published reports related to the treatment of children with Graves\' disease have generally involved cohorts of children treated with PTU \[[@B6]--[@B14]\]. These studies reveal an incidence of minor adverse events between 1% \[[@B13]\] and 15% \[[@B12]\]. Within reports in which the use of MMI has been described, there has been little description of adverse events associated with this medication. Our data suggest that methimazole can be associated with a risk of adverse events in up to 19% of individuals. If one excludes the eight patients with pruritus and hives, which are minor side effects, the more serious adverse events were found in 11% of patients.
Based on published reports describing outcomes for children treated with antithyroid medications for Graves\' disease, up to 10 years ago, PTU was more widely used than MMI \[[@B11]--[@B13]\]. More recent data however, suggest that two thirds of children in the United States treated with antithyroid medications are now being treated with MMI, and one-third are treated with PTU \[[@B4]\].
Recently, a concerning risk of hepatotoxicity resulting in liver failure in children, adults, and pregnant women treated with PTU has been brought to attention \[[@B4], [@B16]\]. Based on the incidence of reported cases of acute liver failure and liver transplantation associated with PTU, it is estimated that up to 1 in 2,000 children will sustain acute liver injury in response to PTU \[[@B4], [@B16]\]. As a result, it is recommended that PTU not be used in children except in special circumstances, such as when an individual has had a toxic reaction to methimazole and antithyroid medication is needed until definitive treatment, either in the form of surgery or radioactive iodine, can be performed \[[@B4]\]. As such, MMI use in the pediatric population is expected to increase.
Our data show that MMI is associated with adverse events in children. The most common adverse events were related to cutaneous eruptions and arthralgia. We observed one child who had cholestatic liver injury associated with methimazole. In the adult population, cholestatic liver injury has been reported to be associated with MMI use \[[@B18]\]. MMI associated liver injury is most typically seen in individuals who are older rather than younger, and in those who are treated with higher rather than lower MMI doses \[[@B18]\]. There were no reported cases of severe liver injury in any of our patients. In the individual who developed modest transaminase and alkaline phosphatase elevations, this condition reversed fully within one month after discontinuation of the medication.
Of concern was the development of Stevens-Johnson syndrome in three of the children, one of whom required hospitalization. In each child, the condition reversed without long-term sequelae. Of note, the three patients who developed Stevens-Johnson syndrome were receiving large doses of MMI (30 mg). At present we do not know, though, if the risk of Stevens-Johnson syndrome is dose-related. Whereas most of the adverse events associated with MMI occurred within the first six months of treatment, we observed adverse events after one and a half years of therapy in three children. These observations show that children treated with MMI warrant close follow-up for the development of potential toxic events.
Our observations raise the question of the utility of routinely monitoring hematological profiles, liver function tests, or transaminase levels in patients on antithyroid medications. At present there is little evidence to support the notion that routine monitoring of these parameters is effective in minimizing the risk of antithyroid drug related adverse events \[[@B3], [@B19], [@B20]\]. If PTU is used, it is recommended that PTU be stopped immediately and liver function and hepatocellular integrity be assessed in children who experience anorexia, pruritus, rash, jaundice, light colored stool or dark urine, joint pain, right upper quadrant pain or abdominal bloating, nausea or fatigue \[[@B20], [@B21]\]. In addition, PTU and MMI should be stopped immediately and white blood counts be measured in children who develop fever, mouth sores, pharyngitis, or feel ill \[[@B3]\]. While routine monitoring of white blood counts may detect early agranulocytosis, it is not recommended because of the rarity of the condition and the lack of cost-effectiveness \[[@B3], [@B19]\]. Agranulocytosis has been reported in about 0.3% of adult patients taking MMI or PTU \[[@B3], [@B19], [@B20]\]. Data on the incidence of agranulocytosis in children are not available, but it is estimated to be very low. In adults, agranulocytosis is dose-dependent with MMI, and rarely occurs at low doses \[[@B3], [@B19]\]. When it develops, agranulocytosis occurs within the first 100 days of therapy in 95% of individuals \[[@B3], [@B19]\].
We recognize that a potential limitation of our study is that our referral patterns may bias our outcomes, as some of the patients coming for second opinions may have been treated beforehand with MMI doses higher than we typically use. The demographics of self-referred patients may also differ from those seen in a typical cross-section of children with Graves\' disease. Our patients are also not typically treated beyond two years with MMI, which limits our ability to observe long-term side effects.
At present, PTU and MMI are the only antithyroid drugs available for Graves\' disease in the United States \[[@B3]\]. PTU was introduced for clinical use in 1948 and MMI in 1950 \[[@B21]\]. Although MMI is less hepatotoxic than PTU, our data show that MMI use is indeed associated with potential adverse events, which can be serious. Considering the hepatotoxicity risk associated with PTU, and the other minor and major adverse events associated with both PTU and MMI, strong consideration should be given to the development of less toxic antithyroid medications for use in children and adults.
This work was supported in part by a generous gift from B. Smith, family and friends.
######
Adverse events associated with methimazole.
| Age at diagnosis | Gender | MMI dose at time | Duration of therapy | Reaction |
|------------------|--------|------------------|---------------------|----------|
| 3 5/12 | M | 10 | 12 weeks | Mild liver injury |
| 4 | M | 10 | 32 weeks | Myalgias/joint pain/facial urticaria |
| 4 4/12 | F | 10 | 2 weeks | Pruritus and hives |
| 4 3/4 | F | 30 | 2 weeks | Stevens-Johnson syndrome |
| 5 1/12 | F | 7.5 | 3 weeks | Diffuse urticaria |
| 7 10/12 | F | 15 | 4 weeks | Arthralgia |
| 8 2/12 | F | 10 | 2 weeks | Rash and joint pain |
| 8 4/12 | F | 10 | 3 weeks | Urticaria |
| 8 5/12 | M | 20 | 9 weeks | Arthralgia |
| 8 10/12\* | M | 10 | 18 months | Neutropenia (ANCA+) |
| 8 10/12\* | M | 10 | 18 months | Neutropenia (ANCA−) |
| 10 7/12 | M | 40 | 4 weeks | Myalgias |
| 11 1/2 | F | 20 | 4 weeks | Lymphopenia and eosinophilia |
| 12 5/12 | F | 20 | 12 weeks | Myalgias |
| 12 6/12 | F | 30 | 12 weeks | Stevens-Johnson syndrome (hospitalization) |
| 14 2/12 | F | 30 | 4 weeks | Stevens-Johnson syndrome |
| 15 4/12 | F | 30 | 2 weeks | Rash |
| 16 11/12 | F | 20 | 4 weeks | Rash on arms and face |
| 17 6/12 | F | 30 | 3 weeks | Pruritic rash |
\* identical twins.
[^1]: Academic Editor: Dennis M. Styne
|
Pixar has recently updated its software suite, and the new Pixar film, "Brave," was made with it. Because of that, "Brave" is a more expensive film than the ones before it, which, Catmull says, have each cost less than the first Pixar release, "Toy Story."
"It was a massive effort. We didn't have to do it." In fact, he says, "most companies that try to upgrade their software fail. But you need to do stuff that is just out there. It gets your head to a different place."
(Photo: Pixar President Ed Catmull. Credit: Rafe Needleman/CNET)
Obviously not a proponent of complacency, he adds, "If you get settled, the goal is to get the process right. And the process subverts your more radical nature."
He tells the audience that most films in Hollywood have to do well in the "elevator test," a concept familiar to entrepreneurs. Films, he says, have to do well in the elevator pitch to get funded. But, he adds, "If we pass the elevator test, we don't want to make the movie."
What's next for Pixar? Catmull doesn't know. He hopes for "new kinds of looks. I don't know what they are."
He refers back to technology as an artistic stimulus: "Much of the technological push is to allow new imagery into the screen, to stimulate the imaginative process. Integration of technology into a story enriches both." |
12A/13A: There seems to be some sort of mix-up. While VERSATILE might just work for 'endowed with many skills', it definitely works as an answer for 'flexible relatives being flexible' (anagram of 'relatives', def = versatile).
21A: again, should be { ... chief OF advisors ...}
7D: I think it works, with 'rise' as a noun, although I'm not particularly fond of a reverse clue where the fodder isn't explicitly given.
Great points all, Mohsin! I really like the analogy for explaining why E=pot doesn't work. And great work spotting the fact that the clue for 12a actually works to give the answer for 13a - my guess is that the setter wanted to replace the clue for 13a and ended up replacing the clue for 12a instead, which has left D?A?O?I? without a proper clue!
Sadly, the mix-ups draw attention away from what *are* very good clues: e.g., 4a, 9a, 16a, 25a, and (the seemingly sadistic!) 20d! |
The NHRC-Remote+ allows DTMF remote control of up to eight loads. Unlike
many other DTMF remote control devices, the NHRC-Remote+ provides command
confirmation. Because it only requires receive audio, it can be used
with radio and wireline links. Each of the eight control outputs can be
configured for on/off or momentary control.
(Not quite actual size, it's 3" x 3")
Features:
Momentary or latching control of eight loads.
CW confirmation of control eliminates guesswork.
CW ID keeps you compliant with FCC rules.
Each output can have its own on and off code.
Unique codes allow a nearly unlimited number of units to be operated from the same audio source. |
A plating shop typically generates a waste stream that includes nickel, chromium, copper, zinc, cyanide, and other ions. Current technology for treating a waste stream from a plating shop generally involves separating the waste stream into alkaline, acid, chromate and cyanide waste streams. The cyanide stream is made strongly alkaline and is treated with a hypochlorite (bleach) to totally destroy the cyanide. The chromate containing stream is treated with metabisulfite or sulfur dioxide in order to reduce the hexavalent chromium to the trivalent state. Then all streams are combined and made strongly alkaline. This forces all the heavy metals to precipitate as oxides and hydroxides. Usually a flocculant is added to encourage full precipitation.
At this point the precipitates can be filtered out of the combined waste stream. Then the stream is neutralized with an acid. The treatment system is complex, including many tanks; feed loops, under automatic control, for introducing reagents to these tanks; impellers or equivalent devices for adequate mixing of reagents with the waste streams; and at least one filter press for separating the water from the precipitated metals. Not infrequently, the whole system is under computer control. Plating wastes treated in this manner typically meet requirements for discharge into water courses and sanitary sewers. The sludges generated are generally trucked to a sanitary landfill.
It can be readily appreciated that the system described above is large, costly, and requires massive quantities of chemicals for its operation. Further it is not readily amenable to treatment of acid mine waste and clean up of hazardous waste sites. The types of systems suitable for treating acid mine wastes, for recovering metals, and for removing heavy metals from the soil at hazardous waste sites vary considerably from the typical plating waste treatment system described above. Such systems are typically designed for each specific application and so no generalized design can be given.
Development of a treatment system which can remove heavy metals and other harmful contaminants from plating waste streams, acid mine wastes and hazardous waste sites, or recover valuable metals, in a simple and economical manner represents a great improvement in the field of waste treatment and satisfies a long-felt need of the waste treatment industry. |
Q:
How can the “Unknown class in Interface Builder file” error be fixed with a line that reads "[MyClass class]"?
I read the following answer, so I know that "Unknown class in Interface Builder file" error can be solved using the -ObjC linker option. (FYI, MyClass is in static library.)
https://stackoverflow.com/a/6092090/534701
I've also found that a single line of [MyClass class] code in the App Delegate can solve the same problem without using the -ObjC option. My question is how this code can work.
According to the answer that I've attached above, the error occurs because symbols in my static library are not loaded. Does that mean [MyClass class] makes the linker load the symbol at runtime? (That doesn't make sense.)
A:
The linker will try to reduce the final binary size by removing unused symbols. The classes instantiated from a Storyboard are created using reflection, and thus it's impossible for the linker to know whether that class is being used, so it's removed from the final binary.
By adding that line to your code you are forcing the linker to keep that class at link time (not at runtime), making the reflection call from the Storyboard work as expected.
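A minimal sketch of the workaround the question describes, assuming MyClass comes from the static library and the usual UIApplicationDelegate entry point is used:

```objc
// AppDelegate.m -- sketch only; MyClass is the hypothetical class from the
// static library referenced in the question.
#import "AppDelegate.h"
#import "MyClass.h"

@implementation AppDelegate

- (BOOL)application:(UIApplication *)application
    didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    // This explicit reference is resolved at link time, so the linker pulls
    // MyClass's object file in from the static library instead of stripping
    // it. The Storyboard's reflective lookup can then find the class.
    [MyClass class];
    return YES;
}

@end
```

The alternative is to pass -ObjC (or -force_load for one specific library) in Other Linker Flags, which makes the linker load all Objective-C code from static libraries regardless of whether it is referenced.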
|
|
International mycotoxin check sample program: Part II. Report on laboratory performance for determination of aflatoxin M1 in milk.
A sample of aflatoxin M1-contaminated lyophilized cow's milk was analyzed by 80 laboratories in 30 countries. Sufficient data were obtained to permit a statistical comparison of the performance of laboratories using AOAC methods I and II and those using high performance liquid chromatography for quantitation. A significant difference was noted between means for laboratories using AOAC method I as opposed to those using HPLC methods. Overall reproducibility (between- plus within-laboratory precision) was best for laboratories using HPLC methods and poorest for those using AOAC method II. |
President's leadership key to victory
Wijitha Nakkawita
(Photo: President Mahinda Rajapaksa)
The humanitarian operations against the LTTE terrorists were not
against the Tamil people but to free them from the yoke of the
terrorists, President Mahinda Rajapaksa often said when the operations
started after the LTTE terrorists closed the Mavil Aru anicut to deprive
more than 30,000 people of water.
In his mind there was not the slightest doubt that the LTTE were a
group of hypocritical terrorists who would use each opportunity given to
them for talks or ceasefires to buy time to strengthen their armoury and
cadres and to collect more funds for the purchase of arms.
It was in this background that President Mahinda Rajapaksa appeared
on the country's political horizon, determined to end the long drawn-out
terrorist war that had plagued the nation for three decades with death,
destruction and sabotage, while most of the previous leaders, except
perhaps President D. B. Wijetunga, had attempted to resolve it by trying
to reach a negotiated settlement with a group of terrorists who would
have been laughing their sides out whenever any national leader proposed
a political solution.
Once he had decided to start the humanitarian operations, the
President called the commanders of the three Armed Forces and the
Defence Secretary, Gotabhaya Rajapaksa, and directed that the LTTE had
to be defeated, as he was aware that it would be counterproductive to
start talks with them.
The commanders of the forces were given clear instructions that they
had the objective of defeating the LTTE and it was very clear in his
mind that the only way to achieve peace was by first completely
defeating the terrorists.
Once on a task, President Rajapaksa was determined to fulfill it;
whether as an opposition parliamentarian, a minister or prime minister,
he was always known as a person who worked to reach each goal he set.
At a time when many leaders on both sides of the political
dichotomy thought they could appease the LTTE with various offers of
peace or devolution packages, offers the LTTE spurned with disdain
because, like the Shakespearean Shylock, they demanded their pound of
flesh, the so-called separate state of the Tamils or Eelam, the
President was not ready to waste time, more lives and national
resources on comic exercises like carrying a white lotus or a brick to
build a library, as he knew they would never yield any response from
Prabhakaran or the LTTE.
When he gave strong and clear leadership to the three armed forces he
also had another advantage: the public at large were with him, even
though they had been fed on myths that the LTTE could not be defeated,
that the war could not be won, and that even if the war was being won,
India or some other friend of the LTTE or the so-called international
community would pressure the government to stop the war at the last moment.
It was in this scenario that the leadership of President Rajapaksa
had to be reviewed.
He had a clear vision, the necessary political experience and the
background to face the challenge and as usual he gave the leadership to
the armed forces and for the first time the public, even the man on the
street saw that he was leading the three forces to victory.
When the humanitarian operations in the east were going on, one UNP
parliamentarian hailing from the Kandy district said in Parliament that
he had information that the LTTE cadres retreating from Thoppigala had
left behind 1000 or nearly that number of arms to be retrieved and used
later.
And still later another parliamentarian of the same party said the
actual numbers of the soldiers who died in the humanitarian operation
were not being disclosed to the parliament by the Prime Minister.
Yet as the Commander in Chief, President Rajapaksa did not seek to
reply to these comments, which were not meant to hearten the armed
forces nor to help the country in any way.
Since he was convinced, as the President and the Commander in Chief,
that the three Armed Forces and their commanders were capable of
exploding many myths, including the invincibility of the terrorist
outfit, he stood by his decision and gave his strong leadership to the
Forces, as he had faith in the men and women in them.
The clear sign of a true national leader in President Mahinda
Rajapaksa was that he was not prepared to compromise the freedom and
sovereignty of the country for any kind of pressure and in the present
case he had to fight not only the terrorist outfit but also the major
opposition party that could not be anything but anti-national and
unpatriotic.
When Winston Churchill, giving leadership to his forces, was asking
the British people to make sacrifices, the main Opposition Labour Party
gave Churchill unstinted cooperation; but in the case of our country the
Opposition Leader and his party men were not seen extending any
cooperation to the war against terrorism, instead trying to bring up
extraneous political issues for their political advantage.
It was in this background that the strong leadership of President
Mahinda Rajapaksa as Commander in Chief had to be viewed.
He had faith in his Forces, he had faith in the people and he had
faith in himself that he could lead the troops and the Forces to
victory. And like all true leaders he proved himself as a person who
could be ranked among our heroic kings like Vijayabahu or Dutu Gemunu or
among leaders like Churchill who led their Forces to victory. |
Death Crimson OX
Death Crimson OX is a light gun shooting game developed by Ecole. It was released in arcades in 2000, then ported to the Dreamcast console in 2001 (published by Sammy Entertainment), several months after Sega had dropped support for the console. It is the third game in the Death Crimson series and the only one released outside Japan. The game can be played with either a standard controller or a light gun.
The game was also released as Guncom 2 in Europe and Death Crimson OX+ in Japan on the PlayStation 2.
Reception
The Dreamcast version received generally negative reviews. GameSpot gave the game a 4.2/10, describing it as a second-rate House of the Dead clone. IGN scored it a 4.3/10, citing a confusing storyline, poor visuals, and new gameplay mechanics which prevent the game from offering any sort of challenge. Game Informer gave it a 4.0/10, remarking that it "Gives you plenty of targets, but no real reason to keep pulling the trigger."
References
Category:Arcade games
Category:Dreamcast games
Category:PlayStation 2 games
Category:2001 video games
Category:Light gun games
Category:Video games developed in Japan |
I actually compete at Crossfit and do reasonably well in the Open (top 5-6% worldwide for the last few years), which I guess is a reasonable enough way to evaluate whether or not you do Crossfit. I rarely look at the MP anymore but I can likely put up fairly respectable times and do all of the WODs as Rx'd 99% of the time.
I have decent (but not great) lifting numbers. I am decent at what Crossfit calls gymnastics as well: I can do 30 MUs, plus HSPUs, pull-ups, ring dips, etc.
Most importantly I have not followed MP Crossfit for years and likely have seen better and more progressive gains because of it.
The biggest issue is exactly what has been stated before, there is no progression at all to it. There are crossfit athletes who incorporate some of the WODs into their routine but most of them are doing so much more that it is completely inconsequential to their success.
I would hazard to guess that pretty much everyone at Regionals this year would have won the Games from 2007-2010 or at least be very close. That is how much the landscape of this has changed.
A long time ago Tony Budding and Pat Sherwood were involved in programming as was Glassman......not sure who does it. Do I think MP could work for someone....yes. Do I think it is the fastest way to get better ? No.
I follow mainsite. I have a strong PL background and rarely do I have to scale the wt. My problem is knowing what the time domain should be? I can still squat and pull 600 but my mile is 10+ min....Fran is 7 min....really struggle with gymnastics...it would be helpful if they gave me a time cap...sometimes im at 30min when every one else finished in 15...if that makes sense?
I don't want to discuss the entire programming because that's very subjective. But people must be more educated in programming and how to scale.
Obviously it is Crossfit HQ philosophy not to use % of RM or bodyweight, but in the past they used it, and I think it is better. I really want to know why it isn't the norm.
I adapt main site WODs to use % of my RM or sometimes bodyweight. It is against the objective to use a weight that doesn't allow you to complete the WOD in time, but if you don't know the ideas behind the program you're clueless.
Some WODs are "short metcons", couplets that take 2-10 minutes, and others are supposed to take around 20 minutes or more. Intensity is expected to be very different in these, and the weight must be specially selected.
For example, Fran Rx in 20 minutes is not training the expected energy system, you need a scaled weight.
They used to have the top athlete's times posted, does anybody know why they don't? Names like Spealler, Lipson, Holmberg would be listed under the WOD and then their times. They also used to have demo videos linked there too but they don't do that anymore either. What gives?
I asked them that before and their response was something along the lines of there were just too many people whining about form and range of motion and everything. Which seems like a legit reason to discontinue doing it.
Mike,
Just ask yourself what you think the top level CrossFitters would finish the workouts in. For instance, we are behind a little bit so today we are doing the kb swing, ghd, back extension, knees to elbows workout. I scaled the reps to 15 instead of 25 so we could keep the time domain similar to what I think a firebreather would do it in.
I asked them that before and their response was something along the lines of there were just too many people whining about form and range of motion and everything. Which seems like a legit reason to discontinue doing it.
I see what you mean, I just watched a YouTube video of Annie S and Greg A doing Elizabeth from 2007. That was the early days and I don't think they were doing it intentionally, standards were largely driven by the games and really developed further a few years after this. They started with their hands on the bar and Greg lowered the bar before full extension on the cleans. Man how far things have come since then. I was watching the 2012 Games earlier and it seems things have progressed so much even since them! |
[Plethysmography in venous pathology].
Plethysmography is an easy, precise and therefore interesting non-invasive investigation technique in the search for deep venous obstructions. It gives a well-defined picture of venous function and offers the possibility of differentiating most cases of post-thrombotic syndrome from primary varicosis and of following up the surgical treatment of venous anomalies. |
Oral administration of human papillomavirus type 16 E7 displayed on Lactobacillus casei induces E7-specific antitumor effects in C57/BL6 mice.
The mounting of a specific immune response against the human papillomavirus type 16 E7 protein (HPV16 E7) is important for eradication of HPV16 E7-expressing cancer cells from the cervical mucosa. To induce a mucosal immune response by oral delivery of the E7 antigen, we expressed the HPV16 E7 antigen on the surface of Lactobacillus casei by employing a novel display system in which the poly-gamma-glutamic acid (gamma-PGA) synthetase complex A (PgsA) from Bacillus subtilis (chungkookjang) was used as an anchoring motif. After surface expression of the HPV16 E7 protein was confirmed by Western blot, flow cytometry and immunofluorescence microscopy, mice were orally inoculated with L. casei-PgsA-E7. E7-specific serum IgG and mucosal IgA productions were enhanced after oral administration and significantly enhanced after boosting. Systemic and local cellular immunities were significantly increased after boosting, as shown by increased counts of lymphocytes (SI = 9.7 +/- 1.8) and IFN-gamma secreting cells [510 +/- 86 spot-forming cells/10(6)cells] among splenocytes and increased IFN-gamma in supernatants of vaginal lymphocytes. Furthermore, in an E7-based mouse tumor model, animals receiving orally administered L. casei-PgsA-E7 showed reduced tumor size and increased survival rate versus mice receiving control (L. casei-PgsA) immunization. These results collectively indicate that the oral administration of E7 displayed on lactobacillus induces cellular immunity and antitumor effects in mice. |
Q:
Regex: Match opening/closing chars with spaces
I'm trying to complete a regular expression that will pull out matches based on their opening and closing characters, the closest I've gotten is
^(\[\[)[a-zA-Z.-_]+(\]\])
Which will match a string such as "[[word1]]" and bring me back all the matches if there is more than one. The problem is I want it to pick up matches where there may be a space in them, so for example "[[word1 word2]]". Now this will work if I add a space into my pattern above, but that pops up a problem: it will only get one match for my entire string. So for example if I have a string
"Hi [[Title]] [[Name]] [[surname]], How are you"
then the match will be [[Title]] [[Name]] [[surname]] rather than 3 matches: [[Title]], [[Name]], [[surname]]. I'm sure I'm just a char or two away in the regex, but I'm stuck. How can I make it return the 3 matches?
Thanks
A:
You just need to make your regex non-greedy by using a ? like:
^(\[\[)[a-zA-Z.-_ ]+?(\]\])
Also there is a bug in your regex. You've included - in the char class thinking of it as a literal hyphen. But - in a char class is a meta char, so it effectively matches every char between . (period, ASCII 46) and _ (underscore, ASCII 95), a range that includes the digits and, crucially, the [ and ] brackets themselves. So you need to escape it as:
^(\[\[)[a-zA-Z.\-_ ]+?(\]\])
or you can put is in some other place in the regex so that it will not have things on both sides of it as:
^(\[\[)[a-zA-Z._ -]+?(\]\])
or
^(\[\[)[-a-zA-Z._ ]+?(\]\])
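A quick sketch (in Python, since the question doesn't name a language) showing the difference; the ^ anchor and the capturing groups are dropped here so that findall reports each complete bracketed token:

```python
import re

text = "Hi [[Title]] [[Name]] [[surname]], How are you"

# Original class [a-zA-Z.-_ ]: the unescaped "-" turns ".-_" into the ASCII
# range 46..95, which includes "[" and "]", so a greedy match swallows
# everything from the first "[[" to the last "]]".
greedy = re.findall(r"\[\[[a-zA-Z.-_ ]+\]\]", text)
print(greedy)   # ['[[Title]] [[Name]] [[surname]]']

# Hyphen made literal and the quantifier made lazy: three separate matches.
lazy = re.findall(r"\[\[[-a-zA-Z._ ]+?\]\]", text)
print(lazy)     # ['[[Title]]', '[[Name]]', '[[surname]]']
```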
|
Kaposi's sarcoma in the pediatric population: the critical need for a tissue diagnosis.
Kaposi's sarcoma (KS) is a low-grade vascular neoplasm mediated by the human herpesvirus-8. Only 1 clinical subtype, the endemic/African subtype, commonly affects the pediatric population. Although adults with KS often present with cutaneous findings and generalized lymphadenopathy, African children are more likely to present without classic skin findings. Definitive diagnosis requires histologic examination from tissue biopsy; however, as pathology resources are scarce in many developing African countries where KS is prominent, appropriate diagnosis and treatment of the condition are challenging. We report the case of a Malawian child who presented with generalized lymphadenopathy and was presumptively treated for lymphoma, with clinical worsening of his lesions. A diagnosis of KS was made after excisional biopsy of a superficial lymph node, with the initiation of appropriate therapy. The literature regarding pediatric KS is reviewed and recommendations are offered to allow accurate and timely diagnosis of the condition. |
The Washington Post reports the Washington Redskins are starting to think about their PK situation and coach Marty Schottenheimer has expressed some interest in free agent PKs Joe Nedney, Doug Brien and Morten Andersen. There have been some reports of the team being interested in Steve Christie, but the others seem to be the Top 3 on the list. Following up Saturday's report, the coach also said the Redskins may be interested in free agent G Dave Szott. The agent representing Seattle DT Riddick Parker said he would like to have his client visit Washington, but no visit is planned at this time. Ralph Cindrich, the agent for free agent QB Gus Frerotte, said he has spoken to Schottenheimer about Frerotte but does not believe the Redskins are ready to act at this time.
Mike Sando reports for the Tacoma News Tribune that the Seattle Seahawks have made contract offers to G Pete Kendall and S Jay Bellamy, but both appear ready to test the free agency market. The team has talked with DT Riddick Parker, but has not made him an offer. The team hasn't talked with S Kerry Joseph, WR Sean Dawkins, LB James Logan or LB George Koonce. Dawkins is the only one of them under contract and, as previously reported, may be a cap casualty, as the team would save around $2.5 million by cutting him. CB Willie Williams is eligible for free agency because he met incentives in his contract that allow him to void the final four years of his deal.
The Seattle Union Record reports Seattle Seahawks DT Riddick Parker (sprained ankle) did not practice Tuesday, but should be able to play this week. DE Lamar King (dislocated shoulder) and FS Jay Bellamy (bruised back) were not even at practice. Bellamy is expected to play against the Raiders but King is not. RB Ricky Watters also sat out practice to rest a hyperextended toe. Watters did not practice at all last week, but played against the Broncos. |
Chelsea's Costa, Fabregas to beat boo boys
London - Chelsea's first-team coach Steve Holland backed Cesc Fabregas and Diego Costa to win back the club's supporters after they were booed during the 3-1 Premier League win over Sunderland.
Fans of the West London club jeered the pair as their names were read out before Saturday's game on the PA system and then when they were substituted towards the end of the match.
They also repeatedly chanted in support of the sacked Jose Mourinho, who has been replaced on an interim basis by Guus Hiddink, and unveiled a banner saying: "You let Jose down, you let us down."
"I told the players whatever their feelings were regarding the situation, they had a responsibility to the football club and the supporters," said Holland, who took charge of the team with Hiddink watching from the stands.
"We have quality players and they care. I'm not a social media guy so I'm not sure of the exact reasons, but clearly the supporters have a right to voice their opinion. I'm happy to park this and move on.
"I've not spoken to those players. From my point of view I was happy with their contribution to this game and applauded them.
"If the players compete like they did today (Saturday), there will be no reason why the supporters won't be happy with that."
Goals from Branislav Ivanovic, Pedro Rodriguez and Oscar, with a penalty, saw Chelsea end a three-game winless league run against the backdrop of a peculiar atmosphere.
Holland believes that Mourinho will be back in management before the end of the season after the Portuguese expressed eagerness to return to work in a statement released by his agents.
"I think he will be back before the end of the season," said the former Crewe Alexandra manager.
"Big clubs will want him and I think he is someone who needs and wants football. He is not someone who will spend six, seven, eight months doing nothing in particular."
Holland added: "I don't know what the circumstances are contractually. He will want to get going sooner rather than later and it would not be a surprise if big clubs are interested in him.
DM 960 (B/S)
The DM 960 microphone head is suitable for close miking of vocals especially pop, soul or jazz. The hypercardioid polar pattern provides maximum gain before feedback. Due to the close miking effect and the bass boost the DM 960 features a powerful and smooth sound with transparent highs. |
Once again, these Pokémon GO stories continue to practically write themselves, further questioning if we as a society are ready to go out into our own neighborhoods and throw magical balls as adorable monsters.
This time, as cited by The Orlando Sentinel, a 37-year-old man opened fire on two young men who were in his neighborhood looking for Pokémon at 1:30 AM. He says he believed the two were burglars ransacking houses when they began asking each other “Did you get anything?”
As they got in their car, the man approached them with a gun drawn and told them not to move. However, the two teenagers sped off, and the man opened fire. Luckily, nobody was injured in the shooting; however, the man, who has not been identified, cited Florida’s 2005 “stand your ground” statute, which grants citizens the right to deadly force “to prevent the imminent commission of a forcible felony.”
Of course, catching a Tauros and Marowak is in no way a deadly crime or a “forcible felony,” but the man claimed that the two tried “to strike him with the car,” the very same one that he voluntarily leaped in front of.
The boys did not report the shooting, thinking it was just someone trying to scare them. Their mother reported it in the morning when she found bullet-holes in their bumper and a flat tire.
To think that if these two had been tragically killed, Pokémon GO could have found itself at the center of the great American gun debate. That just shows how far the reach of this game has grown, and it makes me wonder if we are able to handle it yet. Please be extra careful when playing this game, and don’t trespass.
And to those who choose to put themselves in a situation that forces them to “stand their ground,” remember, 911 is a perfectly legitimate, not to mention preferred, way to deal with such a situation. |
Responses of natural wildlife populations to air pollution.
Deer mice (Peromyscus californicus) trapped in areas of Los Angeles with high ambient air pollution are significantly more resistant to ozone (6.6 ppm for 12 h) than are mice trapped from areas with low ambient pollution (56 versus 0% survival, respectively). Laboratory-born progeny of these mice show similar response patterns, indicating a genetic basis to this resistance. Young mice (less than 1 yr of age) are more sensitive than older mice (15 versus 44% survival, respectively). Sensitivity is also affected by degree of inbreeding; progeny of full-sib crosses are more sensitive than randomly bred deer mice. The data suggest that deer mice are more resistant to ozone toxicity than are commercially bred laboratory mice and rats. |
BEIJING -- One of the principal student leaders of the 1989 pro-democracy movement flew Wednesday from Taiwan to the Chinese territory of Macao, saying he wanted to surrender to Chinese authorities after two decades in exile.
The former student leader, Wu’er Kaixi, was detained by immigration authorities at the Macao airport on the evening before the 20th anniversary of a bloody military crackdown in Beijing. He told several news agencies that he would return to Taiwan only if he were deported.
“His action is kind of an expression of anger and protest,” said Wang Dan, another former student leader, who is now in the United States.
“Maybe this is his only way to return to China. For all of us, this is the only way.”
In a statement, Mr. Wu’er said his effort to turn himself in was “in no way whatsoever an acknowledgment of guilt in the eyes of the law.” |
Drafted one year apart into one of professional sports' fiercest rivalries, Alexander Ovechkin and Sidney Crosby have been nothing short of majestic. Two international superstars – one a soft-spoken, technically skilled Canadian, the other a beloved freak athlete from Moscow – have captured the attention of hockey fans across the country, continent and globe.
A decade later, both players have lived up to (and exceeded) the high expectations set upon them by their respective organization and fan base.
Alexander Ovechkin continues to dominate defenses with his unique blend of power and speed, embodying an all-around skill set that regularly brings Washingtonians out of their seats. Whether Ovechkin scores solo on a one-man break or finishes Nicklas Backstrom's (always perfect) passes with a physics-defying slapshot, The Great 8 is known to leave Caps fans wondering "how the hell does he do that?"
It’s almost a guarantee Ovechkin will sound off the goal siren on a nightly basis, flashing his trademark toothless grin as he crashes into the glass and celebrates with the Verizon Center faithful.
Sidney Crosby has settled for more steak, less sizzle. While he may not have the highlight-reel goals, board-rattling hits and nationwide fandom his rival is known for, Crosby is no doubt a winner. Two Olympic gold medals and one Stanley Cup victory have often led to Crosby overshadowing Ovechkin, with fans and critics alike praising Crosby's team-first style. While Crosby has consistently had a better supporting cast in Pittsburgh, there's no denying he is the key piece behind the Penguins' multiple Stanley Cup runs.
So which player is the true face of the NHL?
Skeptics argue Ovechkin's numbers don't mean anything without a Cup, but I would argue that the blame falls on his teammates, who have failed to give him the necessary support in the playoffs.
Sidney haters argue that Crosby's team accomplishments do not make him superior to Ovechkin, but his individual numbers are not far off from Ovi's. Without the year and a half of inactivity due to concussions, Crosby could be right there with Ovechkin in goals scored.
Luckily we still have another decade or so to watch these two greats settle the argument once and for all. Until then, it’s just a matter of opinion on which of these two future Hall of Famers is the greatest of his era. |
/* ssl/kssl.h -*- mode: C; c-file-style: "eay" -*- */
/* Written by Vern Staats <[email protected]> for the OpenSSL
 * project 2000.
 */
/* ====================================================================
* Copyright (c) 2000 The OpenSSL Project. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in
* the documentation and/or other materials provided with the
* distribution.
*
* 3. All advertising materials mentioning features or use of this
* software must display the following acknowledgment:
* "This product includes software developed by the OpenSSL Project
* for use in the OpenSSL Toolkit. (http://www.OpenSSL.org/)"
*
* 4. The names "OpenSSL Toolkit" and "OpenSSL Project" must not be used to
* endorse or promote products derived from this software without
* prior written permission. For written permission, please contact
* [email protected].
*
* 5. Products derived from this software may not be called "OpenSSL"
* nor may "OpenSSL" appear in their names without prior written
* permission of the OpenSSL Project.
*
* 6. Redistributions of any form whatsoever must retain the following
* acknowledgment:
* "This product includes software developed by the OpenSSL Project
* for use in the OpenSSL Toolkit (http://www.OpenSSL.org/)"
*
* THIS SOFTWARE IS PROVIDED BY THE OpenSSL PROJECT ``AS IS'' AND ANY
* EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE OpenSSL PROJECT OR
* ITS CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
* NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
* STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
* OF THE POSSIBILITY OF SUCH DAMAGE.
* ====================================================================
*
* This product includes cryptographic software written by Eric Young
* ([email protected]). This product includes software written by Tim
* Hudson ([email protected]).
*
*/
/*
** 19990701 VRS Started.
*/
#ifndef KSSL_H
#define KSSL_H
#include <openssl/opensslconf.h>
#ifndef OPENSSL_NO_KRB5
#include <stdio.h>
#include <ctype.h>
#include <krb5.h>
#ifdef OPENSSL_SYS_WIN32
/* These can sometimes get redefined indirectly by krb5 header files
* after they get undefed in ossl_typ.h
*/
#undef X509_NAME
#undef X509_EXTENSIONS
#undef OCSP_REQUEST
#undef OCSP_RESPONSE
#endif
#ifdef __cplusplus
extern "C" {
#endif
/*
** Depending on which KRB5 implementation is used, some types from
** the other may be missing. Resolve that here and now.
*/
#ifdef KRB5_HEIMDAL
typedef unsigned char krb5_octet;
#define FAR
#else
#ifndef FAR
#define FAR
#endif
#endif
/* Uncomment this to debug kssl problems or
** to trace usage of the Kerberos session key
**
** #define KSSL_DEBUG
*/
#ifndef KRB5SVC
#define KRB5SVC "host"
#endif
#ifndef KRB5KEYTAB
#define KRB5KEYTAB "/etc/krb5.keytab"
#endif
#ifndef KRB5SENDAUTH
#define KRB5SENDAUTH 1
#endif
#ifndef KRB5CHECKAUTH
#define KRB5CHECKAUTH 1
#endif
#ifndef KSSL_CLOCKSKEW
#define KSSL_CLOCKSKEW 300
#endif
#define KSSL_ERR_MAX 255
typedef struct kssl_err_st {
int reason;
char text[KSSL_ERR_MAX+1];
} KSSL_ERR;
/* Context for passing
** (1) Kerberos session key to SSL, and
** (2) Config data between application and SSL lib
*/
typedef struct kssl_ctx_st
{
/* used by: disposition: */
char *service_name; /* C,S default ok (kssl) */
char *service_host; /* C input, REQUIRED */
char *client_princ; /* S output from krb5 ticket */
char *keytab_file; /* S NULL (/etc/krb5.keytab) */
char *cred_cache; /* C NULL (default) */
krb5_enctype enctype;
int length;
krb5_octet FAR *key;
} KSSL_CTX;
#define KSSL_CLIENT 1
#define KSSL_SERVER 2
#define KSSL_SERVICE 3
#define KSSL_KEYTAB 4
#define KSSL_CTX_OK 0
#define KSSL_CTX_ERR 1
#define KSSL_NOMEM 2
/* Public (for use by applications that use OpenSSL with Kerberos 5 support) */
krb5_error_code kssl_ctx_setstring(KSSL_CTX *kssl_ctx, int which, char *text);
KSSL_CTX *kssl_ctx_new(void);
KSSL_CTX *kssl_ctx_free(KSSL_CTX *kssl_ctx);
void kssl_ctx_show(KSSL_CTX *kssl_ctx);
krb5_error_code kssl_ctx_setprinc(KSSL_CTX *kssl_ctx, int which,
krb5_data *realm, krb5_data *entity, int nentities);
krb5_error_code kssl_cget_tkt(KSSL_CTX *kssl_ctx, krb5_data **enc_tktp,
krb5_data *authenp, KSSL_ERR *kssl_err);
krb5_error_code kssl_sget_tkt(KSSL_CTX *kssl_ctx, krb5_data *indata,
krb5_ticket_times *ttimes, KSSL_ERR *kssl_err);
krb5_error_code kssl_ctx_setkey(KSSL_CTX *kssl_ctx, krb5_keyblock *session);
void kssl_err_set(KSSL_ERR *kssl_err, int reason, char *text);
void kssl_krb5_free_data_contents(krb5_context context, krb5_data *data);
krb5_error_code kssl_build_principal_2(krb5_context context,
krb5_principal *princ, int rlen, const char *realm,
int slen, const char *svc, int hlen, const char *host);
krb5_error_code kssl_validate_times(krb5_timestamp atime,
krb5_ticket_times *ttimes);
krb5_error_code kssl_check_authent(KSSL_CTX *kssl_ctx, krb5_data *authentp,
krb5_timestamp *atimep, KSSL_ERR *kssl_err);
unsigned char *kssl_skip_confound(krb5_enctype enctype, unsigned char *authn);
void SSL_set0_kssl_ctx(SSL *s, KSSL_CTX *kctx);
KSSL_CTX * SSL_get0_kssl_ctx(SSL *s);
char *kssl_ctx_get0_client_princ(KSSL_CTX *kctx);
#ifdef __cplusplus
}
#endif
#endif /* OPENSSL_NO_KRB5 */
#endif /* KSSL_H */
|
This application seeks funding to expand and improve the North Atlantic Population Project (NAPP) database. NAPP is a unique collaboration of eight leading data producers who have leveraged resources to create an extraordinary cross-national historical database. Over the past four years, the collaborating partners have cleaned, edited, coded, harmonized, and disseminated almost 90 million records describing basic demographic characteristics of the populations of five countries. These data include the entire population of Britain and Canada in 1881, Iceland in 1870, 1880, and 1901, Norway in 1865 and 1900, and the United States in 1880. These are the only complete-count national microdata available for scholarly research, and they represent an extraordinary resource for the study of small areas and population subgroups. This fundamental social science infrastructure is already stimulating broad-based comparative investigations of economic development and demographic change. To exploit the research potential of these data, we now propose: (1) expanding the chronological and geographic dimension of the database by incorporating data from additional census years for each country and adding data from Sweden; (2) coordinating national projects to link individuals between censuses, which will permit longitudinal analysis; and (3) improving NAPP variables, data editing, documentation, and web- based dissemination tools. The availability of multiple cross-sections for the population of the North Atlantic world in the nineteenth and early twentieth centuries will open up vast new terrain in the fields of history, economics, demography, and sociology. In addition, linked samples hold the promise of finally resolving some of the longest-running debates in social and economic history. Scholars will be able to gauge trends and differentials of social and geographic mobility and the interrelationship of geographic and economic movement far more reliably than heretofore. 
The expanded NAPP database is directly relevant to the central mission of the NIH as the steward of medical and behavioral research for the nation: NAPP will advance fundamental knowledge about the nature and behavior of human population dynamics. This basic infrastructure will advance health-related research on population growth and movement, fertility, mortality, and disability. |
Bank Islam may assume BIMB listing status
Bank Islam Malaysia Bhd, the country’s oldest Islamic lender, may assume the listing status of its parent company BIMB Holdings Bhd, sources said. BIMB owns 51 per cent of Bank Islam, from which it derives the bulk of its earnings. “BIMB have been discussing the pros and cons of such a move. They may want to simplify the BIMB group structure,” one of the sources told Business Times.
Bank Islam accounts for some 85 per cent of BIMB’s profit before zakat and taxation (PBZT). The rest of the BIMB group’s earnings comes mainly from a listed Islamic insurance firm, Syarikat Takaful Malaysia Bhd (STMB), in which BIMB owns a 65.2 per cent stake………………………………………..Full Article: Source |
Q:
System.AccessViolationException: Attempted to read or write protected memory
I get the following exception when I try to "find and replace" in Word 2007 running on Windows Vista or Windows 7.
System.AccessViolationException:
Attempted to read or write protected
memory. This is often an indication
that other memory is corrupt. at
Microsoft.Office.Interop.Word.Find.Execute(Object&
FindText, Object& MatchCase, Object&
MatchWholeWord, Object&
MatchWildcards, Object&
MatchSoundsLike, Object&
MatchAllWordForms, Object& Forward,
Object& Wrap, Object& Format, Object&
ReplaceWith, Object& Replace, Object&
MatchKashida, Object& MatchDiacritics,
Object& MatchAlefHamza, Object&
MatchControl)
Is there any solution for this ?
I am using .NET 3.5 with C#.
**********CODE****************
public static Application Open(string fileName)
{
    object fileNameAsObject = (object)fileName;
    Application wordApplication = new Application();
    object readOnly = false;
    object missing = System.Reflection.Missing.Value;
    wordApplication.Documents.Open(
        ref fileNameAsObject, ref missing, ref readOnly,
        ref missing, ref missing, ref missing, ref missing,
        ref missing, ref missing, ref missing, ref missing,
        ref missing, ref missing, ref missing, ref missing,
        ref missing
    );
    return wordApplication;
}
private static void ReplaceObject(
ref Application wordApplication,
object ObjectTobeReplaced, object NewObject)
{
// ++++++++Find Replace options Starts++++++
object findtext = ObjectTobeReplaced;
object findreplacement = NewObject;
object findforward = true;
object findformat = false;
object findwrap = WdFindWrap.wdFindContinue;
object findmatchcase = false;
object findmatchwholeword = false;
object findmatchwildcards = false;
object findmatchsoundslike = false;
object findmatchallwordforms = false;
object replace = 2; //find = 1; replace = 2
object nevim = false;
Range range = wordApplication.ActiveDocument.Content;
range.Find.Execute(
ref findtext, ref findmatchcase, ref findmatchwholeword,
ref findmatchwildcards,ref findmatchsoundslike,
ref findmatchallwordforms, ref findforward, ref findwrap,
ref findformat, ref findreplacement, ref replace,
ref nevim, ref nevim, ref nevim, ref nevim
);
A:
The issue was fixed after reinstalling Office :)... But I still don't know what caused it.
|
"Meth rooms" for drug users could stem ice epidemic: experts
As the dangerous drug 'ice' continues to be a threat to our community, the Sunshine Coast Daily's four-part series with the University of the Sunshine Coast examines the impact of the drug with those who are battling it on the front lines.
DESIGNATED safe places for crystal methamphetamine, or "ice", users could help the Sunshine Coast shake off the drug's increasing grip on the region, according to experts.
The Noffs Foundation - a youth outreach group - called for the introduction of "ice rooms" earlier this year, with chief executive Matt Noffs likening them to the heroin injecting centres in Sydney's King's Cross.
Mr Noffs believes ice consumption rooms have the potential to help curb addiction to the potentially deadly narcotic.
Former addicts, GPs, youth counsellors and domestic violence experts have told the Daily of the growing threat ice poses to the Sunshine Coast community.
Reader poll
Would you support safe rooms for meth use and addiction treatment?
This poll ended on 19 September 2016.
Current Results
No, but I support other alternative measures
33%
No, I only support punishment
35%
Yes, I support the use of safe rooms
31%
This is not a scientific poll. The results reflect only the opinions of those who chose to participate.
Noffs Queensland's Russell Workman said if Mr Noffs felt the ice rooms would work nationally, they would work on the Coast.
"If he feels that that's a solution then I would say I definitely agree that it would be a solution and it would work on the Sunshine Coast," Mr Workman said.
The Sunshine Coast Council was approached for its view on ice rooms for the Coast but did not respond before publication. |
Q:
PHP how to get list of all defined variable?
How can I get the list of defined constants and their values, e.g. define('DS', DIRECTORY_SEPARATOR);
How can I list all of them at the end of the page to check that they all work as expected, instead of printing them one by one?
I've tried get_defined_vars(), but I'm getting all the superglobal variables, like below.
Array
(
[GLOBALS] => Array
(
[GLOBALS] => Array
*RECURSION*
[_POST] => Array
(
)
[_GET] => Array
(
)
[_COOKIE] => Array
(
[PHPSESSID] => tkv7odk47idt4r2ob7389tkr81
[CAKEPHP] => cep9tbooimh5kbhn8jovmaqgi1
)
[_FILES] => Array
(
)
[_SERVER] => Array
But I want only those which I've defined using the define() statement.
Is there any way of getting them?
A:
You can get this by using get_defined_constants. Passing true groups the constants by category, and the 'user' key holds only the ones defined by your own define() calls:
$constarray = get_defined_constants(true);
foreach ($constarray['user'] as $key => $val)
    $_CONSTANTS[$key] = $val; // direct assignment works; no eval() needed
print_r($_CONSTANTS);
|
/*
* Copyright (C) 2013, 2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
* THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
* BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
* THE POSSIBILITY OF SUCH DAMAGE.
*/
WebInspector.FilterBar = class FilterBar extends WebInspector.Object
{
constructor(element)
{
super();
this._element = element || document.createElement("div");
this._element.classList.add("filter-bar");
this._filtersNavigationBar = new WebInspector.NavigationBar;
this._element.appendChild(this._filtersNavigationBar.element);
this._filterFunctionsMap = new Map;
this._inputField = document.createElement("input");
this._inputField.type = "search";
this._inputField.spellcheck = false;
this._inputField.incremental = true;
this._inputField.addEventListener("search", this._handleFilterChanged.bind(this), false);
this._element.appendChild(this._inputField);
this._lastFilterValue = this.filters;
}
// Public
get element()
{
return this._element;
}
get placeholder()
{
return this._inputField.getAttribute("placeholder");
}
set placeholder(text)
{
this._inputField.setAttribute("placeholder", text);
}
get inputField()
{
return this._inputField;
}
get filters()
{
return {text: this._inputField.value, functions: [...this._filterFunctionsMap.values()]};
}
set filters(filters)
{
filters = filters || {};
var oldTextValue = this._inputField.value;
this._inputField.value = filters.text || "";
if (oldTextValue !== this._inputField.value)
this._handleFilterChanged();
}
addFilterBarButton(identifier, filterFunction, activatedByDefault, defaultToolTip, activatedToolTip, image, imageWidth, imageHeight)
{
var filterBarButton = new WebInspector.FilterBarButton(identifier, filterFunction, activatedByDefault, defaultToolTip, activatedToolTip, image, imageWidth, imageHeight);
filterBarButton.addEventListener(WebInspector.ButtonNavigationItem.Event.Clicked, this._handleFilterBarButtonClicked, this);
filterBarButton.addEventListener(WebInspector.FilterBarButton.Event.ActivatedStateToggled, this._handleFilterButtonToggled, this);
this._filtersNavigationBar.addNavigationItem(filterBarButton);
if (filterBarButton.activated) {
this._filterFunctionsMap.set(filterBarButton.identifier, filterBarButton.filterFunction);
this._handleFilterChanged();
}
}
hasActiveFilters()
{
return !!this._inputField.value || !!this._filterFunctionsMap.size;
}
hasFilterChanged()
{
var currentFunctions = this.filters.functions;
if (this._lastFilterValue.text !== this._inputField.value || this._lastFilterValue.functions.length !== currentFunctions.length)
return true;
for (var i = 0; i < currentFunctions.length; ++i) {
if (this._lastFilterValue.functions[i] !== currentFunctions[i])
return true;
}
return false;
}
// Private
_handleFilterBarButtonClicked(event)
{
var filterBarButton = event.target;
filterBarButton.toggle();
}
_handleFilterButtonToggled(event)
{
var filterBarButton = event.target;
if (filterBarButton.activated)
this._filterFunctionsMap.set(filterBarButton.identifier, filterBarButton.filterFunction);
else
this._filterFunctionsMap.delete(filterBarButton.identifier);
this._handleFilterChanged();
}
_handleFilterChanged()
{
if (this.hasFilterChanged()) {
this._lastFilterValue = this.filters;
this.dispatchEventToListeners(WebInspector.FilterBar.Event.FilterDidChange);
}
}
};
WebInspector.FilterBar.Event = {
FilterDidChange: "filter-bar-text-filter-did-change"
};
|
The ubiquitin system is a major pathway for selective protein degradation (Finley D et al (1991) Annu Rev Cell Biol 7: 25-69). Degradation by this system is instrumental in a variety of cellular functions such as DNA repair, cell cycle progression, signal transduction, transcription, and antigen presentation. The ubiquitin pathway also eliminates proteins that are misfolded, misplaced, or that are in other ways abnormal. This pathway requires the covalent attachment of ubiquitin, a highly conserved 76 amino acid protein, to defined lysine residues of substrate proteins.
Substrate recognition by this pathway involves a specialized recognition and targeting apparatus, the ubiquitin-conjugating system. Ubiquitin-conjugating enzyme (E2) and ubiquitin-protein ligase (E3), either independently or in conjunction, catalyze isopeptide formation between the carboxyl terminus of ubiquitin and amino groups of internal lysine residues of target proteins (Scheffner M et al (1995) Nature 373: 81-83). Ubiquitin-protein conjugates are then recognized and degraded by a specific protease complex, the 26S proteasome. Both E2 and E3 exist as protein families, and their pattern of expression is thought to determine substrate specificity (Nuber U et al (1996) J Biol Chem 271: 2795-2800).
The yeast ubiquitin-conjugating enzyme Ubc3 (also known as CDC34) plays a crucial role in the progression of the cell cycle from the G1 to S phase and the correct positioning of ubiquitin on a surface of Ubc3 is a requirement for this cell cycle transition (Prendergast JA et al (1995) J Biol Chem 270: 9347-9352). Mutation studies have suggested that amino acids S-73, S-97, and S-139 of Ubc3 may be critical for substrate specificity, while C-95 is the site of catalytic activity (Liu Y et al (1995) Mol Cel Biol 15: 5635-5644). An alteration in C-95 and another highly conserved amino acid, L-99, resulted in a dominant negative mutation (Banerjee A et al (1995) J Biol Chem 270: 26209-26215). Overexpression of this mutation of Ubc3 was found to block cell growth in otherwise wild type strains. |
Q:
How to send data to a php script page during jQuery load and accept the data
I have a PHP function that builds a list of items for me. I'm not sure, but I read that you can't call a PHP function explicitly through jQuery/JS.
So i saw that you can still call php pages like this:
$("#name").click(function(){
$("#div").load("script.php");
});
If i can call a php page like that, is there also a way to send it a URL when that page is loaded like this?
$("#name").click(function(){
$("#div").load("script.php", 'http://gdata.youtube.com/feeds/');
});
Another problem comes up: how do I make the script accept that string from jQuery?
Normally, when you call a function, you pass a parameter with the call like so:
<?php makeList( 'http://gdata.youtube.com/feeds/' ); ?>
//on the function-side
<?php
function makeList( $feedURL )
{
//...stuff that uses $feedURL...
}
?>
Since I will make the function a script that runs upon being called, how would I pass it a parameter?
I have no idea if this is possible or not, and I would understand if this creates tons of security issues which make it not acceptable.
A:
You have the $.get and $.post methods in jQuery.
$.post('script.php', { url: 'http://gdata.youtube.com/feeds/' }, function(data) {
//data will hold the output of your script.php
});
The url is posted to your PHP script and you can access it through $_POST['url'].
|
National Register of Historic Places listings in Roanoke County, Virginia
__NOTOC__
This is a list of the National Register of Historic Places listings in Roanoke County, Virginia.
This is intended to be a complete list of the properties and districts on the National Register of Historic Places in Roanoke County, Virginia, United States. The locations of National Register properties and districts for which the latitude and longitude coordinates are included below may be seen in an online map.
There are 10 properties and districts listed on the National Register in the county.
Current listings
|}
See also
List of National Historic Landmarks in Virginia
National Register of Historic Places listings in Virginia
National Register of Historic Places listings in Roanoke, Virginia
National Register of Historic Places listings in Salem, Virginia
References
Roanoke |
Q:
javascript and d3.js variable scope: variable resets outside of function call
I'm sorry if this is a duplicate, but I've read a few posts about variable scope and I can't seem to figure this out. Any help is much appreciated.
Basically, I'm just trying to read in a csv and determine how many rows it has and then assign number to a global variable. Here is my code:
<script src="https://d3js.org/d3.v4.min.js" charset="utf-8"></script>
<script type="application/javascript">
var maxEpochs;
setMaxEpochs();
console.log(maxEpochs);
function setMaxEpochs() {
/*
for (i = 0; i < 2; i++) {
if ( document.chooseModel.model[i].checked ) {
modelChoice = document.chooseModel.model[i].value;
break;
}
}
console.log(modelChoice);
*/
// set initial value for maxEpochs I DON'T UNDERSTAND WHY THIS DOESN'T WORK
d3.csv("epochStats.csv", function(d) {
console.log(d.length);
maxEpochs = d.length;
console.log(maxEpochs);
});
console.log(maxEpochs);
}
</script>
NOTE: epochStats.csv just has to be a csv with a few lines in it. That data doesn't matter for this example.
So, when I run this I get the following output in my console:
maxEpochsFail.html:31 undefined
maxEpochsFail.html:12 undefined
maxEpochsFail.html:27 101
maxEpochsFail.html:29 101
The line numbers might not quite match (I have some <head> tags etc at the top), but the point is, the first two console.logs within the function print 100 which is the correct number, but then once I'm outside the function it reverts to undefined.
Any thoughts on this would be much appreciated. I'm kind of banging my head against the wall with it.
Thanks,
Seth
A:
This is due to the asynchronous nature of JavaScript. d3.csv starts loading the file and returns immediately; the callback you pass it only runs once the file has arrived, after the code following the call has already executed. Basically, anything that uses the result of the d3.csv call HAS to be within that callback:
d3.csv("epochStats.csv", function(d) {
console.log(d.length);
maxEpochs = d.length;
console.log(maxEpochs);
// anything else that uses the value of maxEpochs, or reads the file data, etc....
});
console.log(maxEpochs); <-- This could be evaluated before the above csv block.
|
/* SPDX-License-Identifier: GPL-2.0+ */
/*
* (C) Copyright 2013
* Reinhard Pfau, Guntermann & Drunck GmbH, [email protected]
*/
#ifndef __HRE_H
#define __HRE_H
struct key_program {
uint32_t magic;
uint32_t code_crc;
uint32_t code_size;
uint8_t code[];
};
struct h_reg {
bool valid;
uint8_t digest[20];
};
/* CCDM specific constants */
enum {
/* NV indices */
NV_COMMON_DATA_INDEX = 0x40000001,
/* magics for key blob chains */
MAGIC_KEY_PROGRAM = 0x68726500,
MAGIC_HMAC = 0x68616300,
MAGIC_END_OF_CHAIN = 0x00000000,
/* sizes */
NV_COMMON_DATA_MIN_SIZE = 3 * sizeof(uint64_t) + 2 * sizeof(uint16_t),
};
int hre_verify_program(struct key_program *prg);
int hre_run_program(struct udevice *tpm, const uint8_t *code, size_t code_size);
#endif /* __HRE_H */
|
<?xml version="1.0" encoding="UTF-8"?>
<Scheme
LastUpgradeVersion = "1100"
version = "1.3">
<BuildAction
parallelizeBuildables = "YES"
buildImplicitDependencies = "YES">
<BuildActionEntries>
<BuildActionEntry
buildForTesting = "YES"
buildForRunning = "YES"
buildForProfiling = "YES"
buildForArchiving = "YES"
buildForAnalyzing = "YES">
<BuildableReference
BuildableIdentifier = "primary"
BlueprintIdentifier = "C3A5467F1BC1A817005C1BBC"
BuildableName = "Universal iOS Framework"
BlueprintName = "Universal iOS Framework"
ReferencedContainer = "container:CorePlot.xcodeproj">
</BuildableReference>
</BuildActionEntry>
</BuildActionEntries>
</BuildAction>
<TestAction
buildConfiguration = "Debug"
selectedDebuggerIdentifier = "Xcode.DebuggerFoundation.Debugger.LLDB"
selectedLauncherIdentifier = "Xcode.DebuggerFoundation.Launcher.LLDB"
shouldUseLaunchSchemeArgsEnv = "YES">
<Testables>
</Testables>
</TestAction>
<LaunchAction
buildConfiguration = "Debug"
selectedDebuggerIdentifier = "Xcode.DebuggerFoundation.Debugger.LLDB"
selectedLauncherIdentifier = "Xcode.DebuggerFoundation.Launcher.LLDB"
launchStyle = "0"
useCustomWorkingDirectory = "NO"
ignoresPersistentStateOnLaunch = "NO"
debugDocumentVersioning = "YES"
debugServiceExtension = "internal"
allowLocationSimulation = "YES">
<MacroExpansion>
<BuildableReference
BuildableIdentifier = "primary"
BlueprintIdentifier = "C3A5467F1BC1A817005C1BBC"
BuildableName = "Universal iOS Framework"
BlueprintName = "Universal iOS Framework"
ReferencedContainer = "container:CorePlot.xcodeproj">
</BuildableReference>
</MacroExpansion>
</LaunchAction>
<ProfileAction
buildConfiguration = "Release"
shouldUseLaunchSchemeArgsEnv = "YES"
savedToolIdentifier = ""
useCustomWorkingDirectory = "NO"
debugDocumentVersioning = "YES">
<MacroExpansion>
<BuildableReference
BuildableIdentifier = "primary"
BlueprintIdentifier = "C3A5467F1BC1A817005C1BBC"
BuildableName = "Universal iOS Framework"
BlueprintName = "Universal iOS Framework"
ReferencedContainer = "container:CorePlot.xcodeproj">
</BuildableReference>
</MacroExpansion>
</ProfileAction>
<AnalyzeAction
buildConfiguration = "Debug">
</AnalyzeAction>
<ArchiveAction
buildConfiguration = "Release"
revealArchiveInOrganizer = "YES">
</ArchiveAction>
</Scheme>
|
What is a constant in C ?
A constant specifies a fixed value that appears directly in a program; this value does not change during the program's execution. Constants are used to create values that are assigned to variables, used in expressions or passed to functions. Some examples of constants are 120, 4.576, 'B', "amit" etc.
There are various type of constants available in C programming language.
1. Integer constants.
2. Floating point constants
3. Character constants
4. String constants
1. Integer constants :
An integer is a whole number without any decimal point. It consists of a sequence of digits. An integer constant is assumed to be of the int type, whose value lies between -32768 and 32767.
By default, an integer constant is a decimal number. However, integer constants can also be represented in octal (base 8) or hexadecimal (base 16).
Rules for constructing Integer constants :
(a). No commas or blank spaces are allowed in an integer constant.
(b). It may be either positive or negative.
(c). It must not have a decimal point.
(d). It must have at least one digit, e.g. 5.
(e). The value of a constant cannot exceed a specified range.
2. Floating point constants :
When you need to represent quantities that vary continuously, such as temperature (e.g. 98.4) or height (e.g. 5.3), integer constants are not sufficient. In order to represent these quantities, floating point constants are used. Floating point constants contain a decimal point or an exponent sign or both. Floating point constants are also called real constants.
Floating point constants can be written in two forms ,
Fractional form (like, 125.50, .75 , 210. , -0.55)
Exponential form (like, 1.4954e2 , 6.0e6 ) 3. Character constant :
A character constant is a single character (a letter, a digit, or a special symbol) enclosed in single quotes. The type of a character constant is char.
For example: 'A', 'S', '$', ' ', 'X', '0'.
The maximum length of a character constant is one character. Character constants have integer values determined by the computer's particular character set, commonly ASCII.
4. String constants:
A string constant consists of any number of consecutive characters enclosed in double quotation marks. For example: "273", "yellow".
While working with strings, the following points should be kept in mind:
(a). "" is an empty string; it contains nothing but the terminating null character.
(b). The compiler automatically places a null character ('\0') at the end of every string constant as its last character. This character is not visible when the string is displayed; it is how the computer knows where the string ends.
(c). "A" is the combination of the character 'A' and '\0', so it is not the same as the character constant 'A'. You cannot mix character constants and string constants.
(d). Unlike a single character constant, a single-character string constant does not have an equivalent integer value.
|
/*
* (C) Copyright 2013 Nuxeo SA (http://nuxeo.com/) and others.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
* Contributors:
* dmetzler
*/
package org.nuxeo.ecm.restapi.server.jaxrs;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import org.nuxeo.ecm.webengine.model.WebObject;
import org.nuxeo.ecm.webengine.model.impl.AbstractResource;
import org.nuxeo.ecm.webengine.model.impl.ResourceTypeImpl;
/**
* @since 5.8
*/
@WebObject(type = "doc")
public class DocObject extends AbstractResource<ResourceTypeImpl> {
@GET
public Object doGet() {
return getView("index");
}
@GET
@Path("{resource}.json")
public Object doGetResource(@PathParam("resource") String resource) {
return getView(resource).arg("resource", resource);
}
}
|
You put hour upon grueling hour into this, that much is clear; however, this is absolutely not my "kind" of mindfuck. In fact, I was deeply disturbed by the imagery and immediately Google-searched fluffy puppies to ease the pain and horror that your evil film radiates. If I could, I would holy sword of justice your film and obliterate or purify it. Likely, I will never have the chance to do such, so instead I will leave you with these words:
"Unexpressed emotions will never die. They are buried alive and will come forth later in uglier ways"
-Sigmund Freud
also
"Our greatest pretenses are built up not to hide the evil and the ugly in us, but our emptiness. The hardest thing to hide is something that is not there."
-Eric Hoffer
lastly
"The world needs anger. The world often continues to allow evil because it isn't angry enough."
-Bede Jarrett |
DOCTORS DISCOVER NEW KNEE LIGAMENT THAT MAY BE CRUCIAL FOR ACL INJURIES
Belgian surgeons’ discovery of the purpose of a previously unknown knee ligament may be a game changer for injured athletes.
The anterolateral ligament (ALL) could be the reason behind a common sports injury known as an anterior cruciate ligament (ACL) grade 3 sprain or tear.
There are four main ligaments — or fibrous tissue that connects bones to each other — found in the knee joint, which hold the upper and lower leg bones together.
Two surgeons at the University Hospitals Leuven in Belgium were researching why some patients who underwent ACL repairs still had episodes where the knee “gives way” during activity.
They came across an article on the ALL, first theorized by a French surgeon in 1879, but no one ever determined what its structure or function was until now.
The surgeons looked at 41 donated knee joints and found the ALL in all but one of the subjects. They realized that the ALL runs from the outer side of the thigh bone to the shin bone and may play a role in how we twist our knees or switch direction.
They believe that this ligament may be key for people who experience anterior cruciate ligament (ACL) injuries.
ACL injuries usually occur when people change direction quickly, stop suddenly, slow down while running, land from a jump incorrectly or because of direct collision, according to the American Academy of Orthopaedic Surgeons. Symptoms include pain with swelling, loss of full range of motion, pain while walking and tenderness along the joint line.
In grade 1 sprains of the ligament, the ACL becomes stretched, and in grade 2 sprains, it becomes loosened. However, in ACL grade 3 sprains or ACL tears, the ligament completely severs and the knee becomes unstable.
Torn ACLs need surgery and physical therapy in order to repair the ligament. Washington Redskins quarterback Robert Griffin III underwent the procedure in January to repair both his ACL and his lateral collateral ligament (LCL). People who do not need a full range of motion from their knees, such as the elderly and those who are not physically active, may be able to rehabilitate their knees through therapy and braces alone.
Then where does the ALL come in? The authors noted that patients who have ACL surgery sometimes still have “pivot shift” problems where the knee does not stay in place during use. This could be because the surgeons aren’t addressing the torn ALL, which may be injured at the same time as the ACL. This makes the knee more vulnerable to injury when it rotates. ALL injuries may also be responsible for small fractures that usually are blamed on ACL injuries.
“We know today that a number of patients continue to experience discomfort and instability after knee replacement even despite successful surgery, which may be related to (previously unrealized) damage of the anterolateral ligament during surgical exposure for prosthetic insertion,” author Dr. Johan Bellemans, professor of orthopaedic surgery at the Catholic University Leuven in Belgium, told the National Post. “We are currently looking into this, hoping to help our patients by improving the way we operate on them while preserving the structures that are vital for maintaining their knee joint.”
Other experts agreed that this was a landmark finding.
“If you look back through history there has been a veiled understanding that something is going on on that side of the knee but this work finally gives us a better understanding,” Dr. Joel Melton, a consultant knee surgeon at Addenbrooke’s Hospital in Cambridge, England, told the BBC. “I think this is very exciting — there is no doubt they have hit upon a very important anatomical structure.”
The article was published in the October issue of the Journal of Anatomy. |
Q:
Determining if time is full or half hour
I am setting up a date instance like this:
Calendar date = Calendar.getInstance();
date.set(2015, 9, 25, 12, 0);
In this case, I know that it's 12:00, a full hour, but I also have cases where I input the date parameters dynamically, so I want to determine if that date is a full or half hour.
E.g., for 12:00 and 12:30 it would return true, while for 12:23 it would return false.
I've tried timeInMillis % 36000000 from another answer, but it didn't work.
A:
Use Calendar.get(Calendar.MINUTE) to check the minutes.
int minutes = date.get(Calendar.MINUTE); // gets the minutes
return (minutes == 0 || minutes == 30);
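For completeness, here is a self-contained sketch (the class and method names are mine, not from the question). Note that the millisecond-modulo attempt fails because getTimeInMillis() still carries seconds, milliseconds, and the timezone offset; checking the minute field avoids all of that.

```java
import java.util.Calendar;

public class HourCheck {

    // True when the time falls exactly on the hour or the half hour.
    static boolean isFullOrHalfHour(Calendar date) {
        int minutes = date.get(Calendar.MINUTE);
        return minutes == 0 || minutes == 30;
    }

    public static void main(String[] args) {
        Calendar date = Calendar.getInstance();

        date.set(2015, 9, 25, 12, 0);  // month 9 = October
        System.out.println(isFullOrHalfHour(date)); // true

        date.set(2015, 9, 25, 12, 23);
        System.out.println(isFullOrHalfHour(date)); // false
    }
}
```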
|
Friday, 18 August
University Challenge team one step away from semi-finals
Celebrations were confined to a “smile, nothing wild”, as University of Liverpool’s University Challenge team triumphed in their first quarter-final match.
After defeating University of Bristol by 175 points to 115, the team, made up of captain Declan Crew, alongside Ben Mawdsley, Jem Davis and Hugh Hiscock, were asked whether they liked “living dangerously” by host, Jeremy Paxman.
His question followed a tight match, with both teams leading at different points before a late surge by Liverpool snatched victory at the last.
Fairly even
French PhD student, Hugh Hiscock, said: “We were fairly even teams so we were quite confident, especially after the first and second rounds, where we built up quite a lot of momentum.
“You need luck with the questions, and they just fell right for us in the second half of the match.
“It’s surprising how quickly you can build up momentum and rack up the points when you get a few questions right.”
After triumphing over Glasgow and Sheffield in previous rounds, the team were on the back foot as Bristol raced to an early lead.
And still trailing with as little as five minutes of the contest remaining, questions on fossils, the anatomy of fish and the ratio of a photon's energy to its frequency put Liverpool back on top to finish 60 points ahead at the gong.
Hugh said: “We were just relieved. It went by in such a flash. The half-hour match time is intense and in the last few minutes we went on a great run.
“It was a good team performance and quite a high when the gong went, but relief at the same time.”
Long way to go
Hugh, 23 and originally from Southampton, revealed his family were initially in “shock” to see him involved in a show that was a “childhood institution”, but as victory followed victory have “got used to seeing me on TV”, he said.
Reflecting the focus the team have engendered as they take Liverpool to the heady heights of one win away from a semi-final place, Hugh said celebrations remain on ice.
He added: “We smiled, nothing wild. Obviously we’re delighted with the result, but we’ve still got a long way to go.” |
In an effort to educate voters, we will be posting responses to our candidate questionnaire. Questionnaires were emailed to each candidate running for City Council, President of City Council, and Mayor. Candidates have until March 4th to submit. We are publishing results in the order they are received.
How frequently do you use a mode of transportation other than your car to navigate the city? Based on your experience, where should the city prioritize resources for transportation?
ODN: As we know usually in our daily, weekly, monthly, or however you prioritize your spending we initiate a plan to create a budget that delegates for transportation to be of the utmost importance. I am within those same boundaries and feel as though cleaner air and better water ways also involved can only be developed if we prioritize incentives and actions (Walking, Bicycles, Public Transportation etc) to compliment our natural resources including gas and establishing betters ways of conserving.
What role do you believe biking and walking improvements can play in creating a safer, healthier, more livable Baltimore?
ODN:Awareness allows for a conscious effort to reiterate Healthier Outlooks, utilizing alternative methods of supplementing system usage from vehicles to public transportation improves the interaction between our public and their neighborhoods, communities and comradely alike, also adjust the visibility of walkers, bikers, and even skate boarders for safety precautions among the traffic and high profile areas including where signs and lanes are not accessible.
Often road redesigns that improve the safety for people on bikes or people walking do so in a way that removes priority for single occupant vehicles. This can look like removing lanes for travel or decreasing available street parking. Can you describe how you would manage public expectations during project implementation, and handle any backlash from constituents that don’t share in the City’s vision for complete streets?
ODN: Allow them their opinion to reassure them of their value as a citizen and permissive recounts in expression to continue sharing views which can be an Initiative. Once researched, itemized and formatted to be documented and established as the complaint, fortified into better living and objectives among the community and our people, I love the identity of "And Justice For All." Within those boundaries there-after, investigation, reiterating of documented information and other factors take place, which are outcomes to exhibit to them facts which resolves their values whether correct or not, removing those factors is once again another completion of the Iniative. Policy and procedures are all a part of the belief in Civil/Government. I've managed, handled, communicated and implemented successfully more than my share. Alleviating the concerns of residents are definitely key before, during and after this election, providing them with better resources for their issues deters any ongoing relationships in those inefficient manners. Please remember to use 311, and "Love Baltimore" more..
Recent audits have discovered that the Department of Transportation struggles to measure key performance indicators. The city’s procurement and project management processes have also faced scrutiny. This has led to significant delays of key improvements to bicycle infrastructure in Baltimore. How will you work to improve performance and accountability of city agencies like the Department of Transportation under your leadership?
ODN: Accountability is a great point of reference. Let's begin with leadership or those whose responsibility is to correct issues, provide the necessary resources to eliminate any further issues and demonstrate the act of providing the necessary service. When stated in those regards, we have it at a point of reference to complete the circle from the person/party who accepts responsibility and from he/she who investigates to assure an outcome has a legitimate response. When evidence including documentation proves the claim. That's when we take into consideration the taxpayers heart earned dollars. Now don't you or I afford ourselves these factors through working hard and supporting our own general concepts of work ethic? Now back into perspective with the objective, on our jobs that form of grievance is inexcusable, but more than the facts that provide the information, quality assurance reflects there is an operational issue within their ranks which provides a source to give an account of the specific details your question complies. Again we ask them the question just what is at stake here, we know on our jobs and in our homes when we are inconsistent with details/requirements how certain systems will fail. These Departments, Employees, etc; are residents, citizens and people like ourselves. The term you used "Department" in my assumption of details would be way to vague to regard until we have put a name with a face. The identified particular whose taken responsibility and decided to participate in his or her chosen role professionally as the Leader with admittance and assistance to comply appropriate details we can designate and begin to correct to follow along especially after the failure to comply has occurred. 
Saying the Department of Transportation wouldn't be accountable enough without strategically regarding the specific personnel that causes any procedure from maintaining it's functions throughout the system that has been created to maintain a stable environment. Also that would lead further into more detailed accounts on the performance of the individual. When these occurrences prevent the overall objective, immediately the source of those systems must be made to comply. My obligation to the question is the discernment utilized in detail to regard each and ever source for it's worth, where it accounts to the details/factors for which the source of the issues are found inconsistent, leading to the most potential or best outcome all around.
The percentage of people choosing to take public transit or ride a bike for transportation is increasing in Baltimore, while the percentage of residents without access to a vehicle is over 30%. How would you rate the city’s current investment in sustainable transportation solutions for its residents, and as a council person what would you do to support increased investment?
ODN: I'm a firm believer in our city, accountability through demonstration,/participation and now additional interests furthered by the role i am seeking as Councilman resembles a full fledged system of daily values that provide a consistent receptive outcome from public transit and it's affiliates. . Each and every commitment to opportune encounters around the city I have not seen it's concept of services fail or unprovided. Operational and maintenance occurrences which are normal and can happen at any given instance remind any user the basics of time management and work ethic for personal control. Accountability with the Council for the City of Baltimore and residents relying on my consistency requires informative ways of maximizing my established platform( the Revitalization of Our Community in the 9th District's Revival ) with it's potential to secure the desired outcomes and continue a successful accomplished extension of the Charm City Orange Route to the West Side Shopping Center which creates more revenue through traffic from pedestrians to cleaner storefronts, environmental standards exemplified through consistent and ongoing growth. One significant measure at the peak of the resolutions I seek plus maintaining the values at the hearts of our citizens, researching for more and greater performance enhancers while implementing a newer resemblance to our old and thriving district. (the Revival)
A recent study by Harvard economists found that the single strongest factor affecting the odds of a child escaping poverty is not the test scores of his or her local schools or the crime in the community; it is the percent of workers in his or her neighborhood who have long commutes. How do you plan to improve transportation options and commute times for our most vulnerable residents?
ODN: Wow what a relief! My how times have changed from those dates listed on your various facts, we are so fortunate to have the ability of technology to correspond with transportation like we never had it before, As technology has evolved we now are involved with additional negotiating skills for better commutes. Techniques for better transportation include ways to avoid heavy traffic areas, and apps for everything from Lyft to Uber. Our core performance considerably advanced for our children with exposure to digital phones, tablets, and their own portable devices. We now have more email addresses, video services and ways to be involved considerably with parents, teachers and the world. Buses/Trains also moved into a new era. their routes extension are maximized in comparison, the ability technology brings in the future is going to be even more momentous, In order to meet demanding needs of our youths/young adults, parents and the Community, in regards to safety. I look forward to advocating in support of newer resolutions (Extending the Orange Route of the Charm City Circulator to the Westside Shopping Center).
What other information about your candidacy would you like to share with our members?
ODN: I Mr. “O”, a Veteran of the Armed Forces and a resident of the 9th District of Baltimore Maryland where I am running for Councilman. Known amongst my peers, colleagues and friends alike as Octavia D. Njuhigu, my nickname is synonymous with my love for Education and was deemed by my former students while participating at the Public Library System and professionally as an Administrator/Educator for several institutions/organizations alike in private sectors, while maintaining simultaneously at the helm of my desk, in pursuits of Community Relations with support for our public. I am also named Octavia D. because of friendship stemmed during the heart of the Civil Rights Movement from famed sci fi writer Octavia E. Butler with my mother Ella B. Njuhigu, Ms. Butler a notable for being the first within her own genre as an African American Author, and also their Political Activism for Equality including Woman’s Rights. Same applies for the considerations and perspectives in applying my own brand of personality to ongoing commitments to culture. A profound revelation throughout my research, support, and economic struggles supplementing each loss of value with better, stronger and more efficient perspectives to symbolize the strength in unity which I learned from a strong family background in supporting ongoing efforts for equality, with those willing to stand for their own values in society removing stigma and stereotypes. I was taught never to be deterred by social structure and influences. Techniques I’ve applied in all concentrations including methods of teaching kids how to be engaged, inspired and diligent towards educational pursuits with foreseeing career goals early on and completing courses with better grades based by the criteria’s for colleges/universities and careers as they approach job markets confidently based on their advantages. 
A method very similar I also implement to various levels of adversity in support of Adults Literacy, GED, and Career Development/Job Placement etc; major factors found in our government system associated to the social and economic structures for those who cannot completely ascertain formal institutionalized training (from Immigrants to Mental Health/Special Needs and the public in economically underdeveloped environments). Mr. “O” the personification of my fond loving attitude towards supporting my communities, is also a presenter for workshops, forums, and other youth and young adult related festivities for their advocacy and to boost their morale, including publicly speaking openly and in my own affirmed passions with integrity for self-initiative, goals and quality assurance. A pastime I love and feel admirable in sharing with the community each and every occurrence. Dedicated to the Memory and Respects of My Family (Descendants of Willie Johnson) the Njuhigu Family(James Njuhigu) of Kiambu Kenya, and a Special Thank You to My Aunt Willie Mae Strickland for her Work and Dedication during the Civil Rights Movement, and the Commitment it Has Brought Me In My Own Life to Excellence in Equality/Community. c2016 Mr. "O" (Octavia D. Njuhigu) Candidate for Councilman 9th District the Revitalization Project (A Revival of Community) |
Commentary on Political Economy
Sunday, 9 September 2018
Descartes's World, Part 3
The Cartesian insistence on the power of logico-mathematical
deduction and intuition as the cardinal methodical tools for the asseveration
of Truth meets an insurmountable obstacle in their own formalism. The “truth” of logico-mathematics has no substantive content: it is “true” by
definition – quod erat demonstrandum
(as was to be demonstrated) – in that the conclusion (the demonstrandum) is already
contained (erat, “was”!) in the
premises (the demonstrans) and, worse
still, the premises already contain the conclusion! But if the premises and the conclusion are so formally related that
they are tautological, then no practical conclusion – no “demonstration”! - can
ever be extracted from such formal reasoning! Only if the logico-mathematical
calculation is “false”, in the sense that it is purely practico-conventional
because it involves heterogeneous elements and therefore has substantive
content, - only in that case can logico-mathematics be “useful”, not “true”, in
a strictly conventional sense!
Here, Descartes’s reliance on the intellect
or Reason as the foundation of human knowledge quite simply falls apart.
Specifically, the twin foundations of the intellect – intuition and deduction –
prove to be categorically antinomic because intuition has a substantive
immanent materialist basis, whereas deduction is entirely formal and
tautological. A life-world in which any reality could be deduced logico-mathematically from intuited premises simply begs the question of how this “original
intuition” (the phrase is Leibniz’s, see the 24 Metaphysical Theses, discussed in M. Heidegger, The Metaphysical Foundations of Logic,
and his dissection in the dimension of philosophical anthropology [Husserl’s
plaint against Heidegger] of the Kantian intuitus
in the Kantbuch) came about in the
first place – of what its substantive content
is. Similarly with cause and effect. If indeed the effect is in the cause and
the cause is in the effect, then it is simply meaningless to connect the two
separate events – cause and effect – in terms of causation because they are not
in fact “separate” events! Far from being “laws of nature or of physics”,
scientific findings are “conventions” in the sense that (a) they are based on
induction, and, (b) they reflect a specific human practical orientation rather
than any “universal laws”. Indeed, mathematical calculations and logical
deductions – not to mention scientific “laws” based on causation – can be valid only if all their categories are formally equal. But this formal equality necessarily
requires the substantive equality of the elements that these categories
represent! Yet, this is logically impossible, given that substantive
elements are categorically different (different toto caelo, toto genere) from
the logico-mathematical categories that supposedly “stand for” them! This is
the paradox: logico-mathematical categories cannot be “deductive” but merely practico-conventional or empirico-inductive
because deductions must be based
ultimately on intuition - and it is
quite simply impossible to identify and isolate formally the “intuition” that is supposedly “behind” these entities,
because intuition is a substantive, not a formal, entity! Given that the
“truth” of logico-mathematical deductions is entirely formal, its demonstration
must be based on a substantive content – it must be “shown” to be true. But
such a “showing” (Latin, de-monstrare,
to show) is necessarily a practical, physical, substantive and material task
that contradicts the supposed “formality” and “logico-mathematical necessity”
of every “deduction”!
Hume’s skepticism in the Treatise of Human Nature effectively
demolishes Descartes’s analytic a priori
precisely by dissecting every causal relation in terms of individual “images”
that – as “images” – cannot have any con-nection in a pictographic sense or
indeed in any other sense! It was in a colossal effort to rescue Reason from
this nihilistic assault that Kant enucleated his critical idealism and
synthetic a priori as an Ubergang to bridge its antinomic
opposition to Nature. The Kantian attempt to define and refine the “aesthetic”
basis of Pure Reason through the Schematismus
fails miserably once the inescapably “conventional foundation” of every
intuition and every “truth”, of every mathematical identity and every “law of
nature” is laid bare. There is no “synthesis”, let alone any “a priori”,
linking events in the universe! The “legality” of Kant’s theorization of Pure
Reason is precisely what Schopenhauer and Nietzsche will be first to attack,
and quite validly so.
Just like
the Self or the Ego, the idea of God must take the mental form of some
particular thought, of which actual physical ec-sistence (physical embodiment)
cannot be predicated. For the Ego in the cogito,
for the human intellect, the perception
of the world is necessarily “false” because no sensual perception can ever
match the formal “purity” of logico-mathematical deduction. In that case, it is
not merely that the human senses err in their estimation of the world: it is
rather that the intellect itself will never be able to comprehend the world,
given that the world is inevitably made up of “appearances”! According to
Descartes, the intellect’s comprehension of the world is imperfect and prone to
error not because of any intrinsic flaws in the intellect – which shares its
status as “substance” with the divine -, but because it is deluded by the will
– which is the “mortal” facet of the intellect - into trusting the lure of “mere
appearances” (Kant’s “bloss Erscheinungen”).
But if the intellect’s “perception” of the world is false, for whatever reason,
then for Descartes, vis-à-vis the intellect, the world has really and truly
become a “fable”: what is more, a fable that, in his requirement of logico-mathematical
determinism of the world – the vera
mathesis – can also be nothing more than a lifeless, soul-less mechanism!
(Again, see Nietzsche’s savage parody of Descartes’s reduction of the world to
“a machine” in The Anti-Christ, par.14). Given that Descartes’s ontological proof
is incapable of bridging the hiatus
irrationalis (Fichte) between Subject and Object, between Reason and
Nature, his rationalism, his “method” cannot but amount to a dogmatic moral imperative,
an early version of the Kantian categorical imperative, the dictamen of the divine affinity of the
intellect. Whence Nietzsche’s riposte in Twilight
of the Idols, “Taken from the moral
viewpoint, the world is false!” Not
indeed because for Nietzsche the world is “false” (pace M. Cacciari in Krisis,
ch. 2) – because that would entail the existence of a universal “truth” against
which the world was “false” – but precisely because for Nietzsche no “universal
truth” is even meaningful, let alone possible! (In this regard, the link
established by Cacciari, loc. cit., between Nietzsche and Wittgenstein is quite
valid and fertile: for Wittgenstein, the world cannot be com-prehended by
language: the world can only be “shown”, but not by language – cf. Tractatus Logico-Philosophicus.)
But, leaving all this to one side, how can
the human mind interact with the universe, then? More saliently, if science and
technology are purely a means of knowing a strictly deterministic universe, of finding
a unique “universal key” to the interpretation of “the great book of nature”, (a)
how can humans then be aware of acquiring this “knowledge” given that any such knowledge
would amount to a mechanical operari
necessarily deprived of any “awareness” or “consciousness”? (This is a paradox
that Spinoza tackled as well, unsuccessfully.) And, (b), how can humans interact
with the world without transforming it? Furthermore, even assuming that such
interaction with and trans-formation of the world by humans is possible and
real, how can it be said to emanate causally
from the world itself and, worse still, how can such trans-formation of the
world by humans exclude any negative outcomes or degradation of nature and the
world? The optimistic bias and aporetic nature of Cartesian rationalism is all
here – specifically, Descartes’s well-nigh total neglect of any negative outcomes from scientific
discovery or research is a sign of the bourgeois
mission “to recover and discover” the world toward its own vision of
perfection (per-ficere, to render
flawless through work) – toward its own Utopia. Hence, the positive charge of
Cartesian rationalism goes beyond a mere inert “mechanicism” – robotic,
conflictual, and pessimistic, like Hobbes’s – in favour of a more Baconian eudaemonic slant. In the event, even as
he sought to evade it, mechanicism was Descartes’s only way out of his helpless
idealism: he remained so close to mechanicism
– especially Hobbes’s in physics and politics (cf. C. Schmitt’s “The State as
Mechanism in Hobbes and Descartes”, Appendix to Der Leviathan) – that whenever he sought to elucidate his “method”
or “rules” he invariably resorted to the industrial artisanal reality around
him.
Because of the impossibility of reconciling
the (idealistic logico-mathematical) intellect with the (material scientific)
world, Descartes was forced to point to a (scientific?) “method” that is purely
“deductive” and therefore tautological! In attempting to reconcile the
universal mathesis and its absolute determinism with the drive to reconstruct
the truth and the world to a humanized environment, to an earthly paradise,
Descartes is forced to resort to human “skills” in which he sees a “method”
(non-existent in reality) that is literally pro-ductive and mechanical at one
and the same time.
Our method…resembles the procedures in the
mechanical crafts, which have no need of methods other than their own, and which
supply their own instructions for making their own tools. If, for example,
someone wanted to practise one of these crafts…but did not possess any of the
tools, he would be forced at first to use a hard stone…as an anvil, to make a
rock do as a hammer, to make a pair of tongs out of wood…Thus equipped, he
would not immediately attempt to forge swords, helmets, or other iron
implements for others to use; rather, he would first of all make hammers, an
anvil, tongs and other tools for his own use (Rules).
In
reality, what Descartes has done is to reduce his purported “scientific method”
to the mere instinctive human practical – immanentist – invention of means to
satisfy their needs! By conceding that his method “resembles the procedures in the mechanical crafts” (“mechanical”,
not intellectual, crafts!), Descartes ends up proving the exact opposite of
what he intended – that is to say, precisely that there is “no need of methods other than” the practices that humans end up
adopting instinctively to satisfy their physio-logical
(physical and mental) needs! But here it is no longer the mathesis universalis that determines
work; rather, it is work that subsumes the mathesis,
the physics, the science, the technology, to itself in order to satisfy
existing human needs and to pro-duce new ones (cf. Negri, op.cit., p.299).
It is not
“science” that dictates our technical productive activity, but rather it is our
technical productive activity that we rationalize and institutionalise as
“science”. The very fact that Descartes
resorts to manufacturing skills to exemplify the substance of his “method”
evinces, first, the inability to distill such a method from human technical
activity, and, second, the impossibility of splitting human activity into the
“scientific-theoretical” and the “technological-practical”! In reality, all
human productive activity is technical-practical. Homo faber and homo sapiens are
identical entities. Descartes intuits but does not see that his “rules” and
“method” merely mimic the practical activity of homo faber – that, in other words, science is merely the
abstraction of technological activity – or, put succinctly, that homo faber and homo sapiens form an indivisible whole.
Descartes
could never bridge the gnawing gap that his rational idealism opened between
the two antinomic approaches to the lifeworld; consequently, he cannot explain
the world except as a mechanism because his logico-mathematical deductive
reasoning is simply and wholly inapplicable to the empirico-inductive
productive manufacturing practices humans adopt in reality – and in vastly growing
numbers in his own time as capitalist manufacturing industry supplants the old
and moribund feudal mode of production and its theocratic societies. Regardless,
he could not, within his theoretical epistemological framework, completely ignore
the epochal changes occurring all around him. Descartes himself highlights this
“turn” in the Discourse on Method
where his insistence on the bon sens (good
sense or common sense) goes hand in hand with the adoption of French as the
discursive language freed from the logico-deductive strictures and metaphysical
and theological prejudices embedded in
the Latin language – clearly heralding his adherence to the growing revulsion
at the segregation of the logico-deductive reasoning of the Latin-speaking
learned strata in favour of the empirico-inductive productive practices of
artisans.
To
summarise, then, the antinomies of Cartesian rational idealism encapsulate, on
one side, the inability of the earliest bourgeois philosophical reflection to comprehend
the profound transformation of European society from a theocratic absolutist
feudal order to that of a rapidly expanding capitalist marketplace society
founded on formally free labour and manufacturing industry; and, on the other
side of this antinomic thought, they provide the theoretical impetus for a revaluation
of technical-scientific and empirico-inductive industrial labour as against the millenary dominance of dogmatic Scholastic logico-mathematical
deductive thought. The socio-political
and economic significance of this “great transformation” (to borrow a phrase
used in a different historical context by Karl Polanyi) is lucidly
recapitulated by Paolo Rossi here:
Also
within the ambit of philosophy there arose a valorization of the arts and crafts vastly different from
the traditional: some of the procedures utilized by technicians and artisans to
transform nature help in the understanding of natural reality….To introduce tools and instruments in
science, to conceive them as a source of truth, was not an easy task. Truth, in the science of our time, refers,
almost exclusively, to the interpretation
of signs generated by instruments…The defence of mechanical arts from the charge
of being undignified, the renewed emphasis on the coincidence of the horizons
of culture and that of the liberal arts, on one side, and practical work and
servile work, on the other, implied in reality the abandonment of a millenary
image of science, they implied the end of
an essential distinction between knowing
and doing. (My translation and
emphases.)
Mineral water giant pulls plug on Fiji
Pop stars, celebrities, and even the US President may have to find a new tipple after mineral water company Fiji Water announced it was pulling out of the South Pacific country, because of disputes with the military junta.
The firm said it had decided to close its only factory, in the Yaqara Valley on the main island of Viti Levu, with the loss of 400 jobs, after the government announced a new tax of 15 Fijian cents (about 10c) a litre on companies extracting more than 3.5 million litres of water a month. Previously, the tax rate was one third of one cent.
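The scale of the tax increase described above (from one third of a Fijian cent to 15 Fijian cents per litre, for extractors above 3.5 million litres a month) can be illustrated with a quick calculation. This is a sketch only: the article does not say whether the new rate applies to all litres or only those above the threshold, so the code assumes the former, and the 4-million-litre monthly volume is purely hypothetical.

```python
# Hedged sketch of the old vs. new Fiji water-extraction tax.
# Assumptions (not stated in the article): the 15-cent rate applies to every
# litre once the 3.5M-litre/month threshold is exceeded, and the sample
# extraction volume below is illustrative only.

OLD_RATE_FJD = 0.01 / 3       # one third of one Fijian cent per litre
NEW_RATE_FJD = 0.15           # 15 Fijian cents per litre
THRESHOLD_LITRES = 3_500_000  # monthly volume above which the new rate kicks in

def monthly_tax(litres):
    """Return the monthly tax in Fijian dollars for a given extraction volume."""
    rate = NEW_RATE_FJD if litres > THRESHOLD_LITRES else OLD_RATE_FJD
    return litres * rate

volume = 4_000_000  # hypothetical litres per month
print(f"old regime: {volume * OLD_RATE_FJD:,.0f} FJD per month")
print(f"new regime: {monthly_tax(volume):,.0f} FJD per month")
```

Under these assumptions, a 4-million-litre month goes from roughly 13,000 FJD of tax to 600,000 FJD, a roughly 45-fold jump, which gives some sense of why the company called the new rate untenable.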
"This new tax is untenable and as a consequence, Fiji Water is left with no choice but to close our facility," the company said.
The new tax sent "a clear and unmistakable message to businesses operating in Fiji or looking to invest there: the country is increasingly unstable and a very risky place in which to invest."
Orders from suppliers have been put on hold by Fiji Water, which produces the most popular brand of imported water in the US.
It sells its highly recognisable bottles in 40 countries, and claims to be responsible for 20 per cent of Fiji's exports and 3 per cent of its GDP.
The firm has been clashing with Fiji's military regime, which took power in a 2006 coup.
Last month, its executive David Roth was deported from the country after self-appointed Prime Minister Commodore Frank Bainimarama accused him of acting "in a manner prejudicial to good governance and public order".
If Fiji Water's decision to pull out of the nation stands, it is difficult to see how the product can survive.
The trendy brand is built on the allegedly unique and life-enhancing properties of the underground spring from which it comes, which is said to be particularly pure.
That marketing pitch has been enough to turn Fiji Water into a favourite of the celebrity classes, drunk by everyone from Scarlett Johansson and Justin Timberlake to Nicole Kidman and the Obamas.
Fiji Water has styled itself as an eco-friendly product, claiming it pays to offset all the carbon emissions which come from transporting square plastic bottles from one of the world's most remote locations to the refrigerators of major cities.
Al Gore swigs it while delivering speeches about global warming, while the firm's owners are Los Angeles philanthropists Lynda and Stewart Resnick, who have given millions of dollars to progressive causes and Democratic Party politicians.
Critics have scoffed at the notion that bottled mineral water can be environmentally responsible, and point out that many Fijians have no access to clean drinking water, and suffer from diseases such as typhoid.
Others have raised eyebrows at the firm's corporate structure - court records show that in 2008 it was owned by an entity in the tax haven of Luxembourg, though some assets have recently been transferred to Switzerland.
Despite its present hostility to Fiji's taxman, the firm has enjoyed tax-exempt status on corporate income since it was founded in 1995.
Fiji is meanwhile suffering because of economic sanctions against its government from several countries.
Its rulers have fallen out with many overseas investors, including Rupert Murdoch's News Corp, which in September sold its controlling stake in Fiji's main newspaper because of ownership limits on foreign media companies.
Yesterday, Mr Bainimarama began his own PR offensive, claiming that "as usual Fiji Water has adopted tactics that demonstrate Fiji Water does not care about Fiji or Fijians".
NATURALLY FROM FIJI
*Fiji Water is sold in 40 countries.
*The firm has one factory in the Yaqara Valley on the main island of Viti Levu, employing 400 people.
*It claims to be responsible for 20 per cent of Fiji's exports and 3 per cent of its GDP.
*Fiji Water has also aggressively styled itself as an eco-friendly product, claiming that it pays to offset all carbon emissions.
*Fiji Water is a favourite of the celebrity classes, drunk by everyone from Scarlett Johansson and Justin Timberlake to Nicole Kidman.
*The firm's owners are LA-based philanthropists who support the Democratic Party of President Barack Obama.
Introduction {#S1}
============
There are four NOTCH receptors (NOTCH1--4) in mammals, which are ubiquitously expressed. Activation of the NOTCH signaling occurs after engagement of a NOTCH receptor with one of its membrane bound Delta-like ligands 1,3,4 (DLL1, DLL3, DLL4) or Jagged ligands 1,2. In some contexts, NOTCH can become activated through ligand-independent mechanism(s) leading to a variety of human diseases ([@B1]). After ligand engagement NOTCH undergoes a series of proteolytic cleavages, resulting in an activated NOTCH intracellular domain (NICD), which translocates into the nucleus to activate gene transcription. Given that NOTCH signaling is critical in regulating cell fate decisions in many tissue types, it is not surprising that NOTCH activity is deregulated in several malignancies ([@B2]--[@B4]). The first evidence for the involvement of NOTCH signaling in cancer was discovered in T-cell acute lymphoblastic leukemia (T-ALL), where activating mutations were identified in NOTCH1 ([@B5]). Our laboratory showed that oncogenic NOTCH1 regulates MYC expression and leukemia-initiating cell activity and demonstrated the efficacy of NOTCH1 inhibitors in pre-clinical T-ALL models ([@B6]--[@B9]). Activating mutations in *NOTCH1* have also been identified in chronic lymphocytic leukemia, non-small cell lung carcinoma, and translocations involving NOTCH1/2 in patients with triple negative breast cancer ([@B10]--[@B13]). While mutations in NOTCH receptors are rare in other tumor types, NOTCH is aberrantly activated in several malignancies, including colorectal and pancreatic cancer, melanoma, adenocystic carcinoma, and medulloblastoma through a variety of mechanisms ([@B2], [@B4]). Conversely, loss of function mutations in *NOTCH1/2/3* have also been identified suggesting NOTCH can also function as a tumor suppressor ([@B2], [@B3]).
While progress has been made in understanding how NOTCH signaling contributes to malignant transformation, the role of NOTCH activity in anti-tumor immune responses is less clear. While several cell types contribute to anti-tumor responses, CD4 T-helper 1 (TH1) cells and CD8 cytotoxic T-lymphocytes (CTL) are critical in mediating anti-tumor immunity due to their ability to recognize tumor antigens and mediate tumor killing. Several studies have shown that NOTCH is required for activation and effector function of CD4 and CD8 T-cells ([@B14]). Tumor cells can dampen T-cell responses by producing immunosuppressive cytokines, expressing inhibitory ligands, and recruiting immunosuppressive myeloid and lymphoid cells into the tumor microenvironment ([@B15]). Given that NOTCH is required for T-cell activation and effector function, it is reasonable to hypothesize that NOTCH contributes to T-cell anti-tumor responses and that tumor cells may evade T-cell-mediated killing by suppressing NOTCH activation. Consistent with this hypothesis, new data suggest that NOTCH activation is suppressed in tumor-infiltrating T-cells and that NOTCH re-activation induces potent anti-tumor T-cell responses in mouse cancer models ([@B16]--[@B20]).
Adoptive transfer of tumor antigen-specific T-cells is one immunotherapy used to overcome the limitations of endogenous T-cells and enhance anti-tumor responses. Tumor antigen-specific T-cells are either isolated from the tumor site or engineered with synthetic T-cell receptors (sTCRs) or chimeric antigen receptors (CARs) specific for tumor antigens ([@B21], [@B22]). Recently, NOTCH signaling has been utilized to improve the generation and efficacy of adoptive T-cell therapies (ACT) ([@B23], [@B24]). Furthermore, newly developed synthetic NOTCH receptors (synNOTCH) have been engineered to enhance the specificity of CAR T-cells ([@B25]--[@B27]). These studies highlight the importance of studying NOTCH responses in T-cell-mediated anti-tumor immunity in order to design more effective T-cell-based immunotherapies.
NOTCH Signaling is Required for T-Cell Activation and Effector Function {#S2}
=======================================================================
NOTCH signaling has been extensively studied in T-cell development, activation, and effector function. Upon TCR-stimulation naïve CD4 T-cells differentiate into multiple subsets of T-helper (TH) cells ([@B14], [@B28]). TH subsets are designed to recognize and fight distinct types of infection and are characterized by their specific cytokine profile. NOTCH activation has been shown to play a role in the differentiation of TH1, TH2, TH9, TH17, T-regulatory cells, and follicular TH cells ([@B14], [@B28]). TH1 cells mediate anti-tumor responses in conjunction with CTLs. Genetic deletion or pharmacologic inhibition of NOTCH1 signaling with gamma-secretase inhibitors (GSIs) decreases the numbers of activated TH1 cells *in vitro* and in mouse models of TH1-driven autoimmune disease ([@B29], [@B30]). NOTCH directly stimulates the transcription of the TH1 master transcriptional regulator T-BET (*TBX21*) as well as the TH1 signature cytokine interferon-gamma (*IFNγ*) ([@B29]--[@B31]).
CD8 naïve T-cells differentiate into CTLs upon early TCR stimulation, and then terminal effector cells or memory precursor cells ([@B14]). Recent evidence shows that conditional deletion of *Notch1* or inhibition of NOTCH signaling with GSIs diminishes the production of CTL effector molecules, including IFNγ, tumor necrosis factor alpha, granzyme B, and perforin, as well as a reduction in the CD8 transcription factors T-BET and eomesodermin (EOMES) ([@B32]--[@B36]). In addition to playing a role in activating effector T-cells NOTCH is also important in the maintenance and generation of memory T-cells ([@B35], [@B37]). While these studies provide compelling evidence that NOTCH signaling regulates T-cell effector activation, it remains unclear how NOTCH dictates such a multitude of responses in T-cells. Data from several studies suggest that NOTCH ligands may dictate T-cell effector responses.
NOTCH Ligands Dictate T-Cell Fate {#S3}
=================================
NOTCH ligands have been shown to have diverse effects on T-cell effector function. In CD4 T-cells, activation of the TCR in the presence of DLL1/4 skews toward a TH1 fate and inhibits TH2 differentiation ([@B38], [@B39]). Conversely, Jagged1/2 ligands may be important for TH2 differentiation, but appear to have no role in TH1 differentiation ([@B38], [@B39]). The role of DLL1 in CD8 T-cell activation and differentiation is unclear ([@B38], [@B39]). One study found that DLL1 overexpression in dendritic cells results in increased levels of granzyme-B expression in alloantigen stimulated CD8 T-cells ([@B32]). However, a prior study reported that CD8 T-cells stimulated with DLL1 and alloantigens resulted in decreased IFN-γ production and increased IL-10 production, suggesting a suppressive role for DLL1 in CD8 activation ([@B40]). Additional studies are needed to clarify the effects of DLL1 and other NOTCH ligands on the activation and effector function of CTLs.
These studies suggest that NOTCH-mediated T-cell effector function is determined by the stimulating ligand. This is further supported by data demonstrating that ligand expression on antigen-presenting cells (APC) is dictated by the engaging stimulus. For example, APC exposed to allergens upregulate Jagged1/2 expression, inducing a TH2 response, whereas viral infection stimulates DLL1/4 expression on APC and a TH1 response ([@B41], [@B42]). However, some studies demonstrate normal T-cell polarization and effector function in the absence of NOTCH ligands, favoring a model in which NOTCH enhances T-cell activation and proliferation while cytokines instruct T-cell fate ([@B39]). Understanding how NOTCH ligands dictate effector function will be critical to maximize the therapeutic potential of NOTCH-based immunotherapies.
Tumor Cells and Their Microenvironment Suppress the Expression of NOTCH Receptors and Ligands {#S4}
=============================================================================================
Full-length NOTCH receptors are normally expressed on naïve mouse T-cells and activated in response to antigen; however, T-cells isolated from tumor bearing mice have decreased expression of NOTCH (1--4) ([@B18], [@B19]). Consistent with this reduction in NOTCH levels, significant decreases in NOTCH target genes (*Deltex1, Hey1*, and *Hes1*) are also observed in tumor-associated T-cells ([@B19]), suggesting that tumor-associated T-cells have repressed NOTCH signaling and potentially decreased effector function.
Reduction in NOTCH1/2 levels was found to be mediated in part by tumor-infiltrating myeloid-derived suppressor cells (MDSCs) ([@B18]). MDSCs are a heterogeneous population of immature myeloid cells that are recruited to sites of inflammation and the tumor microenvironment to prevent immune-mediated damage ([@B43]). MDSCs are recruited by multiple factors including vascular endothelial growth factor (VEGF), IL-1β, and IL-6 ([@B44]). Coculturing of MDSC with activated T-cells reduced the expression of full length and intracellular NOTCH1/2 ([@B18]). MDSC isolated from cancer patients have been shown to suppress T-cell activation ([@B45], [@B46]), however, whether MDSC suppress *via* effects on NOTCH signaling is not known.
In addition to reducing NOTCH1/2 levels, reductions in NOTCH ligand expression on T-cells and other immune cells have also been observed in murine tumor models ([@B16], [@B19]). Reduced expression of DLL1/4 in the bone marrow of tumor bearing mice inversely correlated with increased VEGF levels in one study ([@B16]). VEGF has been shown to potentiate T-cell anti-tumor responses, suggesting that expression of this growth factor by cancer cells may inhibit T-cell responses by downregulating DLL1/4 ([@B47]). MDSC isolated from the tumor site have decreased DLL1/4 and increased Jagged1/2 expression ([@B18]). Given that DLL1/4 induce TH1 and CTL effector function, this could be an additional mechanism whereby the tumor microenvironment impairs/disables NOTCH signaling. While these studies demonstrate that NOTCH activity is impaired in tumor-infiltrating T-cells in mouse cancer models, precisely how NOTCH receptor/ligands are downregulated is unclear. Furthermore, there is as yet no direct evidence that NOTCH signaling is impaired in T-cells from cancer patients.
Activation of NOTCH Receptors and Their Ligands Increases T-Cell-Mediated Anti-Tumor Response {#S5}
=============================================================================================
Conditional activation of NOTCH1/2 in CD8 T-cells induces a robust and sustained anti-tumor response, resulting in increased IFNγ production and reduced tumor burden ([@B18], [@B20]). Similarly, treatment of tumor bearing mice with an agonistic NOTCH2 antibody enhanced CD8 T-cell cytotoxicity and reduced tumor size ([@B20]). Consistent with this finding, conditional deletion of *Notch2* in CD8 T-cells potentiated tumor growth in mice and reduced overall survival ([@B20]).
Constitutive expression of DLL1 on bone marrow and dendritic cells was also reported to enhance T-cell infiltration into tumors, suppress tumor growth and increase the survival of mice transplanted with murine tumor cell lines \[Lewis Lung Carcinoma (LLC), D459 Fibrosarcoma, and EL4 T cell Lymphoma\] ([@B16], [@B20]). Increased DLL1 but not Jagged2 expression on dendritic cells stimulated T-cell cytotoxicity and increased IFN-γ levels ([@B20]). Moreover, therapeutic administration of a multivalent, clustered form of DLL1 (c-DLL1) arrested tumor growth and prolonged survival of mice transplanted with LLCs or D459 tumor cells ([@B16], [@B17]). The c-DLL1 was shown to bind and activate NOTCH (1--4), resulting in increased NOTCH target gene expression ([@B16], [@B17]). Administration of c-DLL1 stimulated IFN-γ production and increased tumor-infiltrating antigen-specific T-cells ([@B16], [@B17]). Tumor regression in c-DLL1 treated mice appears to be T-cell mediated, since c-DLL1 treatment had no effect on tumor growth in *Rag1*^−/−^ recipients or in mice treated with anti-CD8 antibody ([@B16]). Furthermore, adoptive transfer of tumor antigen-specific T-cells from c-DLL1-treated mice were sufficient to attenuate tumor growth in immunocompromised NOD-SCID mice ([@B17]).
The proteasome inhibitor bortezomib was shown to enhance T-cell-mediated anti-tumor responses in part by restoration of NOTCH receptors and ligand mRNA expression ([@B19]). Bortezomib treatment led to increased expression of CD25, CD44, IFNγ, and granzyme B in CD8^+^ T-cells isolated from mice engrafted with cancer cell lines ([@B19], [@B48]). Combination treatments consisting of bortezomib and adoptive T-cell transfer reduced tumor burden and prolonged survival in human renal carcinoma xenografts ([@B48]). Whether bortezomib treatment regulates NOTCH activity directly or if these effects are secondary is unknown. Together these studies support the concept that activating NOTCH enhances T-cell anti-tumor immunity and prolongs tumor-free survival. While the development of NOTCH agonist antibodies and c-DLL1 therapies appear to be a promising approach to enhance T-cell anti-tumor immunity, the potential effects on NOTCH driven malignancies needs to be considered.
Current Challenges in ACT {#S6}
=========================
Adoptive T-cell therapy involves the generation of tumor antigen-specific CTLs *in vitro*, which are then infused back into the patient where they kill tumor cells. Tumor-specific T-cells are generated by selection and expansion of tumor-infiltrating lymphocytes (TIL), or by transduction of sTCRs or CARs ([@B21], [@B22]). ACT using TILs has been a successful treatment option for melanoma; however, this approach can only be used for patients whose T-cells can be isolated and cultured ([@B49], [@B50]). CAR T-cell therapies have yielded exceptional clinical results in B-ALL ([@B51]--[@B53]), but identification of tumor-specific antigens is needed in order to expand CAR T-cell therapies to additional malignancies. Both approaches need improvement because the generation of TIL and CAR T-cells is time-consuming and T-cell numbers are limiting. Furthermore, while the T-cells used for ACT have enhanced tumor antigen recognition, they are still susceptible to immunosuppressive factors in the tumor microenvironment.
NOTCH Ligands in T-Cell-Based Immunotherapies {#S7}
=============================================
Generation of CAR-specific T-cells from induced pluripotent stem cells (iPSC) from cancer patients is one approach currently being utilized to overcome limited numbers of patient T-cells ([@B24]). Using this approach, iPSC are differentiated into T-cells by culturing on stroma expressing the NOTCH ligand DLL1 ([@B24]). Similar approaches have been used to generate CAR T-cells from hematopoietic stem cells ([@B54]). Researchers have also used pluripotency and reprogramming factors to expand human tumor-specific T-cells ([@B55], [@B56]). While this strategy produces unlimited tumor-specific CTLs, the TCR repertoires are often limited. To overcome this obstacle, investigators have begun to test the efficacy of T-stem cell memory (T~SCM~) cells in adoptive T-cell transplants. T~SCM~ cells have the ability to function as memory T-cells by responding rapidly to antigen; however, they are not terminally differentiated and therefore possess an enhanced capacity for self-renewal and proliferation ([@B57]). T~SCM~ cells have been characterized in mice and humans and found to persist years after primary infection or vaccination. The current model to generate T~SCM~ cells is by stimulating naïve T-cells in the presence of Wnt3A or inhibitors of glycogen synthase kinase-3b ([@B57]). Adoptive T-cell transplants with CAR T-cells generated from T~SCM~ cells result in more potent anti-tumor responses than CAR T-cells generated from other T-cell types ([@B57]). Recent work by Kondo et al. exploits NOTCH pathway activation to generate T~SCM~ cells from activated mouse and human T-cells, referred to as iT~SCM~ cells ([@B23]). iT~SCM~ cells re-capitulate the features of T~SCM~ cells, including rapid response to antigen re-stimulation and increased self-renewal capacity.
iT~SCM~ cells also exhibit decreased expression of the T-cell inhibitory receptors programmed cell death-1 (PD-1) and cytotoxic-T-lymphocyte-associated protein 4 (CTLA-4), allowing for enhanced survival and activation in the tumor microenvironment ([@B23]). Unlike traditional T~SCM~ cells generated from naïve T-cells, iT~SCM~ cells are derived from activated T-cells and therefore could be generated from TILs, eliminating the need for transduction with sTCRs or CARs. These alternative methods to generate T-cells for ACT may provide greater anti-tumor immunity by increasing T-cell longevity and yield.
Synthetic NOTCH Receptors Generate Potent CAR Specific T-Cells {#S8}
==============================================================
While current methods have markedly enhanced anti-tumor reactivity, CAR T-cells are still restricted to endogenous T-cell responses and have limited capabilities to overcome the immunosuppressive microenvironment. To overcome this, researchers generated CAR T-cells with synthetic NOTCH receptors (synNOTCH), which allow for specific cytotoxic responses ([@B26], [@B27]). NOTCH receptors are single pass transmembrane proteins composed of an extracellular ligand-binding domain, a transmembrane region, and an intracellular signaling domain. synNOTCH receptors retain the transmembrane domain but have synthetic extracellular ligand domains and intracellular transcriptional domains ([@B26], [@B27]). In recent work by Roybal et al., human T-cells were engineered to express synNOTCH receptors in which the extracellular ligand domain of NOTCH was replaced with CARs targeting the tumor antigens CD19 or HER2 ([@B27]). Following CAR engagement, the synNOTCH receptor undergoes transmembrane cleavage, releasing the synthetic NICD. NICD then translocates to the nucleus to activate gene transcription. Unlike normal NICD, which recognizes and binds CBF1/RBP-Jkappa sites, the synthetic NICD instead carries an intracellular transcription activation domain (Gal4-VP64 or tTA) that in turn drives a distinct reporter expressed in the synNOTCH-expressing cell ([@B26], [@B27]). synNOTCH receptors have been engineered to drive the expression of several cytotoxic factors that enhance T-cell anti-tumor responses, including expression of the death ligand TRAIL, the cytokine IL-12, and the transcription factor T-BET. In addition, synNOTCH receptors can drive the production of antibodies to PD-1 and CTLA-4 to overcome inhibitory ligand expression by cancer cells, or express IL-10 and PD-L1 to reduce inflammation generated by enhanced T-cell cytotoxicity. synNOTCH-engineered T-cells have shown efficacy in conventional humanized xenograft models ([@B27]).
Using these synNOTCH receptors to customize CAR T-cell responses will enhance anti-tumor activity, and armor the T-cells against the immune suppression mediated by the tumor microenvironment.
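The synNOTCH circuit described above is, in essence, a conditional gene-expression gate: recognition of a target antigen triggers receptor cleavage, and the released synthetic NICD drives transcription of a chosen payload. A toy model can make that logic explicit. The antigen and payload names below are illustrative examples drawn from the text, not a simulation of the actual biology.

```python
# Toy model of synNOTCH logic: the payload is expressed only when the
# engineered receptor engages its target antigen on a contacted cell.
from dataclasses import dataclass

@dataclass
class SynNotchCell:
    target_antigen: str  # antigen recognized by the synthetic extracellular domain
    payload: str         # gene product driven by the synthetic NICD (e.g. "IL-12")

    def contact(self, surface_antigens):
        """Simulate cell-cell contact: if the target antigen is present,
        the receptor is cleaved and the payload is transcribed."""
        if self.target_antigen in surface_antigens:
            return self.payload  # synthetic NICD drives payload transcription
        return None              # no engagement, no cleavage, no payload

t_cell = SynNotchCell(target_antigen="CD19", payload="IL-12")
print(t_cell.contact({"CD19", "HER2"}))  # target antigen present: payload expressed
print(t_cell.contact({"HER2"}))          # target antigen absent: nothing expressed
```

The value of the real circuit lies in exactly this conditionality: cytotoxic or immunomodulatory factors are produced only at the tumor site, rather than systemically.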
Conclusion/Future Perspectives {#S9}
==============================
Activation of T-cell effector function in an immunosuppressive microenvironment is a critical component of effective T-cell-mediated anti-tumor immunity. Tumor cells and their microenvironment suppress T-cell responses in part by repressing NOTCH receptors and ligands and consequently T-cell effector function. While in-depth characterization of tumor cells has led to the development of targeted therapies, characterization of tumor-infiltrating T-cells from patients is still lacking. Several studies have begun to establish gene signatures that represent a variety of immune populations and demonstrated that these signatures can be predictive of clinical outcome and response to immune therapy ([@B58], [@B59]). A similar approach could be taken to determine if NOTCH receptors and ligands are suppressed in T-cells isolated from cancer patients. Therapies that activate/maintain NOTCH signaling were shown to improve T-cell-mediated tumor clearance, prolonging the survival of tumor bearing mice. However, the efficacy and safety of this approach in patients remains unclear.
NOTCH ligands may also serve as tools to improve the generation and efficacy of T-cells used for ACT. T~SCM~ cells may overcome the obstacles currently facing ACT, including increasing anti-tumor responses and decreasing immunosuppression. The use of synNOTCH CAR T-cells is particularly intriguing, as the cytotoxic response of these cells can be tailored to provide an enhanced and specific anti-tumor response. Future studies examining the effects of synNOTCH T-cells on anti-tumor immune responses, and on endogenous tumor-infiltrating T-cells, should provide further insight.
While these findings highlight the exciting potential to improve T-cell-based immunotherapies, there are still many questions regarding the clinical relevance and application of these approaches. In addition, the safety and efficacy of these NOTCH strategies need to be evaluated to ensure that sustained NOTCH activation does not result in leukemic transformation or potentiate tumor growth. One major limitation in accomplishing these goals is the lack of a primary derived xenograft mouse model with a humanized immune system. Continued research will provide a better understanding as to how NOTCH signaling contributes to T-cell anti-tumor responses and uncover new approaches to improve T-cell-based immunotherapies.
Author Contributions {#S10}
====================
JR wrote the manuscript with help from MK.
Conflict of Interest Statement {#S11}
==============================
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. The handling Editor declared a shared affiliation, though no other collaboration, with the authors.
**Funding.** This research was supported by grants from the National Institute of Health and the National Cancer Institute (RO1CA96899) to MK. Research was also partially supported by a Hyundai Hope on Wheels Award and an Innovator Award from Alex's Lemonade Stand to MK.
[^1]: Edited by: Barbara A. Osborne, University of Massachusetts Amherst, United States
[^2]: Reviewed by: Lucio Miele, LSU Health Sciences Center New Orleans, United States; Warren Pear, University of Pennsylvania, United States
[^3]: Specialty section: This article was submitted to Cancer Immunity and Immunotherapy, a section of the journal Frontiers in Immunology
|
Saturday, 24 November 2012
Europe Economy: Science Fund Cuts Could Hurt
Science fund cuts could hurt EU recovery, scientists warn
(Reuters) - Cutting science funding in the European Union would threaten economic recovery in the bloc, the heads of scientific organisations said on Friday after such cuts were proposed.
"We believe it would be deeply damaging to future economic growth if we were to cut funding now," Andrew Harrison, director general of Grenoble-based neutron research centre the Institut Laue-Langevin, told Reuters.
EU leaders on Friday abandoned talks to find a deal on the bloc's budget for 2014-2020 but European Council President Herman Van Rompuy, who chaired the summit, proposed cuts in a number of areas, including research and innovation, in an effort to reach a deal. |
module Cloud where
import Rumpus
start :: Start
start = do
    cloudID <- ask
    let numPuffs = 5
    -- spawn a row of pulsing puff spheres (indices 0..numPuffs)
    forM_ [0..numPuffs] $ \i -> do
        hue <- randomRange (0.5, 0.6)
        x   <- (0.5 + i / numPuffs +) <$> randomRange (0, 0.2)
        y   <- randomRange (-0.1, 0.1)
        z   <- randomRange (-0.1, 0.1)
        puffID <- spawnChild $ do
            myShape ==> Sphere
            myColor ==> colorHSL hue 0.5 0.5
            myPose  ==> position (V3 x y z)
            myUpdate ==> do
                -- pulse each puff's size on its own phase
                now <- (i +) <$> getNow
                setSize (realToFrac (sin now / 2 + 1) * 0.2 + 0.2) -- 0.2<->0.4
        -- every quarter second, occasionally drop a small physical sphere
        inEntity puffID $ setRepeatingAction (1/4) $ do
            chance <- randomRange (0, 1 :: Int)
            when (chance == 0) $ do
                cloudPose <- getEntityPose cloudID
                startPos  <- V3 <$> pure x
                                <*> randomRange (-0.1, 0.1)
                                <*> randomRange (-0.1, 0.1)
                hue <- randomRange (0.5, 0.7)
                spawnChild_ $ do
                    myShape    ==> Sphere
                    myPose     ==> cloudPose !*! position startPos
                    mySize     ==> 0.03
                    myBody     ==> Physical
                    myColor    ==> colorHSL hue 0.5 0.8
                    myLifetime ==> 2
|
Q:
FireBase - angularfire setting object key as variable
I am using Firebase (AngularFire) in my AngularJS app to store and process my message system, but I can't seem to figure out how to replicate the example data from the Firebase docs:
// room members are easily accessible (or restricted)
// we also store these by room ID
"members": {
// we'll talk about indices like this below
"one": {
"mchen": true,
"hmadi": true
}
}
Here members.one contains the user names as keys. I am trying to do the same for my data but can't figure out a solution.
The members portion of my firebase data is like so:
members { one: { } }
I have two variables set in the $scope.
var user_name = 'kep'; // name of the person being chatted with
var sender_name = 'pek'; // current user's name
So I want to use the set() function to insert data into members under a combined key, in this case user_name + ':' + sender_name, but I am having trouble inserting the data without creating an extra parent object.
ref.child('members').child(user_name + ':' + sender_name).set({
user_name: true, sender_name: true
});
The problem arises when I try to pass user_name and sender_name into the set() function; below is the result I get:
members { "kep:pek": { user_name: true, sender_name: true }}
whereas I want it to be:
members { "kep:pek": { kep: true, pek: true }}
If I put user_name and sender_name into an object and then call set() with that object, it creates the following structure, which is not what I am looking for:
members { "kep:pek": { newObject: { kep: true, pek: true }}}
A:
Firebase team member here.
The Firebase Database is just a JSON document.
So let's say you want to structure your data this way:
{
  "members" : {
    "kep:pek" : {
      "kep" : true,
      "pek" : true
    }
  }
}
A custom key can be created by using the .child() method, or by assigning a computed key on a JavaScript object with bracket notation.
JSBin Demo
var rootRef = new Firebase('<my-firebase-app>');
var membersRef = rootRef.child('members');
var user_name = 'kep';
var sender_name = 'pek';
// child object to store custom keys
var objectToSave = {};
// set keys in [] syntax
objectToSave[user_name] = true;
objectToSave[sender_name] = true;
// use .child() with a formatted string to save the object
membersRef.child(user_name + ':' + sender_name).set(objectToSave);
|
Q:
periodically update server from J2ME application
I'm writing a J2ME application that periodically updates a server. How do I implement this functionality in J2ME? How do I run the application on the phone at startup, and how do I keep it running? I am planning to deploy the application on the Symbian platform.
A:
There are several ways to achieve this. I think the best one is to use a separate thread to handle your server communication/updates:
public class UpdateRunner extends Thread {
    ...
    public UpdateRunner() {
        // create and open sockets here
    }
    public void run() {
        while (true) {
            try {
                // send your messages/updates to the server
            } catch (Exception e) {
                // handle errors like disconnections
            }
        }
    }
}
You can also use a timer to run some code periodically:
private class ServerTask extends TimerTask {
    public void run() {
        // send message here
    }
}
then use it:
Timer serverTimer = new Timer();
serverTimer.scheduleAtFixedRate(new ServerTask(), 0, 500);
As for running it on startup, I don't think it's possible, because the JVM's security model doesn't let software use the network at will.
|
Origin and Etymology of gall
Middle English galle, going back to Old English gealla, galla, going back to Germanic *gallōn-, galla- (whence Old High German & Old Saxon galla, Old Norse gall), going back to Indo-European *ǵholh3-n- (whence, without the suffix, Greek cholḗ "bile, bitter hatred," chólos "bitter hatred, wrath," Avestan zāra- "bile"), a derivative of *ǵhelh3- "green, yellow" — more at 1yellow
Note: The sense "boldness," first attested in the U.S. in the second half of the 19th century, is perhaps of independent origin.
gall
Definition of gall
Origin and Etymology of gall
Middle English galle "sore on the skin, stain, evil, barren or wet spot in a field (in names)," probably in part going back to Anglian Old English *galla (West Saxon gealla) "sore on the skin of a horse," in part borrowed from Middle Low German galle "swelling in a joint, blastodisc, barren place," both nouns going back to Germanic *gallan- (whence also Old Norse galli "fault, flaw"), perhaps going back to an Indo-European base *ǵholH-, whence, from the derivative *ǵholH-r-, Norwegian galder "windgall," Old Irish galar "disease, pain," Welsh galar "mourning, grief"
Examples of gall in a Sentence
It galls me that such a small group of people can have so much power.
move that rope so the sharp edge of the hull doesn't gall it
Origin and Etymology of gall
Middle English gallen, in part derivative of galle2gall, in part borrowed from Middle French galer "to scratch, rub, mount an attack on," derivative of gale "gallnut, callus," borrowed from Latin galla4gall
Origin and Etymology of gall
Note: Latin galla cannot be akin to 2gall if the latter does in fact descend from Indo-European *ǵholH-, and in any case the basic meaning of galla appears to be "excrescence" rather than "sore, blight." |
---
author:
- |
Stephen William Semmes\
Rice University\
Houston, Texas
title: Some remarks about Cauchy integrals
---
A basic theme in the wonderful books and surveys of Stein, Weiss, and Zygmund is that Hilbert transforms, Poisson kernels, heat kernels, and related objects are quite interesting and fundamental. I certainly like this point of view. There is a variety of ways in which things can be interesting or fundamental, of course.
In the last several years there have been striking developments connected to Cauchy integrals, and in this regard I would like to mention the names of Pertti Mattila, Mark Melnikov, and Joan Verdera in particular. I think many of us are familiar with the remarkable new ideas involving Menger curvature, and indeed a lot of work using this has been done by a lot of people, and continues to be done. Let us also recall some matters related to *symmetric measures*.
Let $\mu$ be a nonnegative Borel measure on the complex plane ${\bf C}$, which is finite on bounded sets. Following Mattila, $\mu$ is said to be symmetric if for each point $a$ in the support of $\mu$ and each positive real number $r$ we have that the integral of $z - a$ over the open ball with center $a$ and radius $r$ with respect to the measure $d\mu(z)$ is equal to $0$. One might think of this as a kind of flatness condition related to the existence of principal values of Cauchy integrals.
If $\mu$ is equal to a constant times $1$-dimensional Lebesgue measure on a line, then $\mu$ is a symmetric measure. For that matter, $2$-dimensional Lebesgue measure on ${\bf C}$ is symmetric, and there are other possibilities. Mattila discusses this, and shows that a symmetric measure which satisfies some additional conditions is equal to a constant times $1$-dimensional Lebesgue measure on a line.
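As an illustrative sanity check (ours, not from the text), one can discretize the simplest example, one-dimensional Lebesgue measure on the real axis, and verify numerically that the integral of $z - a$ over $B(a,r)$ vanishes for support points $a$ and various radii $r$, as the symmetry condition requires.

```python
# Numerical sanity check (illustrative, not from the text): for mu equal to
# 1-dimensional Lebesgue measure on the real axis, the symmetry condition
#     \int_{B(a, r)} (z - a) d mu(z) = 0
# holds for every point a on the line and every radius r > 0.

def symmetry_integral(a, r, n=10000):
    """Midpoint-rule approximation of the integral of (z - a) over the
    intersection of the open ball B(a, r) with the real axis, i.e. over
    the interval (a - r, a + r), against Lebesgue measure."""
    h = 2.0 * r / n
    total = 0.0
    for k in range(n):
        z = (a - r) + (k + 0.5) * h  # midpoint of the k-th subinterval
        total += (z - a) * h
    return total

for a in (0.0, 1.7, -3.2):
    for r in (0.5, 1.0, 4.0):
        assert abs(symmetry_integral(a, r)) < 1e-8
print("symmetry condition verified for Lebesgue measure on a line")
```

The cancellation is exact by symmetry of the interval about $a$; the tolerance only absorbs floating-point rounding.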
Mattila uses this to show that almost-everywhere existence of principal values of the Cauchy integral of a measure implies rectifiability properties of the measure. Mattila and Preiss have considered similar questions in general dimensions, where the geometry is more complicated. Mattila’s student Petri Huovinen has explored analogous matters in the plane with trickier kernels than the Cauchy kernel.
Another kind of $m$-dimensional symmetry condition for a nonnegative Borel measure $\mu$ on ${\bf R}^n$, which is finite on bounded sets, asks that for each point $a$ in the support of $\mu$ and for each radius $r > 0$ the $\mu$ measure of the open ball with center $a$ and radius $r$ is equal to a constant $c$, depending only on $\mu$, times $r^m$. This condition holds for constant multiples of $m$-dimensional Lebesgue measure on $m$-dimensional planes in ${\bf
R}^n$. The converse is known in some cases, and non-flat examples are also known in some cases. These types of measures have been studied extensively by Preiss, partly in collaboration with Kirchheim and with Kowalski, and it seems fair to say that many mysteries remain.
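In symbols (our formulation of the condition just described), the requirement is

```latex
\mu(B(a,r)) = c \, r^m
\qquad \text{for all } a \in \operatorname{supp} \mu \text{ and all } r > 0,
```

with a single constant $c > 0$ depending only on $\mu$; measures satisfying this are often called $m$-uniform.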
In general, it seems to me that there are a lot of very interesting questions involving geometry of sets, measures, currents, and varifolds and quantities such as Cauchy integrals, measurements of symmetry like those considered by Huovinen, Mattila, and Preiss, and density ratios. This may entail rather exact conditions and special geometric structures, or approximate versions and some kind of regularity. In the latter case, instead of asking that some quantity vanish exactly, one might look at situations where it satisfies a bound like $O(r^\alpha)$ for some positive real number $\alpha$, and where $r > 0$ is a radius or similar parameter.
I like very much a paper by Verdera on $T(1)$ theorems for Cauchy integrals, which uses Menger curvature ideas. It seems to me that it could be a starting point for a new kind of operator theory for certain kinds of operators. There is a lot of room for development for new kinds of structure of linear operators.
Another reasonably-specific area with a lot of possibilities is to try to combine Menger curvature ideas with the rotation method. At first they may not seem to fit together too easily. However, I would not be too surprised if some interesting things could come up in this manner.
Integrals of curvature on curves and surfaces
=============================================
In this section we discuss some topics that came up in Chapters 2 and 3 of Part III of [@DS2]. These involve relations between derivatives of Cauchy integrals on curves and surfaces and curvatures of the curves and surfaces. In ${\bf R}^n$ for $n > 2$, “Cauchy integrals” can be based on generalizations of complex analysis using quaternions or Clifford algebras (as in [@BDS]). Part of the point here is to bring out the basic features and types of computations in a simple way, if not finer aspects which can also be considered.
Let us consider first curves in the plane ${\bf R}^2$. We shall identify ${\bf R}^2$ with the set ${\bf C}$ of complex numbers.
Let $\Gamma$ be some kind of curve in ${\bf C}$, or perhaps union of pieces of curves. For each $z \in {\bf C} \backslash \Gamma$, we have the contour integral $$\label{int_{Gamma} frac{1}{(z-zeta)^2} d zeta}
\int_{\Gamma} \frac{1}{(z-\zeta)^2} \, d\zeta$$ as from complex analysis. More precisely, “$d\zeta$” is the element of integration such that if $\gamma$ is an arc in ${\bf C}$ from a point $a$ to another point $b$, then $$\int_{\gamma} d\zeta = b - a.$$ This works no matter how $\gamma$ goes from $a$ to $b$. This is different from integrating arclength, for which the element of integration is often written $|d\zeta|$. For this we have that $$\int_{\gamma} |d\zeta| = {\rm length}(\gamma),$$ and this very much depends on the way that $\gamma$ goes from $a$ to $b$.
If $\Gamma$ is a union of closed curves, then $$\label{int_{Gamma} frac{1}{(z-zeta)^2} d zeta = 0}
\int_{\Gamma} \frac{1}{(z-\zeta)^2} \, d\zeta = 0.$$ This is a standard formula from complex analysis (an instance of “Cauchy formulae”), and one can look at it in the following manner. As a function of $\zeta$, $1/(z-\zeta)^2$ is the complex derivative in $\zeta$ of $1/(z-\zeta)$, $$\frac{d}{d\zeta} \biggl(\frac{1}{z-\zeta}\biggr)
= \frac{1}{(z-\zeta)^2}.$$ If $\gamma$ is a curve from points $a$ to $b$ again, which does not pass through $z$, then $$\label{int_{gamma} frac{1}{(z-zeta)^2} d zeta = frac{1}{z-b} - frac{1}{z-a}}
\int_{\gamma} \frac{1}{(z-\zeta)^2} \, d\zeta
= \frac{1}{z-b} - \frac{1}{z-a}.$$ In particular, one gets $0$ for closed curves (since that corresponds to having $a = b$).
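The path-independence expressed by this formula is easy to confirm numerically. The following check (ours, with an arbitrarily chosen wiggly path and base point) discretizes a curve $\gamma$ from $a$ to $b$ and compares the integral with $1/(z-b) - 1/(z-a)$.

```python
# Numerical check (illustrative) that, for z off the curve,
#     \int_gamma 1/(z - zeta)^2 d zeta = 1/(z - b) - 1/(z - a),
# independently of how gamma goes from a to b.

import math

def contour_integral(f, path, n=50000):
    """Approximate the contour integral of f along path : [0, 1] -> C,
    using midpoint values of f times the chord of each subinterval."""
    total = 0j
    h = 1.0 / n
    for k in range(n):
        mid = path((k + 0.5) * h)
        dzeta = path((k + 1) * h) - path(k * h)  # chords sum to b - a
        total += f(mid) * dzeta
    return total

a, b = 1.0 + 0.0j, 0.0 + 1.0j
z = -2.0 - 2.0j  # base point kept away from the path

def wiggly_path(t):
    # an arbitrary path from a to b that stays away from z
    return (1 - t) * a + t * b + 0.3 * math.sin(math.pi * t) * (1 + 1j)

lhs = contour_integral(lambda zeta: 1.0 / (z - zeta) ** 2, wiggly_path)
rhs = 1.0 / (z - b) - 1.0 / (z - a)
assert abs(lhs - rhs) < 1e-6
print("contour identity verified:", lhs)
```

Replacing `wiggly_path` by any other path from $a$ to $b$ avoiding $z$ gives the same value, which is the point of the formula.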
As a variation of these matters, if $\Gamma$ is a line, then $$\label{int_{Gamma} frac{1}{(z-zeta)^2} d zeta = 0, 2}
\int_{\Gamma} \frac{1}{(z-\zeta)^2} \, d\zeta = 0$$ again. This can be derived from (\[int\_[gamma]{} frac[1]{}[(z-zeta)\^2]{} d zeta = frac[1]{}[z-b]{} - frac[1]{}[z-a]{}\]) (and can be looked at in terms of ordinary calculus, without complex analysis). There is enough decay in the integral so that there is no problem with using the whole line.
What would happen with these formulae if we replaced the complex element of integration $d\zeta$ with the arclength element of integration $|d\zeta|$? In general we would not have (\[int\_[Gamma]{} frac[1]{}[(z-zeta)\^2]{} d zeta = 0\]) for unions of closed curves, or (\[int\_[gamma]{} frac[1]{}[(z-zeta)\^2]{} d zeta = frac[1]{}[z-b]{} - frac[1]{}[z-a]{}\]) for a curve $\gamma$ from $a$ to $b$. However, we would still have (\[int\_[Gamma]{} frac[1]{}[(z-zeta)\^2]{} d zeta = 0, 2\]) for a line, because in this case $d\zeta$ would be a constant times $|d\zeta|$.
Let us be a bit more general and consider an element of integration $d\alpha(\zeta)$ which is positive, like the arclength element $|d\zeta|$, but which is allowed to have variable density. Let us look at an integral of the form $$\label{int_{Gamma} frac{1}{(z-zeta)^2} dalpha(zeta)}
\int_{\Gamma} \frac{1}{(z-\zeta)^2} \, d\alpha(\zeta).$$ This integral can be viewed as a kind of measurement of *curvature* of $\Gamma$ (which also takes into account the variability of the density in $d\alpha(\zeta)$).
If we put absolute values inside the integral, then the result would be roughly $\dist(z,\Gamma)^{-1}$, $$\label{int_{Gamma} frac{1}{|z-zeta|^2} dalpha(zeta) approx dist(z,Gamma)^{-1}}
\int_{\Gamma} \frac{1}{|z-\zeta|^2} \, d\alpha(\zeta)
\approx \dist(z,\Gamma)^{-1}$$ under suitable conditions on $\Gamma$. For instance, if $\Gamma$ is a line, then the left side of (\[int\_[Gamma]{} frac[1]{}[|z-zeta|\^2]{} dalpha(zeta) approx dist(z,Gamma)\^[-1]{}\]) is equal to a positive constant times the right side of (\[int\_[Gamma]{} frac[1]{}[|z-zeta|\^2]{} dalpha(zeta) approx dist(z,Gamma)\^[-1]{}\]).
The curvature of a curve is defined in terms of the derivative of the unit normal vector along the curve, or, what is essentially the same here, the derivative of the unit tangent vector. The unit tangent vector gives exactly what is missing from $|d\zeta|$ to get $d\zeta$, if we write the unit tangent vector as a complex number. (One should also follow the tangent in the orientation of the curve.)
If the curve is a line or a line segment, then the tangent is constant, which one can pull in and out of the integral. In general one can view (\[int\_[Gamma]{} frac[1]{}[(z-zeta)\^2]{} dalpha(zeta)\]) as a measurement of the variability of the unit tangent vectors, and of the variability of the positive density involved in $d\alpha(\zeta)$.
Let us look at some simple examples. Suppose first that $\Gamma$ is a straight line segment from a point $a \in {\bf C}$ to another point $b$, $a \ne b$. Then $|d\zeta|$ is a constant multiple of $d\zeta$, and $$\int_{\Gamma} \frac{1}{(z-\zeta)^2} \, |d\zeta|
= ({\rm constant}) \cdot \Bigl(\frac{1}{z-b} - \frac{1}{z-a}\Bigr).$$ In this case the ordinary curvature is $0$, except that one can say that there are contributions at the endpoints, like Dirac delta functions, which are reflected in the right side. If $z$ gets close to $\Gamma$, but does not get close to the endpoints $a$, $b$ of $\Gamma$, then the right side stays bounded and behaves nicely. This is “small” in comparison with $\dist(z,\Gamma)^{-1}$. Near $a$ or $b$, we get something which is indeed like $|z-a|^{-1}$ or $|z-b|^{-1}$.
As another example, suppose that we have a third point $p \in
{\bf C}$, where $p$ does not lie in the line segment between $a$ and $b$ (and is not equal to $a$ or $b$). Consider the curve $\Gamma$ which goes from $a$ to $p$ along the line segment between them, and then goes from $p$ to $b$ along the line segment between them. Again $|d\zeta|$ is a constant multiple of $d\zeta$. Now we have $$\int_{\Gamma} \frac{1}{(z-\zeta)^2} \, |d\zeta|
= c_1 \Bigl(\frac{1}{z-p} - \frac{1}{z-a}\Bigr)
+ c_2 \Bigl(\frac{1}{z-b} - \frac{1}{z-p}\Bigr),$$ where $c_1$ and $c_2$ are constants which are not equal to each other. This is like the previous case, except that the right side behaves like a constant times $|z-p|^{-1}$ near $p$ (and remains bounded away from $a$, $b$, $p$). This reflects the presence of another Dirac delta function for the curvature, at $p$. If the curve flattens out, so that the angle between the two segments is close to $\pi$, then the coefficient $c_1 - c_2$ of the $(z-p)^{-1}$ term becomes small.
Now suppose that $\Gamma$ is the unit circle in ${\bf C}$, centered around the origin. In this case $|d\zeta|$ is the same as $d\zeta/\zeta$, except for a constant factor, and we consider $$\int_{\Gamma} \frac{1}{(z-\zeta)^2} \, \frac{d\zeta}{\zeta}.$$ If $z = 0$, then one can check that this integral is $0$. For $z \ne 0$, let us rewrite the integral as $$\int_{\Gamma} \frac{1}{(z-\zeta)^2} \,
\Bigl(\frac{1}{\zeta} - \frac{1}{z}\Bigr) \, d\zeta
+ \frac{1}{z} \int_{\Gamma} \frac{1}{(z-\zeta)^2} \, d\zeta.$$ The second integral is $0$ for all $z \in {\bf C} \backslash \Gamma$, as in the earlier discussion. The first integral is equal to $$\int_{\Gamma} \frac{1}{(z-\zeta)^2} \, \frac{(z-\zeta)}{z \zeta} \, d\zeta
= \frac{1}{z} \int_{\Gamma} \frac{1}{(z-\zeta)} \, \frac{1}{\zeta} \, d\zeta.$$ On the other hand, $$\frac{1}{(z-\zeta)} \, \frac{1}{\zeta}
= \frac{1}{z} \Bigl(\frac{1}{z-\zeta} + \frac{1}{\zeta}\Bigr),$$ and so we obtain $$\frac{1}{z^2} \int_{\Gamma} \Bigl(\frac{1}{z-\zeta} + \frac{1}{\zeta}\Bigr)
\, d\zeta.$$ For $|z| > 1$ we have that $$\int_{\Gamma} \frac{1}{z-\zeta} \, d\zeta = 0,$$ and thus we get a constant times $1/z^2$ above. If $|z| < 1$, then $$\int_{\Gamma} \frac{1}{z-\zeta} \, d\zeta
= - \int_{\Gamma} \frac{1}{\zeta} \, d\zeta,$$ and the earlier expression is equal to $0$.
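Both cases can be confirmed numerically (an illustrative check of ours, parametrizing the circle by $\zeta = e^{i\theta}$); following the computation above, the constant for $|z| > 1$ works out to $2\pi i$, so the integral equals $2\pi i / z^2$ outside the circle and $0$ inside.

```python
# Numerical check (illustrative) of the computation above:
#     \int_Gamma 1/(z - zeta)^2 * d zeta / zeta
# over the unit circle equals (2 pi i)/z^2 when |z| > 1, and 0 when |z| < 1.

import cmath, math

def circle_integral(z, n=4000):
    """Midpoint rule in the angle theta, with zeta = e^{i theta} and
    d zeta = i e^{i theta} d theta."""
    total = 0j
    h = 2 * math.pi / n
    for k in range(n):
        zeta = cmath.exp(1j * (k + 0.5) * h)
        total += (1j * zeta * h) / ((z - zeta) ** 2 * zeta)
    return total

z_out = 2.0 + 1.0j  # |z| > 1
z_in = 0.3 - 0.2j   # |z| < 1
assert abs(circle_integral(z_out) - 2j * math.pi / z_out ** 2) < 1e-8
assert abs(circle_integral(z_in)) < 1e-8
assert abs(circle_integral(0j)) < 1e-8  # the z = 0 case noted above
print("circle computation verified")
```

The midpoint rule on a periodic smooth integrand converges geometrically in $n$, so a few thousand sample points are far more than enough away from the circle itself.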
For another example, fix a point $q \in {\bf C}$, and suppose that $\Gamma$ consists of a finite number of rays emanating from $q$. On each ray, we assume that we have an element of integration $d\alpha(\zeta)$ which is a positive constant times the arclength element.
If $R$ is one of these rays, then $$\int_R \frac{1}{(z-\zeta)^2} \, d\alpha(\zeta)
= ({\rm constant}) \, \frac{1}{z-q}.$$ This constant takes into account both the direction of the ray and the density factor in $d\alpha(\zeta)$ on $R$.
If we now sum over the rays, we still get $$\int_{\Gamma} \frac{1}{(z-\zeta)^2} \, d\alpha(\zeta)
= ({\rm constant}) \, \frac{1}{z-q};$$ however, this constant can be $0$. This happens if $\Gamma$ is a union of lines through $q$, with constant density on each line, and it also happens more generally, when the directions of the rays satisfy a suitable balancing condition, depending also on the density factors for the individual rays. This can happen with $3$ rays, for instance.
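To see where these constants come from (a sketch of ours, using an explicit parametrization not in the text), write the ray $R$ as $\zeta = q + t u$, $t \ge 0$, with $|u| = 1$, and $d\alpha(\zeta) = \rho \, dt$ for a constant density $\rho > 0$. Then

```latex
\int_R \frac{1}{(z-\zeta)^2} \, d\alpha(\zeta)
  = \rho \int_0^\infty \frac{dt}{(z - q - tu)^2}
  = \rho \, \Bigl[ \frac{1}{u \, (z - q - tu)} \Bigr]_{t=0}^{t=\infty}
  = - \frac{\rho}{u} \cdot \frac{1}{z - q}.
```

Summing over the rays, the total constant is $-\sum_R \rho_R / u_R = -\sum_R \rho_R \, \overline{u_R}$, so the balancing condition is $\sum_R \rho_R \, \overline{u_R} = 0$; for example, three rays with equal densities in the directions $1$, $e^{2\pi i/3}$, $e^{4\pi i/3}$ satisfy it.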
When the constant is $0$, $\Gamma$ (with these choices of density factors) has “curvature $0$”, even if this is somewhat complicated, because of the singularity at $q$. This is a special case of the situation treated in [@AA].
In general, “weak” or integrated curvature is defined using suitable test functions on ${\bf R}^2$ with values in ${\bf R}^2$ (or on ${\bf R}^n$ with values in ${\bf R}^n$), as in [@AA]. For $n =
2$ one can reformulate this in terms of complex-valued functions on ${\bf C}$, and complex-analyticity gives rise to simpler formulas. The link between this kind of story with Cauchy integrals and the weak notion of curvature for varifolds as in [@AA] was suggested by Bob Hardt.
For more information on these topics, see Chapter 2 of Part III of [@DS2]. In [@DS2] there are further issues which are not needed in various settings.
Now let us look at similar matters in ${\bf R}^n$, $n > 2$, and $(n-1)$-dimensional surfaces there. Ordinary complex analysis is no longer available, but there are substitutes, in terms of quaternions (in low dimensions) and Clifford algebras. For the sake of definiteness let us focus on the latter.
Let $n$ be a positive integer. The *Clifford algebra* $\mathcal{C}(n)$ has $n$ generators $e_1, e_2, \ldots, e_n$ which satisfy the following relations: $$\begin{aligned}
e_j \, e_k & = & - e_k \, e_j \quad\hbox{when } j \ne k; \\
e_j^2 & = & -1 \qquad\hbox{ for all } j. \nonumber\end{aligned}$$ Here $1$ denotes the identity element in the algebra. These are the only relations. More precisely, one can think of $\mathcal{C}(n)$ first as a real vector space of dimension $2^n$, in which one has a basis consisting of all products of $e_j$’s of the form $$e_{j_1} \, e_{j_2} \cdots e_{j_\ell},$$ where $j_1 < j_2 < \cdots < j_\ell$, and $\ell$ is allowed to range from $0$ to $n$, inclusively. When $\ell = 0$ this is interpreted as giving the identity element $1$. If $\beta, \gamma \in
\mathcal{C}(n)$, then $\beta$ and $\gamma$ are given by linear combinations of these basis elements, and it is easy to define the product $\beta \, \gamma$ using the relations above and standard rules (associativity and distributivity).
If $n = 1$, then the result is isomorphic to the complex numbers in a natural way, and if $n = 2$, the result is isomorphic to the quaternions. Note that $\mathcal{C}(n)$ contains ${\bf R}$ in a natural way, as multiples of the identity element.
A basic feature of the Clifford algebra $\mathcal{C}(n)$ is that if $\beta \in \mathcal{C}(n)$ is in the linear span of $e_1, e_2,
\ldots, e_n$ (without taking products of the $e_j$’s), then $\beta$ can be inverted in the algebra if and only if $\beta \ne 0$. More precisely, if $$\beta = \sum_{j=1}^n \beta_j \, e_j,$$ where each $\beta_j$ is a real number, then $$\beta^2 = - \sum_{j=1}^n |\beta_j|^2.$$ If $\beta \ne 0$, then the right side is a nonzero real number, and $-(\sum_{j=1}^n |\beta_j|^2)^{-1} \beta$ is the multiplicative inverse of $\beta$.
More generally, if $\beta$ is in the linear span of $1$ and $e_1, e_2, \ldots, e_n$, so that $$\beta = \beta_0 + \sum_{j=1}^n \beta_j \, e_j,$$ where $\beta_0, \beta_1, \ldots, \beta_n$ are real numbers, then we set $$\beta^* = \beta_0 - \sum_{j=1}^n \beta_j \, e_j.$$ This is analogous to complex conjugation of complex numbers, and we have that $$\beta \, \beta^* = \beta^* \, \beta = \sum_{j=0}^n |\beta_j|^2.$$ If $\beta \ne 0$, then $(\sum_{j=0}^n |\beta_j|^2)^{-1} \beta^*$ is the multiplicative inverse of $\beta$, just as in the case of complex numbers.
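These relations are easy to experiment with on a computer. The following minimal sketch (ours, not from the text) represents elements of $\mathcal{C}(n)$ as dictionaries mapping basis blades, i.e. sorted tuples of generator indices, to real coefficients, and checks the defining relations together with the identity $\beta \, \beta^* = \sum_{j=0}^n |\beta_j|^2$.

```python
# Minimal sketch (illustrative) of the Clifford algebra C(n) described above.
# Elements are dicts mapping basis blades (sorted tuples of generator
# indices) to real coefficients.  Multiplication uses the relations
# e_j e_k = -e_k e_j for j != k, and e_j^2 = -1.

def blade_mul(a, b):
    """Multiply two basis blades; return (sign, blade)."""
    idx = list(a) + list(b)
    sign = 1
    # sort by adjacent transpositions, flipping sign per swap (e_j e_k = -e_k e_j)
    i = 0
    while i < len(idx) - 1:
        if idx[i] > idx[i + 1]:
            idx[i], idx[i + 1] = idx[i + 1], idx[i]
            sign = -sign
            i = max(i - 1, 0)
        else:
            i += 1
    # cancel adjacent equal generators using e_j^2 = -1
    out = []
    for j in idx:
        if out and out[-1] == j:
            out.pop()
            sign = -sign
        else:
            out.append(j)
    return sign, tuple(out)

def mul(x, y):
    """Product of two Clifford elements (dicts blade -> coefficient)."""
    z = {}
    for a, ca in x.items():
        for b, cb in y.items():
            s, blade = blade_mul(a, b)
            z[blade] = z.get(blade, 0.0) + s * ca * cb
    return {k: v for k, v in z.items() if v != 0.0}

def e(j):
    return {(j,): 1.0}

# defining relations
assert mul(e(1), e(2)) == {(1, 2): 1.0}
assert mul(e(2), e(1)) == {(1, 2): -1.0}
assert mul(e(1), e(1)) == {(): -1.0}

# beta in the span of 1, e_1, ..., e_n: beta * beta^* is the sum of squares
beta = {(): 2.0, (1,): 3.0, (2,): -1.0, (3,): 0.5}
beta_star = {k: (v if k == () else -v) for k, v in beta.items()}
prod = mul(beta, beta_star)
assert abs(prod[()] - (4.0 + 9.0 + 1.0 + 0.25)) < 1e-12
assert all(k == () for k in prod)  # all non-scalar terms cancel
print("Clifford relations verified")
```

The cancellation of the non-scalar terms in the last check is exactly the statement in the text that $\beta \, \beta^*$ is a nonnegative real number.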
When $n > 2$, nonzero elements of $\mathcal{C}(n)$ may not be invertible. For real and complex numbers and quaternions it is true that nonzero elements are invertible. The preceding observations are substitutes for this which are often sufficient.
Now let us turn to *Clifford analysis*, which is an analogue of complex analysis in higher dimensions using Clifford algebras. (See [@BDS] for more information.)
Suppose that $f$ is a function on ${\bf R}^n$, or some subdomain of ${\bf R}^n$, which takes values in $\mathcal{C}(n)$. We assume that $f$ is smooth enough for the formulas that follow (with the amount of smoothness perhaps depending on the circumstances). Define a differential operator $\mathcal{D}$ by $$\mathcal{D} f = \sum_{j=1}^n e_j \, \frac{\partial}{\partial x_j} f.$$
Actually, there are some natural variants of this to also consider. This is the “left” version of the operator; there is also a “right” version, in which the $e_j$’s are moved to the right side of the derivatives of $f$. This makes a difference, because the Clifford algebra is not commutative, but the “right” version enjoys the same kind of properties as the “left” version. (Sometimes one uses the two at the same time, as in certain integral formulas, in which the two operators are acting on separate functions which are then part of the same expression.)
As another alternative, one can use the Clifford algebra $\mathcal{C}(n-1)$ for Clifford analysis on ${\bf R}^n$, with one direction in ${\bf R}^n$ associated to the multiplicative identity element $1$, and the remaining $n-1$ directions associated to $e_1,
e_2, \ldots, e_{n-1}$. There is an operator analogous to $\mathcal{D}$, and properties similar to the ones that we are about to describe (with adjustments analogous to the conjugation operation $\beta \mapsto \beta^*$).
For the sake of definiteness, let us stick to the version that we have. A function $f$ as above is said to be *Clifford analytic* if $$\mathcal{D} f = 0$$ (on the domain of $f$).
Clifford analytic functions have a lot of features analogous to those of complex analytic functions, including integral formulas. There is a natural version of a *Cauchy kernel*, which is given by $$\mathcal{E}(x-y) = \frac{\sum_{j=1}^n (x_j - y_j) \, e_j}{|x-y|^n}.$$ This function is Clifford analytic in $x$ and $y$ away from $x = y$, and it has a “fundamental singularity” at $x = y$, just as $1/(z-w)$ has in the complex case.
One can calculate these properties directly, and one can also look at them in the following way. A basic identity involving $\mathcal{D}$ is $$\mathcal{D}^2 = - \Delta,$$ where $\Delta$ denotes the Laplacian, $\Delta = \sum_{j=1}^n \partial^2 /
\partial x_j^2$. The kernel $\mathcal{E}(x)$ is a constant multiple of $$\begin{aligned}
&& \mathcal{D}(|x|^{n-2}) \quad\hbox{when } n > 2, \\
&& \mathcal{D}(\log |x|) \quad\hbox{when } n = 2. \nonumber\end{aligned}$$ For instance, the Clifford analyticity of $\mathcal{E}(x)$ for $x \ne 0$ follows from the harmonicity of $|x|^{n-2}$, $\log |x|$ for $x \ne 0$ (when $n > 2$, $n = 2$, respectively).
Analogous to (\[int\_[Gamma]{} frac[1]{}[(z-zeta)\^2]{} d zeta\]), let us consider integrals of the form $$\label{int_{Gamma} frac{partial}{partial x_m} mathcal{E}(x-y) N(y) dy}
\int_{\Gamma} \frac{\partial}{\partial x_m} \mathcal{E}(x-y) \,
N(y) \, dy, \quad x \in {\bf R}^n \backslash \Gamma,$$ where $\Gamma$ is some kind of $(n-1)$-dimensional surface in ${\bf R}^n$, or union of pieces of surfaces, $$N(y) = \sum_{j=1}^n N_j(y) \, e_j$$ is the unit normal to $\Gamma$ (using some choice of orientation for $\Gamma$), turned into an element of $\mathcal{C}(n)$ using the $e_j$’s in this way, and $dy$ denotes the usual element of surface integration on $\Gamma$. Thus $N(y) \, dy$ is a Clifford-algebra-valued element of integration on $\Gamma$ which is analogous to $d\zeta$ for complex contour integrals, as in (\[int\_[Gamma]{} frac[1]{}[(z-zeta)\^2]{} d zeta\]). A version of the Cauchy integral formula implies that $$\int_{\Gamma} \mathcal{E}(x-y) \, N(y) \, dy$$ is locally constant on ${\bf R}^n \backslash \Gamma$ when $\Gamma$ is a “closed surface” in ${\bf R}^n$, i.e., the boundary of some bounded domain (which is reasonably nice). In fact, this integral is a nonzero constant inside the domain, and it is zero outside the domain. At any rate, the differentiated integral (\[int\_[Gamma]{} frac[partial]{}[partial x\_m]{} mathcal[E]{}(x-y) N(y) dy\]) is then $0$ for all $x \in {\bf R}^n \backslash \Gamma$, in analogy with (\[int\_[Gamma]{} frac[1]{}[(z-zeta)\^2]{} d zeta = 0\]).
Now suppose that we have a positive element of integration $d\alpha(y)$ on $\Gamma$, which is the usual element of surface integration $dy$ together with a positive density which is allowed to be variable. Consider integrals of the form $$\label{int_{Gamma} frac{partial}{partial x_m} mathcal{E}(x-y) d alpha(y)}
\int_{\Gamma} \frac{\partial}{\partial x_m} \mathcal{E}(x-y)
\, d\alpha(y), \quad x \in {\bf R}^n \backslash \Gamma.$$ This again can be viewed in terms of integrations of curvatures of $\Gamma$ (also incorporating the variability of the density in $d\alpha(y)$). In a “flat” situation, as when $\Gamma$ is an $(n-1)$-dimensional plane, or a piece of one, $N(y)$ is constant, and if $d\alpha(y)$ is replaced with a constant times $dy$, then we can reduce to (\[int\_[Gamma]{} frac[partial]{}[partial x\_m]{} mathcal[E]{}(x-y) N(y) dy\]), where special integral formulas such as Cauchy formulas can be used.
Topics related to this are discussed in Chapter 3 of Part III of [@DS2], although, as before, further issues are involved there which are not needed in various settings. See [@BDS] for more on Clifford analysis, including integral formulas. Related matters of curvature are investigated in [@H].
Cauchy integrals and totally real surfaces in ${\bf C}^m$
=========================================================
Let us begin by reviewing some geometrically-oriented linear algebra, about which Reese Harvey once tutored me. Fix a positive integer $m$. The standard Hermitian inner product on ${\bf C}^m$ is defined by $$\langle v, w \rangle = \sum_{j=1}^m v_j \, \overline{w_j},$$ where $v$, $w$ are elements of ${\bf C}^m$ and $v_j$, $w_j$ denote their $j$th components, $1 \le j \le m$. This expression is complex-linear in $v$, conjugate-complex-linear in $w$, and satisfies $$\langle w, v \rangle = \overline{\langle v, w \rangle}.$$ Of course $\langle v, v \rangle$ is the same as $|v|^2$, the square of the standard Euclidean length of $v$.
Define $(v, w)$ to be the real part of $\langle v, w \rangle$. This is a real inner product on ${\bf C}^m$, which is real linear in both $v$ and $w$, symmetric in $v$ and $w$, and such that $(v, v)$ is also equal to $|v|^2$. This is the same as the standard real inner product on ${\bf C}^m \approx {\bf R}^{2m}$.
Now define $[v, w]$ to be the imaginary part of $\langle v, w \rangle$. This is a real linear function in each of $v$ and $w$, and it is antisymmetric, in the sense that $$[w, v] = - [v, w].$$ Also, $[v, w]$ is nondegenerate, which means that for each nonzero $v$ in ${\bf C}^m$ there is a $w$ in ${\bf C}^m$ such that $[v, w] \ne 0$. Indeed, one can take $w = i \, v$.
Let $L$ be an $m$-dimensional real-linear subspace of ${\bf C}^m$. We say that $L$ is totally-real if $L$ is transverse to $i \, L$, where $i \, L = \{ i \, v : v \in L\}$. Transversality here can be phrased either in terms of $L \cap i \, L = \{0\}$, or in terms of $L + i \, L = {\bf C}^m$.
An extreme version of this occurs when $i \, L$ is the orthogonal complement of $L$. Because we are assuming that $L$ has real dimension $m$, this is the same as saying that elements of $i \, L$ are orthogonal to elements of $L$. This is equivalent to saying that $[v, w] = 0$ for all $v$, $w$ in $L$. Such a real $m$-dimensional plane is said to be Lagrangian.
As a basic example, ${\bf R}^m$ is a Lagrangian subspace of ${\bf C}^m$. In fact, the Lagrangian subspaces of ${\bf C}^m$ can be characterized as images of ${\bf R}^m$ under unitary linear transformations on ${\bf C}^m$. The images of ${\bf R}^m$ under special unitary linear transformations, which is to say unitary transformations with complex determinant equal to $1$, are called special Lagrangian subspaces of ${\bf C}^m$.
Now suppose that $M$ is some kind of submanifold or surface in ${\bf C}^m$ with real dimension $m$. We assume at least that $M$ is a closed subset of ${\bf C}^m$ which is equipped with a nonnegative Borel measure $\mu$, in such a way that $M$ is equal to the support of $\mu$, and the $\mu$-measure of bounded sets is finite. One might also ask that $\mu$ behave well in the sense of a doubling condition on $M$, or even Ahlfors-regularity of dimension $m$. One may wish to assume that $M$ is reasonably smooth, and anyway we would ask that $M$ is at least rectifiable, so that $\mu$ can be written as the restriction of $m$-dimensional Hausdorff measure to $M$ times a density function, and $M$ has $m$-dimensional approximate tangent spaces at almost all points.
Let us focus on the case where $M$ is totally real, so that its approximate tangent planes are totally real, at least almost everywhere. In fact one can consider quantitative versions of this. Namely, if $$d\nu_m = dz_1 \wedge dz_2 \wedge \cdots \wedge dz_m$$ is the standard complex volume form on ${\bf C}^m$, then a linear subspace $L$ of ${\bf C}^m$ of real dimension $m$ is totally real if and only if the restriction of $d\nu_m$ to $L$ is nonzero. In any event, the absolute value of the restriction of $d\nu_m$ to $L$ is equal to a nonnegative real number times the standard positive element of $m$-dimensional volume on $L$, and positive lower bounds on that real number correspond to quantitative measurements of the extent to which $L$ is totally real. In the extreme case when $L$ is Lagrangian, this real number is equal to $1$. For the surface $M$, one can consider lower bounds on this real coefficient at each point, or at least almost everywhere.
From now on let us assume that $M$ is oriented, so that the approximate tangent planes to $M$ are oriented. This means that reasonably-nice complex-valued functions on $M$ can be integrated against the restriction of $d\nu_m$ to $M$. One can then define pseudo-accretivity and para-accretivity conditions for the restriction of $d\nu_m$ to $M$ as in [@D-J-S], which basically mean that classes of averages of the restriction of $d\nu_m$ to $M$ have nice lower bounds for their absolute values compared to the corresponding averages of the absolute value of the restriction of $d\nu_m$ to $M$. This takes into account the oscillations of the restriction of $d\nu_m$ to $M$.
Note that if $M$ is a smooth submanifold of ${\bf C}^m$ of real dimension $m$, then $M$ is said to be Lagrangian if its tangent spaces are Lagrangian $m$-planes at each point. This turns out to be equivalent to saying that $M$ can be represented locally at each point as the graph of the gradient of a real-valued smooth function on ${\bf
R}^m$ in an appropriate sense, as in [@weinstein]. If the tangent planes of $M$ are special Lagrangian, then $M$ is said to be a special Lagrangian submanifold. See [@reese; @reese-blaine] in connection with these.
It seems to me that there is a fair amount of room here for various interesting things to come up, basically concerning the geometry of $M$ and aspects of several complex variables on ${\bf
C}^m$ around $M$. When $m = 1$, this would include the Cauchy integral operator applied to functions on a curve and holomorphic functions on the complement of the curve. In general this can include questions about functional calculi, as in [@C-M1; @C-M2; @D-J-S], and $\overline{\partial}$ problems with data of type $(0,m)$, as well as relations between the two.
Potentials on various spaces
============================
Let $n$ be a positive integer greater than $1$, and consider the potential operator $P$ acting on functions on ${\bf R}^n$ defined by $$\label{def of P on R^n}
P(f)(x) = \int_{{\bf R}^n} \frac{1}{|x-z|^{n-1}} \, f(z) \, dz.$$ Here $dz$ denotes Lebesgue measure on ${\bf R}^n$. More precisely, if $f$ lies in $L^q({\bf R}^n)$, then $P(f)$ is defined almost everywhere on ${\bf R}^n$ if $1 \le q < n$, it is defined almost everywhere modulo constants when $q = n$, and it is defined modulo constants everywhere if $n < q < \infty$. (If $q = \infty$, then one can take it to be defined modulo affine functions.) We shall review the reasons behind these statements in a moment.
The case where $n = 1$ is a bit different and special, and we shall not pay attention to it in these notes for simplicity. Similarly, we shall normally restrict our attention to functions in $L^q$ with $1 < q < \infty$.
A basic fact about this operator on ${\bf R}^n$ is that if $f
\in L^q({\bf R}^n)$, then the first derivatives of $P(f)$, taken in the sense of distributions, all lie in $L^q({\bf R}^n)$, as long as $1
< q < \infty$. Indeed, the first derivatives of $P(f)$ are given by first Riesz transforms of $f$ (modulo normalizing constant factors), and these are well-known to be bounded on $L^q$ when $1 < q < \infty$. (In connection with these statements, see [@St; @SW].)
One might rephrase this as saying that $P$ maps $L^q$ into the Sobolev space of functions on ${\bf R}^n$ whose first derivatives lie in $L^q$ when $1 < q < \infty$. Instead of taking derivatives, one can look at the oscillations of $P(f)$ more directly, as follows. Let $r$ be a positive real number, which represents the scale at which we shall be working. Consider the expression $$\label{frac{P(f)(x) - P(f)(y)}{r}}
\frac{P(f)(x) - P(f)(y)}{r}.$$
To analyze this, let us decompose $P(f)$ into local and distant parts at the scale of $r$. Specifically, define operators $L_r$ and $J_r$ by $$\label{def of L_r}
L_r(f)(x) = \int_{\{z \in {\bf R}^n : \, |z-x| < r\}}
\frac{1}{|x-z|^{n-1}} \, f(z) \, dz$$ and $$\label{def of J_r}
J_r(f)(x) = \int_{\{z \in {\bf R}^n : \, |z-x| \ge r\}}
\frac{1}{|x-z|^{n-1}} \, f(z) \, dz.$$ Thus $P(f) = L_r(f) + J_r(f)$, at least formally (we shall say more about this in a moment), so that $$\begin{aligned}
\label{frac{P(f)(x) - P(f)(y)}{r} = ..., 1}
\lefteqn{\frac{P(f)(x) - P(f)(y)}{r} = } \\
&& \frac{L_r(f)(x) - L_r(f)(y)}{r} + \frac{J_r(f)(x) - J_r(f)(y)}{r}.
\nonumber\end{aligned}$$
More precisely, $L_r(f)(x)$ is defined almost everywhere in $x$ when $f \in L^q({\bf R}^n)$ and $1 \le q \le n$, and it is defined everywhere when $q > n$. These are standard results in real analysis (as in [@St]), which can be derived from Fubini’s theorem and Hölder’s inequality. On the other hand, if $1 \le q < n$, then $J_r(f)(x)$ is defined everywhere on ${\bf R}^n$, because Hölder’s inequality can be used to show that the integral converges. This does not work when $q \ge n$, but in this case one can consider the integral which formally defines the difference $J_r(f)(x) - J_r(f)(y)$. Namely, $$\begin{aligned}
\label{J_r(f)(x) - J_r(f)(y), 1}
\lefteqn{\quad J_r(f)(x) - J_r(f)(y) = } \\
& & \int_{{\bf R}^n}
\biggl(\frac{1}{|x-z|^{n-1}} \, {\bf 1}_{{\bf R}^n \backslash B(x,r)}(z)
- \frac{1}{|y-z|^{n-1}} \, {\bf 1}_{{\bf R}^n \backslash B(y,r)}(z) \biggr)
\, f(z) \, dz. \nonumber\end{aligned}$$ Here ${\bf 1}_A(z)$ denotes the characteristic function of a set $A$, so that it is equal to $1$ when $z \in A$ and to $0$ when $z$ is not in $A$, and $B(x,r)$ denotes the open ball in ${\bf R}^n$ with center $x$ and radius $r$. The integral on the right side of (\[J\_r(f)(x) - J\_r(f)(y), 1\]) does converge when $f \in L^q({\bf R}^n)$ and $q <
\infty$, because the kernel against which $f$ is integrated is bounded everywhere, and decays at infinity in $z$ like $O(|z|^{-n})$. This is easy to check.
Using this, one gets that $J_r(f)$ is defined “modulo constants” on ${\bf R}^n$ when $f \in L^q({\bf R}^n)$ and $n \le q <
\infty$. This is also why $P(f)$ can be defined modulo constants on ${\bf R}^n$ in this case (almost everywhere when $q = n$), because of what we know about $L_r(f)$. Note that $J_r(f)$ for different values of $r$ can be related by the obvious formulae, with the differences given by convergent integrals. Using this one can see that the definition of $P(f)$ in terms of $J_r(f)$ and $L_r(f)$ does not depend on $r$.
Now let us use (\[frac[P(f)(x) - P(f)(y)]{}[r]{} = ..., 1\]) to estimate $r^{-1} (P(f)(x) - P(f)(y))$. Specifically, in keeping with the idea that $P(f)$ should be in the Sobolev space corresponding to having its first derivatives be in $L^q({\bf R}^n)$ when $f$ is in $L^q({\bf R}^n)$, $1 < q < \infty$, one would like to see that $$\label{local average of the difference quotient}
\frac{1}{|B(x,r)|} \int_{B(x,r)} \frac{|P(f)(x) - P(f)(y)|}{r}
\, dy$$ lies in $L^q({\bf R}^n)$, with the $L^q$ norm bounded uniformly over $r > 0$. Here $|A|$ denotes the Lebesgue measure of a set $A$ in ${\bf R}^n$, in this case the ball $B(x,r)$. In fact, one can even try to show that the supremum over $r > 0$ of (\[local average of the difference quotient\]) lies in $L^q$. By well-known results, if $q
> 1$, then both conditions follow from the information that the gradient of $P(f)$ lies in $L^q$ on ${\bf R}^n$, and both conditions imply that the gradient of $P(f)$ lies in $L^q$. (Parts of this work for $q = 1$, and there are related results for the other parts.) We would like to look at this more directly, however.
For the contributions of $L_r(f)$ in (\[frac[P(f)(x) - P(f)(y)]{}[r]{} = ..., 1\]) to (\[local average of the difference quotient\]), one can obtain estimates like the ones just mentioned by standard means. For instance, $$\sup_{r > 0} r^{-1} \, L_r(f)(x)$$ can be bounded (pointwise) by a constant times the Hardy–Littlewood maximal function of $f$ (by analyzing it in terms of sums or integrals of averages of $f$ over balls centered at $x$). Compare with [@St; @SW]. One also does not need the fact that one has a difference $L_r(f)(x) - L_r(f)(y)$ in (\[frac[P(f)(x) - P(f)(y)]{}[r]{} = ..., 1\]), but instead the two terms can be treated independently. The localization involved is already sufficient to work back to $f$ in a good way.
For the $J_r(f)$ terms one should be more careful. In particular, it is important that we have a difference $J_r(f)(x) -
J_r(f)(y)$, rather than trying to deal with the two terms separately. We have seen an aspect of this before, with simply having the difference be well-defined when $f$ lies in $L^q({\bf R}^n)$ and $n
\le q < \infty$.
Consider the auxiliary operator $T_r(f)$ defined by $$\label{def of T_r}
T_r(f)(x) = \int_{\{z \in {\bf R}^n : \, |z-x| \ge r\}}
\frac{x-z}{|x-z|^{n+1}} \, f(z) \, dz.$$ This is defined everywhere on ${\bf R}^n$ when $f$ lies in $L^q({\bf
R}^n)$ and $1 \le q < \infty$, because of Hölder’s inequality. Note that $T_r(f)$ takes values in vectors, rather than scalars, because of the presence of $x-z$ in the numerator in the kernel of the operator. In fact, $$\nabla_x \frac{1}{|x-z|^{n-1}} = - (n-1) \frac{x-z}{|x-z|^{n+1}}.$$ Using this and some calculus (along the lines of Taylor’s theorem), one can get that $$\begin{aligned}
\label{estimate for J_r(f)(x) - J_r(f)(y) - (n-1) (y-x) cdot T_r(f)(x)}
\lefteqn{r^{-1} \, |J_r(f)(x) - J_r(f)(y) - (n-1) (y-x) \cdot
T_r(f)(x)|} \\
& & \qquad\qquad
\le C \int_{{\bf R}^n} \frac{r}{|x-z|^{n+1} + r^{n+1}}
\, |f(z)| \, dz
\nonumber\end{aligned}$$ for a suitable constant $C$ and all $x, y \in {\bf R}^n$ with $|x-y|
\le r$. (In other words, the kernel on the right side of (\[estimate for J\_r(f)(x) - J\_r(f)(y) - (n-1) (y-x) cdot T\_r(f)(x)\]) corresponds to the second derivatives of the kernel of $J_r$, while $T_r$ reflects the first derivative.)
The contribution of the right-hand side of (\[estimate for J\_r(f)(x) - J\_r(f)(y) - (n-1) (y-x) cdot T\_r(f)(x)\]) to (\[local average of the difference quotient\]) satisfies the kind of estimates that we want, by standard results. (The right-hand side of (\[estimate for J\_r(f)(x) - J\_r(f)(y) - (n-1) (y-x) cdot T\_r(f)(x)\]) is approximately the same as the Poisson integral of $|f|$. Compare with [@St; @SW] again.) The remaining piece to consider is $$(n-1) \, r^{-1} \, (y-x) \cdot T_r(f)(x).$$ After averaging in $y$ over $B(x,r)$, as in (\[local average of the difference quotient\]), we are reduced to looking simply at $|T_r(f)(x)|$. Here again the Riesz transforms arise, but in the form of the truncated singular integral operators, rather than the singular integral operators themselves (with the limit as $r \to 0$). By well-known results, these truncated operators $T_r$ have the property that they are bounded on $L^q({\bf R}^n)$ when $1 < q < \infty$, with the operator norm being uniformly bounded in $r$. Moreover, the maximal truncated operator $$\sup_{r > 0} |T_r(f)(x)|$$ is bounded on $L^q({\bf R}^n)$, $1 < q < \infty$. See [@St; @SW].
These statements are all closely related to the original one concerning the way that the first derivatives of $P(f)$ are given by first Riesz transforms of $f$ (up to constant multiples), and lie in $L^q({\bf R}^n)$ when $f$ does and $1 < q < \infty$. Instead of comparing the derivatives of $P(f)$ with Riesz transforms of $f$, we compare oscillations of $P(f)$ at the scale of $r$ with averages of $f$ and truncated Riesz transforms of $f$ at the scale of $r$. We do this directly, rather than going through derivatives and integrations of them.
A nice feature of this discussion is that it lends itself in a simple manner to more general settings. In particular, it applies to situations in which it may not be as convenient to work with derivatives and integrations of them, while measurements of oscillations at the scale of $r$ and related estimates still make sense.
Instead of ${\bf R}^n$, let us consider a set $E$ in some ${\bf R}^m$. Let us assume that $E$ is *Ahlfors-regular of dimension $n$*, by which we mean that $E$ is closed, has at least two elements (to avoid degeneracies), and that there is a constant $C > 0$ such that $$\label{Ahlfors-regularity condition}
C^{-1} \, t^n \le H^n(E \cap \overline{B}(x,t)) \le C \, t^n$$ for all $x \in E$ and $t > 0$ with $t \le \diam E$. Here $H^n$ denotes $n$-dimensional Hausdorff measure (as in [@Fe; @Ma4]), and $\overline{B}(x,t)$ denotes the closed ball in the ambient space ${\bf
R}^m$ with center $x$ and radius $t$.
This condition on $E$ ensures that $E$ behaves measure-theoretically like ${\bf R}^n$, even if it could be very different geometrically. Note that one can have Ahlfors-regular sets of noninteger dimension, and in fact of any dimension in $(0,m]$ (for subsets of ${\bf R}^m$).
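As a concrete example, the unit segment $E = [0,1] \times \{0\}$ in ${\bf R}^2$ is Ahlfors-regular of dimension $1$ with $C = 2$: for $x \in E$ and $0 < t \le \operatorname{diam} E = 1$, the measure $H^1(E \cap \overline{B}(x,t))$ lies between $t$ and $2t$. A small Python sketch (not from the original text; sample points chosen for illustration) confirms the bounds:

```python
def seg_measure(x, t):
    # H^1 measure of the intersection of the closed ball B((x,0), t) in the
    # plane with the unit segment E = [0, 1] x {0}, for a center on E:
    # this is just the length of [x - t, x + t] intersected with [0, 1]
    return max(0.0, min(x + t, 1.0) - max(x - t, 0.0))

# Ahlfors regularity of dimension 1 with C = 2:
# C^{-1} t <= H^1(E \cap B(x, t)) <= C t for x in E, 0 < t <= diam E = 1
for x in [0.0, 0.1, 0.37, 0.5, 0.93, 1.0]:
    for t in [0.01, 0.1, 0.5, 1.0]:
        m = seg_measure(x, t)
        assert 0.5 * t - 1e-12 <= m <= 2.0 * t + 1e-12
```

The lower bound is tight at the endpoints of the segment (where only half of the ball meets $E$), and the upper bound is tight in the interior.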
Given a function $f$ on $E$, define $P(f)$ on $E$ in the same manner as before, i.e., by $$\label{def of P on E}
P(f)(x) = \int_{E} \frac{1}{|x-z|^{n-1}} \, f(z) \, dz,$$ where now $dz$ denotes the restriction of $H^n$-measure to $E$. Also, $|x-z|$ uses the ordinary Euclidean distance on ${\bf R}^m$.
The Ahlfors-regularity of dimension $n$ of $E$ ensures that $P(f)$ has many of the same basic properties on $E$ as on ${\bf R}^n$. In particular, if $f$ is in $L^q(E)$, then $P(f)$ is defined almost everywhere on $E$ (using the measure $H^n$ still) when $1 \le q < n$, it is defined almost everywhere modulo constants on $E$ when $q = n$, and it is defined everywhere on $E$ modulo constants when $n < q <
\infty$. One can show these statements in essentially the same manner as on ${\bf R}^n$, and related results about integrability, bounded mean oscillation, and Hölder continuity can also be proven in essentially the same manner as on ${\bf R}^n$.
What about the kind of properties discussed before, connected to Sobolev spaces? For this again one encounters operators on functions on $E$ with kernels of the form $$\label{kernels for operators}
\frac{x-z}{|x-z|^{n+1}}.$$ It is not true that operators like these have the same kind of $L^q$-boundedness properties as the Riesz transforms do for arbitrary Ahlfors-regular sets in ${\bf R}^m$, but this is true for integer dimensions $n$ and “uniformly rectifiable” sets $E$. In this connection, see [@Ca; @CDM; @CMM; @Da1; @Da2; @Da3; @Da4; @DS1; @DS2; @Ma4; @MMV], for instance (and further references therein).
When $E$ is not a plane, the operators related to the kernels (\[kernels for operators\]) are no longer convolution operators, and one loses some of the special structure connected to that. However, many real-variable methods still apply, or can be made to work. See [@CW1; @CW2; @CM*; @jean-lin]. For example, the Hardy–Littlewood maximal operator still behaves in essentially the same manner as on Euclidean spaces, as do various averaging operators (as were used in the earlier discussion). Although one does not know that singular integral operators with kernels as in (\[kernels for operators\]) are bounded on $L^q$ spaces for arbitrary Ahlfors-regular sets $E$, there are results which say that boundedness on one $L^q$ space implies boundedness on all others, $1 < q < \infty$. Boundedness of singular integral operators (of the general Calderón–Zygmund type) implies uniform boundedness of the corresponding truncated integral operators, and also boundedness of the maximal truncated integral operators.
At any rate, a basic statement now is the following. Let $n$ be a positive integer, and suppose that $E$ is an Ahlfors-regular set in some ${\bf R}^m$ which is “uniformly rectifiable”. Define the potential operator $P$ on functions on $E$ as in (\[def of P on E\]). Then $P$ takes functions in $L^q(E)$, $1 < q < \infty$, to functions on $E$ (perhaps modulo constants) which satisfy “Sobolev space” conditions like the ones on ${\bf R}^n$ for functions with gradient in $L^q$. In particular, one can look at this in terms of $L^q$ estimates for the analogue of (\[local average of the difference quotient\]) on $E$, just as before. These estimates can be derived from the same kinds of computations as before, with averaging operators and operators like $T_r$ in (\[def of T\_r\]), but now on $E$. The estimates for $T_r$ use the assumption of uniform rectifiability of $E$ (boundedness of singular integral operators). The various other integral operators, with the absolute values inside the integral sign, are handled using only the Ahlfors-regularity of $E$.
Note that for sets $E$ of this type, one does not necessarily have the same kind of properties concerning integrating derivatives as on ${\bf R}^n$. In other words, one does not automatically get as much from looking at infinitesimal oscillations, along the lines of derivatives, as one would on ${\bf R}^n$. The set $E$ could be quite disconnected, for instance. However, one gets the same kind of estimates at larger scales for the potentials that one would normally have on ${\bf R}^n$ for a function with its first derivatives in $L^q$, by looking at a given scale $r$ directly (rather than trying to integrate bounds for infinitesimal oscillations), as above.
For some topics related to Sobolev-type classes on general spaces, see [@FHK; @Ha; @HaK1; @HaK2; @HeK1; @HeK2] (and references therein).
Although the potential operator in (\[def of P on E\]) has a nice form, it is also more complicated than necessary. Suppose that $E$ is an $n$-dimensional Lipschitz graph, or that $E$ is simply bilipschitz-equivalent to ${\bf R}^n$, or to a subset of ${\bf R}^n$. In these cases the basic subtleties for singular integral operators with kernels as in (\[kernels for operators\]) already occur. However, one can obtain potential operators with the same kind of nice properties by making a bilipschitz change of variables into ${\bf
R}^n$, and using the classical potential operator there. This leads back to the classical first Riesz transforms on ${\bf R}^n$, as in [@St; @SW].
Now let us consider a rather different kind of situation. Suppose that $E$ is an Ahlfors-regular subset of dimension $n$ of some ${\bf R}^m$ again. For this there will be no need to pay particular attention to integer values of $n$. Let us say that $E$ is a *snowflake* of order $\alpha$, $0 < \alpha < 1$, if there is a constant $C_1$ and a metric $\rho(x,y)$ on $E$ such that $$\label{snowflake condition}
C_1^{-1} \, |x-y| \le \rho(x,y)^\alpha \le C_1 \, |x-y|$$ for all $x, y \in E$.
In this case, let us define a potential operator $\widetilde{P}$ on functions on $E$ by $$\label{def of widetilde{P}}
\widetilde{P}(f)(x)
= \int_E \frac{1}{\rho(x,z)^{\alpha (n-1)}} \, f(z) \, dz.$$ Here $dz$ denotes the restriction of $n$-dimensional Hausdorff measure to $E$ again. This operator is very similar to the one before, since $\rho(x,z)^{\alpha (n-1)}$ is bounded from above and below by constant multiples of $|x-z|^{n-1}$, so that the kernel of $\widetilde{P}$ is bounded from above and below by constant multiples of the kernel of the operator $P$ in (\[def of P on E\]).
This operator enjoys the same basic properties as before, with $\widetilde{P}(f)$ being defined almost everywhere when $f$ lies in $L^q(E)$ and $1 \le q < n$, defined modulo constants almost everywhere when $q = n$, and defined modulo constants everywhere when $n < q <
\infty$, for essentially the same reasons as in the previous circumstances. However, there is a significant difference with this operator, which one can see as follows. Let $x$, $y$, $z$ be three points in $E$, with $x \ne z$ and $y \ne z$. Then $$\label{inequality for differences of kernels}
\biggl| \frac{1}{\rho(x,z)^{\alpha (n-1)}}
- \frac{1}{\rho(y,z)^{\alpha (n-1)}} \biggr|
\le C \, \frac{\rho(x,y)}{\min(\rho(x,z),\rho(y,z))^{\alpha (n-1) + 1}}$$ for some constant $C$ which does not depend on $x$, $y$, or $z$, but only on $\alpha (n-1)$. Indeed, one can choose $C$ so that $$\label{inequality simply for positive numbers}
\bigl| a^{-\alpha (n-1)} - b^{-\alpha (n-1)} \bigr|
\le C \, \frac{|a-b|}{\min(a,b)^{\alpha (n-1) + 1}}$$ whenever $a$ and $b$ are positive real numbers. This is an elementary observation, and in fact one can take $C = \alpha (n-1)$. One can get (\[inequality for differences of kernels\]) from (\[inequality simply for positive numbers\]) by taking $a = \rho(x,z)$ and $b =
\rho(y,z)$, and using the fact that $$| \rho(x,z) - \rho(y,z)| \le \rho(x,y).$$ This last comes from the triangle inequality for $\rho(\cdot, \cdot)$, which we assumed to be a metric.
Using the snowflake condition (\[snowflake condition\]), we can obtain from (\[inequality for differences of kernels\]) that $$\label{inequality for differences of kernels, 2}
\biggl| \frac{1}{\rho(x,z)^{\alpha (n-1)}}
- \frac{1}{\rho(y,z)^{\alpha (n-1)}} \biggr|
\le C' \, \frac{|x-y|^{1/\alpha}}{\min(|x-z|, |y-z|)^{(n-1) + 1/\alpha}}$$ for all $x, y, z \in {\bf R}^n$ with $x \ne z$, $y \ne z$, and with a modestly different constant $C'$. The main point here is that the exponent in the denominator on the right side of the inequality is strictly larger than $n$, because $\alpha$ is required to lie in $(0,1)$. In the previous contexts, using the kernel $1/ |x-z|^{n-1}$ for the potential operator, there was an analogous inequality with $\alpha = 1$, so that the exponent in the denominator was equal to $n$.
With an exponent larger than $n$, there is no need for anything like singular integral operators here. More precisely, there is no need for the operators $T_r$ in (\[def of T\_r\]) here; one can simply drop them, and estimate the analogue of $|J_r(f)(x) -
J_r(f)(y)|$ when $|x - y| \le r$ directly, using (\[inequality for differences of kernels, 2\]). In other words, one automatically gets an estimate like (\[estimate for J\_r(f)(x) - J\_r(f)(y) - (n-1) (y-x) cdot T\_r(f)(x)\]) in this setting, without the $T_r$ term, and with some minor adjustments to the right-hand side. Specifically, the $r$ in the numerator on the right side of (\[estimate for J\_r(f)(x) - J\_r(f)(y) - (n-1) (y-x) cdot T\_r(f)(x)\]) would become an $r^{1/\alpha -1}$ in the present situation, and the exponent $n+1$ in the denominator would be replaced with $n - 1 + 1/\alpha$. This leads to the same kinds of results in terms of $L^q$ norms and the like as before, because the rate of decay is enough so that the quantities in question still look like suitable averaging operators in $f$. (That is, they are like Poisson integrals, but with somewhat less decay. The decay is better than $1/|x-z|^n$, which is the key. As usual, see [@St; @SW] for similar matters.)
The bottom line is that if we use the potential operator $\widetilde{P}$ from (\[def of widetilde[P]{}\]) instead of the operator $P$ from (\[def of P on E\]), then the two operators are approximately the same in some respects, with the kernels being of comparable size in particular, but in this situation the operator $\widetilde{P}$ has the nice feature that it automatically enjoys the same kind of properties as in the ${\bf R}^n$ case, in terms of estimates for expressions like (\[local average of the difference quotient\]) (under the snowflake assumption for $E$). That is, one automatically has that $\widetilde{P}(f)$ behaves like a function in a Sobolev class corresponding to first derivatives being in $L^q$ when $f$ lies in $L^q$. One does not need $L^q$ estimates for singular integral operators for this, as would arise if we did try to use the operator $P(f)$ from (\[def of P on E\]).
These remarks suggest numerous questions...
Of course, some other basic examples involve nilpotent Lie groups, like the Heisenberg group, and their invariant geometries.
As a last comment, note that for the case of snowflakes we never really needed to assume that $E$ was a subset of some ${\bf
R}^m$. One could have worked just as well with abstract metric spaces (still with the snowflake condition). However, Assouad’s embedding theorem [@A1; @A2; @A3] provides a way to go back into some ${\bf
R}^m$ anyway. The notion of uniform rectifiability makes sense for abstract metric spaces, and not just subsets of ${\bf R}^m$, and an embedding into some ${\bf R}^m$ is sometimes convenient. In this regard, see [@S5].
[99]{}
W. Allard and F. Almgren, [*The structure of stationary one dimensional varifolds with positive density*]{}, Inventiones Mathematicae [**34**]{} (1976), 83–97.
P. Assouad, [*Espaces Métriques, Plongements, Facteurs*]{}, Thèse de Doctorat (January, 1977), Université de Paris XI, 91405 Orsay, France.
P. Assouad, [*Étude d’une dimension métrique liée à la possibilité de plongement dans ${\bf R}^n$*]{}, Comptes Rendus Académie des Sciences Paris [**288**]{} (1979), 731–734.
P. Assouad, [*Plongements Lipschitziens dans ${\bf
R}^n$*]{}, Bulletin Société Mathématique de France [**111**]{} (1983), 429–448.
P. Auscher, S. Hofmann, M. Lacey, J. Lewis, A. McIntosh, and P. Tchamitchian, [*The solution of Kato’s conjectures*]{}, Comptes Rendus des Séances de l’Académie des Sciences de Paris Sér. I [**322**]{} (2001), 601–606.
P. Auscher, S. Hofmann, M. Lacey, A. McIntosh, and P. Tchamitchian, [*The solution of the Kato square root problem for second order elliptic operators on ${\bf R}^n$*]{}, Annals of Mathematics (2) [**156**]{} (2002), 633–654.
P. Auscher, S. Hofmann, J. Lewis, and P. Tchamitchian, [*Extrapolation of Carleson measures and the analyticity of Kato’s square-root operators*]{}, Acta Mathematica [**187**]{} (2001), 161–190.
P. Auscher, S. Hofmann, A. McIntosh, and P. Tchamitchian, [*The Kato square root problem for higher order elliptic operators and systems on ${\bf R}^n$, Dedicated to the memory of Tosio Kato*]{}, Journal of Evolution Equations [**1**]{} (2001), 361–385.
P. Auscher and P. Tchamitchian, [*Square Root Problem for Divergence Operators and Related Topics*]{}, Astérisque [**249**]{}, 1998.
S. Bell, [*The Cauchy Transform, Potential Theory, and Conformal Mapping*]{}, CRC Press, 1992.
F. Brackx, R. Delanghe, and F. Sommen, [*Clifford Analysis*]{}, Pitman, 1982.
J. Burbea, [*The Cauchy and Szegö kernels on multiply connected regions*]{}, Rendiconti del Circolo Matematico di Palermo (2) [**31**]{} (1982), 105–118.
A. Calderón, [*Cauchy integrals on Lipschitz curves and related operators*]{}, Proceedings of the National Academy of Sciences U.S.A. [**74**]{} (1977), 1324–1327.
R. Coifman, G. David, and Y. Meyer, [*La solution des conjectures de Calderón*]{}, Advances in Mathematics [**48**]{} (1983), 144–148.
R. Coifman, A. McIntosh, and Y. Meyer, [*L’Intégrale de Cauchy définit un opérateur borné sur $L^2$ pour les courbes lipschitziennes*]{}, Annals of Mathematics (2) [**116**]{} (1982), 361–387.
R. Coifman and Y. Meyer, [*Au–delà des Opérateurs Pseudo-Différentiels*]{}, Astérisque [**58**]{}, 1978.
R. Coifman and Y. Meyer, [*Fourier analysis of multilinear convolutions, Calderón’s theorem, and analysis on Lipschitz curves*]{}, in [*Euclidean Harmonic Analysis*]{}, Proceedings of Seminars held at the University of Maryland, 1979, 104–122, Lecture Notes in Mathematics [**779**]{}, Springer-Verlag, 1980.
R. Coifman and Y. Meyer, [*Une généralisation du théorème de Calderón sur l’intégrale de Cauchy*]{}, in [*Fourier Analysis*]{}, Proceedings of the seminar at El Escorial, Spain, June 1979, 87–116, Asociación Matemática Española, 1980.
R. Coifman and Y. Meyer, [*Non-linear harmonic analysis, operator theory, and PDE*]{}, in [*Beijing Lectures in Harmonic Analysis*]{}, Annals of Mathematics Studies [**112**]{}, 3–45, Princeton University Press, 1986.
R. Coifman and S. Semmes, [*Real-analytic operator-valued functions defined in BMO*]{}, in [*Analysis and Partial Differential Equations*]{}, 85–100, Marcel Dekker, 1990.
R. Coifman and S. Semmes, [*$L^2$ estimates in nonlinear Fourier analysis*]{}, in [*Harmonic Analysis (Sendai, 1990)*]{}, Proceedings of the ICM-90 Satellite Conference, 79–95, Springer-Verlag, 1991.
R. Coifman and G. Weiss, [*Analyse Harmonique Non-commutative sur Certains Espaces Homogènes*]{}, Lecture Notes in Mathematics [**242**]{} (1971), Springer-Verlag.
R. Coifman and G. Weiss, [*Extensions of Hardy spaces and their use in analysis*]{}, Bulletin of the American Mathematical Society [**83**]{} (1977), 569–645.
M. Cowling, I. Doust, A. McIntosh, and A. Yagi, [*Banach space operators with a bounded $H^\infty$ functional calculus*]{}, Journal of the Australian Mathematical Society Ser. A [**60**]{} (1996), 51–89.
G. David, [*Courbes corde-arc et espaces de Hardy généralisés*]{}, Annales de l’Institut Fourier (Grenoble) [**32**]{} (1982), 227–239.
G. David, [*Opérateurs intégraux singuliers sur certaines courbes du plan complexe*]{}, Annales Scientifiques de l’École Normale Supérieure (4) [**17**]{} (1984), 157–189.
G. David, [*Opérateurs d’intégrale singulière sur les surfaces régulières*]{}, Annales Scientifiques de l’École Normale Supérieure (4) [**21**]{} (1988), 225–258.
G. David, [*Morceaux de graphes lipschitziens et intégrales singulières sur une surface*]{}, Revista Matemática Iberoamericana [**4**]{} (1988), 73–114.
G. David, [*Wavelets and Singular Integrals on Curves and Surfaces*]{}, Lecture Notes in Mathematics [**1465**]{}, Springer-Verlag, 1991.
G. David, J.-L. Journé, and S. Semmes, [*Opérateurs de Calderón–Zygmund, fonctions para-accrétives et interpolation*]{}, Revista Matemática Iberoamericana [**1**]{} (4) (1985), 1–56.
G. David and S. Semmes, [*Singular integrals and rectifiable sets in ${\bf R}^n$: Au-delà des graphes lipschitziens*]{}, Astérisque [**193**]{}, Société Mathématique de France, 1991.
G. David and S. Semmes, [*Analysis of and on Uniformly Rectifiable Sets*]{}, Mathematical Surveys and Monographs [**38**]{}, 1993, American Mathematical Society.
P. Duren, [*Theory of $H^p$ Spaces*]{}, Academic Press, 1970.
H. Federer, [*Geometric Measure Theory*]{}, Springer-Verlag, 1969.
B. Franchi, P. Haj[ł]{}asz, and P. Koskela, [*Definitions of Sobolev classes on metric spaces*]{}, Annales de l’Institut Fourier (Grenoble) [**49**]{} (1999), 1903–1924.
J. Garnett, [*Bounded Analytic Functions*]{}, Academic Press, 1981.
A. Gleason, [*The abstract theorem of Cauchy–Weil*]{}, Pacific Journal of Mathematics [**12**]{} (1962), 511–525.
A. Gleason, [*The Cauchy–Weil theorem*]{}, Journal of Mathematics and Mechanics [**12**]{} (1963), 429–444.
P. Haj[ł]{}asz, [*Sobolev spaces on an arbitrary metric space*]{}, Potential Analysis [**5**]{} (1996), 403–415.
P. Haj[ł]{}asz and P. Koskela, [*Sobolev meets Poincaré*]{}, Comptes Rendus de l’Académie des Sciences Paris Sér. I Math. [**320**]{} (1995), 1211–1215.
P. Haj[ł]{}asz and P. Koskela, [*Sobolev Met Poincaré*]{}, Memoirs of the American Mathematical Society [**688**]{} (2000).
R. Harvey, [*Spinors and Calibrations*]{}, Academic Press, 1990.
R. Harvey and B. Lawson, [*Calibrated geometries*]{}, Acta Mathematica [**148**]{} (1982), 47–157.
J. Heinonen and P. Koskela, [*From local to global in quasiconformal structures*]{}, Proceedings of the National Academy of Sciences (U.S.A.) [**93**]{} (1996), 554–556.
J. Heinonen and P. Koskela, [*Quasiconformal maps in metric spaces with controlled geometry*]{}, Acta Mathematica [**181**]{} (1998), 1–61.
G. Henkin and J. Leiterer, [*Andreotti–Grauert Theory by Integral Formulas*]{}, Akademie-Verlag, 1988.
L. Hörmander, [*An Introduction to Complex Analysis in Several Variables*]{}, North-Holland, 1973.
P. Huovinen, [*Singular integrals and rectifiability of measures in the plane*]{}, Ann. Acad. Sci. Fenn.Math. Diss. [**109**]{}, 1997.
P. Huovinen, [*A nicely behaved singular integral on a purely unrectifiable set*]{}, Proceedings of the American Mathematical Society [**129**]{} (2001), 3345–3351.
J. Hutchinson, [*$C^{1,\alpha}$ multiple function regularity and tangent cone behavior for varifolds with second fundamental form in $L^p$*]{}, Proceedings of Symposia in Pure Mathematics [**44**]{}, 281–306, American Mathematical Society, 1982.
B. Jefferies, A. McIntosh, and J. Picton-Warlow, [*The monogenic functional calculus*]{}, Studia Mathematica [**136**]{} (1999), 99–119.
J.-L. Journé, [*Calderón–Zygmund Operators, Pseudo-Differential Operators, and the Cauchy Integral of Calderón*]{}, Lecture Notes in Mathematics [**994**]{}, Springer-Verlag, 1983.
C. Kenig and Y. Meyer, [*Kato’s square roots of accretive operators and Cauchy kernels on Lipschitz curves are the same*]{}, in [*Recent Progress in Fourier Analysis (El Escorial, 1983)*]{}, 123–143, North-Holland, 1985.
N. Kerzman and E. Stein, [*The Cauchy kernel, the Szegö kernel, and the Riemann mapping function*]{}, Mathematische Annalen [**236**]{} (1978), 85–93.
B. Kirchheim and D. Preiss, [*Uniformly distributed measures in Euclidean spaces*]{}, Mathematica Scandinavica [**90**]{} (2002), 152–160.
O. Kowalski and D. Preiss, [*Besicovitch-type properties of measures and submanifolds*]{}, Journal für die Reine und Angewandte Mathematik [**379**]{} (1987), 115–151.
S. Krantz, [*Function Theory of Several Complex Variables*]{}, second edition, AMS Chelsea Publishing, 2001.
S. Krantz and H. Parks, [*The Geometry of Domains in Space*]{}, Birkhäuser, 1999.
P. Mattila, [*Principal values of Cauchy integrals, rectifiable measures and sets*]{}, in [*Harmonic Analysis (Sendai, 1990)*]{}, ICM-90 Satellite Conference Proceedings, Springer-Verlag, 1991.
P. Mattila, [*Cauchy singular integrals and rectifiability in measures of the plane*]{}, Advances in Mathematics [**115**]{} (1995), 1–34.
P. Mattila, [*Tangent measures, densities, and singular integrals*]{}, in [*Fractal Geometry and Stochastics (Finsterbergen, 1994)*]{}, 43–52, Birkhäuser, 1995.
P. Mattila, [*Geometry of Sets and Measures in Euclidean Spaces*]{}, Cambridge University Press, 1995.
P. Mattila, M. Melnikov, and J. Verdera, [*The Cauchy integral, analytic capacity, and uniform rectifiability*]{}, Annals of Mathematics (2) [**144**]{} (1996), 127–136.
P. Mattila and D. Preiss, [*Rectifiable measures in ${\bf R}^n$ and existence of principal values for singular integrals*]{}, Journal of the London Mathematical Society (2) [**52**]{} (1995), 482–496.
A. McIntosh, [*Operators which have an $H^\infty$ functional calculus*]{}, in [*Miniconference on Operator Theory and Partial Differential Equations (North Ryde, 1986)*]{}, 210–231, Proceedings of the Centre for Mathematical Analysis [**14**]{}, Australian National University, 1986.
A. McIntosh and Y. Meyer, [*Algèbres d’opérateurs définis par des intégrales singulières*]{}, Comptes Rendus des Scéances de l’Académie des Sciences de Paris Sér. I Math. [**301**]{} (1985), 395–397.
A. McIntosh and A. Pryde, [*A functional calculus for several commuting operators*]{}, Indiana University Mathematics Journal [**36**]{} (1987), 421–439.
A. McIntosh and A. Yagi, [*Operators of type $\omega$ without a bounded $H^\infty$ functional calculus*]{}, in [*Miniconference on Operators in Analysis (Sydney, 1989)*]{}, 159–172, Proceedings of the Centre for Mathematical Analysis [**24**]{}, Australian National University, 1990.
M. Melnikov, [*Analytic capacity: A discrete approach and the curvature of measure*]{} (Russian), Mat. Sb. [**186**]{} (6) (1995), 57–76; English Translation in Sb. Mat. [**186**]{} (6) (1995), 827–846.
M. Melnikov and J. Verdera, [*A geometric proof of the $L^2$ boundedness of the Cauchy integral on Lipschitz graphs*]{}, International Mathematical Research Notices (1995), 325–331.
D. Preiss, [*Geometry of measures in ${\bf R}^n$: distribution, rectifiability, and densities*]{}, Annals of Mathematics (2) [**125**]{} (1987), 537–643.
S. Semmes, [*The Cauchy integral and related operators on smooth curves*]{}, Ph. D. thesis, Washington University in St. Louis, 1983.
S. Semmes, [*A criterion for the boundedness of singular integrals on hypersurfaces*]{}, Transactions of the American Mathematical Society [**311**]{} (1989), 501–513.
S. Semmes, [*Analysis vs. geometry on a class of rectifiable hypersurfaces in ${\bf R}^n$*]{}, Indiana University Mathematics Journal [**39**]{} (1990), 1005–1035.
S. Semmes, [*Chord-arc surfaces with small constant I*]{}, Advances in Mathematics [**85**]{} (1991), 198–223.
S. Semmes, [*Bilipschitz embeddings of metric spaces into Euclidean spaces*]{}, Publicacions Matemàtiques [**43**]{} (1999), 571–653.
E. Stein, [*Singular Integrals and Differentiability Properties of Functions*]{}, Princeton University Press, 1970.
E. Stein and G. Weiss, [*Introduction to Fourier Analysis on Euclidean Spaces*]{}, Princeton University Press, Princeton, 1971.
J. Verdera, [*On the $T(1)$ theorem for the Cauchy integral*]{}, Arkiv för Matematik [**38**]{} (2000), 183–199.
J. Verdera, [*$L^2$ boundedness of the Cauchy integral and Menger curvature*]{}, in [*Harmonic Analysis and Boundary Value Problems (Fayetteville AR, 2000)*]{}, 139–158, Contemporary Mathematics [**277**]{}, American Mathematical Society, 2001.
A. Weinstein, [*Lectures on Symplectic Manifolds*]{}, Conference Board of the Mathematical Sciences Regional Conference Series in Mathematics [**29**]{}, American Mathematical Society, 1977.
|
Billy Hutchinson Slams McGrory Over ‘Winky’ Rea Subpoena
The following is a statement issued by the Progressive Unionist Party today.
‘McGrory sabotages efforts to deal with the past’: PUP Leader Cllr Billy Hutchinson slams the DPP for his political witch-hunt of Winston Rea.
PUP leader Billy Hutchinson
The Progressive Unionist Party utterly condemns attempts by the DPP Barra McGrory to raid the Boston College history archive in a malicious and flagrantly political fishing expedition against Winston Churchill Rea.
Mr Rea, as a representative of the PUP, made a very significant and widely acknowledged contribution to the peace process, from ceasefires to decommissioning and the political architecture in between which facilitated them.
Party Leader Cllr Billy Hutchinson slammed the effort, “This malicious abuse of process before the courts is just another in the long history of ‘balancing’ exercises where Loyalists have been singled out, not on merit but for the optics of appearing independent to a political audience. Internment, collective punishment, shoot to kill, historic investigations, the past is replete with instances where Loyalists have suffered punitive measures for the sake of appearance rather than requirement.
After the arrests of high profile Republicans relating to the Jean McConville investigation, Barra McGrory has abused his authority in an effort to appear, rather than to be, balanced. A previous subpoena request listed Winston Rea and a Republican in the very same circumstances. When that subpoena failed due to lack of evidence, the Republican’s name mysteriously disappeared and the token Loyalist required for public consumption remained. Unfortunately that Loyalist happened to be Winston Rea.”
He continued, “The distinct lack of evidence in this case is the starkest indication that the judicial process is being used in an exercise of political window dressing. Winston Rea’s rights under the ECHR are being subjected to the unfettered will of the DPP as he continues his overtly political agenda. We need not look too far back to be reminded of a mass supergrass show trial or the ‘On the Runs’ debacle. Perhaps if Winston Rea had been a Republican, he would have received a letter of comfort rather than an international subpoena.
Barra McGrory, North’s DPP
Had this material been deposited in a UK university, lack of evidence would have quashed the case long before it reached the eyes of a judge. International legal cooperation has allowed Winston Rea’s human rights to be subject to the whims of this DPP by virtue of a legal loophole.
I would further caution that Barra McGrory’s efforts will lay waste to any hope of ‘dealing with the past.’ The Stormont House Agreement called for an oral history archive and an information recovery process. Both were to be underpinned by strict confidentiality, yet here the DPP seeks to run roughshod over a similar attempt to understand the past and the confidentiality that gave birth to it.
After this evidently politically motivated witch-hunt against Winston Rea, who is likely to engage in the structures envisaged by that Agreement? The understandable mood within Loyalism suggests no-one. But then that outcome was obvious to the DPP in his duty to consider the public interest. Thus, the deeper question arises – who does this serve?”
Cllr Hutchinson concluded, “With the DUP content to get their Stormont budget passed, is dealing with the past to be left to Sinn Fein’s revisionists and any other version of history suppressed by their stacked process? These questions are central to the success of any search for truth or recovery of information; the cabal of the First Minister, Deputy First Minister and DPP need to think long and hard about the implications of their answers.”
The invention relates to high-temperature pipe supports, and in particular to high-temperature pipe supports for semiconductor apparatuses.
FIG. 1A is a partial cross section of a conventional chemical vapor deposition (CVD) apparatus. In FIG. 1A, the conventional CVD apparatus 10 comprises a seat 12, support ring 14, outer pipe 16, and inner pipe 18. The support ring 14 is disposed on the inner bore of the seat 12 near the bottom thereof. The inner pipe 18 and the outer pipe 16 are separately disposed on the seat 12 and support ring 14. The inner pipe 18 defines a sealed reaction chamber 182. During a CVD process, wafers are transferred into the reaction chamber 182 via a robot from the bottom entrance 184. The entrance 184 is closed and sealed, forming a vacuum in the reaction chamber 182 prior to CVD processes.
Due to the high operation temperature (above 700° C.) of CVD processes, the conventional inner pipe 18 and outer pipe 16 are quartz. The seat 12 and support ring 14 are stainless steel. The seat 12 is water-cooled from the outer bore and the sidewall adjacent to the gas outlet 122. After a number of CVD processes, silicide will coat the inner surfaces of the seat 12 and gas outlet 122. The coated silicide peels easily, however, producing particles P in the buffer space between the inner and outer pipes 16 and 18.
FIG. 1B is an enlarged view of area a in FIG. 1A. In order to easily position the inner pipe 18 during assembly, the conventional support ring 14 comprises a lead angle 144 around the inner upper edge, such that particles P easily accumulate in the recess between the inner pipe 18 and the support ring 14. Particles P may be sucked into the reaction chamber 182 through the gap between the inner pipe 18 and the support ring 14 when the vacuum of the reaction chamber 182 is broken and the entrance 184 is opened. Thus the silicide particles can be a principal contaminant source for the reaction chamber in a conventional CVD apparatus.
Tony Hall OBE
World Croquet Federation
Hall of Fame
Tony Hall OBE
Born: 1932
Inducted: 2009
Tony Hall has devoted himself to croquet for more than twenty years, both as a player and an administrator. He served as an active and ambitious President of the World Croquet Federation from 1998 to 2003 and then promptly undertook the role of Treasurer of the Australian Croquet Association from 2004 to 2012. Before becoming President of the World Croquet Federation, he had served as President of his club and of Croquet New South Wales and as the Senior Vice-President of the Australian Croquet Association.
Tony spent 38 years in the Australian Army, joining in 1949 and serving overseas in the Antarctic, Malaya, England, Thailand, Vietnam and Papua New Guinea. He retired as a full Colonel in 1987 and was awarded the OBE. He gained considerable administrative experience both as an officer and from playing and being actively involved in the administration of hockey, squash and swimming. He helped to start a hockey club, was secretary of the Canberra Veterans Hockey Association for three years and represented his State for ten years. He served as the treasurer of a squash club for 14 years and spent ten years administering a swimming club in which time he became a senior swimming referee and was made a life member of the club after it became the top club in Australia. His involvement with swimming also gave him direct experience of high level sports politics with the New South Wales and Australian Swimming Associations.
He took up croquet in 1989, became Secretary of the Canberra Croquet Club within months of joining and then its Treasurer after two years. As President of Croquet NSW from 1993 to 1996, he visited all NSW clubs, helping to increase membership from 48 to 62 clubs in three years. During his term as President of the WCF he visited all 24 countries which were then members and ten other potential members.
From 1998 to 2006, he acted as chairman of the first WCF Golf Croquet Rules Committee and then continued to serve as Australia's representative until 2013. The adoption of the new rules in 2001 is widely regarded as having greatly assisted the dramatic increase in the popularity of Golf Croquet all over the world.
On becoming its President, Tony had several ambitions for the WCF. He wanted it to become a genuine international body for croquet, having responsibility for the rules of the games of Association Croquet and Golf Croquet, for organising international competitions and for standardising handicapping and everything that happens on the court. He also wanted to expand the number of office bearers and officers so that the duties of the Secretary-General could be delegated among a larger number of administrators. He also performed the duties of the Secretary-General as well as those of the President for one year in the middle of his term, before finding a replacement for the inaugural incumbent and a number of other officers. He saw most of these ambitions realised during his term of office, and some of those that remained, such as the management of the Laws of Association Croquet and of the MacRobertson Shield, have since been brought under the WCF umbrella.
Since 1990, Tony has travelled around the world every year to play and administer croquet, including attending every World Championship and MacRobertson Shield competition. He played in the Australian Association Croquet Championships every year and has been ranked in the top hundred in the world. He has played in the British, New Zealand, Irish, Canadian, German and United States Association Croquet National Championships and in all other WCF countries with courts, winning the German Open in 2001. He also won the British GC Open Doubles and won his Australian tracksuit in 1998 to play for Australia in the third Golf Croquet World Championships. Since then he has played in five more Golf Croquet World Championships, with a best placing of twentieth. From 2001 to 2006 he represented NSW in the Australian Interstate Association Croquet Championship and also represented NSW in 2007 and 2008 in the first two Australian Interstate Golf Croquet Championships. He won the 2002 and 2005 National Golf Croquet Handicap Championships, the 2004 Australian Golf Croquet Open Singles Championship and, in 2005, won the NSW and Queensland Open Singles Association Croquet championships. While doing all this, he also played hockey with his veterans’ team for twenty years, touring England, New Zealand, Canada, South Africa, Australia and South America!
Tony also found time to serve for five years as Tournament Referee of the Sonoma-Cutrer World Championships in California and as the Tournament Referee for the 1997 WCF Golf Croquet World Championship at Leamington Spa. He also acted as a referee in almost all world-level events from then until 2011.
Tony is a widower with three children and seven grandchildren. He has displayed an extraordinary amount of energy and commitment to croquet and other sports since his retirement in 1989, both as a player and, above all, as an administrator who knew how to run sports bodies to a very high standard. The world of croquet owes him a considerable debt.
Millions of Muslims around the world are celebrating the Eid al-Adha holiday as some two million pilgrims carry out the final rites of the annual Hajj in Saudi Arabia.
Hundreds of thousands of Muslim pilgrims on Friday made their way towards a massive multi-storey complex in Mina after dawn to cast pebbles at three large columns.
It is here where Muslims believe the devil tried to talk the Prophet Ibrahim out of submitting to God's will.
Muslims believe Ibrahim's faith was tested when God commanded him to sacrifice his only son Ismail. Ibrahim was prepared to submit to the command, but then God stayed his hand, sparing his son.
The final days of Hajj coincide with the Eid al-Adha holiday, or "Feast of Sacrifice", to commemorate Ibrahim's test of faith. For the holiday, Muslims slaughter livestock and distribute the meat to the poor.
For the final three days of Hajj, pilgrims sleep in a large tent valley called Mina and for three days take part in a symbolic stoning of the devil.
Most pilgrims will remain in Mina until Monday before completing the Hajj.
They will then circle the cube-shaped Kaaba in Mecca, Islam's most sacred site, before departing.
The Kaaba represents the metaphorical house of God and the oneness of God in Islam. Observant Muslims around the world face towards the Kaaba during the five daily prayers.
The five-day-long Hajj is a series of rituals meant to cleanse the soul of sins and instill a sense of equality and brotherhood among Muslims.
The pilgrimage is required of all Muslims with means to perform once in a lifetime.
During the last three days of Hajj, male pilgrims shave their heads and remove the white terrycloth garments worn during the Hajj. Women cut off a small lock of hair in a sign of spiritual rebirth and renewal.
However, there was some disappointment among residents of Qatar, who bore the brunt of a diplomatic crisis in the Gulf.
With Saudi Arabia among the countries blockading Doha by land and air, it was challenging for hopeful pilgrims to travel. Reports say only a few dozen residents of Qatar were able to perform the Hajj this year.
There are more than one billion Muslims around the world.
Many used the occasion of Eid to remember those suffering in conflicts around the world, from Yemen and Syria to Iraq and Myanmar.
Some will celebrate Eid al-Adha instead on Saturday, including many followers of the Islamic faith in Pakistan.
Here are some social media reactions to the festive day as #EidAlAdha trended worldwide on Twitter:
Israel didn't give us permits this year on Eid so this time I won't be in Jerusalem 😓 — Nadia Daniali (@nadia_daniali) September 1, 2017
Eid ul adha mubarak to everyone around the world. May you all have a happy and blessed one. Look after each other and take care. #EidAlAdha — Junaid khan 83 (@JunaidkhanREAL) September 1, 2017
#EidAlAdha mubarak to all those celebrating today. Spare a thought this day for the #Rohingyas who are fighting a lonely battle — Rana Ayyub (@RanaAyyub) August 31, 2017
Eid Mubarak! Sending my best wishes to Muslims celebrating Eid al-Adha & the end of the Hajj today. https://t.co/UjUx5gg0dE pic.twitter.com/4gsVb3gA0a — Justin Trudeau (@JustinTrudeau) August 31, 2017
I would like to wish Muslims in Britain and across the world Eid Mubarak. pic.twitter.com/jdKLSai1HB — Jeremy Corbyn (@jeremycorbyn) September 1, 2017
Eid Mubarak to all Muslims as you celebrate #EidAlAdha & its message of solidarity & compassion with the poor & most vulnerable in societies — António Guterres (@antonioguterres) August 31, 2017
{
"version": "2.0",
"service": "AWS RDS DataService provides an HTTP endpoint to query RDS databases.",
"operations": {
"ExecuteSql": "Executes any SQL statement on the target database synchronously"
},
"shapes": {
"Arn": {
"base": null,
"refs": { }
},
"ArrayValues": {
"base": "Array value",
"refs": {
"ArrayValues$member": null
}
},
"BadRequestException": {
"base": "Invalid Request exception",
"refs": {
"BadRequestException$message": "Error message"
}
},
"Blob": {
"base": null,
"refs": { }
},
"Boolean": {
"base": null,
"refs": { }
},
"ColumnMetadata": {
"base": "Column Metadata",
"refs": {
"ColumnMetadata$arrayBaseColumnType": "Homogenous array base SQL type from java.sql.Types.",
"ColumnMetadata$isAutoIncrement": "Whether the designated column is automatically numbered",
"ColumnMetadata$isCaseSensitive": "Whether values in the designated column are case-sensitive",
"ColumnMetadata$isCurrency": "Whether values in the designated column are cash values",
"ColumnMetadata$isSigned": "Whether values in the designated column are signed numbers",
"ColumnMetadata$label": "Usually specified by the SQL AS. If not specified, return column name.",
"ColumnMetadata$name": "Name of the column.",
"ColumnMetadata$nullable": "Indicates the nullability of values in the designated column. One of columnNoNulls (0), columnNullable (1), columnNullableUnknown (2)",
"ColumnMetadata$precision": "Get the designated column's specified column size. For numeric data, this is the maximum precision. For character data, this is the length in characters. For datetime datatypes, this is the length in characters of the String representation (assuming the maximum allowed precision of the fractional seconds component). For binary data, this is the length in bytes. For the ROWID datatype, this is the length in bytes. 0 is returned for data types where the column size is not applicable.",
"ColumnMetadata$scale": "Designated column's number of digits to right of the decimal point. 0 is returned for data types where the scale is not applicable.",
"ColumnMetadata$schemaName": "Designated column's table's schema",
"ColumnMetadata$tableName": "Designated column's table name",
"ColumnMetadata$type": "SQL type from java.sql.Types.",
"ColumnMetadata$typeName": "Database-specific type name."
}
},
"ColumnMetadataList": {
"base": "List of Column metadata",
"refs": {
"ColumnMetadataList$member": null
}
},
"DbName": {
"base": null,
"refs": { }
},
"Double": {
"base": null,
"refs": { }
},
"ExecuteSqlRequest": {
"base": "Execute SQL Request",
"refs": {
"ExecuteSqlRequest$awsSecretStoreArn": "ARN of the db credentials in AWS Secret Store or the friendly secret name",
"ExecuteSqlRequest$database": "Target DB name",
"ExecuteSqlRequest$dbClusterOrInstanceArn": "ARN of the target db cluster or instance",
"ExecuteSqlRequest$schema": "Target Schema name",
"ExecuteSqlRequest$sqlStatements": "SQL statement(s) to be executed. Statements can be chained by using semicolons"
}
},
"ExecuteSqlResponse": {
"base": "Execute SQL response",
"refs": {
"ExecuteSqlResponse$sqlStatementResults": "Results returned by executing the sql statement(s)"
}
},
"Float": {
"base": null,
"refs": { }
},
"ForbiddenException": {
"base": "Access denied exception",
"refs": {
"ForbiddenException$message": "Error message"
}
},
"Integer": {
"base": null,
"refs": { }
},
"InternalServerErrorException": {
"base": "Internal service error",
"refs": { }
},
"Long": {
"base": null,
"refs": { }
},
"Record": {
"base": "Row or Record",
"refs": {
"Record$values": "Record"
}
},
"Records": {
"base": "List of records",
"refs": {
"Records$member": null
}
},
"ResultFrame": {
"base": "Result Frame",
"refs": {
"ResultFrame$records": "List of records returned by executing the SQL statement.",
"ResultFrame$resultSetMetadata": "ResultSet Metadata."
}
},
"ResultSetMetadata": {
"base": "List of columns and their types.",
"refs": {
"ResultSetMetadata$columnCount": "Number of columns",
"ResultSetMetadata$columnMetadata": "List of columns and their types"
}
},
"Row": {
"base": "List of column values",
"refs": {
"Row$member": null
}
},
"ServiceUnavailableError": {
"base": "Internal service unavailable error",
"refs": { }
},
"SqlStatement": {
"base": null,
"refs": { }
},
"SqlStatementResult": {
"base": "SQL statement execution result",
"refs": {
"SqlStatementResult$numberOfRecordsUpdated": "Number of rows updated.",
"SqlStatementResult$resultFrame": "ResultFrame returned by executing the sql statement"
}
},
"SqlStatementResults": {
"base": "SQL statement execution results",
"refs": {
"SqlStatementResults$member": null
}
},
"String": {
"base": null,
"refs": { }
},
"StructValue": {
"base": "User Defined Type",
"refs": {
"StructValue$attributes": "Struct or UDT"
}
},
"Value": {
"base": "Column value",
"refs": {
"Value$arrayValues": "Arbitrarily nested arrays",
"Value$bigIntValue": "Long value",
"Value$bitValue": "Bit value",
"Value$blobValue": "Blob value",
"Value$doubleValue": "Double value",
"Value$intValue": "Integer value",
"Value$isNull": "Is column null",
"Value$realValue": "Float value",
"Value$stringValue": "String value",
"Value$structValue": "Struct or UDT"
}
}
}
}
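The `ExecuteSqlRequest` shape documented above can be assembled as a plain payload before being sent to the service. A minimal sketch follows; the helper function name and its validation logic are illustrative, not part of the service definition — only the member names come from the shapes above.

```python
def build_execute_sql_request(secret_arn, cluster_arn, sql_statements,
                              database=None, schema=None):
    """Build an ExecuteSqlRequest payload using the documented member names.

    awsSecretStoreArn, dbClusterOrInstanceArn and sqlStatements are treated
    as required; database and schema are optional. Statements can be chained
    by using semicolons, per the sqlStatements description.
    """
    if not (secret_arn and cluster_arn and sql_statements):
        raise ValueError("secret ARN, cluster/instance ARN and SQL are required")
    request = {
        "awsSecretStoreArn": secret_arn,
        "dbClusterOrInstanceArn": cluster_arn,
        "sqlStatements": sql_statements,
    }
    if database is not None:
        request["database"] = database
    if schema is not None:
        request["schema"] = schema
    return request

# Example payload (ARNs are placeholders):
payload = build_execute_sql_request(
    "arn:aws:secretsmanager:us-east-1:123456789012:secret:mydb-creds",
    "arn:aws:rds:us-east-1:123456789012:cluster:my-cluster",
    "SELECT 1; SELECT 2",
    database="mydb",
)
```

The optional members are simply omitted from the payload when not supplied, matching how the refs above mark them as independent of the required fields.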
Case-hardening
Case-hardening or surface hardening is the process of hardening the surface of a metal object while allowing the metal deeper underneath to remain soft, thus forming a thin layer of harder metal (called the "case") at the surface. For iron or steel with low carbon content, which has poor to no hardenability of its own, the case-hardening process involves infusing additional carbon or nitrogen into the surface layer. Case-hardening is usually done after the part has been formed into its final shape, but can also be done to increase the hardening element content of bars to be used in a pattern welding or similar process. The term face hardening is also used to describe this technique when discussing modern armour.
Hardening is desirable for metal components that are subject to sliding contact with hard or abrasive materials, as the hardened metal is more resistant to surface wear. However, because hardened metal is usually more brittle than softer metal, through-hardening (that is, hardening the metal uniformly throughout the piece) is not always a suitable choice. In such circumstances, case-hardening can produce a component that will not fracture (because of the soft core that can absorb stresses without cracking), but also provides adequate wear resistance on the hardened surface.
History
Early iron smelting made use of bloomeries which produced two layers of metal: one with a very low carbon content which is worked into wrought iron, and one with a high carbon outer layer. Since the high carbon iron is hot short, meaning it fractures and crumbles when forged, it was not useful without more smelting. As a result, it went largely unused in the west until the popularization of the finery forge. The wrought iron, with nearly no carbon in it, was very malleable and ductile but not very hard.
Case-hardening involves packing the low-carbon iron within a substance high in carbon, then heating this pack to encourage carbon migration into the surface of the iron. This forms a thin surface layer of higher carbon steel, with the carbon content gradually decreasing deeper from the surface. The resulting product combines much of the toughness of a low-carbon steel core, with the hardness and wear resistance of the outer high-carbon steel.
The traditional method of applying the carbon to the surface of the iron involved packing the iron in a mixture of ground bone and charcoal or a combination of leather, hooves, salt and urine, all inside a well-sealed box. This carburizing package is then heated to a high temperature but still under the melting point of the iron and left at that temperature for a length of time. The longer the package is held at the high temperature, the deeper the carbon will diffuse into the surface. Different depths of hardening are desirable for different purposes: sharp tools need deep hardening to allow grinding and resharpening without exposing the soft core, while machine parts like gears might need only shallow hardening for increased wear resistance.
The resulting case-hardened part may show distinct surface discoloration, if the carbon material is mixed organic matter as described above. The steel darkens significantly, and shows a mottled pattern of black, blue, and purple caused by the various compounds formed from impurities in the bone and charcoal. This oxide surface works similarly to bluing, providing a degree of corrosion resistance, as well as an attractive finish. Case colouring refers to this pattern and is commonly encountered as a decorative finish on firearms.
Case-hardened steel combines extreme hardness and extreme toughness, something which is not readily matched by homogeneous alloys since hard steel alone tends to be brittle.
Chemistry
Carbon itself is solid at case-hardening temperatures and so is immobile. It is transported to the surface of the steel as gaseous carbon monoxide, generated by the breakdown of the carburising compound and the oxygen packed into the sealed box. The reaction occurs with pure carbon, but too slowly to be workable. Although oxygen is required for this process, it is re-circulated through the CO cycle and so the process can be carried out inside a sealed box. The sealing is necessary to stop the CO either leaking out or being oxidised to CO2 by excess outside air.
An easily decomposed carbonate "energiser" such as barium carbonate is added; it breaks down to BaO + CO2, which encourages the reaction
C (from the donor) + CO2 ⇌ 2 CO
increasing the overall abundance of CO and the activity of the carburising compound.
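Written out in full, the cycle inside the sealed box looks like this (the surface-deposition step, the reverse of the equilibrium above, is implied by the text but worth stating explicitly; C(γ-Fe) denotes carbon dissolved in the hot iron):

```latex
\begin{align*}
\mathrm{BaCO_3} &\xrightarrow{\;\Delta\;} \mathrm{BaO} + \mathrm{CO_2}
  &&\text{(energiser decomposes)}\\
\mathrm{C_{(donor)}} + \mathrm{CO_2} &\rightleftharpoons 2\,\mathrm{CO}
  &&\text{(CO generated at the carbon donor)}\\
2\,\mathrm{CO} &\longrightarrow \mathrm{C_{(\gamma\text{-}Fe)}} + \mathrm{CO_2}
  &&\text{(carbon deposited at the steel surface)}
\end{align*}
```

The CO2 released at the steel surface diffuses back to the donor and is regenerated as CO, which is why a fixed oxygen inventory inside the sealed box suffices.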
It is commonly believed that case-hardening was done with bone, but this is misleading. Although bone was used, the main carbon donor was hoof and horn. Bone contains some carbonates but is mainly calcium phosphate (as hydroxylapatite). This does not have the beneficial effect of encouraging CO production, and it can also introduce phosphorus as an impurity into the steel alloy.
Modern use
Both carbon and alloy steels are suitable for case-hardening; typically mild steels are used, with low carbon content, usually less than 0.3% (see plain-carbon steel for more information). These mild steels are not normally hardenable due to the low quantity of carbon, so the surface of the steel is chemically altered to increase the hardenability. Case-hardened steel is formed by diffusing carbon (carburization), nitrogen (nitriding) and/or boron (boriding) into the outer layer of the steel at high temperature, and then heat treating the surface layer to the desired hardness.
The term case-hardening is derived from the practicalities of the carburization process itself, which is essentially the same as the ancient process. The steel work piece is placed inside a case packed tight with a carbon-based case-hardening compound. This is collectively known as a carburizing pack. The pack is put inside a hot furnace for a variable length of time. Time and temperature determine how deep into the surface the hardening extends. However, the depth of hardening is ultimately limited by the inability of carbon to diffuse deeply into solid steel, and a typical depth of surface hardening with this method is up to 1.5 mm. Other techniques are also used in modern carburizing, such as heating in a carbon-rich atmosphere. Small items may be case-hardened by repeated heating with a torch and quenching in a carbon-rich medium, such as the commercial products Kasenit / Casenite or "Cherry Red". Older formulations of these compounds contain potentially toxic cyanide compounds, while the more recent types such as Cherry Red do not.
Processes
Flame or induction hardening
Flame or induction hardening are processes in which the surface of the steel is heated very rapidly to high temperatures (by direct application of an oxy-gas flame, or by induction heating) then cooled rapidly, generally using water; this creates a "case" of martensite on the surface. A carbon content of 0.3–0.6 wt% C is needed for this type of hardening.
Typical uses are for the shackle of a lock, where the outer layer is hardened to be file resistant, and mechanical gears, where hard gear mesh surfaces are needed to maintain a long service life while toughness is required to maintain durability and resistance to catastrophic failure.
Flame hardening uses direct impingement of an oxy-gas flame onto a defined surface area. The result of the hardening process is controlled by four factors:
Design of the flame head
Duration of heating
Target temperature to be reached
Composition of the metal being treated
Carburizing
Carburizing is a process used to case-harden steel with a carbon content between 0.1 and 0.3 wt% C. In this process steel is introduced to a carbon rich environment at elevated temperatures for a certain amount of time, and then quenched so that the carbon is locked in the structure; one of the simpler procedures is repeatedly to heat a part with an acetylene torch set with a fuel-rich flame and quench it in a carbon-rich fluid such as oil.
Carburization is a diffusion-controlled process, so the longer the steel is held in the carbon-rich environment the greater the carbon penetration will be and the higher the carbon content. The carburized section will have a carbon content high enough that it can be hardened again through flame or induction hardening.
It is possible to carburize only a portion of a part, either by protecting the rest by a process such as copper plating, or by applying a carburizing medium to only a section of the part.
The carbon can come from a solid, liquid or gaseous source; if it comes from a solid source the process is called pack carburizing. Packing low carbon steel parts with a carbonaceous material and heating for some time diffuses carbon into the outer layers. A heating period of a few hours might form a high-carbon layer about one millimeter thick.
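As a rough numeric cross-check of the "few hours, about one millimeter" figure, one can combine an Arrhenius diffusivity with the sqrt(D·t) characteristic depth. The D0 and Q values below are handbook-style assumptions for carbon diffusing in austenite, chosen for illustration rather than quoted from this text:

```python
import math

R = 8.314     # gas constant, J/(mol*K)
D0 = 2.0e-5   # pre-exponential factor, m^2/s (assumed)
Q = 142e3     # activation energy, J/mol (assumed)

def diffusivity(T_kelvin):
    """Arrhenius estimate of carbon diffusivity in austenite (sketch)."""
    return D0 * math.exp(-Q / (R * T_kelvin))

def characteristic_depth_m(T_kelvin, seconds):
    """Characteristic diffusion depth, sqrt(D * t)."""
    return math.sqrt(diffusivity(T_kelvin) * seconds)

T = 925 + 273.15   # a typical pack-carburizing temperature, K (assumed)
t = 4 * 3600       # a four-hour soak, s
depth_mm = 1000 * characteristic_depth_m(T, t)
# sqrt(D*t) comes out at a few tenths of a millimetre; the practically
# hardened layer is usually taken as a small multiple of this, which is
# consistent with the order-of-a-millimetre figure above.
print(f"{depth_mm:.2f} mm")
```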
Liquid carburizing involves placing parts in a bath of a molten carbon-containing material, often a metal cyanide; gas carburizing involves placing the parts in a furnace maintained with a methane-rich interior.
Nitriding
Nitriding heats the steel part in an atmosphere of ammonia gas and dissociated ammonia. The time the part spends in this environment dictates the depth of the case. The hardness is achieved by the formation of nitrides. Nitride-forming elements must be present for this method to work; these elements include chromium, molybdenum, and aluminum. The advantage of this process is that it causes little distortion, so the part can be case-hardened after being quenched, tempered and machined.
No quenching is done after nitriding.
Cyaniding
Cyaniding is a case-hardening process that is fast and efficient; it is mainly used on low-carbon steels. The part is heated to 871–954 °C (1600–1750 °F) in a bath of sodium cyanide and then is quenched and rinsed, in water or oil, to remove any residual cyanide.
2NaCN + O2 → 2NaCNO
2NaCNO + O2 → Na2CO3 + CO + N2
2CO → CO2 + C
This process produces a thin, hard shell (between 0.25 and 0.75 mm, 0.01 and 0.03 inches) that is harder than the one produced by carburizing, and can be completed in 20 to 30 minutes compared to several hours so the parts have less opportunity to become distorted. It is typically used on small parts such as bolts, nuts, screws and small gears. The major drawback of cyaniding is that cyanide salts are poisonous.
Carbonitriding
Carbonitriding is similar to cyaniding except a gaseous atmosphere of ammonia and hydrocarbons is used instead of sodium cyanide. The temperature to which the part is heated depends on whether or not it is to be quenched afterwards.
Ferritic nitrocarburizing
Ferritic nitrocarburizing diffuses mostly nitrogen and some carbon into the case of a workpiece below the critical temperature. Below the critical temperature the workpiece's microstructure does not convert to an austenitic phase, but stays in the ferritic phase, which is why it is called ferritic nitrocarburization.
Applications
Parts that are subject to high pressures and sharp impacts are still commonly case-hardened. Examples include firing pins and rifle bolt faces, or engine camshafts. In these cases, the surfaces requiring the hardness may be hardened selectively, leaving the bulk of the part in its original tough state.
Firearms were a common item case-hardened in the past, as they required precision machining best done on low carbon alloys, yet needed the hardness and wear resistance of a higher carbon alloy. Many modern replicas of older firearms, particularly single action revolvers, are still made with case-hardened frames, or with case coloring, which simulates the mottled pattern left by traditional charcoal and bone case-hardening.
Another common application of case-hardening is on screws, particularly self-drilling screws. In order for the screws to be able to drill, cut and tap into other materials like steel, the drill point and the forming threads must be harder than the material(s) that it is drilling into. However, if the whole screw is uniformly hard, it will become very brittle and it will break easily. This is overcome by ensuring that only the surface is hardened, and the core remains relatively softer and thus less brittle. For screws and fasteners, case-hardening is achieved by a simple heat treatment consisting of heating and then quenching.
For theft prevention, lock shackles and chains are often case-hardened to resist cutting, whilst remaining less brittle inside to resist impact. As case-hardened components are difficult to machine, they are generally shaped before hardening.
See also
Differential hardening
Diffusion hardening
Quench polish quench
Shot peening
Surface engineering
Von Stahel und Eysen
References
External links
Case Hardening
Surface Hardening of Steels
Case Hardening Steel and Metal
Category:Metal heat treatments
[link] Free Azure Cloud space and setting up a VM w/ Powershell
Posted 18 May 2013 - 10:57 AM
An interesting pair of articles: the first is about how to set up, get into, and move around in free Azure space; the second covers setting up VMs with PowerShell. If you haven't played with Azure, this isn't a bad place to start!
The most exciting Azure feature to me and the focus of this post is the Windows Azure Virtual Machines (VMs) service.
This is something you can use today, this afternoon.
With Windows Azure and the addition of the Virtual Machines offering, there are tools, features and functions that enable you as an IT Pro, to have a readily-available lab without shelling out a lot of (or any) money for enterprise-class software and hardware. Of course there are other ways this functionality can be used but this post covers the idea of a simple lab.
Some of the great features I've discovered in working with Azure VMs:
Quickly create VMs from 'canned' OS images
Upload your own VM images
Customize a VM image and deploy it like a template
Attach additional virtual disks for data
Azure has PowerShell providers
There are a lot more.
1. Field of the Invention
The present invention relates to a power supply unit for a portable game machine, and more particularly, to a universal power supply unit for the game machine model of Nintendo Game Boy Advance.
2. Description of the Prior Art
The Nintendo Game Boy machine is portable due to its small volume, so Nintendo video games are widely popular. The current model of the game machine 1 (Game Boy Advance), as shown in FIGS. 1 and 2, has a battery chamber 11 at the bottom thereof in which two dry batteries 12 are received for supplying 3V DC voltage to the game machine. Besides, a cover 13 is engaged in the battery chamber 11.
The game machine 1 consumes considerable power, so that the two dry batteries 12 last only a few hours. After the batteries have run down, they have to be replaced, thereby causing a considerable cost. In order to resolve this problem, a power transforming unit 2, as shown in FIG. 3, is installed in the battery chamber 11 in addition to the design of power supply by batteries. The power transformer 2 includes a transformer connector 21 which transforms the mains power to 3V DC voltage, which passes through a cover 22 engaged in the battery chamber 11, utilizing the electric contact between the positive and negative output poles 221 at the side of the cover and the positive and negative poles 111 inside the battery chamber 11 for supplying the required power to the portable game machine 1.
However, the cover 22 and the transformer connector 21 of the power transformer 2 have a special specification. In other words, the DC jack 222 on the cover 22 is only suitable for the connecting terminal 211 of the transformer connector 21 manufactured by Nintendo. Besides, the output of the transformer connector 21 is preset at 3V DC voltage. Thus, the inside of the cover 22 only has a filtering inductance L1 and a diode D1. Therefore, the cover 22 doesn't fit other power supply units, for example, the DC 12V power supplied from the cigarette lighter in automobiles. Moreover, there are no mains sockets in automobiles for the transformer connector 21. Thus, much inconvenience in use still exists.
It is a primary object of the present invention to provide a power supply unit for a portable game machine which is engaged in a cover of a battery chamber and contains a constant voltage loop. Besides, the power supply unit can be used in combination with commercially available transformers so that the convenience in use is much improved.
It is another object of the present invention to provide a power supply unit for a portable game machine in which the circuit devices have advantages of convenient assembly, stable structure and excellent electric contact.
953 F.Supp. 1558 (1997)
Milton BIVINS, Plaintiff,
v.
BRUNO'S, INC., et al., Defendants.
No. 5:95-cv-400-4 (WDO).
United States District Court, M.D. Georgia, Macon Division.
February 10, 1997.
John A. Draughon, Russell M. Boston, Macon, GA, for Milton Bivins, Jr.
Stanford Glenn Wilson, Victor J. Maya, Atlanta, GA, for Bruno's, Incorporated, Piggly Wiggly Southern, Inc., Bruno's Food Stores, Inc.
ORDER
OWENS, District Judge.
Before the court is defendants' motion for summary judgment on plaintiff's claims *1559 brought under the Americans with Disabilities Act ("ADA"). After carefully considering the arguments of counsel, the relevant case law and the record as a whole, the court issues the following order.
FACTS
Milton Bivins began working for a Piggly Wiggly grocery store[1] in Macon, Georgia as a part time bag clerk on January 7, 1974. After a number of raises, he was promoted to the position of frozen foods clerk in 1989, where he remained until his termination in 1993.
The frozen foods clerk position[2] consists of unloading grocery boxes onto pallets, pushing or pulling the pallets to the appropriate aisle in the store, and unloading the individual items from the boxes onto the shelves. The pallets are on wheels, and often weigh up to 1,800 pounds each. Most of the items clerks have to lift and carry weigh under ten pounds. However, clerks frequently have to carry items weighing 10-25 pounds, and occasionally items weighing 25-50 pounds. The heaviest items in the frozen foods section are large bags of ice, which weigh 42-48 pounds but can be broken up into individual six-eight pound bags. Sometimes the clerks first "break down" a pallet into smaller components to make moving the items through the store easier. Finally, clerks spend approximately 7% of their time reaching above their head (Colley, On-Site Analysis conducted June 19, 1996, at 2-4).
Bivins began having problems with his neck after a case of Coffeemate fell onto his head on October 6, 1986. Between 1986 and 1988 Bivins took three separate leaves of absence due to problems with his neck, for a total of approximately eight months' absence. In August 1988 Bivins underwent surgery to fuse joints in his neck and back. A few months after the surgery, Bivins had recovered enough to return to his job as a frozen foods clerk, where he worked without problem for about eighteen months. Then in September of 1991, Bivins reinjured his neck while attempting to pull a pallet loaded with groceries. Dr. William B. Dasher examined Bivins and recommended a second fusion surgery which was performed on December 18, 1992. After the surgery, Bivins had to wear a brace which severely restricted his ability to move his neck. Thus, Bivins began his fourth leave of absence on December 18, and Dr. Dasher did not give him a release to return to work in any capacity until the following September.
On September 21, 1993, Dr. Dasher released Bivins to return to work in a light duty capacity. Dasher's release stated that because of the restricted movement of his neck and head, Bivins could not do repetitive overhead lifting, and could not do any heavy lifting (defined as anything over ten pounds), pushing or pulling of heavy objects, or driving (Dasher, Exh. # 8). The day after he obtained the release, Bivins went with Ms. Gayle Colley, the Certified Rehabilitation Supplier assigned to him under the Georgia workers compensation statute, to speak with the store manager, Danny Mathews, about returning to work. Mathews told Bivins he could not allow him to return without the approval of the Bruno's Birmingham headquarters. Ms. Colley later called Mathews back and was told that Bill Webster, a human resources manager in the Birmingham office, had decided to not to allow Bivins to return to work because of the ten-pound weight restriction (Mathews, at 61-63, 66). *1560 Ms. Colley was told that the company was afraid Bivins would reinjure himself and would prefer to wait until Bivins had fully recovered before letting him return to work (Colley, at 237).
Ms. Colley continued to press Mathews to let Bivins begin working again as soon as possible, and discussed with him possible ways to modify Bivins' job duties to allow him to perform as much of the job as possible with his restrictions. According to her studies, with the ten-pound restriction in place at that time, Bivins would not have been able to unload stock from trucks and pallets or bring stock from the back of the store to the aisles. She also felt that the heavy 42-48 pound bags of ice would be a problem for Bivins unless they were first broken down into the individual 6-8 pound bags. Finally, Bivins would not have been able to safely reach the top shelf in the frozen foods department, which accounted for approximately 10% of the area to be stocked. Ms. Colley later testified that in her judgment, Bivins, with accommodations in the form of help from co-workers in moving all heavy items and breaking down large components into manageable individual packages, would have been able to perform approximately 50% of his job duties (Colley, at 110).
Piggly Wiggly had a routine policy of terminating any employee who took a leave of absence over one year. Pursuant to this policy, on December 20, 1993, Bivins received a termination letter permanently dismissing him from his job for failing to report to work within a year. The store manager Mathews encouraged Bivins to reapply, but his application for a stock clerk position was denied on February 20, 1994, due to a lack of vacancies in that position.
Plaintiff filed suit alleging that his termination and Bruno's failure to rehire him violated the ADA.
DISCUSSION
I. Summary Judgment
Federal Rule of Civil Procedure 56(c) provides that summary judgment may be entered in favor of the movant where the entire record shows that there is no genuine issue as to any material fact and that the moving party is entitled to judgment as a matter of law. The court examines the substantive law involved to determine which facts are material, and all reasonable doubts regarding facts are resolved in favor of the nonmoving party. Irby v. Bittick, 44 F.3d 949, 953 (11th Cir. 1995).
The movant is entitled to judgment as a matter of law when the "nonmoving party has failed to make a sufficient showing on an essential element of her case with respect to which she has the burden of proof." Celotex Corp. v. Catrett, 477 U.S. 317, 323, 106 S.Ct. 2548, 2552, 91 L.Ed.2d 265 (1986). Once a party has properly supported its motion for summary judgment, the burden shifts to the nonmovant to create, through significantly probative evidence, genuine issues of material fact necessitating a trial. Id. at 324, 106 S.Ct. at 2553.
II. ADA Context
The ADA provides that a covered employer shall not discriminate against a qualified individual with a disability in relation to employment decisions. 42 U.S.C. § 12112(a). Although the Eleventh Circuit has not explicitly held so, it is widely agreed that the burden-shifting analysis laid out in McDonnell Douglas Corporation v. Green, 411 U.S. 792, 93 S.Ct. 1817, 36 L.Ed.2d 668 (1973), and applied to Title VII cases is similarly followed in deciding claims brought under the ADA. See Moses v. American Nonwovens, Inc., 97 F.3d 446, 447 (11th Cir.1996) (implicitly using burden-shifting framework); McNemar v. The Disney Store, Inc., 91 F.3d 610, 619 (3d Cir.1996); Rizzo v. Children's World Learning Centers, Inc., 84 F.3d 758, 761 n. 2 (5th Cir.1996); see also Johnson v. Boardman Petroleum, Inc., 923 F.Supp. 1563 (S.D.Ga.1996); Lewis v. Zilog, Inc., 908 F.Supp. 931 (N.D.Ga.1995). Thus, once plaintiff has established a prima facie case, the burden shifts to the defendant to give legitimate, non-discriminatory reasons for the decision. Once that is done, unless the plaintiff creates a genuine issue of material fact as to whether the proffered reasons are pretextual, the defendant is entitled to summary judgment. Throughout the litigation, *1561 the plaintiff bears the ultimate burden of proving that he was the victim of discrimination. St. Mary's Honor Center v. Hicks, 509 U.S. 502, 509-511, 113 S.Ct. 2742, 2748-49, 125 L.Ed.2d 407 (1993).
In order to establish a prima facie case under the ADA, a plaintiff must show that (1) he has a disability; (2) he is a qualified individual; and (3) he was subjected to unlawful discrimination as a result of his disability. Gordon v. E.L. Hamm & Assoc., Inc., 100 F.3d 907, 910 (11th Cir.1996). A "qualified individual with a disability" is defined as "an individual with a disability who, with or without reasonable accommodation, can perform the essential functions of the employment position that such individual holds or desires." Id. at 911 (quoting 42 U.S.C. § 12111(8)). Consideration is given to the employer's judgment as to what functions of a job are essential. 42 U.S.C. § 12111(8).
"Discriminate" is defined in the ADA context as "not making reasonable accommodations ... to an otherwise qualified individual with a disability ... unless [the employer] can demonstrate that the accommodation would impose an undue hardship on the operation of the business." Harris v. H & W Contracting Company, 102 F.3d 516 (11th Cir.1996) (quoting 42 U.S.C.A. § 12112(b)(5)(A) (West 1995)). "Undue hardship" is defined as an action requiring significant difficulty or expense, when considered in light of factors including the nature and cost of the accommodation to the employer and the affect of the accommodation on the employer's operations. 42 U.S.C. § 12111(10).[3] The employee at all times retains the burden of persuasion as to whether reasonable accommodations were available at the time the complained of action occurred. Moses, 97 F.3d at 447.
III. Claims Under the ADA
In order to have a discrimination claim, there must first be an adverse employment decision of some kind.[4] At first blush, it may appear that in this case there are only two actions by Bruno's that might qualify as adverse employment decisions: the termination of and failure to rehire Bivins. However, there is a third candidate that better highlights the real problem in this case: the refusal to allow Bivins to return to work once he was given the release to perform in a light duty capacity. Although plaintiff has not made this precise argument, the court finds that the refusal to let Bivins work in September 1993 was one adverse employment decision, and the termination in December 1993, although the inevitable result of that refusal, constitutes a separate and distinct adverse employment decision. Of these three employment decisions, the refusal to allow Bivins to work with the light duty restriction is the one most obviously associated with Bivins' impairment, and therefore the one most likely to establish a winning claim under the ADA.
However, this does not complete Bivins' prima facie case. Bivins must show not only that he was the victim of an adverse employment decision, but that the decision deprived him of a job for which he was qualified with or without an accommodation. This he cannot do. The impairment under which he suffered was clearly highly detrimental to his ability to perform his job. Indeed, *1562 Gayle Colley, the person charged with helping him get back to work, admitted that even with the accommodation of help from other co-workers to lift any items over ten pounds and stock the overhead shelves, Bivins would only have been able to perform about 50% of the duties of stock clerk. Given these facts, the court is persuaded that Bivins was not able to perform the essential functions of the position at the time of his initial release in September 1993, and therefore did not fall within the definition of "qualified individual."
Moreover, all of this hinges on the erroneous assumption that the accommodation of having other employees do the "heavy lifting" for Bivins was "reasonable". The employer's duty to provide reasonable accommodation must end long before the reasonable accommodation required becomes another employee to do the disabled employee's job. Milton v. Scrivner, Inc., 53 F.3d 1118, 1125 (10th Cir.1995) (holding that grocery store employer was not required to change employee's duties in such a way that other employees would have to work harder or longer hours in order to reasonably accommodate employee's disability); Benson v. Northwest Airlines, Inc., 62 F.3d 1108, 1112-13 (8th Cir.1995) (holding that employer need not reallocate essential functions of job to make reasonable accommodation). The ADA was intended to prevent discrimination against people who could perform the job with a not unduly burdensome accommodation of their disability. See generally 42 U.S.C. § 12101(b). It was not intended to force the employer to subsidize disabled people by keeping them in their paid positions while still having to hire someone else to actually do their jobs.
While Bivins could surely have performed some aspects of the stock clerk job, it is clear that a good many others he could not perform at that time. Plaintiff's argument that he could have handled the job with a step stool and help from other stock clerks for the heavy items (defined as anything over ten pounds) only begs the question. A nurse can do the job of a neurosurgeon if given a little "help" with the more technical aspects of diagnosis and surgery. The analogy may tend to the extreme, but the principle at stake is identical: anyone can do anything with "help" from another person who does all the difficult parts of the job for you. The inescapable fact is that a stock clerk position consists almost entirely of lifting, reaching, carrying, pushing and pulling items, some of which are heavy and some of which are not. Bivins simply could not do a lot of this type of activity at the time.[5]
It would be perverse to force employers to keep people in jobs that include a large number of duties the employees can no longer perform without help. The court finds that Bivins was not qualified for the job of stock clerk or frozen foods clerk in September 1993, and Bruno's was under no duty to allow him back to work at that time, as there was no "reasonable" accommodation that would have allowed Bivins to perform the duties of the job. Accordingly, plaintiff has failed to state a prima facie case with regard to the decision not to allow him to return to work.
The other two adverse employment decisions fail not only because Bivins was not qualified for the job, and therefore cannot make out a prima facie case, but also for other reasons precluding relief under the ADA. His termination in December 1993 for not reporting to work within a year was a non-discriminatory exercise of existing administrative company policy. Plaintiff has not presented any evidence showing that Bivins was singled out for termination on *1563 account of his impairment. Likewise, the subsequent failure to rehire Bivins was on account of a lack of vacancies in the applied for position of stock clerk. Plaintiff has not presented any evidence supporting the allegation that these reasons were pretextual.
Therefore, the court finds that even if plaintiff had satisfied the burden of presenting a prima facie case, defendant has provided legitimate, non-discriminatory reasons for the adverse employment decisions related to these claims sufficient to entitle it to summary judgment as to both. Pritchard v. Southern Company Services, 92 F.3d 1130, 1134-35 (11th Cir.1996) (holding that in order to survive motion for summary judgment, plaintiff must present sufficient evidence to allow a reasonable factfinder to conclude that the proffered legitimate, non-discriminatory reason for discharge is not believable).[6]
CONCLUSION
Having carefully considered the matter, defendant's motion for summary judgment is GRANTED as to all claims, and the case is DISMISSED.
NOTES
[1] Since 1987, the Piggly Wiggly store has been owned and operated by the parent company Bruno's, Inc. ("Bruno's"), which has a headquarters office in Birmingham, Alabama.
[2] While Bruno's did not use written job descriptions, testimony shows that the positions of frozen foods clerk, stock clerk and grocery clerk all consist of the same duties and require the same qualifications (Mathews, at 94). Much of the evidence attempting to describe the duties and qualifications of the stock clerk position was in the form of job analysis reports prepared by Ms. Gayle Colley, plaintiff's Rehabilitation Supervisor under the Georgia workers compensation statute. The court finds Ms. Colley's reports to be thorough and precise and has no reason to doubt their accuracy. Ms. Colley's reports were prepared largely from her own observations and descriptions from actual clerks, and so are in substantial agreement with the descriptions given by the employees. Unless otherwise noted, the court has drawn the facts regarding the requirements of the clerk positions from Ms. Colley's reports.
[3] The court notes that the definitions cited are impossibly circular and provide little helpful guidance to employers trying to decide what the ADA requires in the way of accommodation.
[4] Of course, the threshold issue, and one the parties spent a good deal of their briefs arguing, is whether Bivins' impairment sufficiently limited one or more of his major life activities to constitute a disability under the ADA. Defendant argues that plaintiff's impairment does not constitute a disability, and points to two cases from other jurisdictions holding that 25-pound lifting restrictions did not constitute a disability. See Williams v. Channel Master Satellite Systems, Inc., 101 F.3d 346, 349 (4th Cir.1996); Aucutt v. Six Flags Over Mid-America, 85 F.3d 1311, 1319 (8th Cir.1996).
The impairment here is more restrictive than the impairments referenced in those cases, and was accompanied by other restrictions on Bivins' ability to move and reach. As such, the court is unwilling at this point to hold that as a matter of law a restriction against lifting anything over ten pounds cannot constitute a disability. Moreover, in light of the rulings set forth below, the court need not decide the issue of whether Bivins suffered from a disability in disposing of this motion.
[5] As plaintiff spends a good deal of time in his briefs pointing out, it is undoubtedly true that most if not all of the items moved by stock clerks can be broken down into smaller components moveable by even the smallest child or the weakest patient; the individual packages of any product rarely weigh enough to be measured in pounds at all. But the job of stock clerk requires some degree of efficiency; the employee simply must be able to move a box or so of product at a time. Just having boxes put in front of him and taking out one package at a time and putting it on the shelf may qualify someone to be a "shelver", but does not necessarily qualify him to be a stock clerk. No amount of evidence showing that individual packages of corn weigh 8-9 ounces and individual packets of lima beans weigh 12-15 ounces can change that fact (Mason, at 8-16).
[6] The evidence of a subsequent offer by Bruno's to Bivins for a stock clerk job in no way shows that Bruno's discriminated against him when it failed to rehire him in February 1994. Furthermore, it is important to note that the important considerations for the court to consider are the company's actions at the time of the alleged adverse employment decisions. It is irrelevant that the restrictions on plaintiff's movements were subsequently lessened to allow plaintiff to lift up to 50 pounds. That was unforeseeable at the time of the decisions, and there was no indication that Dr. Dasher considered the impairment to be temporary at the time.
|
Q:
Why am I unable to read strings from a txt file in NetLogo?
I am trying to read the following line from a txt file in NetLogo:
job1 1 1 15 25 90 3 1111 1100 0010 0110 1011 0 0 0 0 0 0 0 0 0 0 0
However, I always receive: Expected a constant (line 1, character 5)
In this case I have two problems:
A) How can I make NetLogo read the string "job1"?
B) Considering that the 10th number is binary, how can I make it be read as a string instead of a number?
I appreciate your answer.
Gorillaz Fan
A:
I'm not quite sure if I really got what you want to achieve. Do you want to read all elements of the txt file as strings, separated by the white spaces?
If yes, you could try to read the file character by character to check the length of the strings between the white spaces, then go through the file again and extract those strings. Here is example code of how it could be achieved. Perhaps there are more elegant versions, but this one works perfectly for me:
globals
[
  char-list
  char-counter
  string-list
  current-char
]

to read
  set char-counter 0
  set char-list []
  set string-list []
  set current-char 0
  ;; Open the file and go through it, char by char, to check where the blank spaces are
  file-open "in.txt"
  while [file-at-end? = false]
  [
    ;; Save current char
    set current-char file-read-characters 1
    ;; Check if the char is a blank space...
    ifelse (current-char != " ")
    ;; If not, increase the length of the current string
    [
      set char-counter char-counter + 1
    ]
    ;; If yes, save the length of the previous string, and reset the char-counter
    [
      set char-list lput char-counter char-list
      set char-counter 0
    ]
  ]
  ;; Record the final token too, in case the line does not end with a space
  if char-counter > 0
  [
    set char-list lput char-counter char-list
  ]
  file-close
  ;; Now read the file again and extract only the strings which are not blank spaces
  file-open "in.txt"
  let i 0
  while [i < length char-list]
  [
    ;; Read the next number of characters, as defined by the previously created char-list
    set string-list lput file-read-characters item i char-list string-list
    ;; Skip 1 space (if any characters remain):
    if not file-at-end?
    [
      set current-char file-read-characters 1
    ]
    ;; Increase i
    set i i + 1
  ]
  file-close
end
|
/*
* << Haru Free PDF Library >> -- hpdf_fontdef.c
*
* URL: http://libharu.org
*
* Copyright (c) 1999-2006 Takeshi Kanno <[email protected]>
* Copyright (c) 2007-2009 Antony Dovgal <[email protected]>
*
* Permission to use, copy, modify, distribute and sell this software
* and its documentation for any purpose is hereby granted without fee,
* provided that the above copyright notice appear in all copies and
* that both that copyright notice and this permission notice appear
* in supporting documentation.
* It is provided "as is" without express or implied warranty.
*
*/
#include "hpdf_conf.h"
#include "hpdf_utils.h"
#include "hpdf_fontdef.h"
void
HPDF_FontDef_Cleanup (HPDF_FontDef fontdef)
{
    if (!fontdef)
        return;

    HPDF_PTRACE ((" HPDF_FontDef_Cleanup\n"));

    if (fontdef->clean_fn)
        fontdef->clean_fn (fontdef);

    fontdef->descriptor = NULL;
}

void
HPDF_FontDef_Free (HPDF_FontDef fontdef)
{
    if (!fontdef)
        return;

    HPDF_PTRACE ((" HPDF_FontDef_Free\n"));

    if (fontdef->free_fn)
        fontdef->free_fn (fontdef);

    HPDF_FreeMem (fontdef->mmgr, fontdef);
}

HPDF_BOOL
HPDF_FontDef_Validate (HPDF_FontDef fontdef)
{
    HPDF_PTRACE ((" HPDF_FontDef_Validate\n"));

    if (!fontdef || fontdef->sig_bytes != HPDF_FONTDEF_SIG_BYTES)
        return HPDF_FALSE;
    else
        return HPDF_TRUE;
}
|
Wausau - Police find $10,000 of heroin while searching a Wausau apartment. Money was also found during the search at 501 McIndoe Street, Apartment 5. One person was arrested. Police have not said who it was, but say the suspect has past convictions for selling drugs. Police think the heroin was enough for at least 170 street sales.
Anticipated charges include maintaining a drug trafficking place and possession of heroin with intent to deliver. The investigation was a joint effort of the Marathon County Sheriff's Department and Wausau police. |
Q:
I need to get the directory as a command-line argument using getopt in C
I am very new to getopt, and I need to get the directory name as an argument by using it. It does not work.
The program needs to figure out which argv element is the directory so that I can pass the path to a function: I pass the last command-line argument as the path if a dirname argument is present, or the current working directory otherwise.
Please help me with that by providing the correct code fragment:
dt [-h] [-I n] [-L -d -g -i -p -s -t -u | -l] [dirname]
I have tried using optopt, but it did not work.
int c;
while ((c = getopt(argc, argv, "hI:Ldgipstul")) != -1) {
    switch (c) {
    case 'h':
        printf("This is the help message, please read README file for further information");
        exit(1);
        printf("In the help page\n");
        break;
    case 'I':
        printf("Setting indentation\n");
        indentation = atoi(optarg);
        printf("Indentation is: %d\n", indentation);
        break;
    case 'L':
        printf("Following symbolic links\n");
        break;
    case 'd':
        //printf("Time of last modification\n");
        break;
    case 'g':
        //printf("Print group id\n");
        groupid = groupId(path);
        printf("Group Id is: %d\n", groupid);
        break;
    case 'i':
        printf("Print number of links in inode table\n");
        int numberlink = numberLinks(path);
        printf("number of links: %d\n", numberlink);
        break;
    case 'p':
        printf("Permissions\n");
        break;
    case 's':
        printf("Sizes\n");
        break;
    case 't':
        printf("Information of file\n");
        break;
    case 'u':
        //printf("Print user id\n");
        userid = userId(path);
        printf("User Id is: %d\n", userid);
        break;
    case 'l':
        printf("Optional one\n");
        break;
    default:
        perror("Not a valid command-line argument");
        break;
    }
}
A:
When the getopt() loop finishes, the variable optind contains the index of the first non-option argument. That will be the dirname argument. So you can write:
char *directory;

if (optind < argc) {
    directory = argv[optind];
} else {
    directory = ".";  // default to the current directory
}
|
/*=============================
 * Sequential storage structure for a stack (sequential stack)
 =============================*/
#ifndef SQSTACK_H
#define SQSTACK_H
#include <stdio.h>
#include <stdlib.h> // provides prototypes for malloc, realloc, free, exit
#include "Status.h" //**▲01 Introduction**//

/* Macro definitions */
#define STACK_INIT_SIZE 100 // initial allocation (in elements) for the stack's storage
#define STACKINCREMENT 10 // allocation increment when the stack needs to grow

/* Element type of the sequential stack */
typedef int SElemType;

// Sequential-stack structure
typedef struct {
    SElemType* base;   // pointer to the bottom of the stack
    SElemType* top;    // pointer to the top of the stack
    int stacksize;     // currently allocated capacity, in elements
} SqStack;

/*
 * Initialization
 *
 * Constructs an empty stack. Returns OK on success, ERROR otherwise.
 */
Status InitStack(SqStack* S);

/*
 * Emptiness test
 *
 * Checks whether the sequential stack contains any valid data.
 *
 * Return value:
 * TRUE : the stack is empty
 * FALSE: the stack is not empty
 */
Status StackEmpty(SqStack S);

/*
 * Push
 *
 * Pushes element e onto the top of the stack.
 */
Status Push(SqStack* S, SElemType e);

/*
 * Pop
 *
 * Pops the top element off the stack and stores it in e.
 */
Status Pop(SqStack* S, SElemType* e);
#endif
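For reference, the declared operations can be implemented along these lines. This is a sketch, not the accompanying implementation file: it assumes the usual textbook values of Status, OK, ERROR, TRUE, and FALSE from "Status.h", and redefines them here so the fragment is self-contained:

```c
#include <stdio.h>
#include <stdlib.h>

/* Assumed definitions, normally supplied by "Status.h". */
typedef int Status;
#define OK    1
#define ERROR 0
#define TRUE  1
#define FALSE 0

#define STACK_INIT_SIZE 100
#define STACKINCREMENT  10

typedef int SElemType;

typedef struct {
    SElemType* base;   // pointer to the bottom of the stack
    SElemType* top;    // pointer to the top of the stack
    int stacksize;     // currently allocated capacity, in elements
} SqStack;

Status InitStack(SqStack* S) {
    S->base = (SElemType*)malloc(STACK_INIT_SIZE * sizeof(SElemType));
    if (S->base == NULL) return ERROR;
    S->top = S->base;                  // empty stack: top == base
    S->stacksize = STACK_INIT_SIZE;
    return OK;
}

Status StackEmpty(SqStack S) {
    return S.top == S.base ? TRUE : FALSE;
}

Status Push(SqStack* S, SElemType e) {
    if (S->top - S->base >= S->stacksize) {   // full: grow the buffer
        SElemType* p = (SElemType*)realloc(S->base,
            (S->stacksize + STACKINCREMENT) * sizeof(SElemType));
        if (p == NULL) return ERROR;
        S->top = p + (S->top - S->base);      // re-anchor top after realloc
        S->base = p;
        S->stacksize += STACKINCREMENT;
    }
    *S->top++ = e;
    return OK;
}

Status Pop(SqStack* S, SElemType* e) {
    if (S->top == S->base) return ERROR;      // empty stack
    *e = *--S->top;
    return OK;
}
```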
|
# Snapshot report for `test/e2e/components/lowercase.js`
The actual snapshot is saved in `lowercase.js.snap`.
Generated by [AVA](https://ava.li).
## basic style
> Snapshot 1
{
style: {
alignContent: 'normal',
alignItems: 'normal',
alignSelf: 'normal',
alignmentBaseline: 'auto',
all: '',
animation: 'none 0s ease 0s 1 normal none running',
animationDelay: '0s',
animationDirection: 'normal',
animationDuration: '0s',
animationFillMode: 'none',
animationIterationCount: '1',
animationName: 'none',
animationPlayState: 'running',
animationTimingFunction: 'ease',
backfaceVisibility: 'visible',
background: 'rgba(0, 0, 0, 0) none repeat scroll 0% 0% / auto padding-box border-box',
backgroundAttachment: 'scroll',
backgroundBlendMode: 'normal',
backgroundClip: 'border-box',
backgroundColor: 'rgba(0, 0, 0, 0)',
backgroundImage: 'none',
backgroundOrigin: 'padding-box',
backgroundPosition: '0% 0%',
backgroundPositionX: '0%',
backgroundPositionY: '0%',
backgroundRepeat: 'repeat',
backgroundRepeatX: '',
backgroundRepeatY: '',
backgroundSize: 'auto',
baselineShift: '0px',
blockSize: 'auto',
border: '0px none rgb(36, 41, 46)',
borderBottom: '0px none rgb(36, 41, 46)',
borderBottomColor: 'rgb(36, 41, 46)',
borderBottomLeftRadius: '0px',
borderBottomRightRadius: '0px',
borderBottomStyle: 'none',
borderBottomWidth: '0px',
borderCollapse: 'separate',
borderColor: 'rgb(36, 41, 46)',
borderImage: 'none',
borderImageOutset: '0px',
borderImageRepeat: 'stretch',
borderImageSlice: '100%',
borderImageSource: 'none',
borderImageWidth: '1',
borderLeft: '0px none rgb(36, 41, 46)',
borderLeftColor: 'rgb(36, 41, 46)',
borderLeftStyle: 'none',
borderLeftWidth: '0px',
borderRadius: '0px',
borderRight: '0px none rgb(36, 41, 46)',
borderRightColor: 'rgb(36, 41, 46)',
borderRightStyle: 'none',
borderRightWidth: '0px',
borderSpacing: '0px 0px',
borderStyle: 'none',
borderTop: '0px none rgb(36, 41, 46)',
borderTopColor: 'rgb(36, 41, 46)',
borderTopLeftRadius: '0px',
borderTopRightRadius: '0px',
borderTopStyle: 'none',
borderTopWidth: '0px',
borderWidth: '0px',
bottom: 'auto',
boxShadow: 'none',
boxSizing: 'border-box',
breakAfter: 'auto',
breakBefore: 'auto',
breakInside: 'auto',
bufferedRendering: 'auto',
captionSide: 'top',
caretColor: 'rgb(36, 41, 46)',
clear: 'none',
clip: 'auto',
clipPath: 'none',
clipRule: 'nonzero',
color: 'rgb(36, 41, 46)',
colorInterpolation: 'sRGB',
colorInterpolationFilters: 'linearRGB',
colorRendering: 'auto',
columnCount: 'auto',
columnFill: 'balance',
columnGap: 'normal',
columnRule: '0px none rgb(36, 41, 46)',
columnRuleColor: 'rgb(36, 41, 46)',
columnRuleStyle: 'none',
columnRuleWidth: '0px',
columnSpan: 'none',
columnWidth: 'auto',
columns: 'auto auto',
contain: 'none',
content: '',
counterIncrement: 'none',
counterReset: 'none',
cursor: 'auto',
cx: '0px',
cy: '0px',
d: 'none',
direction: 'ltr',
display: 'inline',
dominantBaseline: 'auto',
emptyCells: 'show',
fill: 'rgb(0, 0, 0)',
fillOpacity: '1',
fillRule: 'nonzero',
filter: 'none',
flex: '0 1 auto',
flexBasis: 'auto',
flexDirection: 'row',
flexFlow: 'row nowrap',
flexGrow: '0',
flexShrink: '1',
flexWrap: 'nowrap',
float: 'none',
floodColor: 'rgb(0, 0, 0)',
floodOpacity: '1',
font: 'normal normal normal normal 14px / 21px PingFangSC-Thin, sans-serif',
fontFamily: 'PingFangSC-Thin, sans-serif',
fontFeatureSettings: 'normal',
fontKerning: 'auto',
fontSize: '14px',
fontStretch: 'normal',
fontStyle: 'normal',
fontVariant: 'normal',
fontVariantCaps: 'normal',
fontVariantLigatures: 'normal',
fontVariantNumeric: 'normal',
fontWeight: 'normal',
grid: 'none / none / none / row / auto / auto / 0px / 0px',
gridArea: 'auto / auto / auto / auto',
gridAutoColumns: 'auto',
gridAutoFlow: 'row',
gridAutoRows: 'auto',
gridColumn: 'auto / auto',
gridColumnEnd: 'auto',
gridColumnGap: '0px',
gridColumnStart: 'auto',
gridGap: '0px 0px',
gridRow: 'auto / auto',
gridRowEnd: 'auto',
gridRowGap: '0px',
gridRowStart: 'auto',
gridTemplate: 'none / none / none',
gridTemplateAreas: 'none',
gridTemplateColumns: 'none',
gridTemplateRows: 'none',
height: 'auto',
hyphens: 'manual',
imageRendering: 'auto',
inlineSize: 'auto',
isolation: 'auto',
justifyContent: 'normal',
justifyItems: 'normal',
justifySelf: 'normal',
left: 'auto',
letterSpacing: 'normal',
lightingColor: 'rgb(255, 255, 255)',
lineHeight: '21px',
listStyle: 'disc outside none',
listStyleImage: 'none',
listStylePosition: 'outside',
listStyleType: 'disc',
margin: '0px',
marginBottom: '0px',
marginLeft: '0px',
marginRight: '0px',
marginTop: '0px',
marker: '',
markerEnd: 'none',
markerMid: 'none',
markerStart: 'none',
mask: 'none',
maskType: 'luminance',
maxBlockSize: 'none',
maxHeight: 'none',
maxInlineSize: 'none',
maxWidth: 'none',
maxZoom: '',
minBlockSize: '0px',
minHeight: '0px',
minInlineSize: '0px',
minWidth: '0px',
minZoom: '',
mixBlendMode: 'normal',
motion: 'none 0px auto 0deg',
objectFit: 'fill',
objectPosition: '50% 50%',
offset: 'none 0px auto 0deg',
offsetDistance: '0px',
offsetPath: 'none',
offsetRotate: 'auto 0deg',
offsetRotation: 'auto 0deg',
opacity: '1',
order: '0',
orientation: '',
orphans: '2',
outline: 'rgb(36, 41, 46) none 0px',
outlineColor: 'rgb(36, 41, 46)',
outlineOffset: '0px',
outlineStyle: 'none',
outlineWidth: '0px',
overflow: 'visible',
overflowAnchor: 'auto',
overflowWrap: 'break-word',
overflowX: 'visible',
overflowY: 'visible',
padding: '0px',
paddingBottom: '0px',
paddingLeft: '0px',
paddingRight: '0px',
paddingTop: '0px',
page: '',
pageBreakAfter: 'auto',
pageBreakBefore: 'auto',
pageBreakInside: 'auto',
paintOrder: 'fill stroke markers',
perspective: 'none',
perspectiveOrigin: '0px 0px',
placeContent: 'normal normal',
placeItems: 'normal normal',
placeSelf: 'normal normal',
pointerEvents: 'auto',
position: 'static',
quotes: '',
r: '0px',
resize: 'none',
right: 'auto',
rx: 'auto',
ry: 'auto',
shapeImageThreshold: '0',
shapeMargin: '0px',
shapeOutside: 'none',
shapeRendering: 'auto',
size: '',
speak: 'normal',
src: '',
stopColor: 'rgb(0, 0, 0)',
stopOpacity: '1',
stroke: 'none',
strokeDasharray: 'none',
strokeDashoffset: '0px',
strokeLinecap: 'butt',
strokeLinejoin: 'miter',
strokeMiterlimit: '4',
strokeOpacity: '1',
strokeWidth: '1px',
tabSize: '8',
tableLayout: 'auto',
textAlign: 'start',
textAlignLast: 'auto',
textAnchor: 'start',
textCombineUpright: 'none',
textDecoration: 'none solid rgb(36, 41, 46)',
textDecorationColor: 'rgb(36, 41, 46)',
textDecorationLine: 'none',
textDecorationSkip: 'objects',
textDecorationStyle: 'solid',
textIndent: '0px',
textOrientation: 'mixed',
textOverflow: 'clip',
textRendering: 'auto',
textShadow: 'none',
textSizeAdjust: '100%',
textTransform: 'lowercase',
textUnderlinePosition: 'auto',
top: 'auto',
touchAction: 'auto',
transform: 'none',
transformStyle: 'flat',
transition: 'all 0s ease 0s',
transitionDelay: '0s',
transitionDuration: '0s',
transitionProperty: 'all',
transitionTimingFunction: 'ease',
unicodeBidi: 'normal',
unicodeRange: '',
userSelect: 'text',
userZoom: '',
vectorEffect: 'none',
verticalAlign: 'baseline',
visibility: 'visible',
webkitAppRegion: 'no-drag',
webkitAppearance: 'none',
webkitBackgroundClip: 'border-box',
webkitBackgroundOrigin: 'padding-box',
webkitBorderAfter: '0px none rgb(36, 41, 46)',
webkitBorderAfterColor: 'rgb(36, 41, 46)',
webkitBorderAfterStyle: 'none',
webkitBorderAfterWidth: '0px',
webkitBorderBefore: '0px none rgb(36, 41, 46)',
webkitBorderBeforeColor: 'rgb(36, 41, 46)',
webkitBorderBeforeStyle: 'none',
webkitBorderBeforeWidth: '0px',
webkitBorderEnd: '0px none rgb(36, 41, 46)',
webkitBorderEndColor: 'rgb(36, 41, 46)',
webkitBorderEndStyle: 'none',
webkitBorderEndWidth: '0px',
webkitBorderHorizontalSpacing: '0px',
webkitBorderImage: 'none',
webkitBorderStart: '0px none rgb(36, 41, 46)',
webkitBorderStartColor: 'rgb(36, 41, 46)',
webkitBorderStartStyle: 'none',
webkitBorderStartWidth: '0px',
webkitBorderVerticalSpacing: '0px',
webkitBoxAlign: 'stretch',
webkitBoxDecorationBreak: 'slice',
webkitBoxDirection: 'normal',
webkitBoxFlex: '0',
webkitBoxFlexGroup: '1',
webkitBoxLines: 'single',
webkitBoxOrdinalGroup: '1',
webkitBoxOrient: 'horizontal',
webkitBoxPack: 'start',
webkitBoxReflect: 'none',
webkitColumnBreakAfter: 'auto',
webkitColumnBreakBefore: 'auto',
webkitColumnBreakInside: 'auto',
webkitFontSizeDelta: '',
webkitFontSmoothing: 'auto',
webkitHighlight: 'none',
webkitHyphenateCharacter: 'auto',
webkitLineBreak: 'auto',
webkitLineClamp: 'none',
webkitLogicalHeight: 'auto',
webkitLogicalWidth: 'auto',
webkitMarginAfter: '0px',
webkitMarginAfterCollapse: 'collapse',
webkitMarginBefore: '0px',
webkitMarginBeforeCollapse: 'collapse',
webkitMarginBottomCollapse: 'collapse',
webkitMarginCollapse: '',
webkitMarginEnd: '0px',
webkitMarginStart: '0px',
webkitMarginTopCollapse: 'collapse',
webkitMask: '',
webkitMaskBoxImage: 'none',
webkitMaskBoxImageOutset: '0px',
webkitMaskBoxImageRepeat: 'stretch',
webkitMaskBoxImageSlice: '0 fill',
webkitMaskBoxImageSource: 'none',
webkitMaskBoxImageWidth: 'auto',
webkitMaskClip: 'border-box',
webkitMaskComposite: 'source-over',
webkitMaskImage: 'none',
webkitMaskOrigin: 'border-box',
webkitMaskPosition: '0% 0%',
webkitMaskPositionX: '0%',
webkitMaskPositionY: '0%',
webkitMaskRepeat: 'repeat',
webkitMaskRepeatX: '',
webkitMaskRepeatY: '',
webkitMaskSize: 'auto',
webkitMaxLogicalHeight: 'none',
webkitMaxLogicalWidth: 'none',
webkitMinLogicalHeight: '0px',
webkitMinLogicalWidth: '0px',
webkitPaddingAfter: '0px',
webkitPaddingBefore: '0px',
webkitPaddingEnd: '0px',
webkitPaddingStart: '0px',
webkitPerspectiveOriginX: '',
webkitPerspectiveOriginY: '',
webkitPrintColorAdjust: 'economy',
webkitRtlOrdering: 'logical',
webkitRubyPosition: 'before',
webkitTextCombine: 'none',
webkitTextDecorationsInEffect: 'none',
webkitTextEmphasis: '',
webkitTextEmphasisColor: 'rgb(36, 41, 46)',
webkitTextEmphasisPosition: 'over',
webkitTextEmphasisStyle: 'none',
webkitTextFillColor: 'rgb(36, 41, 46)',
webkitTextOrientation: 'vertical-right',
webkitTextSecurity: 'none',
webkitTextStroke: '',
webkitTextStrokeColor: 'rgb(36, 41, 46)',
webkitTextStrokeWidth: '0px',
webkitTransformOriginX: '',
webkitTransformOriginY: '',
webkitTransformOriginZ: '',
webkitUserDrag: 'auto',
webkitUserModify: 'read-only',
webkitWritingMode: 'horizontal-tb',
whiteSpace: 'normal',
widows: '2',
width: 'auto',
willChange: 'auto',
wordBreak: 'normal',
wordSpacing: '0px',
wordWrap: 'break-word',
writingMode: 'horizontal-tb',
x: '0px',
y: '0px',
zIndex: 'auto',
zoom: '1',
},
}
|
Q:
Linq Join - Duplicates
I have Two tables.
1. Users table (Username, Name)
2. Picture table (ID, Username, IsPrimary)
Each user can have zero to many pictures.
I'm trying to write a query that will return all users (with or without pictures) and a single picture ID (that of the picture with IsPrimary = true).
I wrote this Linq query :
var v = from u in Users
join p in Photos on u.Username equals p.Username
select new
{
u.Username,
p.ID
};
This works but returns duplicate user rows (if a user has more than one photo).
I want to get one row per user.
Is that possible?
A:
This should do exactly what you want.
from u in Users
let p = Photos.Where(ph => ph.Username == u.Username && ph.IsPrimary).FirstOrDefault()
where p != null
select new
{
    u.Username,
    p.ID
};
However, it is worth noting that you may be better off writing hand optimized SQL and retrieving the objects using db.ExecuteQuery<User> or similar.
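If users without any photo should still appear (the question asks for all users, with or without pictures), a group-join variant can be sketched. This assumes the same Users/Photos sources and at most one primary picture per user; the nullable cast keeps photo-less users in the result:

```csharp
var v = from u in Users
        join ph in Photos.Where(x => x.IsPrimary)
            on u.Username equals ph.Username into g
        from ph in g.DefaultIfEmpty()        // left join: empty group -> null
        select new
        {
            u.Username,
            ID = ph == null ? (int?)null : ph.ID
        };
```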
|
Q:
Cuda error 30 disappears when I debug in Nsight
VS 2010 and Nsight v3
I am making mods to some CUDA modules; when I run this host code in Debug mode in VS:
result = cuLaunchKernel ( cuFunction, dimGrid.x, dimGrid.y, dimGrid.z, dimBlock.x, dimBlock.y, dimBlock.z, shared, stream, argsG, 0);
cudaDeviceSynchronize();
err = cudaGetLastError();
I get a value of zero for result, but err is 30 [unknown error] the first time through this part of the code, and every time through.
So I fired up Nsight thinking to trap the problem. It processed my whole input file without any errors. I turned on memory check in Nsight and reran: again it processed the whole file without a complaint.
So: under host debugging, every launch of this code results in error 30, but under the control of Nsight there are no errors.
Anyone have an explanation ?
thanks
A:
Solved the program issue: there was a bad global memory address being calculated due to a bug. Why Nsight did not catch the error but sailed right on through, even with memory check turned on, I do not understand.
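A common way to localize this kind of failure (here, a bad global memory address surfacing as a deferred "unknown error") is to check the status of every runtime call right where it happens. A sketch of the usual wrapper-macro pattern follows; the macro name and the commented launch site are illustrative:

```cuda
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

/* Abort with file/line context on any CUDA runtime error. */
#define CUDA_CHECK(call)                                          \
    do {                                                          \
        cudaError_t err_ = (call);                                \
        if (err_ != cudaSuccess) {                                \
            fprintf(stderr, "%s:%d: CUDA error %d: %s\n",         \
                    __FILE__, __LINE__, (int)err_,                \
                    cudaGetErrorString(err_));                    \
            exit(EXIT_FAILURE);                                   \
        }                                                         \
    } while (0)

/* Usage after a kernel launch (illustrative):
 *
 *   myKernel<<<grid, block>>>(...);
 *   CUDA_CHECK(cudaGetLastError());       // launch-configuration errors
 *   CUDA_CHECK(cudaDeviceSynchronize());  // async faults, e.g. bad
 *                                         // global memory accesses
 */
```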
|
Scientists have identified a protein which regulates the activation of brown fat in both the brain and the body's tissues. Their research, which was conducted in mice, was published today, Friday 11 May, in the journal Cell.
Unlike white fat, which functions primarily to store up fat, brown fat (also known as brown adipose tissue) burns fats to generate heat in a process known as thermogenesis. The research, led by scientists at the University of Cambridge Metabolic Research Laboratories at the Institute of Metabolic Science, discovered that the protein BMP8B acts on a specific metabolic system (which operates in the brain and the tissues) to regulate brown fat, making it a potential therapeutic target.
The scientists believe that activating brown fat could help to support current weight loss programmes, which individuals often struggle to maintain.
Dr Andrew Whittle, one of the authors of the paper from the Institute of Metabolic Science, said: "Other proteins made by the body can enhance heat production in brown fat, such as thyroid hormone but often these proteins have important effects in other organs too. Therefore they are not good targets for developing new weight loss treatments. However, BMP8B seems to be very specific for regulating the heat producing activity of brown fat, making it a more ideal mechanism for new therapies."
The experiments showed that when mice lacked the protein BMP8B they found it more difficult to maintain their normal body temperature. They also became much more obese than normal mice, particularly when fed a high-fat diet. Additionally, when the researchers treated brown fat cells with BMP8B they responded more strongly to activation by the nervous system. Furthermore, when BMP8B was administered to specific parts of the brain it increased the amount of nervous activation of brown adipose tissue. The result was that these BMP8B-treated brown fat cells burned more fat and mice given BMP8B in the brain lost weight.
Professor Toni Vidal-Puig, lead author of the study from the Institute of Metabolic Science and a member of the MRC Centre for Obesity and Related Metabolic Diseases, said: "A major feature of current weight-loss strategies is that people lose a lot of weight early on, but then reach a plateau despite continuing to follow the same diet regime. This is because the human body is incredibly good at sensing a reduction in food consumption and slows the metabolic rate to compensate. A strategy to increase brown fat activity could potentially be used in conjunction with current weight loss strategies to help prevent the typical decrease in a person's metabolic rate.
"One could be sceptical that techniques to increase metabolic rate might just be compensated by the body trying to make you want to eat more, to fuel this increased metabolism. But our findings showed that treating mice with Bmp8b did not have this effect, it simply made them lose weight by burning more fat in their brown adipose tissue.
"There are obvious differences between mice and humans, and from a therapeutic perspective this work is preliminary. Validation will be necessary to see if manipulating BMP8B would be safe and effective in humans."
More information: The paper 'BMP8B Increases Brown Adipose Tissue Thermogenesis through Both Central and Peripheral Actions' will appear in the journal Cell on Friday, 11 May. Volume: 149; Issue: 4; Manuscript: 6246. |
iron and vitamin supplements?
Is anyone's premature baby out there on Poly Visol supplement? My LO is having a lot of gas and discomfort associated with this supplement. She is taking her expressed breastmilk with the vitamin and then shortly after making this horrible (gas) discomfort noises and arching her back in pain. (This only happens after her supplement.) To top things off, she is a very difficult burper.
Does anyone have any other vitamin or vitamin with iron recommendations?
--
"Enjoy the little things in life, for one day you'll look back and realize they were the big things."- Robert Brault
Comments (14)
My LO has been on Poly-Vi-Sol with iron since the NICU because he was severely anemic. It used to bother him a LOT (on top of having horrible reflux), but he's 9.5 mos old now and it doesn't seem to faze him.
Our LO is also on the Poly-Vi-Sol and has been since the NICU. She was on separate iron for the anemia, but we recently bought the Poly-Vi-Sol with iron and figured it would be the same as giving it separately. From the other post, it doesn't sound like it?
Yeah, definitely don't give it on its own if your LO is still young.. it causes apneas/bradys/desats sometimes, mainly because the baby has such an aversion to the taste of it that they just freak out.
Isaic was on Poly-Vi-Sol with iron since the NICU as well. It used to give him terrible gas and constipation. He really did seem miserable on it. Now it doesn't seem to faze him. I just think it's something they grow out of.
Good luck.
--
Gina
Mom to Isaic (2-24-08 at 28 weeks 6 days 2lbs 4oz)
Baby #2 is on the way. Due May 8, 2010!!! (Secretly hoping for PINK!!!) |
U.S. Economy Generated 156,000 Jobs In August, Short Of Estimates
The U.S. economy created an estimated 156,000 jobs in August, falling slightly short of analysts' estimates, according to the Labor Department. The unemployment rate was essentially unchanged at 4.4 percent; it had been at 4.3 percent.
Economists had predicted a gain of between 170,000 and 180,000 jobs last month. But August job growth often falls short of initial estimates, only to be revised higher later on. Seasonal adjustments for August are challenging because many hiring managers are on vacation and students leave summer jobs and head back to school.
Friday's report from the Bureau of Labor Statistics also cut previous estimates for job growth in June and July by a combined 41,000 — the July figure was revised to 189,000 from 209,000, and the June number dropped to 210,000 from 231,000.
Still, the average job growth for the past 3 months is 185,000 according to government data. That's quite robust for this point in a recovery, more than enough to put downward pressure on the unemployment rate. August was the 83rd consecutive month of job growth in the U.S. economy.
Wage growth also slowed in August, with the average hourly earnings for all employees on private nonfarm payrolls rising by 3 cents, to $26.39. In July, by contrast, average hourly wages rose by 9 cents, to $26.36.
"Over the past 12 months, average hourly earnings have increased by 65 cents, or 2.5 percent," the Bureau of Labor Statistics says.
But Ian Shepherdson, chief economist at Pantheon Macroeconomics, says the August wage increase was likely artificially low because the survey was completed before the 15th of the month, when many workers get a paycheck. So any August pay increases for those workers wouldn't be reflected in the report. Shepherdson says in a research note that he "expects a rebound" in wage growth in September.
As for where the jobs were added, the BLS says the biggest gain was in manufacturing, with 36,000 jobs. The sector has now added 155,000 jobs since November 2016, when it hit a recent employment low. |
Marta Resource Centre for Women
You can help
For victims of trafficking, MARTA is there
We’re so encouraged when we hear stories of our partner organizations’ successes in making a difference in women’s lives. Here’s just one story of how the MARTA Resource Centre for Women changed one young woman’s life. You can help her and other trafficked and exploited women by making a donation to support their work. Anna was trafficked into prostitution at the age of 13. It began innocently enough when an older school friend took her to a party. But Anna was drugged and then sold to a 50-year-old man seeking a virgin. She was held in captivity and forced into prostitution.
Anna’s nightmare did not end when she was rescued by police. Her family refused her and even blamed her for being exploited. She had no safe place to go and was suffering from severe trauma.
This is the environment in which MARTA operates. Anna is now under their care, living with a foster family and receiving psychological treatment. She has also managed to finish high school. Anna has made progress, but her healing will take time and ongoing support.
We celebrate every life that MARTA has saved and we are asking you to help them save more. Tomorrow is the UN-designated World Day Against Trafficking in Persons – a day to support organizations like MARTA. Human trafficking is a $32 billion industry with more than 20 million victims.
Located in Latvia, in the community it serves, MARTA is critical to a rehabilitation process that is neither easy nor brief. In addition to treatment and support, MARTA is dedicated to putting an end to trafficking through advocacy, influencing policy and developing programs for prevention.
Please visit our site to read more about their incredible work and make a donation to show the Annas of the world that they are not forgotten.
Thank you for your continued support to improve women’s lives. |
Q:
correct syntax for using Dispatcher to switch to UI thread
What is the difference between
Dispatcher.CurrentDispatcher.Invoke(somemethod);
and
Application.Current.Dispatcher.Invoke(somemethod);
When I use the first one, the execution of somemethod is way faster than the second one. I used a Stopwatch and measured the elapsed milliseconds. I use this method to update some UI controls based on some data coming from an external thread.
A:
Dispatcher.CurrentDispatcher will get you the dispatcher associated with the current thread, i.e., the thread on which you are invoking this method.
Application.Current.Dispatcher, on the other hand, will get you the dispatcher associated with the UI thread (assuming your App is launched from the UI thread).
In essence, if you are invoking a delegate from a background thread and try to update a UI component from it, say
textBlock.Text = "Test";
the first approach will fail, because it will invoke the delegate on the background thread's dispatcher, and UI components can only be modified from the UI thread.
The second approach will work, because it delegates the task to the UI thread.
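A minimal sketch of that second approach (the control name textBlock and the background-thread context are illustrative):

```csharp
// Somewhere on a background thread (e.g. inside Task.Run):
// Dispatcher.CurrentDispatcher here would return this background thread's
// dispatcher, so marshal the update through the application's UI dispatcher.
Application.Current.Dispatcher.Invoke(() =>
{
    textBlock.Text = "Test";  // runs on the UI thread
});
```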
When I use the first one, the execution of somemethod is way faster
than the second one. I used a Stopwatch and measured the elapsed
milliseconds. I use this method to update some UI controls based on
some data coming from an external thread.
If the first approach worked for you, then there is no need to use the Dispatcher at all, because it means you are already on the UI thread.
And you would need to post sample data for your timing observations so they can be validated.
|
Q:
SSL Error with WCF Service using Transport Security & Cert. Authentication
I don't know if this is a question more suited for serverfault.com (it really depends on the answer), but I have followed this tutorial in .NET C# to set up a WCF service under 'wsHttpBinding' (Transport security with certificate authentication), I have created a test development certificate using the methods described here, and I have also configured an HTTPS 443 binding for the site in IIS.
Everything had been working pretty well at each step; however, I am receiving an error from the example "Hello World" service I created (again, all following the MSDN tutorial linked above) when hitting https://vd1/WcfWsHttpBindingTest/Service.svc (VD1 being my local computer name):
HTTP Error 403.7 - Forbidden
The page you are attempting to access requires your browser to have a Secure Sockets Layer (SSL) client certificate that the Web server recognizes.
I have followed both tutorials as stated to install my server certificate and the client certificate, and they have been configured in IIS. Also, if I drop 'https' and just use 'http', I receive a 403.4 Forbidden stating that I am trying to access a page which has been secured with SSL, so I'm pretty sure that side of it is working.
Any ideas folks?
I haven't deviated from the tutorials, I am running IIS 7.0 and Vista Business.
It would even help if somebody could start me from a clean slate by giving me better tutorial links for configuring a service with wsHttpBinding.
** If anyone has seen my initial post, you will notice I closed my answer as it has evolved into the problem above **
A:
Thanks for your help Tanner.
After two hours of scratching my head and tinkering, with help from a colleague we narrowed it down to one step which was not done correctly: the certificate was being added to "Local User", not "Local Computer".
Thanks again.
|
'Why isn't this all over the news?': Fox News commentator lashes out over coverage of a police officer who stopped a school shooting one day before the Santa Fe massacre
Greg Gutfeld, a commentator on Fox News' "The Five," railed against media outlets for what he perceived as a lack of coverage of a thwarted mass shooting at a high school in Illinois.
A police officer shot and wounded the gunman.
Gutfeld accused other media outlets of bias for not covering the incident.
It was unclear which media outlet Gutfeld was referring to; however, the Fox News host frequently criticizes CNN's programming.
Other outlets also accused CNN of not covering the Illinois shooting.
CNN covered the shooting.
A Fox News commentator railed against media outlets for what he perceived as lackluster coverage of a thwarted mass shooting at an Illinois high school.
"An amazing thing took place Wednesday but I bet you didn't hear much about it," Greg Gutfeld, a host on Fox News' "The Five," said in his in his opening monologue. "So the obvious question is, why isn't this all over the news?"
On Wednesday morning, 19-year-old Matthew Milby opened fire on students who were rehearsing for their graduation. Milby, who was using his mother's semiautomatic rifle, exchanged gunfire with Dixon police officer Mark Dallas.
Milby sustained non-life-threatening injuries and was the only person hurt during the shooting. He was released from the hospital the same day and taken to Lee County Jail, where he was charged with three counts of aggravated discharge of a firearm.
Gutfeld argued that the would-be mass shooting did not meet what he called the media's "seal of approval."
"The problem is, in this case, the media isn't interested in what doesn't happen," Gutfeld said. "Lives were saved, thankfully, so the story didn't fit the narrative."
Gutfeld went on to suggest that media outlets did not report on the foiled shooting because the officer fired his weapon to stop the suspected gunman.
"In this non-news story, truth is revealed." Gutfeld said. "That one can save lives by actually protecting people. For the duration of a gun attack is always dictated by the arrival of a second gun."
It was unclear which news outlet Gutfeld was directing his displeasure toward, but the host has frequently criticized networks like CNN.
CNN was also another target in the wake of the Illinois shooting. In a tweet on Thursday, the National Rifle Association uploaded a video accusing the network of bias when covering shooting incidents: "When you give mass shooters non-stop coverage but ignore an armed resource officer who stopped a mass shooting, you are not journalists."
The NRA was wrong. CNN did not "ignore" the officer who stopped the would-be mass shooter in Illinois. CNN published a report on it at least one hour before the NRA tweeted its video.
CNN anchor Jake Tapper noticed the discrepancy and called out the NRA on Twitter: "A message to whoever is running your social media, @NRATV, saying we ignored this armed resource officer is a lie. Please don't lie. Thanks!"
But the NRA fired back with another video and message: ".@jaketapper show us where the @CNN 'reporters' are at Dixon High School covering the armed resource officer who stopped a mass shooting," the organization tweeted. "NRATV is there. #WheresCNN? Please don't lie. Thanks! #NRA"
Tapper took another swing: ".@NRATV I have already showed you evidence that CNN didn't 'ignore' the hero officer, as was your lie," he said in a tweet. "Now you're trying to change the question. Happy to continue the conversation after you delete the false tweet and apologize. Thanks!" |
Effects of early erythropoietin therapy on the transfusion requirements of preterm infants below 1250 grams birth weight: a multicenter, randomized, controlled trial.
Infants of ≤1250 g birth weight receive multiple erythrocyte transfusions during their hospitalization. We hypothesized that early erythropoietin (Epo) and iron therapy would 1) decrease the number of transfusions received (infants 401-1000 g birth weight; trial 1) and 2) decrease the percentage of infants who received any transfusions (1001-1250 g birth weight; trial 2). A total of 172 infants in trial 1 and 118 infants in trial 2 were randomized to treatment (Epo, 400 U/kg 3 times weekly) or placebo/control. Therapy was initiated by 4 days after birth and continued through the 35th postmenstrual week. All infants received supplemental parenteral and enteral iron. Complete blood and reticulocyte counts were measured weekly, and ferritin concentrations were measured monthly. Transfusions were administered according to protocol. Phlebotomy losses and transfusion data were recorded. Treated and placebo/control infants in trial 1 received a similar number of transfusions (4.3 +/- 3.6 vs 5.2 +/- 4.2, respectively). A similar percentage of treated and control infants in trial 2 received at least 1 transfusion (37% vs 46%). Reticulocyte counts were higher in treated infants during each week of the study in both trials. Hematocrits were higher among treated infants from week 2 on in both trials. Ferritin concentrations were higher in placebo/controls than in treated infants at weeks 4 and 8 in trial 1 and at week 4 in trial 2. No adverse effects of Epo or supplemental iron occurred. The combination of early Epo and iron as administered in this study stimulated erythropoiesis in infants who were ≤1250 g at birth. However, the lack of impact on transfusion requirements fails to support routine use of early Epo.
Keywords: neonate, intravenous iron, donor exposure. |
// Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Xml;
using System.Xml.Linq;
using Microsoft.Test.ModuleCore;
using System.IO;
namespace CoreXml.Test.XLinq
{
public partial class XNodeBuilderFunctionalTests : TestModule
{
public partial class XNodeBuilderTests : XLinqTestCase
{
public partial class NamespacehandlingWriterSanity : XLinqTestCase
{
#region helpers
private string SaveXElementUsingXmlWriter(XElement elem, NamespaceHandling nsHandling)
{
StringWriter sw = new StringWriter();
using (XmlWriter w = XmlWriter.Create(sw, new XmlWriterSettings() { NamespaceHandling = nsHandling, OmitXmlDeclaration = true }))
{
elem.WriteTo(w);
}
sw.Dispose();
return sw.ToString();
}
#endregion
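            // Illustrative note (not one of the original variations): NamespaceHandling.OmitDuplicates
            // suppresses an xmlns declaration that repeats one already in scope on an ancestor.
            // For example, round-tripping through the helper above:
            //   XElement e = XElement.Parse("<p:A xmlns:p='nsp'><p:B xmlns:p='nsp'/></p:A>");
            //   SaveXElementUsingXmlWriter(e, NamespaceHandling.OmitDuplicates);
            //   // yields <p:A xmlns:p="nsp"><p:B /></p:A> -- the duplicate declaration on B is dropped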
//[Variation(Desc = "1 level down", Priority = 0, Params = new object[] { "<p:A xmlns:p='nsp'><p:B xmlns:p='nsp'><p:C xmlns:p='nsp'/></p:B></p:A>" })]
//[Variation(Desc = "1 level down II.", Priority = 0, Params = new object[] { "<A><p:B xmlns:p='nsp'><p:C xmlns:p='nsp'/></p:B></A>" })] // start at not root node
//[Variation(Desc = "2 levels down", Priority = 1, Params = new object[] { "<p:A xmlns:p='nsp'><B><p:C xmlns:p='nsp'/></B></p:A>" })]
//[Variation(Desc = "2 levels down II.", Priority = 1, Params = new object[] { "<p:A xmlns:p='nsp'><B><C xmlns:p='nsp'/></B></p:A>" })]
//[Variation(Desc = "2 levels down III.", Priority = 1, Params = new object[] { "<A xmlns:p='nsp'><B><p:C xmlns:p='nsp'/></B></A>" })]
//[Variation(Desc = "Siblings", Priority = 2, Params = new object[] { "<A xmlns:p='nsp'><p:B xmlns:p='nsp'/><C xmlns:p='nsp'/><p:C xmlns:p='nsp'/></A>" })]
//[Variation(Desc = "Children", Priority = 2, Params = new object[] { "<A xmlns:p='nsp'><p:B xmlns:p='nsp'><C xmlns:p='nsp'><p:C xmlns:p='nsp'/></C></p:B></A>" })]
//[Variation(Desc = "Xml namespace I.", Priority = 3, Params = new object[] { "<A xmlns:xml='http://www.w3.org/XML/1998/namespace'/>" })]
//[Variation(Desc = "Xml namespace II.", Priority = 3, Params = new object[] { "<p:A xmlns:p='nsp'><p:B xmlns:p='nsp'><p:C xmlns:p='nsp' xmlns:xml='http://www.w3.org/XML/1998/namespace'/></p:B></p:A>" })]
//[Variation(Desc = "Xml namespace III.", Priority = 3, Params = new object[] { "<p:A xmlns:p='nsp'><p:B xmlns:xml='http://www.w3.org/XML/1998/namespace' xmlns:p='nsp'><p:C xmlns:p='nsp' xmlns:xml='http://www.w3.org/XML/1998/namespace'/></p:B></p:A>" })]
//[Variation(Desc = "Default namespaces", Priority = 1, Params = new object[] { "<A xmlns='nsp'><p:B xmlns:p='nsp'><C xmlns='nsp' /></p:B></A>" })]
//[Variation(Desc = "Not used NS declarations", Priority = 2, Params = new object[] { "<A xmlns='nsp' xmlns:u='not-used'><p:B xmlns:p='nsp'><C xmlns:u='not-used' xmlns='nsp' /></p:B></A>" })]
//[Variation(Desc = "SameNS, different prefix", Priority = 2, Params = new object[] { "<p:A xmlns:p='nsp'><B xmlns:q='nsp'><p:C xmlns:p='nsp'/></B></p:A>" })]
public void testFromTheRootNodeSimple()
{
string xml = CurrentChild.Params[0] as string;
XElement elem = XElement.Parse(xml);
// Write using XmlWriter in duplicate namespace decl. removal mode
string removedByWriter = SaveXElementUsingXmlWriter(elem, NamespaceHandling.OmitDuplicates);
// Remove the namespace decl. duplicates from the Xlinq tree
(from a in elem.DescendantsAndSelf().Attributes()
where a.IsNamespaceDeclaration && ((a.Name.LocalName == "xml" && (string)a == XNamespace.Xml.NamespaceName) ||
(from parentDecls in a.Parent.Ancestors().Attributes(a.Name)
where parentDecls.IsNamespaceDeclaration && (string)parentDecls == (string)a
select parentDecls).Any()
)
select a).ToList().Remove();
// Write XElement using XmlWriter without omitting
string removedByManual = SaveXElementUsingXmlWriter(elem, NamespaceHandling.Default);
ReaderDiff.Compare(removedByWriter, removedByManual);
}
//[Variation(Desc = "Default ns parent autogenerated", Priority = 1)]
public void testFromTheRootNodeTricky()
{
XElement e = new XElement("{nsp}A", new XElement("{nsp}B", new XAttribute("xmlns", "nsp")));
ReaderDiff.Compare(SaveXElementUsingXmlWriter(e, NamespaceHandling.OmitDuplicates), "<A xmlns='nsp'><B/></A>");
}
//[Variation(Desc = "Conflicts: NS redefinition", Priority = 2, Params = new object[] { "<p:A xmlns:p='nsp'><p:B xmlns:p='ns-other'><p:C xmlns:p='nsp'><D xmlns:p='nsp'/></p:C></p:B></p:A>",
// "<p:A xmlns:p='nsp'><p:B xmlns:p='ns-other'><p:C xmlns:p='nsp'><D/></p:C></p:B></p:A>" })]
//[Variation(Desc = "Conflicts: NS redefinition, default NS", Priority = 2, Params = new object[] { "<A xmlns='nsp'><B xmlns='ns-other'><C xmlns='nsp'><D xmlns='nsp'/></C></B></A>",
// "<A xmlns='nsp'><B xmlns='ns-other'><C xmlns='nsp'><D/></C></B></A>" })]
//[Variation(Desc = "Conflicts: NS redefinition, default NS II.", Priority = 2, Params = new object[] { "<A xmlns=''><B xmlns='ns-other'><C xmlns=''><D xmlns=''/></C></B></A>",
// "<A><B xmlns='ns-other'><C xmlns=''><D/></C></B></A>" })]
//[Variation(Desc = "Conflicts: NS undeclaration, default NS", Priority = 2, Params = new object[] { "<A xmlns='nsp'><B xmlns=''><C xmlns='nsp'><D xmlns='nsp'/></C></B></A>",
// "<A xmlns='nsp'><B xmlns=''><C xmlns='nsp'><D/></C></B></A>" })]
public void testConflicts()
{
XElement e1 = XElement.Parse(CurrentChild.Params[0] as string);
ReaderDiff.Compare(SaveXElementUsingXmlWriter(e1, NamespaceHandling.OmitDuplicates), CurrentChild.Params[1] as string);
}
//[Variation(Desc = "Not from root", Priority = 1)]
public void testFromChildNode1()
{
XElement e = new XElement("root",
new XAttribute(XNamespace.Xmlns + "p1", "nsp"),
new XElement("{nsp}A",
new XElement("{nsp}B",
new XAttribute("xmlns", "nsp"))));
ReaderDiff.Compare(SaveXElementUsingXmlWriter(e.Element("{nsp}A"), NamespaceHandling.OmitDuplicates), "<p1:A xmlns:p1='nsp'><B xmlns='nsp'/></p1:A>");
}
//[Variation(Desc = "Not from root II.", Priority = 1)]
public void testFromChildNode2()
{
XElement e = new XElement("root",
new XAttribute(XNamespace.Xmlns + "p1", "nsp"),
new XElement("{nsp}A",
new XElement("{nsp}B",
new XAttribute(XNamespace.Xmlns + "p1", "nsp"))));
ReaderDiff.Compare(SaveXElementUsingXmlWriter(e.Element("{nsp}A"), NamespaceHandling.OmitDuplicates), "<p1:A xmlns:p1='nsp'><p1:B/></p1:A>");
}
//[Variation(Desc = "Not from root III.", Priority = 2)]
public void testFromChildNode3()
{
XElement e = new XElement("root",
new XAttribute(XNamespace.Xmlns + "p1", "nsp"),
new XElement("{nsp}A",
new XElement("{nsp}B",
new XAttribute(XNamespace.Xmlns + "p1", "nsp"))));
ReaderDiff.Compare(SaveXElementUsingXmlWriter(e.Descendants("{nsp}B").FirstOrDefault(), NamespaceHandling.OmitDuplicates), "<p1:B xmlns:p1='nsp'/>");
}
//[Variation(Desc = "Not from root IV.", Priority = 2)]
public void testFromChildNode4()
{
XElement e = new XElement("root",
new XAttribute(XNamespace.Xmlns + "p1", "nsp"),
new XElement("{nsp}A",
new XElement("{nsp}B")));
ReaderDiff.Compare(SaveXElementUsingXmlWriter(e.Descendants("{nsp}B").FirstOrDefault(), NamespaceHandling.OmitDuplicates), "<p1:B xmlns:p1='nsp'/>");
}
//[Variation(Desc = "Write into used reader I.", Priority = 0, Params = new object[] { "<A xmlns:p1='nsp'/>", "<p1:root xmlns:p1='nsp'><A/></p1:root>" })]
//[Variation(Desc = "Write into used reader II.", Priority = 2, Params = new object[] { "<p1:A xmlns:p1='nsp'/>", "<p1:root xmlns:p1='nsp'><p1:A/></p1:root>" })]
//[Variation(Desc = "Write into used reader III.", Priority = 2, Params = new object[] { "<p1:A xmlns:p1='nsp'><B xmlns:p1='nsp'/></p1:A>", "<p1:root xmlns:p1='nsp'><p1:A><B/></p1:A></p1:root>" })]
public void testIntoOpenedWriter()
{
XElement e = XElement.Parse(CurrentChild.Params[0] as string);
StringWriter sw = new StringWriter();
using (XmlWriter w = XmlWriter.Create(sw, new XmlWriterSettings() { NamespaceHandling = NamespaceHandling.OmitDuplicates, OmitXmlDeclaration = true }))
{
// prepare writer
w.WriteStartDocument();
w.WriteStartElement("p1", "root", "nsp");
// write xelement
e.WriteTo(w);
// close the prep. lines
w.WriteEndElement();
w.WriteEndDocument();
}
sw.Dispose();
ReaderDiff.Compare(sw.ToString(), CurrentChild.Params[1] as string);
}
//[Variation(Desc = "Write into used reader I. (def. ns.)", Priority = 0, Params = new object[] { "<A xmlns='nsp'/>", "<root xmlns='nsp'><A/></root>" })]
//[Variation(Desc = "Write into used reader II. (def. ns.)", Priority = 2, Params = new object[] { "<A xmlns='ns-other'><B xmlns='nsp'><C xmlns='nsp'/></B></A>",
// "<root xmlns='nsp'><A xmlns='ns-other'><B xmlns='nsp'><C/></B></A></root>" })]
public void testIntoOpenedWriterDefaultNS()
{
XElement e = XElement.Parse(CurrentChild.Params[0] as string);
StringWriter sw = new StringWriter();
using (XmlWriter w = XmlWriter.Create(sw, new XmlWriterSettings() { NamespaceHandling = NamespaceHandling.OmitDuplicates, OmitXmlDeclaration = true }))
{
// prepare writer
w.WriteStartDocument();
w.WriteStartElement("", "root", "nsp");
// write xelement
e.WriteTo(w);
// close the prep. lines
w.WriteEndElement();
w.WriteEndDocument();
}
sw.Dispose();
ReaderDiff.Compare(sw.ToString(), CurrentChild.Params[1] as string);
}
//[Variation(Desc = "Write into used reader (Xlinq lookup + existing hint in the Writer; different prefix)",
// Priority = 2,
// Params = new object[] { "<p1:root xmlns:p1='nsp'><p2:B xmlns:p2='nsp'/></p1:root>" })]
public void testIntoOpenedWriterXlinqLookup1()
{
XElement e = new XElement("A",
new XAttribute(XNamespace.Xmlns + "p2", "nsp"),
new XElement("{nsp}B"));
StringWriter sw = new StringWriter();
using (XmlWriter w = XmlWriter.Create(sw, new XmlWriterSettings() { NamespaceHandling = NamespaceHandling.OmitDuplicates, OmitXmlDeclaration = true }))
{
// prepare writer
w.WriteStartDocument();
w.WriteStartElement("p1", "root", "nsp");
// write xelement
e.Element("{nsp}B").WriteTo(w);
// close the prep. lines
w.WriteEndElement();
w.WriteEndDocument();
}
sw.Dispose();
ReaderDiff.Compare(sw.ToString(), CurrentChild.Params[0] as string);
}
//[Variation(Desc = "Write into used reader (Xlinq lookup + existing hint in the Writer; same prefix)",
// Priority = 2,
// Params = new object[] { "<p1:root xmlns:p1='nsp'><p1:B /></p1:root>" })]
public void testIntoOpenedWriterXlinqLookup2()
{
XElement e = new XElement("A",
new XAttribute(XNamespace.Xmlns + "p1", "nsp"),
new XElement("{nsp}B"));
StringWriter sw = new StringWriter();
using (XmlWriter w = XmlWriter.Create(sw, new XmlWriterSettings() { NamespaceHandling = NamespaceHandling.OmitDuplicates, OmitXmlDeclaration = true }))
{
// prepare writer
w.WriteStartDocument();
w.WriteStartElement("p1", "root", "nsp");
// write xelement
e.Element("{nsp}B").WriteTo(w);
// close the prep. lines
w.WriteEndElement();
w.WriteEndDocument();
}
sw.Dispose();
ReaderDiff.Compare(sw.ToString(), CurrentChild.Params[0] as string);
}
}
}
}
}
|
The Washington Capitals announced on Tuesday that they had signed 27-year-old right winger Wayne Simpson to a one-year, two-way contract that pays the Massachusetts native $140,000 if he plays the season in the AHL with the Hershey Bears, and the NHL league minimum of $650,000 if he gets called up to play with the Capitals.
Washington, weak on the offensive wings with the departures of Justin Williams, Marcus Johansson, and Daniel Winnik, is looking to shore up the right side of its forward corps with the signings of Simpson and Devante Smith-Pelly.
Wayne Simpson - not to be confused with perennial Philadelphia Flyers stud Wayne Simmonds, nor beloved American sitcom buffoon Homer Simpson - is a bit undersized as these things tend to go, standing just 5’10” and tipping the scales at a meaty-but-not-monolithic 194 lbs. As the NHL in general shifts towards speed and skill over bruising brawn, his diminutive stature will not necessarily keep Simpson from cracking the Capitals roster, so long as he can move his feet quickly.
And it seems that he can. Simpson, 27, has never played a single game in the NHL after going undrafted out of hockey powerhouse Union College, but that hasn’t kept him from forcing teams to give him serious looks.
Every step of Simpson’s career, from the NCAA at Union College, to the ECHL with the South Carolina Stingrays, to the AHL with the Providence Bruins and Portland Pirates, Simpson’s point totals have improved year-over-year until he is promoted to the next level. Then, he repeats it.
In fact, Simpson remains the South Carolina Stingrays (now the Capitals’ ECHL affiliate) franchise leader in points in a single postseason, with thirty-eight points. This season with the AHL’s Providence Bruins (who eliminated the Hershey Bears from the Calder Cup playoffs), Simpson was second on the team in regular season points (49 pts, 0.64 pts/gm) and playoff points (14 pts, 0.82 pts/gm).
Is it likely Simpson will crack the Capitals NHL roster this season? No, not particularly. But with Washington’s fourth line in flux like so many malfunctioning 1.21 gigawatt ion capacitors, he certainly has as good a chance as anyone.
And besides: a 27-year-old, undrafted, undersized prospect who just keeps over-performing?
I know who I’ll be rooting for. |
Blade Runner: The Final Cut had a screening last night at the Jules Verne Adventure Film Festival, and Warner Bros. decided to host the after-party at the famous Bradbury Building in downtown Los Angeles. io9 was there, snapping photos and gawking. If you've seen the film, then you know it's the rat-infested condemned shithole where J.F. Sebastian lived. Let's party! Check out our huge gallery of pictures after the jump. |