Q:
Swagger 2 (Springfox) adds 'es' to my APIs
I was just trying to integrate Swagger into my Spring Boot (JAX-RS) project built with Gradle. I was able to generate the documentation (Swagger UI) for it as follows.
I have configured Swagger with the default settings:
package com.abc;
import lombok.extern.slf4j.Slf4j;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Import;
import org.springframework.data.mongodb.repository.config.EnableMongoRepositories;
import springfox.documentation.swagger2.annotations.EnableSwagger2;
@EnableAutoConfiguration
@SpringBootApplication
@EnableMongoRepositories
@Slf4j
@Import({springfox.documentation.spring.data.rest.configuration.SpringDataRestConfiguration.class,
        springfox.bean.validators.configuration.BeanValidatorPluginsConfiguration.class})
@EnableSwagger2
public class ServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(ServiceApplication.class, args);
        log.info("Started application on: 8080");
    }
}
As we can see in the image, for the GET events API the documentation shows /eventses. So where has the 'es' been appended to the /events API, which is written as:
@GET
public HashMap<String, Object> getEventList(@DefaultValue("1") @QueryParam("page") int page,
        @DefaultValue("10") @QueryParam("rpp") int rpp, @QueryParam("events") String eventIds) {
    HashMap<String, Object> eventsResultMap = new HashMap<String, Object>();
    List<Events> events = null;
    if (eventIds != null && eventIds.length() > 0) {
        List<String> eventsIdList = Arrays.asList(eventIds.split(","));
        log.info("" + eventsIdList);
        events = eventService.getEvents(eventsIdList);
    } else {
        events = eventService.getEvents(page - 1, rpp);
    }
    eventsResultMap.put("EVENTS", events);
    HashMap<String, Object> recordsMetaMap = new HashMap<String, Object>();
    recordsMetaMap.put("total", eventService.totalCount());
    recordsMetaMap.put("page", page);
    recordsMetaMap.put("rpp", rpp);
    eventsResultMap.put("_metadata", recordsMetaMap);
    log.info("The events you have queried for are:" + eventsResultMap);
    return eventsResultMap;
}
Please guide me on where I am going wrong and what custom configuration needs to be done.
I have taken reference from the official Spring documentation.
A:
Everything in /eventses comes from Springfox's support for Spring Data REST and has nothing to do with the getEventList method in your controller. If you don't want auto-discovery of your entities like that, removing SpringDataRestConfiguration from the @Import line should do the trick.
A:
If you are using a JAX-RS implementation with Spring Boot, you should use the Swagger Core JAX-RS libraries rather than Springfox. The Swagger team has provided very detailed instructions here on how to configure your application for different implementations such as Jersey, RESTEasy, etc. I found it very easy to integrate for Jersey 2.x.
To make your Swagger documentation rich, you should try to provide as much metadata as you can using the different Swagger annotations documented here. Swagger makes great use of these annotations, combined with JAX-RS annotations in some cases (e.g. QueryParam vs PathParam identification).
If you let me know which jax-rs implementation you are using, I might be able to provide you some sample configuration.
Edit:
For Jersey 2.x you will need to add something like this in your Jersey Configuration class (which extends org.glassfish.jersey.server.ResourceConfig):
@Bean
public BeanConfig swaggerConfig() {
    register(ApiListingResource.class);
    register(SwaggerSerializers.class);

    BeanConfig config = new BeanConfig();
    config.setConfigId("your-config-id");
    config.setTitle("Your Title");
    config.setSchemes(new String[] { "https", "http" });
    config.setBasePath("your application base path E.g. /api");
    config.setResourcePackage("package to be scanned E.g. com.example");
    config.setPrettyPrint(true);
    config.setScan(true);
    return config;
}
Other than that, you will need to annotate your endpoint(service) classes with swagger annotations. E.g.
@Path("/material")
@Service
@Api(value = "Material")
public class MaterialEndpoint {

    @POST
    @ApiOperation(value = "Create Material")
    @ApiResponses(value = { @ApiResponse(code = 201, message = "Success", response = CreateMaterialResponse.class),
            @ApiResponse(code = 409, message = "Failure", response = ErrorResponse.class) })
    public Response createMaterial(CreateMaterialRequest createMaterialRequest) {
        // Code goes here
    }
}
And annotate your entities with Swagger annotations as well. It is up to you how rich you want your Swagger documentation to be; depending on that, you can choose to annotate more or fewer classes.
|
1971 Amstel Gold Race
The 1971 Amstel Gold Race (held Sunday, March 28, 1971) was the sixth edition of the annual road bicycle race "Amstel Gold Race". It was held in the Dutch province of Limburg.
The race stretched 233 kilometres, starting in Heerlen and finishing in Meerssen. There were a total of 123 competitors, and 47 cyclists finished the race.
Result
External links
Results
Category:Amstel Gold Race
Category:1971 in cycle racing
Category:1971 in Dutch sport |
We invite education professionals in the Middle East and around the world to share their thoughts about education technology, blended learning and other teaching topics on this DWC High School blog. Become part of the dialog on our forum and share your learned opinions. We’ll consider guest blogger submissions relating to a variety of education topics. A technical note: if you’d like to contribute commentary to this platform on a regular basis, we recommend that you have a WordPress user account: http://en.support.wordpress.com/getting-started/
edtech digest is a news site that chronicles trends in education technology. This recent editorial looks at how DWCHS integrates online coursework and classroom learning. Click on the icon to read the blog post.
Dubai Women’s College High School is now accepting applications for the Spring semester
DWC High School is an innovative new private high school located on the campus of Dubai Women's College. Combining onsite instruction with online learning from K12, America's leader in online learning, DWC High School will provide its students with the high quality education and independent organization skills needed to succeed in a modern and changing global society. Click on the photo above to start the application process. |
> > > Can anyone tell me how do I copy files from one computer to other? I want to get some files from a computer which is of more than 200 MB size. I have tried with different softwares like NetMeeting, MSN Messenger and Yahoo Messenger. But I am not able to get those files because of size. Is there some software available? Or can I use my domain name through internet to get those files? If I can, how do I do it?
> > > This may not be a query about FoxPro. That's why I put it in the Off-topic category ;-) Hope anyone would help me.
> > > Thanks in advance.
> > > Regards,
> > > Suvi Joseph
> > > www.sssoftwares.com
> > Try the FX program; it's 200+ KB in size, it's a DOS program and it transfers files fast. This FX uses cable-to-cable connection.
> Hi Mike,
> Can you tell me where can I find the FX program?
> Regards,
> Suvi Joseph
> www.sssoftwares.com
Sorry Suvi, FX program is only recommended for Peer to Peer if LAN is not available. Why don't you try LAPLINK FTP? It does fast file transfer on the NET. |
namespace Babel.Core
{
public static class Constants
{
public const string RootFolderForResources = "BabelResources/Text";
public const string GeneralNamespace = "Babel";
}
} |
Q:
"Maybe Monad" for multi-pointed objects?
Background:
A pointed object $X$ in a category $C$ with terminal object $*$ is a map $*\rightarrow X$. Such objects with basepoint-preserving maps form their own category of pointed objects $C^{*/}$. There is a canonical forgetful functor $U:C^{*/}\rightarrow C$ that forgets the basepoint. Furthermore, this has a left adjoint $(-)_{+}:C\rightarrow C^{*/}$ which sends an object $Y$ to the coproduct $Y\coprod *$ equipped with the canonical basepoint inclusion. The adjunction $(-)_{+}\dashv U$ induces a monad on $C$ (I think this is called the "maybe monad"). The category of algebras over this monad is $C^{*/}$.
Question:
There is also a notion of a "multi-pointed object": an object $X$ equipped with a map from a coproduct of the terminal object with itself some number of times. Such objects with the obvious maps form a category $C_{multi}$. Does this category arise as the category of algebras over some sort of "maybe" monad?
Edit
To clarify: the objects are objects in $C_{multi}$ with a fixed (we can even assume finite) number of basepoints. The morphisms are maps that preserve those basepoints.
A:
Let $\mathcal{C}$ be a category with coproducts and $S \in \mathcal{C}$. Then we have the coslice category $S/\mathcal{C}$. The objects of it are morphisms $S \to X$, where $X$ is an object of $\mathcal{C}$. This generalizes your construction. There is a forgetful functor $S/\mathcal{C} \to \mathcal{C}$ mapping $(S \to X)$ to $X$. It has a left adjoint mapping $X$ to $(S \to S + X)$, where the morphism is the coproduct inclusion. I claim that this adjunction is monadic.
The monad $T$ corresponding to the adjunction sends $X$ to $S+X$, the unit is the coproduct inclusion $X \to S+X$ and the multiplication is the obvious morphism $\mu : S+(S+X) = S+S+X \to S+X$ induced by the codiagonal of $S$. Hence, a $T$-module is an object $X$ together with a morphism $f : S+X \to X$ such that $f|_X = \mathrm{id}_X$ and such that $f \circ (S+f) = f \circ \mu$. Now $f$ is determined by the morphisms $f|_X$ and $f|_S$, but $f|_X=\mathrm{id}$ is fixed, and the equation $f \circ (S+f) = f \circ \mu$ simply says $f|_S = f|_S$ on the first $S$-copy, and $f \circ f|_S = f|_S$ on the second copy; but the latter follows from $f|_X = \mathrm{id}$. Hence, you see that $T$-modules correspond to morphisms $S \to X$. It is also easy to check that this correspondence is compatible with morphisms and clearly it is the canonical one from $S/\mathcal{C}$ to $T$-modules.
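For concreteness (this is an illustration, not part of the answer), the algebra correspondence above can be machine-checked on a small example, modelling $T(X) = S + X$ with tagged pairs. The sets, point names, and the chosen map $S \to X$ below are all made up for the sketch:

```python
# Sketch: the monad T(X) = S + X on finite sets, with S + X encoded as
# tagged pairs ("S", s) or ("X", x).  Illustrative values only.

S = ["s1", "s2"]              # a two-element "multi-pointing" object
X = [0, 1, 2, 3]

def unit(x):
    # unit of the monad: the coproduct inclusion X -> S + X
    return ("X", x)

def mult(t):
    # multiplication S + (S + X) -> S + X: the codiagonal on the two S-copies
    tag, v = t
    return ("S", v) if tag == "S" else v   # if tag == "X", v already lies in S + X

def algebra(point_map):
    # An algebra map f : S + X -> X with f|_X = id is determined by
    # its restriction f|_S = point_map : S -> X, as the answer argues.
    def f(t):
        tag, v = t
        return point_map[v] if tag == "S" else v
    return f

f = algebra({"s1": 0, "s2": 2})   # an arbitrary morphism S -> X

# unit law: f . unit = id_X
assert all(f(unit(x)) == x for x in X)

# associativity law: f . (S + f) = f . mult on every element of S + (S + X)
def S_plus_f(t):
    tag, v = t
    return ("S", v) if tag == "S" else ("X", f(v))

TTX = ([("S", s) for s in S]
       + [("X", ("S", s)) for s in S]
       + [("X", ("X", x)) for x in X])
assert all(f(S_plus_f(t)) == f(mult(t)) for t in TTX)
```

Both algebra laws hold for any choice of `point_map`, matching the claim that $T$-algebras are exactly morphisms $S \to X$.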
|
- 295.
-5*(a + 1)**2*(a + 59)
What is p in -5*p**2 - 100*p + 345 = 0?
-23, 3
Solve 2*c**2 - 50*c = 0 for c.
0, 25
Determine n so that -n**5 + 35*n**4 + 145*n**3/4 - 35*n**2/4 - 9*n = 0.
-1, -1/2, 0, 1/2, 36
What is m in 2*m**3/15 + 16*m**2/5 - 72*m/5 - 2592/5 = 0?
-18, 12
Find g, given that 5*g**4 - 245*g**3 - 250*g**2 = 0.
-1, 0, 50
Factor -f**3/3 - 41*f**2 - 1681*f - 68921/3.
-(f + 41)**3/3
Factor 5*h**3 + 85*h**2 + 200*h - 1500.
5*(h - 3)*(h + 10)**2
Let 2*j**5 - 20*j**4 + 12*j**3 + 56*j**2 - 14*j - 36 = 0. Calculate j.
-1, 1, 2, 9
Factor -5*d**5 + 185*d**4 - 355*d**3 + 175*d**2.
-5*d**2*(d - 35)*(d - 1)**2
Solve 2*n**5/5 + 2*n**4 + 2*n**3/5 - 26*n**2/5 - 4*n/5 + 16/5 = 0 for n.
-4, -2, -1, 1
Suppose -t**2/4 - 6395*t/2 - 40896025/4 = 0. Calculate t.
-6395
Find b, given that -325*b**4/4 + 3095*b**3/2 + 1527*b**2 + 458*b + 40 = 0.
-2/5, -2/13, 20
Let j**2 + 84*j - 85 = 0. Calculate j.
-85, 1
Determine o, given that 19*o**3/3 + 86*o**2/3 + 116*o/3 + 40/3 = 0.
-2, -10/19
What is a in 2*a**2 - 242*a = 0?
0, 121
Factor 5*j**4 - 55*j**3 + 170*j**2 - 120*j.
5*j*(j - 6)*(j - 4)*(j - 1)
Suppose -4*j**2 - 1152*j - 82944 = 0. Calculate j.
-144
Factor -27*p**3 + 56988*p**2 + 76020*p + 25344.
-3*(p - 2112)*(3*p + 2)**2
Let 80*k**5 - 89800*k**4 + 25222525*k**3 - 12549575*k**2 - 11009625*k - 1573605 = 0. What is k?
-1/4, 1, 561
Suppose -p**2 + 88*p - 87 = 0. What is p?
1, 87
Factor d**2/3 - 111*d.
d*(d - 333)/3
Find a such that -3*a**3/4 - 141*a**2/4 + 36*a = 0.
-48, 0, 1
Find i, given that 2*i**5/11 + 8*i**4/11 - 24*i**3/11 - 68*i**2/11 + 2*i + 60/11 = 0.
-5, -2, -1, 1, 3
Factor -j**3 - 268*j**2 + j + 268.
-(j - 1)*(j + 1)*(j + 268)
Suppose 3*n**5 - 42*n**3 + 36*n**2 + 39*n - 36 = 0. Calculate n.
-4, -1, 1, 3
Find h such that -2*h**4/9 - 8*h**3/3 - 92*h**2/9 - 40*h/3 - 50/9 = 0.
-5, -1
Factor 3*w**4/4 + 117*w**3 + 2307*w**2 - 175968*w + 1696512.
3*(w - 16)**2*(w + 94)**2/4
Factor -4*h**3/3 - 28*h**2/3 - 20*h - 12.
-4*(h + 1)*(h + 3)**2/3
Solve -2*l**5/7 - 50*l**4/7 + 6*l**3/7 + 22*l**2 + 100*l/7 = 0.
-25, -1, 0, 2
Factor 4*x**2 - 184*x + 2116.
4*(x - 23)**2
Suppose 2*v**2/5 + 4*v/5 + 2/5 = 0. What is v?
-1
Find p such that -5*p**4/4 - 5*p**3 + 105*p**2 - 180*p = 0.
-12, 0, 2, 6
Factor 12*v**2 + 1044*v + 22707.
3*(2*v + 87)**2
Factor -2*i**2/5 + 276*i/5 - 9522/5.
-2*(i - 69)**2/5
Let 2*x**3 - 56*x**2 - 2*x + 56 = 0. Calculate x.
-1, 1, 28
Factor 3*h**5 + 6*h**4 - 57*h**3 + 84*h**2 - 36*h.
3*h*(h - 2)*(h - 1)**2*(h + 6)
Determine a, given that 3*a**3 - 4332*a**2 + 8655*a - 4326 = 0.
1, 1442
Factor 5*o**3/2 + 549*o**2 + 30030*o - 12100.
(o + 110)**2*(5*o - 2)/2
Find i such that 3*i**3/4 + 21*i**2/4 + 45*i/4 + 27/4 = 0.
-3, -1
What is b in -3*b**5 - 93*b**4 + 297*b**3 - 303*b**2 + 102*b = 0?
-34, 0, 1
Factor -3*d**2/2 - 63*d - 351/2.
-3*(d + 3)*(d + 39)/2
Factor 9*x**4 - 25*x**3 - 6*x**2.
x**2*(x - 3)*(9*x + 2)
Factor -k**4 + 3*k**3 + 5*k**2 - 3*k - 4.
-(k - 4)*(k - 1)*(k + 1)**2
Factor -2*m**3/7 - 10*m**2/7 - 12*m/7.
-2*m*(m + 2)*(m + 3)/7
Factor 3*z**3 - 36*z**2 - 135*z.
3*z*(z - 15)*(z + 3)
Factor -3*s**2 - 30*s + 600.
-3*(s - 10)*(s + 20)
Suppose -20*n**5 + 315*n**4 - 1180*n**3 + 1525*n**2 - 420*n - 220 = 0. Calculate n.
-1/4, 1, 2, 11
Factor q**4 - 8*q**3 + 7*q**2.
q**2*(q - 7)*(q - 1)
Factor 2*w**3/7 - 6*w**2 + 148*w/7 + 480/7.
2*(w - 15)*(w - 8)*(w + 2)/7
Factor 4*f**2 - 1808*f + 1804.
4*(f - 451)*(f - 1)
Let -4*g**4 + 192*g**3 + 1048*g**2 + 672*g - 1908 = 0. What is g?
-3, 1, 53
Let -2*c**5/7 - 36*c**4/7 - 184*c**3/7 - 16*c**2 + 768*c/7 + 1024/7 = 0. Calculate c.
-8, -2, 2
Factor h**4 - 130*h**3 + 633*h**2 - 1004*h + 500.
(h - 125)*(h - 2)**2*(h - 1)
Factor z**4 + 5*z**3.
z**3*(z + 5)
Factor 4*g**4 - 10*g**3 + 2*g**2 + 4*g.
2*g*(g - 2)*(g - 1)*(2*g + 1)
Factor 5*b**2 + 15*b + 10.
5*(b + 1)*(b + 2)
Factor 3*g**2 - 96*g + 180.
3*(g - 30)*(g - 2)
Suppose -l**5 + 97*l**4/5 - 358*l**3/5 + 428*l**2/5 - 24*l = 0. What is l?
0, 2/5, 2, 15
Factor h**2 + 29*h - 30.
(h - 1)*(h + 30)
Factor -7203*a**3 - 6615*a**2 + 576*a - 12.
-3*(a + 1)*(49*a - 2)**2
Suppose t**2/5 + 59*t/5 = 0. What is t?
-59, 0
Factor -j**2/10 - j/10 + 36/5.
-(j - 8)*(j + 9)/10
What is y in -y**3/8 - 3*y**2/8 + 5*y/4 + 3 = 0?
-4, -2, 3
Solve 3*j**4/7 - 9*j**3/7 - 3*j**2 + 45*j/7 + 54/7 = 0 for j.
-2, -1, 3
Suppose -2*w**5/7 - 2*w**4/7 + 146*w**3/7 - 694*w**2/7 + 1248*w/7 - 792/7 = 0. What is w?
-11, 2, 3
Determine n, given that -5*n**2 - 1040*n - 54080 = 0.
-104
What is s in 2*s**2/7 - 4056*s/7 + 2056392/7 = 0?
1014
Suppose -j**3/3 - 3944*j**2/3 = 0. What is j?
-3944, 0
Factor -2*j**3 + 283*j**2/2 - 209*j/2 - 35.
-(j - 70)*(j - 1)*(4*j + 1)/2
Factor 45*i**3 - 5455*i**2 - 4280*i + 1220.
5*(i - 122)*(i + 1)*(9*i - 2)
Factor k**4/5 - 8*k**3/5 - 11*k**2/5 + 18*k/5.
k*(k - 9)*(k - 1)*(k + 2)/5
Factor 4*q**2 - 396*q + 1880.
4*(q - 94)*(q - 5)
Determine f so that -f**4/2 + 3*f**3/2 - 3*f**2/2 + f/2 = 0.
0, 1
Factor -12*g**2 - 224*g - 144.
-4*(g + 18)*(3*g + 2)
Suppose -20*v**5 - 135*v**4 + 1235*v**3 - 300*v**2 = 0. Calculate v.
-12, 0, 1/4, 5
Determine r so that -3*r**3/8 + 75*r/2 = 0.
-10, 0, 10
Factor -4*k**3/7 - 24*k**2 - 1152*k/7 + 13824/7.
-4*(k - 6)*(k + 24)**2/7
Suppose -g**5 + 135*g**4 - 4487*g**3 - 4759*g**2 + 4488*g + 4624 = 0. Calculate g.
-1, 1, 68
Factor -5*z**3 + 290*z**2 - 4205*z.
-5*z*(z - 29)**2
Determine m, given that -2*m**4/17 + 4*m**3/17 + 30*m**2/17 - 64*m/17 + 32/17 = 0.
-4, 1, 4
Find m, given that 3*m**3 - 375*m**2 + 12921*m - 73101 = 0.
7, 59
Solve -3*a**5 + 84*a**4 - 531*a**3 - 1686*a**2 + 13692*a - 17640 = 0 for a.
-5, 2, 3, 14
Factor -2*b**2 - 32*b - 56.
-2*(b + 2)*(b + 14)
Let -7*x**3/3 - 2*x**2 + x/3 = 0. What is x?
-1, 0, 1/7
Solve -o**2/2 + 132*o - 8712 = 0.
132
Let -y**3/7 - 8*y**2/7 + 44*y/7 - 48/7 = 0. What is y?
-12, 2
Let 2*s**5/3 - 16*s**4/3 + 40*s**3/3 - 20*s**2/3 - 14*s + 12 = 0. Calculate s.
-1, 1, 2, 3
Factor -18*p**2 + 100*p - 50.
-2*(p - 5)*(9*p - 5)
Determine t, given that 40*t**3 - 105*t**2 + 35*t + 30 = 0.
-3/8, 1, 2
Factor 33*r**2 - 89*r/3 - 10/3.
(r - 1)*(99*r + 10)/3
Factor -2*f**2/9 - 112*f/9 - 24.
-2*(f + 2)*(f + 54)/9
Factor -c**4/5 + 131*c**3/5 - 4087*c**2/5 - 13467*c/5.
-c*(c - 67)**2*(c + 3)/5
Determine b so that 5*b**2 - 25*b - 120 = 0.
-3, 8
Factor r**3/3 + 80*r**2/3 - 27*r.
r*(r - 1)*(r + 81)/3
Suppose 4*g**3/3 - 65*g**2/3 + 272*g/3 - 64/3 = 0. Calculate g.
1/4, 8
Factor 55*v**4 - 17815*v**3 + 1922940*v**2 - 69109200*v - 6298560.
5*(v - 108)**3*(11*v + 1)
Determine a so that -a**2/2 + 1881*a - 3538161/2 = 0.
1881
Factor 14*d**3/3 - 6*d**2 + 4*d/3.
2*d*(d - 1)*(7*d - 2)/3
Solve t**2/5 - t/5 - 22 = 0 for t.
-10, 11
Factor 5*l**5/2 - 355*l**4 + 1395*l**3 - 2080*l**2 + 2765*l/2 - 345.
5*(l - 138)*(l - 1)**4/2
Solve j**5 + 44*j**4 + 166*j**3 + 244*j**2 + 161*j + 40 = 0.
-40, -1
Factor -4*d**3/9 + 23*d**2/9 - 14*d/9 - 5/9.
-(d - 5)*(d - 1)*(4*d + 1)/9
Let 3*v**4/4 - 9*v**3/4 - 15*v**2/2 = 0. What is v?
-2, 0, 5
Solve -2*b**5 + 40*b**4 + 38*b**3 - 2596*b**2 - 4536*b + 7056 = 0 for b.
-6, -3, 1, 14
Let -2*g**4/7 - 11290*g**3/7 - 21240252*g**2/7 - 13310550392*g/7 + 13331801936/7 = 0. What is g?
-1882, 1
Factor 16*d**4/7 + 88*d**3/7 + 60*d**2/7 - 50*d - 500/7.
2*(d - 2)*(2*d + 5)**3/7
What is o in o**5/3 + o**4/3 - o**3/3 - o**2/3 = 0?
-1, 0, 1
Determine k, given that -2*k**4 + 6*k**3 - 8*k = 0.
-1, 0, 2
Factor -r**4/2 + 8*r**3 - 69*r**2/2 + 45*r.
-r*(r - 10)*(r - 3)**2/2
Factor 5*w**5 + 10*w**4 - 15*w**3 - 40*w**2 - 20*w.
5*w*(w - 2)*(w + 1)**2*(w + 2)
Factor -2*o**3 - 2396*o**2/5 - 28608*o + 11520.
-2*(o + 120)**2*(5*o - 2)/5
Determine x so that 4*x**5 + 36*x**4 + 92*x**3 + 28*x**2 - 96*x - 64 = 0.
-4, -1, 1
Factor 3*n**3/8 - 171*n**2/4 + 10395*n/8 - 9075/2.
3*(n - 55)**2*(n - 4)/8
Factor 158*q**4 - 311*q**3 + 148*q**2 + 5*q.
q*(q - 1)**2*(158*q + 5)
Factor 5*n**5 + 940*n**4 + 60390*n**3 + 1413980*n**2 + 5674525*n.
5*n*(n + 5)*(n + 61)**3
Factor -t**4/3 - 8*t**3 - 50*t**2 - 200*t/3 + 125.
-(t - 1)*(t + 5)**2*(t + 15)/3
Solve -5*d**4 - 50*d**3 = 0 for d.
-10, 0
Find z, given that -z**5 + 3*z**4 + 5*z**3 - 27*z**2 + 32*z - 12 = 0.
-3, 1, 2
Determine f so that 4*f**2/5 - 24*f + 180 = 0.
15
Factor -4*o**2 - 72*o - 68.
-4*(o + 1)*(o + 17)
Factor 2*g**4/19 - 34*g**3/19.
2*g**3*(g - 17)/19
Suppose -3*u**3/8 - 3*u**2/4 - 3*u/8 = 0. What is u?
-1, 0
Fac |
aku - what could be worst bro. im still here anyway. living with whatever left of me. still i could call u in the middle of the night, just to hangout, and have this brotherly sessions, in our old seats, mcd batu pahat. doing "bukon", and maybe u'll bring ur wife and kids too.
i think this group sessions helps. since most of us is working, having issues in daily life, relationship, workloads, and its all came down to one thing. trust.
how much do u trust people. especially the ones within the circle. telling them things and whats not. and if u ask me, i'll say i do. i'll put my trust in them. this group session was here on a particular reason. to help brothers, for brothers. why? because we came from different background, expertise, history, age, social status. and etc.
this one brother may be the the pack leader, having endless confidence, highly informative and charismatic aura, but apart from that, he also suffers from his pasts. maybe, just maybe. it could be anything. everybody has their "black dot" moments.
since the session consists of a several mens, i think these days, there is a need for these session to have a girl. to me, since the session might consist a point of view from a man towards any-related topics from relationship to life journey, and must not be only those mens giving out advise.
for example, if a bro is having a hard time with his partner, one brother might spark more fire by saying, do this, she deserves that. and having a "half-bro", "half-dude bro-sis" would affect his actions. she might say, "she didnt meant that, do consider" "try to put urself in her place" and yadda yadda yadda. and vice versa situation.
and yes, we could see things, from this "half-dude-bro-sis" side. as we plan to do something to surprize our partner, we could ask her 1st. is this okay? discuss with the circle. and decide on something. not all men carry their girlish side all times, right?
what could be better? assist this "half-dude-bro-sis" in her daily live journey. and u can completely trust her. treat him like ur own bloodbrother-circle. listens to her problem. provide options. confess to her.
a bro sessions is sometimes more intense than father-son or mother-son conversation. and i think it really affects us in a way. u just cant talk to ur mother that u bang a chick last night, but u can tell ur bro about this. and things like that.
choose ur bros wisely. and i think for now, this is the main reason why i think, we need a girl to be in any men's circle, or social group meetup.
she keeps a journal of her life journey.
she did fried mee using spaghetti
she wears lab coats.
she drives u crazy
she melts u with less effort.
she talks in accent, "dialeks"
ur friends didnt like her because she's better from the last one u had.
she's 16,197 km away, 229 hours of travelling my land.
she paints.
she's the one. a women from the past. |
Have something to say?
Ready to be published? LXer is read by around 350,000 individuals each month, and is an excellent place for you to publish your ideas, thoughts, reviews, complaints, etc. Do you have something to say to the Linux community?
LXer Weekly Roundup for 09-May-2010
In the Roundup this week we have a Faster and better Chrome 5 as well as 5 things you didn’t know VLC could do, Why rejecting Microsoft’s OSS contributions is counter-productive, Upgrading your distro should come with a warning and more. Enjoy!
No More Cheap Supercomputers? Sony Blocks Linux on PS3: Sony Computer Entertainment America (SCEA) faces a class action lawsuit following a recent update to its PlayStation 3 console that removes the ability to put alternate operating systems on the console. The late March update for the PlayStation 3 restricts the installation of an alternative operating system to the console's native OS. The feature, called 'Install Other OS,' has been removed, three years after the console's introduction, "due to security concerns," the company said in a blog post.
CLI on the Web: ECMA CLI would have given the web both strongly typed and loosely typed programming languages. It would have given developers a choice between performance and scriptability, a programming-language choice (use the right tool for the right job), and would in general have made web pages faster just by moving performance-sensitive code to strongly typed languages.
Chrome 5: Faster and Better: The first thing you'll notice with Google's new beta of its Chrome Web browser is that it's faster, much faster, than the last version. You don't need any fancy tests to see that. All you have to do is use it and you'll see that it blows other browsers away.
4.4.3 Is Upon Us: KDE today released the 3rd monthly update to the 4.4 series, bringing a slew of bugfixes and translation updates to our users. Konsole has seen some love, so has Okular. Check out the changelog to get to know more about it. This release, as all our x.y.z releases (where z > 0) does not contain new features but concentrates on stabilizing the existing codebase. As such, the upgrade should be safe and painless, so we recommend updating to everyone running previous KDE SC versions.
5 Things You Didn’t Know VLC Could Do: There’s a good chance that if you’re reading this, you’re familiar with VLC, the high quality audio and video player for Linux, Mac, and Windows. Its speed, portability, and built-in support for most common codecs make VLC an extremely popular choice for playing video. While that’s all well and good, VLC can do a lot more than basic video playback, including things like video encoding, DVD ripping, volume normalization and more. Today we’ll look at some of VLC’s most interesting and little-known features that help make this an indispensable application for nearly all desktop platforms.
Upgrading your distro should come with a warning: It's that time of year again when a lot of the major distros are putting out new releases, and people are clamoring to get the new versions installed. But there are two camps of people in this rush to get the latest and greatest. The upgraders, who prefer to leave their computer as is and hit the "upgrade" button, hoping to come back to their computers in a couple hours and revel in their shiny new OS. Then there are those who prefer the "clean install" by backing up any important stuff, wiping the drive, and starting from scratch. But is the upgrade method really worth it?
Tilting at Windows. Why rejecting Microsoft’s OSS contributions is counter-productive: Yesterday I had a look at the response of the Joomla! community to the news that Microsoft had signed the Joomla! Contributor Agreement and was contributing code to the content management project. You probably won’t be surprised to find that some people don’t like the idea. The speed and vehemence of their rejection of Microsoft’s involvement in the project is entirely predictable, but none the less depressing for that.
Linux needs to do more for programmers: Much as I hate to admit it, Microsoft does some things better, much better, than Linux. Number one with a bullet is how Microsoft helps programmers and ISVs (independent software vendors). MSDN (Microsoft Software Developer Network) is a wonderful online developer resource. Linux has had nothing to compare. True, there is the Linux Developer Network, which, when it began, looked like it would be the Linux equivalent of MSDN, but it hasn't lived up to its promise. And, I can't overlook the Linux Foundation's Linux training classes. But, if I'm an ISV and I want to write software for Linux, I'm still going to need to piece together a lot of it by myself.
I had an epiphany (about Epiphany): The GNOME Web browser Epiphany — formerly based on Mozilla's Gecko engine and now based on Webkit — doesn't ship with Ubuntu (though it does with Debian and most GNOME-based distros/projects). But if you're running GNOME, I recommend you add it via your favorite package manager. What Epiphany offers is a streamlined, faster, less-resource-intensive browsing experience. |
Relationship between catheter contact force and radiofrequency lesion size and incidence of steam pop in the beating canine heart: electrogram amplitude, impedance, and electrode temperature are poor predictors of electrode-tissue contact force and lesion size.
Electrode-tissue contact force (CF) is believed to be a major factor in radiofrequency lesion size. The purpose of this study was to determine, in the beating canine heart, the relationship between CF and radiofrequency lesion size and the accuracy of predicting CF and lesion size by measuring electrogram amplitude, impedance, and electrode temperature. Eight dogs were studied closed-chest. Using a 7F catheter with a 3.5 mm irrigated electrode and CF sensor (TactiCath, St. Jude Medical), radiofrequency applications were delivered to 3 separate sites in the right ventricle (30 W, 60 seconds, 17 mL/min irrigation) and 3 sites in the left ventricle (40 W, 60 seconds, 30 mL/min irrigation) at (1) low CF (median 8 g); (2) moderate CF (median 21 g); and (3) high CF (median 60 g). Dogs were euthanized and lesion size was measured. At constant radiofrequency and time, lesion size increased significantly with increasing CF (P<0.01). The incidence of a steam pop increased with both increasing CF and higher power. Peak electrode temperature correlated poorly with lesion size. The decrease in impedance during the radiofrequency application correlated well with lesion size for lesions in the left ventricle but less well for lesions in the right ventricle. There was a poor relationship between CF and the amplitude of the bipolar or unipolar ventricular electrogram, unipolar injury current, and impedance. Radiofrequency lesion size and the incidence of steam pop increase strikingly with increasing CF. Electrogram parameters and initial impedance are poor predictors of CF for radiofrequency ablation.
Cong to protest BSP govt decisions on local bodies poll
The Uttar Pradesh unit of the Congress today announced it would launch an agitation at the district level on June 24 against the BSP government's decision to hold partyless urban local bodies elections in the state.
Lucknow | Updated: Jun 20, 2010 21:19 IST
PTI
"The party would observe black day, stage dharna and demonstrations at district and city level and burn copies of the notification issued by the state government on June 24," UPCC president Reeta Bahuguna Joshi said here.
Addressing a meeting of the panchayati raj organisation, she said that after this, a "chetavani diwas" (warning day) would be observed at the division level.
If the government does not withdraw the notification, which is against the democratic institutions, the party would stage a massive demonstration at the state level, she said.
She said that the party was of the opinion that even panchayat elections should be held on party lines.
The party will strongly contest forthcoming three-tier panchayat elections and would strengthen its base further in rural areas, the UPCC president said.
The state cabinet had recently given its nod to the UP Municipalities (Election of Members, Corporators, Chairmen and Mayors) Rules, 2010, which prohibits urban local body polls on party lines.
Q:
How to best merge information, at a server, into a "form", a PDF being generated as the final output
Background:
I have a VB6 application I've "inherited" that generates a PDF for the user to review using unsupported Acrobat Reader OCX integration. The program generates an FDF file with the data, then renders the merged result when the FDF is merged with a PDF. It only works correctly with Acrobat Reader 4 :-(. Installing a newer version of Acrobat Reader breaks this application, making the users very unhappy.
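For context, the FDF files mentioned above are small plain-text documents in PDF object syntax that carry field name/value pairs plus a reference to the form PDF. A minimal generator can be sketched as follows; the field names and form filename are illustrative, not taken from the application described here:

```python
# Sketch of a minimal FDF (Forms Data Format) generator.  Field names and the
# form filename below are hypothetical; real values containing '(', ')' or '\'
# would also need escaping per the PDF string rules.

def make_fdf(fields, form_pdf):
    """Build a minimal FDF document assigning values to named AcroForm fields."""
    entries = "".join(
        "<< /T ({0}) /V ({1}) >>\n".format(name, value)
        for name, value in fields.items()
    )
    return (
        "%FDF-1.2\n"
        "1 0 obj\n"
        "<< /FDF << /Fields [\n"
        + entries +
        "] /F (" + form_pdf + ") >> >>\n"
        "endobj\n"
        "trailer\n"
        "<< /Root 1 0 R >>\n"
        "%%EOF\n"
    )

fdf = make_fdf({"CustomerName": "Jane Doe", "Total": "42.00"}, "invoice_form.pdf")
print(fdf[:8])  # -> %FDF-1.2
```

A viewer (or a server-side merge tool) resolves the /F entry to locate the form PDF and fills each /T field with its /V value, which is exactly the merge step the legacy application performs on the client.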
I want to re-architect this app so that it will send the data to be merged to a PDF output generation server. This server will merge the data passed to it onto the form, generate a PDF image of this, and store it, so that any user wishing to view the final result can then simply get the PDF (it is generated just once). If the underlying data is changed, the PDF will be deleted and regenerated next time it is requested. The client program can then have any version of Acrobat Reader they wish, as it will be used exclusively for displaying PDF files (as it was intended). The server will most likely be written in .NET (C#) with Visual Studio 2005, probably as a Web Service...
Question:
How would others recommend I go about this? Should I use Adobe's Acrobat 9 at the server to do this, putting the data into FDF or Adobe's XML format, and letting Acrobat do the merge? Are there great competitors in the "merge data onto form and output a PDF" space? How do others do this? It has to be API based, no GUI at the server, of course...
While some output is generated via FDF/PDF, another part of the application actually sends lines, graphics, and text to the printer (or a form for preview purposes) one page at a time, giving the proper x/y coordinates, font, size, etc. for each, knowing when it is at the end of a page, etc. This code is currently in the program that displays this for the user to review, and it is also in the program that prints the final form to the printer. For consistency between reviewer and printer, I'd like to move this output generation logic to a server as well, either using a good PDF generation API tool or use the code as is and generate a PDF with a PDF printer... and saving this PDF for display by the clients.
Googling "Form software" or "fill form software" or similar searches returns sooooooooo much unrelated material, mostly related to UI for users to fill in forms, I just don't know how to properly narrow down my search. This site seems the perfect place to ask such a question, as other programmers must also need to generate similar outputs, and have tried out some great tools.
EDIT:
I've added PDF tag as well as PDF-generation.
Also, my current customer insists on PDF output, but I appreciate the alternative suggestions.
A:
I can't help with a VB6 solution, but I can help with a .NET or Java solution on the server.
Get iText or iTextSharp from http://www.lowagie.com/iText/.
It has a PdfStamper class that can merge a PDF and an FDF, plus FDFReader/FDFWriter classes to generate FDF files, get field names out of PDF files, etc...
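FDF itself is a small plain-text format, so the data side of the merge can even be produced without any library at all. A minimal illustrative generator (the field name and PDF filename below are hypothetical, and this covers only simple text fields):

```python
def make_fdf(fields, pdf_name):
    """Build a minimal FDF document mapping form field names to values."""
    def esc(s):
        # Backslashes and parentheses are special inside PDF literal strings.
        return s.replace("\\", r"\\").replace("(", r"\(").replace(")", r"\)")

    entries = "".join(
        "<< /T (%s) /V (%s) >>\n" % (esc(name), esc(value))
        for name, value in fields.items()
    )
    return (
        "%FDF-1.2\n"
        "1 0 obj\n"
        "<< /FDF << /Fields [\n" + entries + "] /F (" + esc(pdf_name) + ") >> >>\n"
        "endobj\n"
        "trailer\n"
        "<< /Root 1 0 R >>\n"
        "%%EOF\n"
    )

# One text field, with characters that need escaping.
fdf = make_fdf({"customer_name": "Jane (QA) Doe"}, "order_form.pdf")
```

Hand the resulting file to PdfStamper (or any merge tool) together with the form PDF; the /F entry names the PDF the fields belong to.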
A novel, kinetically stable copper, zinc superoxide dismutase from Psychropotes longicauda.
Superoxide dismutases (SODs) are among the most important antioxidant enzymes against oxidative damage. In the present study, we cloned and expressed a novel and stable Cu, Zn-SOD from the hadal sea cucumber Psychropotes longicauda (i.e., Pl-Cu, Zn-SOD). The purified recombinant enzyme was intracellular and dimeric, with an Mr of approximately 38 kDa; it was active from 0 °C to 60 °C, with optimal temperatures of 20 °C to 30 °C and maximum activity at pH 8.0. The Km and Vmax values of Pl-Cu, Zn-SOD were 0.041 ± 0.004 mM and 1450.275 ± 36.621 U/mg, respectively. Under the conditions tested, Pl-Cu, Zn-SOD was relatively stable in chemicals such as β-ME, EDTA, Tween 20, Triton X-100, and CHAPS, and especially in urea and guanidine hydrochloride; the enzyme also resisted protease hydrolysis and tolerated a high hydrostatic pressure of 100 MPa and 2 M NaCl. All these properties make Pl-Cu, Zn-SOD a candidate in the biopharmaceutical and nutraceutical fields, and help us better understand the adaptation mechanisms of the hadal zone.
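The reported Km and Vmax plug directly into the Michaelis-Menten rate law, v = Vmax[S] / (Km + [S]). A quick sketch using the paper's constants (the substrate concentrations fed in are illustrative, not from the study):

```python
def mm_rate(s_mM, km_mM=0.041, vmax=1450.275):
    """Michaelis-Menten initial rate (U/mg) at substrate concentration s_mM (mM)."""
    return vmax * s_mM / (km_mM + s_mM)

# At [S] = Km the rate is, by definition, half of Vmax.
half = mm_rate(0.041)   # approximately 725.14 U/mg
```

This is the standard check that a reported Km is self-consistent: the concentration at which the enzyme reaches half its maximal rate.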
Incidence and duration of chemotherapy-induced nausea and vomiting in the outpatient oncology population.
Nausea and vomiting are commonly recognized side effects of chemotherapy. However, the incidence and duration of these effects have not been systematically studied in a large outpatient oncology population. This survey was conducted over two consecutive 6-week periods in the adult oncology clinics of two university teaching hospitals. The objectives were: (1) to document the incidence and duration of chemotherapy-induced nausea and vomiting; (2) to identify variables that influence nausea and vomiting; and (3) to describe patterns of antiemetic prescribing and compliance. One hundred thirty-eight completed patient-maintained diaries were returned (70% response rate). Anticipatory nausea and vomiting were reported by 9.4% and 6.5% of patients, respectively. Fifty percent and 27% of patients reported nausea and vomiting, respectively, on the day chemotherapy was administered (day 1: acute nausea and vomiting phase). Percentages fell to 22% and 11% by day 3, and to 14% and 2.5% on day 5. Of patients who reported nausea and vomiting during the five-day period, 52% and 33% experienced nausea and vomiting, respectively, during the delayed period only (days 2 through 5: delayed emesis phase). Emetogenicity of chemotherapy significantly influenced incidence and duration of those symptoms. Sixty-seven percent of patients reported taking antiemetics on one or more days during the survey period. Of patients who reported antiemetic use, 92% reported antiemetics on day 1, 51% on day 3, and 31% on day 5. At-home antiemetic use was related to the emetogenicity of chemotherapy received. Patients who receive moderate to strong emetogens as defined in this report should receive antiemetic therapy for a minimum of three days. Increasing the dose of antiemetic prescribed both in the clinic and at home may be of benefit.
"""
This node intends to simplify the baking process by handling the baking and extraction itself.
It handles normalization of maps and cleanup of unecessary files.
"""
import os, socket, uuid
unique_id = None
def bake(bake_rop, path):
    bake_rop.bypass(0)
    bake_rop.parm("vm_uvoutputpicture1").set(path)
    bake_rop.render()
    bake_rop.bypass(1)
def set_current_mesh(node):
    override_node = hou.node(node.parm("overridenode").eval())
    override_node.parm("uuid").set("%s" % uuid.uuid4())
    override_node.parm("current_low_mesh").set(node.node(node.parm("target_mesh_1").eval()).path())
    override_node.parm("current_high_mesh").set(node.node(node.parm("source_mesh_1").eval()).path())
    current_uuid = override_node.parm("uuid").eval()
    temp_dir = node.parm("tempdirectory").eval()
    override_node.parm("tempexrpath").set("%s/%s_tmp_exr_path%s.exr" % (temp_dir, current_uuid, 1))
def multi_render(node):
    override_node = hou.node(node.parm("overridenode").eval())
    multi_bake = hou.node("%s/multi_bake" % node.path())
    uuid = override_node.parm("uuid").eval()  # note: shadows the uuid module locally
    temp_dir = node.parm("tempdirectory").eval()
    temp_exr_path = override_node.parm("tempexrpath")
    current_low_mesh = override_node.parm("current_low_mesh")
    current_high_mesh = override_node.parm("current_high_mesh")
    mesh_count = int(node.parm("meshes").eval())
    init_switch = override_node.parm("initswitch")
    if mesh_count > 1:
        #node.parm("udimpostprocess").set("borderfill")
        for i in range(mesh_count - 1):
            init_switch.set(0)
            temp_exr_path.set("%s/%s_tmp_exr_path%s.exr" % (temp_dir, uuid, i + 2))
            current_low_mesh.set(node.node(node.parm("target_mesh_%s" % (i + 2)).eval()).path())
            current_high_mesh.set(node.node(node.parm("source_mesh_%s" % (i + 2)).eval()).path())
            multi_bake.render()
        #node.parm("udimpostprocess").set("diffusefill")
        # reset the override parms back to the first mesh
        temp_exr_path.set("%s/%s_tmp_exr_path%s.exr" % (temp_dir, uuid, 1))
        current_low_mesh.set(node.node(node.parm("target_mesh_%s" % 1).eval()).path())
        current_high_mesh.set(node.node(node.parm("source_mesh_%s" % 1).eval()).path())
        #init_switch.set(1)
def custom_multi_render(node):
    override_node = hou.node(node.parm("overridenode").eval())
    custom_multi_bake = hou.node("%s/custom_multi_bake" % node.path())
    uuid = override_node.parm("uuid").eval()  # note: shadows the uuid module locally
    temp_dir = node.parm("tempdirectory").eval()
    temp_exr_path = override_node.parm("tempexrpath")
    current_low_mesh = override_node.parm("current_low_mesh")
    current_high_mesh = override_node.parm("current_high_mesh")
    mesh_count = int(node.parm("meshes").eval())
    suffix = node.parm("custom_attribute_suffix_1")
    custom_channel = node.parm("custom_attribute_name_1")
    init_switch = override_node.parm("initswitch")
    if mesh_count > 1:
        #node.parm("udimpostprocess").set("borderfill")
        for i in range(mesh_count - 1):
            init_switch.set(0)
            temp_exr_path.set("%s/%s_tmp_exr_path%s.exr" % (temp_dir, uuid, i + 2))
            current_low_mesh.set(node.node(node.parm("target_mesh_%s" % (i + 2)).eval()).path())
            current_high_mesh.set(node.node(node.parm("source_mesh_%s" % (i + 2)).eval()).path())
            custom_multi_bake.render()
        #node.parm("udimpostprocess").set("diffusefill")
        # reset the override parms back to the first mesh
        temp_exr_path.set("%s/%s_tmp_exr_path%s.exr" % (temp_dir, uuid, 1))
        current_low_mesh.set(node.node(node.parm("target_mesh_%s" % 1).eval()).path())
        current_high_mesh.set(node.node(node.parm("source_mesh_%s" % 1).eval()).path())
def multi_custom_planes(node):
    file_path = node.evalParm("base_path")
    override_node = hou.node(node.parm("overridenode").eval())
    channel_count = node.parm("custom_channels").eval()
    custom_batch = hou.node("%s/prepost3" % node.path())
    for i in range(channel_count):
        current_attribute = node.parm("custom_attribute_name_%s" % (i + 1)).eval()
        current_suffix = node.parm("custom_attribute_suffix_%s" % (i + 1)).eval()
        new_path = file_path.replace("$(CHANNEL)", current_suffix)
        override_node.parm("currentcopoutput").set(new_path)
        override_node.parm("currentcustomattr").set(current_attribute)
        override_node.parm("currentcustomsuffix").set(current_suffix)
        #if channel_count > 1:
        custom_batch.render()
    override_node.parm("currentcopoutput").revertToDefaults()
def extract_planes(node):
    override_node = hou.node(node.parm("overridenode").eval())
    render_plane = hou.node("%s/render_plane" % node.path())
    file_path = node.parm("base_path").eval()
    # Extract Maps
    bake_parms = ["bake_basecolor", "bake_Nt", "bake_specrough", "bake_metallic", "bake_N",
                  "bake_Oc", "bake_Cu", "bake_P", "bake_Th", "bake_Ds", "bake_alpha"]
    normalized_maps = ["Nt", "N", "alpha"]
    linear_maps = ['specrough', 'metallic', 'Oc', 'Cu', 'P']
    remap_maps = ['Th', "Ds"]
    if node.parm("bBasecolorLinearSpace").evalAsInt() == 1:
        linear_maps.append("basecolor")
    for bake_parm in bake_parms:
        should_bake = node.parm(bake_parm).eval()
        if should_bake:
            channel_name = bake_parm.split("_")[-1]
            override_node.parm("currentplane").set(channel_name)
            suffix = node.parm(channel_name + "_suffix").eval()
            if channel_name in normalized_maps:
                override_node.parm("currentcoppath").set(node.node("extract_net/normalized").path())
                override_node.parm("currentgamma").set(1)
            elif channel_name in linear_maps:
                override_node.parm("currentcoppath").set(node.node("extract_net/base_image").path())
                override_node.parm("currentgamma").set(1)
            elif channel_name in remap_maps:
                override_node.parm("currentcoppath").set(node.node("extract_net/height_normalized").path())
                override_node.parm("currentgamma").set(1)
            else:
                override_node.parm("currentcoppath").set(node.node("extract_net/base_image").path())
                override_node.parm("currentgamma").set(2.2)
            new_path = file_path.replace("$(CHANNEL)", suffix)
            override_node.parm("currentcopoutput").set(new_path)
            render_plane.render()
    override_node.parm("currentcoppath").revertToDefaults()
    override_node.parm("currentgamma").revertToDefaults()
    override_node.parm("currentcopoutput").revertToDefaults()
    override_node.parm("currentplane").revertToDefaults()
def pre_bake(node):
    # gather node references
    bake_texture = node.node("baketexture")
    custom_bake_node = node.node("custom_bake")
    alpha_bake_node = node.node("alpha_bake")
    singles_rop = node.node("render_singles")
    composite_rop = node.node("render_composite")
    comp_file = node.node("extract_net/base_image")
    point_wrangle_node = node.node("objnet/custom_export/pointwrangle1")
    low_node = node.node("objnet/low/object_merge1")
    high_node = node.node("objnet/high/high_merge")
    file_path, dir_path, temp_path = (None for i in range(3))
    global unique_id
    try:
        # build the filepath for the exr
        file_path = node.parm("base_path").eval()
        dir_path = os.path.dirname(file_path)
        temp_path = os.environ['HOUDINI_TEMP_DIR']
        unique_id = uuid.uuid4()
        if not os.path.exists(temp_path):
            os.makedirs(temp_path)
    except Exception as e:
        print(e)
    bake_texture.parm("vm_bake_normalizep").set(0)
    tmp_exrs = []
    meshes_count = node.parm("meshes").eval()
    for i in range(meshes_count):
        low_mesh = node.parm("target_mesh_" + str(i + 1)).eval()
        high_mesh = node.parm("source_mesh_" + str(i + 1)).eval()
        low_node.parm("objpath1").set(node.node(low_mesh).path())
        if high_mesh:
            high_node.parm("objpath1").set(node.node(high_mesh).path())
        else:
            high_node.parm("objpath1").set(node.node(low_mesh).path())
        exr_path = temp_path + "/" + low_mesh.split("/")[-1] + "%s_tmp_%s.exr" % (unique_id, hou.intFrame())
        if i == 0:
            bake_texture.parm("vm_uvpostprocess").set("diffusefill")
        else:
            bake_texture.parm("vm_uvpostprocess").set("borderfill")
        bake(bake_texture, exr_path)
        tmp_exrs.append(exr_path)
    # generate composites
    previous_file = None
    previous_screen = None
    for exr in tmp_exrs:
        composite_net = node.node("composite_net")
        file = composite_net.createNode("file")
        file.parm("filename1").set(exr)
        if previous_file:
            screen = composite_net.createNode("over")
            if previous_screen:
                screen.setFirstInput(previous_screen)
            else:
                screen.setFirstInput(previous_file)
            screen.parm("maskinput").set("first")
            screen.parm("maskplane").set("alpha")
            screen.parm("maskinvert").set(1)
            screen.setNextInput(file)
            screen.setDisplayFlag(1)
            screen.setRenderFlag(1)
            previous_screen = screen
        else:
            file.setDisplayFlag(1)
            file.setRenderFlag(1)
        previous_file = file
    exr_path = os.path.join(temp_path, "%s_tmp_%s.exr" % (unique_id, hou.intFrame()))
    composite_rop.parm("copoutput").set(exr_path)
    composite_rop.parm("scopeplanes").set("*")
    composite_rop.bypass(0)
    composite_rop.render()
    composite_rop.bypass(1)
    extract_file = node.node("extract_net/base_image")
    extract_file.parm("filename1").set(exr_path)
    # Extract Maps
    bake_parms = ["bake_basecolor", "bake_Nt", "bake_specrough", "bake_metallic", "bake_N",
                  "bake_Oc", "bake_Cu", "bake_P", "bake_Th", "bake_Ds"]
    normalized_maps = ["Nt", "N"]
    linear_maps = ['specrough', 'metallic', 'Oc', 'Cu', 'P']
    remap_maps = ['Th', "Ds"]
    if node.parm("bBasecolorLinearSpace").evalAsInt() == 1:
        linear_maps.append("basecolor")
    for bake_parm in bake_parms:
        should_bake = node.parm(bake_parm).eval()
        if should_bake:
            channel_name = bake_parm.split("_")[-1]
            singles_rop.parm("color").set(channel_name)
            suffix = node.parm(channel_name + "_suffix").eval()
            if channel_name in normalized_maps:
                singles_rop.parm("coppath").set("../extract_net/normalized")
                singles_rop.parm("gamma").set(1)
            elif channel_name in linear_maps:
                singles_rop.parm("coppath").set("../extract_net/base_image")
                singles_rop.parm("gamma").set(1)
            elif channel_name in remap_maps:
                singles_rop.parm("coppath").set("../extract_net/height_normalized")
                singles_rop.parm("gamma").set(1)
            else:
                singles_rop.parm("coppath").set("../extract_net/base_image")
                singles_rop.parm("gamma").set(2.2)
            new_path = file_path.replace("$(CHANNEL)", suffix)
            singles_rop.parm("copoutput").set(new_path)
            #extract_file.parm("reload").pressButton()
            singles_rop.parm("scopeplanes").set("*")
            singles_rop.bypass(0)
            singles_rop.render()
            singles_rop.bypass(1)
    #for exr in tmp_exrs:
    #    os.remove(exr)
    # Render out Custom Bake Nodes
    custom_tmp_exrs = []
    multiparm_count = node.parm("custom_channels").eval()
    for i in range(multiparm_count):
        for j in range(meshes_count):
            low_mesh = node.parm("target_mesh_" + str(j + 1)).eval()
            high_mesh = node.parm("source_mesh_" + str(j + 1)).eval()
            low_node.parm("objpath1").set(node.node(low_mesh).path())
            high_node.parm("objpath1").set(node.node(high_mesh).path())
            exr_path = os.path.join(temp_path, low_mesh.split("/")[-1] + "%s_tmp_%s.exr" % (unique_id, hou.intFrame()))
            if j == 0:
                custom_bake_node.parm("vm_uvpostprocess").set("diffusefill")
            else:
                custom_bake_node.parm("vm_uvpostprocess").set("borderfill")
            custom_bake_node.bypass(0)
            custom_bake_node.parm("vm_uvoutputpicture1").set(exr_path)
            custom_bake_node.render()
            custom_bake_node.bypass(1)
            custom_tmp_exrs.append(exr_path)
        # generate composites
        previous_file = None
        previous_screen = None
        for exr in custom_tmp_exrs:
            composite_net = node.node("composite_net")
            file = composite_net.createNode("file")
            file.parm("filename1").set(exr)
            if previous_file:
                screen = composite_net.createNode("over")
                if previous_screen:
                    screen.setFirstInput(previous_screen)
                else:
                    screen.setFirstInput(previous_file)
                screen.parm("maskinput").set("first")
                screen.parm("maskplane").set("alpha")
                screen.parm("maskinvert").set(1)
                screen.setNextInput(file)
                screen.setDisplayFlag(1)
                screen.setRenderFlag(1)
                previous_screen = screen
            else:
                file.setDisplayFlag(1)
                file.setRenderFlag(1)
            previous_file = file
        suffix = node.parm("custom_attribute_suffix_" + str(i + 1)).eval()
        new_path = file_path.replace("$(CHANNEL)", suffix)
        custom_channel = node.parm("custom_attribute_name_" + str(i + 1)).eval()
        point_wrangle_node.parm("channel").set(custom_channel)
        exr_path = os.path.join(temp_path, "%s_tmp_%s.exr" % (unique_id, hou.intFrame()))
        composite_rop.parm("copoutput").set(exr_path)
        composite_rop.bypass(0)
        composite_rop.render()
        composite_rop.bypass(1)
        # set params
        singles_rop.parm("color").set("basecolor")
        singles_rop.parm("coppath").set("../extract_net/base_image")
        singles_rop.parm("copoutput").set(new_path)
        # fire off render
        singles_rop.bypass(0)
        singles_rop.render()
        singles_rop.bypass(1)
    #for exr in custom_tmp_exrs:
    #    os.remove(exr)
    composite_net = node.node("composite_net")
    #for child in composite_net.children():
    #    child.destroy()
    # Render out Alpha
    should_bake = node.parm("bake_alpha").eval()
    if should_bake:
        custom_tmp_exrs = []
        multiparm_count = node.parm("custom_channels").eval()
        for j in range(meshes_count):
            low_mesh = node.parm("target_mesh_" + str(j + 1)).eval()
            high_mesh = node.parm("source_mesh_" + str(j + 1)).eval()
            low_node.parm("objpath1").set(node.node(low_mesh).path())
            high_node.parm("objpath1").set(node.node(high_mesh).path())
            exr_path = os.path.join(temp_path, low_mesh.split("/")[-1] + "%s_tmp_%s.exr" % (unique_id, hou.intFrame()))
            alpha_bake_node.parm("vm_uvpostprocess").set("borderfill")
            alpha_bake_node.bypass(0)
            alpha_bake_node.parm("vm_uvoutputpicture1").set(exr_path)
            alpha_bake_node.render()
            alpha_bake_node.bypass(1)
            custom_tmp_exrs.append(exr_path)
        # generate composites
        previous_file = None
        previous_screen = None
        for exr in custom_tmp_exrs:
            composite_net = node.node("composite_net")
            file = composite_net.createNode("file")
            file.parm("filename1").set(exr)
            if previous_file:
                screen = composite_net.createNode("over")
                if previous_screen:
                    screen.setFirstInput(previous_screen)
                else:
                    screen.setFirstInput(previous_file)
                screen.parm("maskinput").set("first")
                screen.parm("maskplane").set("alpha")
                screen.parm("maskinvert").set(1)
                screen.setNextInput(file)
                screen.setDisplayFlag(1)
                screen.setRenderFlag(1)
                previous_screen = screen
            else:
                file.setDisplayFlag(1)
                file.setRenderFlag(1)
            previous_file = file
        suffix = node.parm("alpha_suffix").eval()
        new_path = file_path.replace("$(CHANNEL)", suffix)
        exr_path = os.path.join(temp_path, "%s_tmp_%s.exr" % (unique_id, hou.intFrame()))
        composite_rop.parm("copoutput").set(exr_path)
        composite_rop.bypass(0)
        composite_rop.render()
        composite_rop.bypass(1)
        # set params
        singles_rop.parm("color").set("alpha")
        singles_rop.parm("coppath").set("../extract_net/base_image")
        singles_rop.parm("copoutput").set(new_path)
        # fire off render
        singles_rop.bypass(0)
        singles_rop.render()
        singles_rop.bypass(1)
        for exr in custom_tmp_exrs:
            os.remove(exr)
        composite_net = node.node("composite_net")
        #for child in composite_net.children():
        #    child.destroy()
def post_bake(node):
    # DELETE EXR FILE
    #file_path = node.parm("base_path").eval()
    #dir_path = os.path.dirname(file_path)
    override_node = hou.node(node.parm("overridenode").eval())
    current_uuid = override_node.parm("uuid").eval()
    temp_dir = node.parm("tempdirectory").eval()
    mesh_count = int(node.parm("meshes").eval())
    for i in range(mesh_count):
        exr_path = os.path.join(temp_dir, "%s_tmp_exr_path%s.exr" % (current_uuid, i + 1))
        try:
            os.remove(exr_path)
        except:
            pass
    temp_cop_path = os.path.join(temp_dir, "%s_tmp_cop_path.exr" % current_uuid)
    try:
        os.remove(temp_cop_path)
    except:
        pass
    init_cop = os.path.join(temp_dir, "initcop.tga")
    try:
        os.remove(init_cop)
    except:
        pass
    hou.hscript("glcache -c;")  # refresh textures
def CheckExportPath(node):
    try:
        hou.node(node.path() + '/objnet/warningchecking/FrameRangeWarning').cook(force=True)
    except:
        pass
def setplane(node):
    bake_parms = ["bake_basecolor", "bake_Nt", "bake_specrough", "bake_metallic",
                  "bake_N", "bake_Oc", "bake_Cu", "bake_P", "bake_Th", "bake_Ds"]
    no_standard_bakes = 1
    for bake_parm in bake_parms:
        should_bake = node.parm(bake_parm).eval()
        if should_bake:
            channel_name = bake_parm.split("_")[-1]
            no_standard_bakes = 0
            return channel_name  # return the first enabled channel
    if no_standard_bakes:
        return "C"
def checkchannels(node):
    bake_parms = ["bake_basecolor", "bake_Nt", "bake_specrough", "bake_metallic",
                  "bake_N", "bake_Oc", "bake_Cu", "bake_P", "bake_Th", "bake_Ds"]
    no_standard_bakes = 1
    for bake_parm in bake_parms:
        should_bake = node.parm(bake_parm).eval()
        if should_bake:
            no_standard_bakes = 0
            return 1
    if no_standard_bakes:
        return 0
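The functions above all share two path conventions: temporary EXRs are named with the bake's UUID plus an index, and the user's output path carries a `$(CHANNEL)` token that gets swapped for each map's suffix. Those conventions can be exercised outside Houdini (the directory and suffix below are made up for illustration):

```python
import os
import uuid

def channel_path(base_path, suffix):
    """Expand the $(CHANNEL) token the way extract_planes()/pre_bake() do."""
    return base_path.replace("$(CHANNEL)", suffix)

def temp_exr_path(temp_dir, unique_id, index):
    """Per-mesh temporary EXR name, mirroring set_current_mesh()/multi_render()."""
    return os.path.join(temp_dir, "%s_tmp_exr_path%s.exr" % (unique_id, index))

uid = uuid.uuid4()
out = channel_path("/renders/char_$(CHANNEL).exr", "basecolor")
tmp = temp_exr_path("/tmp", uid, 1)
```

Keeping the UUID in every temporary filename is what lets post_bake() safely delete only the files belonging to the current bake.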
Build Your Own Triple Bunk Beds
Discover free woodworking plans and projects for build your own triple bunk beds. Start your next project for build your own triple bunk beds with one of our many woodworking plans. Woodworking project plans available for immediate PDF download.
Woodworking Projects & Plans for "Build Your Own Triple Bunk Beds":
Triple Bunk Beds From Aspen Logs
This shows my custom made aspen log bunk beds for my grandkids. Subscribe - there will be more aspen log furniture!...
Twin Over Twin Bunk Bed - 023
Since making this video I made another one and have plans available. Click here if you are interested...
Triple Bunk Bed Made With The Shopbot Cnc Router
This is a triple bunk bed that I made using a ShopBot CNC router. I think I have to give props to Bill Young of ShopBot for the mortise and tenon wedge joint design, and to Rodrigo y Gabriela for the music. Enjoy!...
Woodworking Projects With Pallets
Site: woodworking projects with pallets, "woodworking projects with pallets" woodworking plans free Using Bed Woodworking Plans To Build A Custom Bed Make Your Own Bed With Easy Woodworking Bed Plans Bed Woodworking Plans - How To Use Them To Build A Bed Click to learn more: Teds Woodworki...
Build Your Own Bunk Bed | Easy Step By Step Teds Woodworking Plans
Build Your Own Bunk Bed | Easy Step by Step Woodworking Plans Here is an honest review of TedsWoodworking. Pros: If you want to start a woodworking project, you need all the necessary information, including schematics, blueprints, materials lists, dimensions etc. That is where TedsWoodworking comes ...
Space Saving Kids Triple Bunk Beds
TRIPLE BUNK BED. Triple Bunk Beds on Pinterest Discover Pins about triple bunk beds on Pinterest. See more about triple bunk, bunk bed plans and triplets bedroom. An Enormous Selection of Bunk Beds for 3 or More An Enormous Selection of Bunk Beds for 3 or More. BUNK BEDS FOR THREE OR MORE Twin Bunk ...
Construction Of A Triple Bunk Bed.
Construction of a triple bunk bed...
How To Make Bunk Beds
Go to for the free plans. How to make a set of bunk-beds by BuildEazy. Bunk beds for the kids bedroom...
How To Make A Triple Surprise Bunk Bed For Meowlody And Purrsephonewerecat- Doll Crafts
Learn how to make a triple surprise bunk bed for Meowlody and Purrsephone Werecat. This bed accommodates 9 Monster High Dolls at one time. Has a hidden pull-out bed compartment and storage. Today I will use this dish drying rack and transform it into a bunk bed with pull out and storage that can acc...
How To Build A Built In Bunk Beds Or Alter For Loft Bed
Step by step video on freeing up room space in a kids room with a built-in bunk bed. This short detailed video will show you how to build a built-in bunk bed. Perfect for a girl or boys room, vacation home, or in a play space for guests. This home project is easier than it looks and give you a custo...
10x12 Shed Plans With Loft
This site: 10x12 shed plans with loft, "10x12 shed plans with loft" Teds Woodworking 16, 000 Ted's Woodworking Plans Review Make your own furniture TedsWoodworking 16, 000 Woodworking Plans Review. If you are one of those people who enjoys building woodworking crafts projects and Click to ...
How To Build A Triple Lego Bunk Bed
How to build a lego bunk bed...
Building An Ag Doll Triple Bunk Bed
It took about an hour but it was super fun! Me and my grandpa love the results. Btw there was 6 steps sorry I never showed the ladder. It was a little over 3 feet tall and 20 inches long. And each bunk was 16 inches apart so they can sit up. Hope you enjoy ;)...
How To Build A Bunk Bed
Watch more Home Repair & DIY videos: Bunk beds are a practical solution when space is at a premium. Here's how to put one together on your own. Step 1: Determine the size Use the tape measure to measure the size of your mattress. Twin mattresses are typically 6 1/2 feet long and 3 1/2 feet wide,...
Bunk Bed Plans
Using bunk beds is a good way to maximize space, especially if your floor area is very limited. This type of furniture is actually very popular and commonly found in the rooms of children, residence halls of colleges and universities, ships and trains. If you are interested in creating your own bunk...
Building Triple Bunk Beds (pt 2)
Continuing to build frames and top for bunk beds that will hang from the wall submarine style...
Triple Bunk Bed Plans - Diy Plans And Blueprints
Learn how to build a DIY triple bunk bed for your kids, complete with 16, 000 furniture plans and blueprints. 16, 000 Step By Step Wooden Furniture Plans This package contains plans that is covered from head to toe. From step-by-step instructions and easy to follow guides. These easy-to-understand p...
2x4 Bunk Bed Build Pt 1
This is a series of how to easily build a bunk bed that not only will last a lifetime, but is very kid safe due to the low height. Follow along in social media using the hashtag #bunkbedbuild. A Slice of Wood Workshop Website: Etsy: Facebook: Twitter: Instagram: Pintrest...
Bunk Bed Build Pt 2: Chopping Mortises- How To
After the tenons are cut it is time to cut the mortises. Come along and see how these are cut and the tools you will need! A Slice of Wood Workshop Website: Etsy: Facebook: Twitter: Instagram: Pintrest...
How To Build Triple Layer Bunk Beds.
Do you want to learn how to build a triple level bunk bed for more activities? Look no further folks, follow along with out super easy guides Josh and Vaughn as they teach you how to build the best! P.S don't try this at home, try this at a friend's house. or your neighbours...
Triple Bunk Bed
Triple bunk bed...
Woodworking Projects For Profit
Click: woodworking projects for profit, "woodworking projects for profit" easy how to woodworking projects Using Bed Woodworking Plans To Build A Custom Bed Make Your Own Bed With Easy Woodworking Bed Plans Bed Woodworking Plans - How To Use Them To Build A Bed Click to learn more: Teds Wo...
Triple Bunk Bed Plans
SUBSCRIBE for a new DIY video almost every single day! If you want to learn more about how to build a triple bunk bed plans, we recommend you to pay attention to the instructions described in the video. Invest in high quality materials and select the right triple bunk bed plans...
Make A Bunk Bed - 180
Subscribe for new videos every week. Since buying our house my wife and I have decided to not purchase any more furniture. If a furniture need arises we will (I will) just make what we want instead of buying. Due to the possibility of family visiting for the holidays a bunk bed for our spare bedroom...
How To Build A "three Sleeper" Bunk Bed In Less Than Four Minutes (time Lapse)
This is me constructing a three sleeper bunk bed in my kids' bedroom. Time lapse was shot using "FastMotion Time Lapse" on a Nokia 700. The bed is supplied by Majestic Furnishings and I was impressed with the build quality. The music seemed appropriate...
Triple Bunk Beds (pt 1)
Part 1 of how to build unique, space saving twin beds for boys that will hang from the walls...
How To Build A Twin/full Bunk Bed On Your Own (diy Dad #14)
The kid's new bunk bed arrived today, and I was eager to get it put together. "Tokyo Love Patrol (Live)" and "Train Ride to the Moon" written by Stephen A Richards, (c) 2013. Download here: Thanks for watching :D FACEBOOK (Official): TWITTER: MUSIC...
How To Make A Doll Triple Bunk Bed
Free Loft Bed Plans - Building a Loft Bed, Build Your Own Loft Bed, this is the one I want in the boys room! More drawers I think for 3 kids!...
Wonder if I can turn this into a bunk bed for the boys room...
Triple Bunk Plan (not a bed) to Build Your Own Extra-Tall with Trundle Bed and Hardware Kit for Bunk and Trundle to Make a Quadruple Bunk Bed that Sleeps Four (Wood NOT Included) - Indoor Furniture Woodworking Project Plans ...
- Uncertainty Decomposition in Bayesian Neural Networks with Latent Variables [[pdf]](https://arxiv.org/abs/1706.08495) [[pdf with comments]](https://github.com/fregu856/papers/blob/master/commented_pdfs/Uncertainty%20Decomposition%20in%20Bayesian%20Neural%20Networks%20with%20Latent%20Variables.pdf)
- *Stefan Depeweg, José Miguel Hernández-Lobato, Finale Doshi-Velez, Steffen Udluft*
- `2017-06-26`
****
### General comments on paper quality:
- Quite well-written and interesting paper.
### Comments:
- The toy problems illustrated in figure 2 and figure 3 are quite neat. I did however find it quite odd that they did not actually perform any active learning experiments here?
- Figure 4b is quite confusing with the "inset" for beta=0. I think it would have been better to show this entire figure somehow.
Reframing is used to resize image or video content, e.g. for displaying video signals with a given aspect ratio on a display having a different aspect ratio. For example, High Definition (HD) video content might not be well suited for display on a small portable device.
EP 1748385 A2 discloses dynamic reframing based on a human visual attention model, in which source video content is appropriately cropped in order to keep the region of interest. The output signal may be encoded and transmitted via a network.
C. Chamaret, O. LeMeur, “Attention-based video reframing: validation using eye-tracking”, 19th International Conference on Pattern Recognition ICPR'08, 8-11 Dec. 2008, Tampa, Fla., USA, also describes reframing applications.
O. LeMeur, P. LeCallet and D. Barba, “Predicting visual fixations on video based on low-level visual features”, Vision Research, vol. 47, no. 19, pp. 2483-2498, September 2007, describes the calculation of a dynamic saliency map, based on a visual attention model. |
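A reframing pipeline ultimately reduces to choosing a crop window with the target aspect ratio inside the source frame; the saliency map only decides where that window sits. As a baseline, a centered crop is pure arithmetic (an attention-based system would shift x/y toward the salient region instead of centering):

```python
def center_crop(src_w, src_h, target_ratio):
    """Largest centered window of aspect target_ratio (= width/height) inside the source."""
    if src_w / src_h > target_ratio:
        # Source is wider than the target: keep full height, trim width.
        new_w, new_h = round(src_h * target_ratio), src_h
    else:
        # Source is taller/narrower: keep full width, trim height.
        new_w, new_h = src_w, round(src_w / target_ratio)
    x = (src_w - new_w) // 2
    y = (src_h - new_h) // 2
    return x, y, new_w, new_h

# An HD (16:9) frame reframed for a 4:3 display.
rect = center_crop(1920, 1080, 4 / 3)   # -> (240, 0, 1440, 1080)
```

Replacing the `// 2` centering with an offset derived from the saliency map's center of mass turns this into the attention-based reframing the references describe.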
Case: 14-20473 Document: 00513097952 Page: 1 Date Filed: 06/29/2015
IN THE UNITED STATES COURT OF APPEALS
FOR THE FIFTH CIRCUIT United States Court of Appeals
Fifth Circuit
FILED
June 29, 2015
No. 14-20473
Lyle W. Cayce
Clerk
TEST MASTERS EDUCATIONAL SERVICES, INCORPORATED,
Plaintiff - Appellant
v.
STATE FARM LLOYDS,
Defendant - Appellee
Appeal from the United States District Court
for the Southern District of Texas
Before BARKSDALE, SOUTHWICK, and HIGGINSON, Circuit Judges.
STEPHEN A. HIGGINSON, Circuit Judge:
Test Masters Educational Services, Inc. (“TES”) filed this lawsuit against
State Farm Lloyds, requesting a declaratory judgment that State Farm owes
TES a duty to defend. The district court granted summary judgment in favor
of State Farm. For the reasons articulated below, we AFFIRM.
FACTS AND PROCEEDINGS
The underlying lawsuit in this duty-to-defend appeal is the latest in an
ongoing series of lawsuits involving TES and Robin Singh Educational
Services, Inc. (“Singh”). 1 Both TES and Singh provide test preparation
services, and both use the trade name or service mark “Testmasters.” TES’s
1 TES and Singh have been suing each other for over a decade. See, e.g., Test Masters
Educ. Servs., Inc. v. Singh, 46 F. App’x 227, 2002 WL 1940083 (5th Cir. July 24, 2002) (per
curiam); Test Masters Educ. Servs., Inc. v. Singh, 428 F.3d 559 (5th Cir. 2005).
corporate name is “Test Masters,” it uses the mark “Testmasters” on its
website, and its website’s domain name is “testmasters.com.” Singh uses
“TestMasters” as its trade name and service mark, and its website domain
name is “testmasters.net.”
In the underlying lawsuit that triggered this appeal, TES sued Singh,
alleging trademark infringement and various other claims. See Test Masters
Educ. Servs., Inc. v. Robin Singh Educ. Servs., Inc., No. H-08-1771, 2013 WL
1404816, at *4 (S.D. Tex. Apr. 5, 2013). Singh then filed counterclaims against
TES. Singh’s original counterclaim alleged that TES’s website purported to
offer live LSAT preparation courses across the country under the
“Testmasters” name and mark, mimicked a map on Singh’s website, and made
material misrepresentations about TES’s services to trick consumers into
believing that TES’s services were associated with Singh’s. TES tendered the
original counterclaims to State Farm, and State Farm, with a reservation of
rights, agreed to pay for TES’s defense.
The State Farm policy in effect at the time provided liability coverage for
“advertising injury” claims. “Advertisement[s]” included “notices that are
published . . . on the Internet.” The policy’s definition of “advertising injury,”
in turn, included “injury arising out of . . . infringing upon another’s copyright,
trade dress or slogan in your advertisement.” (emphasis added). Thus, the
policy covered trade dress claims, but not trademark claims.
When State Farm initially provided a defense, it explained that it was
providing coverage because Singh’s “counterclaim may allege facts sufficient
to indicate trade dress infringement.” In its original counterclaims, Singh
alleged that TES’s website contained a clickable map image of the United
States that “mimicked” a map on Singh’s website. Singh, however, filed an
Amended Counterclaim that removed all allegations related to the map. After
it reviewed the Amended Counterclaim, State Farm withdrew its defense,
claiming that the Amended Counterclaim did not allege trade dress
infringement, and instead only alleged trademark infringement.
TES then filed a lawsuit against State Farm, requesting a declaratory
judgment that State Farm has a duty to defend against Singh’s Amended
Counterclaim. After the parties filed cross-motions for summary judgment, the
district court granted State Farm’s summary-judgment motion and denied
TES’s. This appeal timely followed.
STANDARD OF REVIEW
This court reviews a district court’s grant of summary judgment de novo,
applying the same standards as the district court. Rogers v. Bromac Title
Servs., L.L.C., 755 F.3d 347, 350 (5th Cir. 2014).
DISCUSSION
Texas law governs this diversity case. To determine whether an insurer
has a duty to defend, Texas courts apply the eight-corners rule. “Under that
rule, courts look to the facts alleged within the four corners of the [third-party
plaintiff’s] pleadings, measure them against the language within the four
corners of the insurance policy, and determine if the facts alleged present a
matter that could potentially be covered by the insurance policy.” Ewing
Constr. Co. v. Amerisure Ins. Co., Inc., 420 S.W.3d 30, 33 (Tex. 2014). When
reviewing the pleadings, courts must focus on the factual allegations, not the
asserted legal theories or conclusions. Id. Courts consider the factual
allegations “without regard to their truth or falsity” and resolve “all doubts
regarding the duty to defend . . . in the insured’s favor.” Id. Even if the
underlying complaint only “potentially includes a covered claim, the insurer
must defend the entire suit.” Zurich Am. Ins. Co. v. Nokia, Inc., 268 S.W.3d
487, 491 (Tex. 2008) (emphasis added). “Thus, even if the allegations are
groundless, false, or fraudulent the insurer is obligated to defend.” Id. (internal
quotation marks, alteration, and citation omitted). “Courts may not,
however, (1) read facts into the pleadings, (2) look outside the pleadings, or
(3) imagine factual scenarios which might trigger coverage.” Gore Design
Completions, Ltd. v. Hartford Fire Ins. Co., 538 F.3d 365, 369 (5th Cir. 2008)
(internal quotation marks and citation omitted); see also Nat’l Union Fire Ins.
Co. of Pittsburgh, Pa. v. Merchants Fast Motor Lines, Inc., 939 S.W.2d 139, 142
(Tex. 1997) (per curiam) (“We will not read facts into the pleadings.”). “The
insured has the initial burden to establish coverage under the policy.” Ewing
Constr. Co., 420 S.W.3d at 33.
TES’s insurance policy with State Farm covered trade dress—not
trademark—claims. Thus, a central question in this appeal is: what is trade
dress? “Trade dress” is distinct from a “trademark” or a “service mark.” See
Int’l Jensen, Inc. v. Metrosound U.S.A., Inc., 4 F.3d 819, 822 (9th Cir. 1993)
(contrasting trademarks and trade dress). Although the concepts often overlap,
“trade dress protection is generally focused more broadly” than trademark
protection. See 1 J. Thomas McCarthy, McCarthy on Trademarks and Unfair
Competition § 8:1 (4th ed. 1996) [hereinafter McCarthy]. Under the Lanham
Act, a “trademark” and a “service mark” include “any word, name, symbol, or
device . . . used . . . to identify and distinguish [goods or services, respectively].”
15 U.S.C. § 1127. Relatedly, a “trade name” means “any name used by a person
to identify his or her business . . . .” Id. Thus, “Testmasters” is a company “trade
name” and also a “service mark.”
The Act does not define “trade dress,” but courts have filled that gap. The
term “refers to the total image and overall appearance of a product and may
include features such as the size, shape, color, color combinations, textures,
graphics, and even sales techniques that characterize a particular product.”
Amazing Spaces, Inc. v. Metro Mini Storage, 608 F.3d 225, 251 (5th Cir. 2010)
(internal quotation marks and citation omitted); see also KLN Steel Prods. Co.
v. CNA Ins. Cos., 278 S.W.3d 429, 441 (Tex. App. 2008) (“Trade dress . . .
consists of the total image of a product or service, including product features
such as design, size, shape, color, packaging labels, [and] color
combinations . . . .” (alterations in original) (internal quotation marks and
citation omitted)).
An unregistered trade dress may be protectable under section 43(a) of
the Lanham Act if the trade dress is distinctive and nonfunctional. See TrafFix
Devices, Inc. v. Mktg. Displays, Inc., 532 U.S. 23, 28–29 (2001); Eppendorf-
Netheler-Hinz GMBH v. Ritter GMBH, 289 F.3d 351, 354–55 (5th Cir. 2002).
When alleging a trade dress claim, the plaintiff must identify the discrete
elements of the trade dress that it wishes to protect. See 1 McCarthy § 8:3; see
also Yurman Design, Inc. v. PAJ, Inc., 262 F.3d 101, 116 (2d Cir. 2001) (“[W]e
hold that a plaintiff seeking to protect its trade dress in a line of products must
articulate the design elements that compose the trade dress.”).
Courts have extended trade dress protection to the overall “motif” of a
restaurant, see Two Pesos, Inc. v. Taco Cabana, Inc., 505 U.S. 763, 765, 767
(1992), and to the use of a lighthouse as part of the design for a golf hole, see
Pebble Beach Co. v. Tour 18 I Ltd., 155 F.3d 526, 537, 539–42 (5th Cir. 1998),
abrogated on other grounds by TrafFix Devices, Inc., 532 U.S. 23; see also 1
McCarthy § 8:4.50 (citing additional examples of trade dress, including a
magazine cover design, and the layout and appearance of a mail-order catalog).
A growing number of courts have confronted whether trade dress protection
can extend to websites, so-called “web dress” protection. See, e.g., Fair Wind
Sailing, Inc. v. Dempster, 764 F.3d 303, 310 (3d Cir. 2014); see also 1 McCarthy
§ 8:7.25 (discussing the possibility of “web dress” or “site dress” claims).
With this definition of “trade dress” in mind, we turn to the allegations
in Singh’s Amended Counterclaim. To start, Singh cites section 43(a) of the
Lanham Act, which encompasses trade dress infringement claims. See 15
U.S.C. § 1125(a)(1); TrafFix Devices, Inc., 532 U.S. at 28–29; Eppendorf-
Netheler-Hinz GMBH, 289 F.3d at 354. This provision, however, also covers a
range of other claims, including trademark infringement and false
advertising. 2 Cf. Seven-Up Co. v. Coca-Cola Co., 86 F.3d 1379, 1383 (5th Cir.
1996) (recognizing that section 43(a) “provides protection against a myriad of
deceptive commercial practices” (internal quotation marks and citation
omitted)). More importantly, this statutory citation alone is not sufficient to
trigger coverage. “[C]ourts look to the factual allegations showing the origin of
the damages claimed, not to the legal theories or conclusions alleged.” Ewing
Constr. Co., 420 S.W.3d at 33.
Next, TES highlights Singh’s allegation that “TES changed its website
so that it was confusingly similar to Singh’s.” TES argues that this allegation,
at the very least, introduces some uncertainty as to whether Singh was
alleging a trademark or trade dress infringement claim. To be clear, the duty-
to-defend standard requires the court to resolve all doubts and ambiguities in
TES’s favor. Id.; Zurich Am. Ins. Co., 268 S.W.3d at 491. And it is possible that
an allegation that TES is using a “confusingly similar” website could be the
basis of a trade dress infringement claim. TES, however, does not read this
allegation in its full context. This paragraph of Singh’s Amended Counterclaim
alleges that “TES changed its website so that it was confusingly similar to
2 Section 43(a) of the Lanham Act, 15 U.S.C. § 1125(a)(1), provides protection against
the use in commerce of:
any word, term, name, symbol, or device, or any combination thereof, or any
false designation of origin, false or misleading description of fact, or false or
misleading representation of fact, which—
(A) is likely to cause confusion, or to cause mistake, or to deceive . . . as
to the origin, sponsorship, or approval of his or her goods, services, or
commercial activities by another person, or
(B) in commercial advertising or promotion, misrepresents the nature,
characteristics, qualities, or geographic origin of his or her or another
person’s goods, services or commercial activities . . . .
Singh’s, purporting to offer LSAT preparation courses in every state, although
TES had never before offered LSAT courses anywhere, and had never before
offered any test preparation courses outside of the state of Texas.” Read in its
entirety, the Amended Counterclaim focuses on factual misrepresentations on
TES’s website, not on any alleged trade dress, or “look and feel,” in the website
itself. 3
The district court reached the same conclusion. The district court
concluded that “[t]he Amended Counterclaims allege that [TES’s] website was
‘confusingly similar’ to Singh’s because of the use of the Test Masters Mark
and [TES’s] list of putative nationwide course locations, but does not allege any
facts regarding any inherently distinctive ‘look and feel’—or ‘trade dress’—of
the website.” The district court further emphasized that “[a]bsent some
allegation of aesthetic similarity to another’s advertisement, a claim that
defendant infringed a trademark does not itself comprise a claim for trade
dress infringement.” We agree with the district court.
This analysis is consistent with the conclusion this court reached in
America’s Recommended Mailers Inc. v. Maryland Casualty Co., 339 F. App’x
467 (5th Cir. 2009) (per curiam). Like here, the insurance policy in that case
had an identical definition of “advertising injury,” which covered trade dress
claims, but not trademark claims. See id. at 468. The insured allegedly copied
the AARP logo in a direct mail advertising. Id. at 469. This court observed that
using this logo was a trademark claim, not a trade dress claim. Id. As a result,
this court affirmed the district court’s grant of summary judgment, holding
3 At oral argument, TES contended that Singh’s earlier allegations about the clickable
map were still in the case even though Singh had removed the map allegations from the
Amended Counterclaim. TES provided no legal support for this argument, and we will not
read allegations into the Amended Counterclaim that do not exist. See Gore Design
Completions, Ltd., 538 F.3d at 369; Nat’l Union Fire Ins. Co. of Pittsburgh, Pa., 939 S.W.2d
at 142.
that there was no duty to defend. Id. at 469–70. Although America’s
Recommended Mailers is not binding circuit precedent, this unpublished
opinion is persuasive authority supporting the district court’s conclusion. See
United States v. Weatherton, 567 F.3d 149, 153 n.2 (5th Cir. 2009).
KLN Steel Products Co. v. CNA Insurance Cos., a case that TES cites,
also supports the conclusion that Singh alleged a trademark—not a trade
dress—infringement claim. 278 S.W.3d at 440–42. Again, KLN Steel
confronted a similar definition of “advertising injury” in the insurance policy—
it included trade dress, but not trademark, claims. Id. at 440. The underlying
complaint alleged misappropriation of the dimensions and other design
features of a bed. See id. at 442. The court explained that these allegations
could not be read as pleading a trade dress claim triggering coverage because
none of the features was alleged to be distinctive and nonfunctional. See id.
The same conclusion is true here. The term “trade dress” is not
mentioned in Singh’s Amended Counterclaim, and there are no allegations
suggesting that Singh even has a protectable trade dress. For example, there
are no allegations describing the content or overall image of Singh’s website.
Moreover, an allegation that TES is using a “confusingly similar” website is
not sufficient to trigger coverage. Consumer confusion is an element of both a
trademark infringement claim and a trade dress infringement claim. See
TrafFix Devices, Inc., 532 U.S. at 28 (explaining that a trade dress “may not be
used in a manner likely to cause confusion”); Nola Spice Designs, L.L.C. v.
Haydel Enters., Inc., 783 F.3d 527, 536 (5th Cir. 2015) (recognizing that
likelihood of confusion is an element of a trademark infringement claim). The
central focus in this coverage dispute, however, is not on the confusion, but on
what allegedly is causing the confusion. The alleged confusion in this case
stems from the use of a similar service mark (“Testmasters”), and the false
representation that TES offers a similar service (live LSAT courses offered
nationwide). None of the allegations possibly states a claim for confusingly
similar trade dress.
In short, the factual allegations in Singh’s Amended Counterclaim do not
potentially include a trade dress infringement claim. Instead, the Amended
Counterclaim alleges trademark infringement and false advertising claims.
Neither of those claims is covered under the policy. The district court was
therefore correct to grant summary judgment in favor of State Farm.
CONCLUSION
For the reasons stated above, we AFFIRM the judgment of the district
court.
|
We are happy to announce that Loopex has joined the Crowdholding platform and have launched their project page with a task.
Loopex is an international platform that provides various cryptocurrency services to its users. They are creating a cryptocurrency financial center with the help of advanced blockchain technologies. Their mission is to bring the exchange of crypto assets to a new level. Loopex does everything to make trading on the platform as comfortable, convenient, safe, and inexpensive as possible.
One advantage of Loopex’s system is its low trading commission for cryptocurrency exchange on the platform: the amount does not exceed 0.1%. By paying fees with the XLP token, users receive a 50% discount on execution of all trading transactions. Users holding 400,000 XLP or more in their Loopex account receive VIP status, which makes their trading free of charge.
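The fee schedule described above can be sketched as a small function. This is our own illustration of the stated rules, not Loopex’s actual API; the function and constant names are invented for the example:

```python
# Illustrative sketch of the published fee rules (not Loopex's API):
# base commission 0.1%, halved when fees are paid in XLP, and waived
# entirely for VIP accounts holding at least 400,000 XLP.
BASE_RATE = 0.001        # 0.1% maximum trading commission
VIP_THRESHOLD = 400_000  # XLP balance that grants fee-free trading

def trading_fee(trade_value, pay_with_xlp=False, xlp_balance=0):
    """Return the commission charged on a trade of `trade_value`."""
    if xlp_balance >= VIP_THRESHOLD:
        return 0.0                                    # VIP: free trading
    rate = BASE_RATE * (0.5 if pay_with_xlp else 1.0)
    return trade_value * rate

print(trading_fee(10_000))                        # 10.0  (0.1% base fee)
print(trading_fee(10_000, pay_with_xlp=True))     # 5.0   (50% XLP discount)
print(trading_fee(10_000, xlp_balance=500_000))   # 0.0   (VIP status)
```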
They will be launching an ICO in the foreseeable future. If you want to learn more about their project or read their whitepaper, you can do so by clicking here.
You can keep up to date with what is happening at Crowdholding by visiting our platform or checking Crowdholding out on our social media: |
All relevant data are within the paper and its Supporting Information files.
Introduction {#sec001}
============
Trabecular meshwork induced glucocorticoid response protein (TIGR) was identified almost 20 years ago by two independent groups \[[@pone.0206801.ref001], [@pone.0206801.ref002]\] desiring to understand genes upregulated following dexamethasone treatment, since steroid-induced glaucoma constitutes a major subset of glaucoma patients. At the time, Northern blots suggested high expression of TIGR in the eye as well as expression in skeletal muscle and heart \[[@pone.0206801.ref003]--[@pone.0206801.ref005]\]. Given the tissue distribution and considering that the N-terminal of the protein shares approximately 25% identity with myosin, *TIGR* was later renamed myocilin (*MYOC*). To date, *MYOC* is the gene with mutations most strongly linked to glaucoma and is reported in approximately one-third of all juvenile open angle glaucoma (JOAG) patients \[[@pone.0206801.ref006]\] and up to 4% of all primary open angle glaucoma (POAG) cases \[[@pone.0206801.ref007], [@pone.0206801.ref008]\]. More than 70 pathological MYOC mutations have been reported and most are found in the C-terminal of the protein (refer to [http://www.myocilin.com](http://www.myocilin.com/)). The C-terminal of MYOC contains an olfactomedin (OLF) domain and shares 40% identity with the nearest OLF family member. Similar to most OLF family members, myocilin is a secreted protein, but MYOC proteins with C-terminal pathological mutations are not secreted *in vitro* \[[@pone.0206801.ref009]\]. Despite intense study (for review see \[[@pone.0206801.ref010]\]), it is not definitively known how mutant MYOC causes glaucoma, and the function of wild-type (wt) MYOC has remained elusive.
Several mouse models over-expressing wt MYOC or MYOC mutant proteins have been established to study intraocular pressure (IOP) and glaucoma disease development \[[@pone.0206801.ref011]--[@pone.0206801.ref014]\]. Although the eye and glaucoma have been the primary focus when studying pathological MYOC mutations, there is interest in knowing if *MYOC* mutations result in pathology in other tissues. Patients with POAG and a mutation in the *MYOC* gene have been reported to be phenotypically similar to other POAG patients without a *MYOC* mutation \[[@pone.0206801.ref015]\]. In 2002, Tamm stated that it was remarkable that patients with pathological *MYOC* mutations were at high-risk for glaucoma, but apparently had no other disease \[[@pone.0206801.ref010]\]. Could this be an area that has been overlooked? As such, studying MYOC in other tissues could provide missing insight into MYOC biology. Additionally, knowledge gained by studying myocilin in other tissues may assist physicians in early identification of patients suspected to carry a pathologic *MYOC* mutation.
Myocilin transcripts are high in muscle \[[@pone.0206801.ref003]--[@pone.0206801.ref005]\] and a BAC transgenic mouse with 15-fold over-expression of wt mouse MYOC protein was reported to have skeletal muscle hypertrophy with an approximate 40% increase in gastrocnemius muscle weight \[[@pone.0206801.ref016]\]. Thus, it is possible that MYOC is impacting cells in tissues other than those of the eye. Our present study is the first to examine the impact of over-expressing MYOC with a pathologic mutation in skeletal muscle. We utilized a transgenic mouse with CMV-driven expression of cDNA encoding for the human MYOC Y437H mutant protein \[[@pone.0206801.ref014]\], which in humans is a severe *MYOC* mutation associated with JOAG \[[@pone.0206801.ref007], [@pone.0206801.ref017]\]. In the skeletal (gastrocnemius) muscle of these transgenic mice, we did not observe evidence of sarcoplasmic/endoplasmic reticulum (SR/ER) stress associated with mutant MYOC, nor did we observe muscle hypertrophy; however, there is a novel phenotype pertaining to the sarcomere M-line suggestive of compromised sarcomere integrity. We found that CMV-MYOC-Y437H transgenic mice had reduced muscle creatine kinase (CKM), a reduction of which has been reported to result in diminished exercise capacity \[[@pone.0206801.ref018]\]. We believe that mutant MYOC may be causing this muscle pathology through protein-protein interactions and/or due to accumulation of intracellular protein aggregates. Our findings from this transgenic animal suggest that people carrying pathological *MYOC* mutations may have a skeletal muscle phenotype. This information could aid physicians in early identification of patients carrying a pathological *MYOC* mutation and at high risk for glaucoma.
Results {#sec002}
=======
Re-derived CMV-MYOC-Y437H mice did not have a glaucoma phenotype ([S1](#pone.0206801.s001){ref-type="supplementary-material"} and [S2](#pone.0206801.s002){ref-type="supplementary-material"} Figs). Based on the literature, it was anticipated that by 3 months of age the CMV-MYOC-Y437H mice would display a significant elevation in nighttime IOP (14mm Hg in wt versus 20mm Hg in transgenic) and by 12 to 14-months of age 30% of their retinal ganglion cells (RGCs) would have been lost \[[@pone.0206801.ref014]\]. In the CMV-MYOC-Y437H mice we did not observe any mean IOP difference between the wt and MYOC Y437H transgenic and we did not detect a PM IOP elevation for the animals ([S1 Fig](#pone.0206801.s001){ref-type="supplementary-material"}). This experiment was repeated several times using different aged cohorts of animals and similar results between the groups were always obtained. In addition, we did not observe differences in axon number when comparing the wt to transgenic animals ([S2 Fig](#pone.0206801.s002){ref-type="supplementary-material"}). Note that there are a few dark colored axons that could be seen in images from both wt and MYOC Y437H transgenic mice; however, it is typical that even healthy optic nerve has certain axon turnover and there is no apparent difference in the incidence of these dark colored axons between the wt and transgenic. A possible explanation for the discrepancy between our phenotype and that of the literature may be mouse strain \[[@pone.0206801.ref019]\] or epigenetics, and it could be argued that, despite our repeating experiments several times, greatly increased N values for some experiments might have disclosed minor differences.
However, our inability to detect human mutant MYOC protein in MYOC-CMV-Y437H transgenic whole eye lysates ([S3A Fig](#pone.0206801.s003){ref-type="supplementary-material"}) was unexpected, as was our inability to detect elevated MYOC in isolated anterior eye tissues of the transgenic ([S3B Fig](#pone.0206801.s003){ref-type="supplementary-material"}). Furthermore, eye lysates from wt and transgenic mice did not indicate ER stress ([S3B Fig](#pone.0206801.s003){ref-type="supplementary-material"}). The mosaic tissue expression of the CMV-MYOC-Y437H mice had previously been noted \[[@pone.0206801.ref014]\], so we desired to study other tissues with mutant MYOC expression in the hope that we would gain novel *in vivo* insights into mutant MYOC pathology. We did find that the CMV-MYOC-Y437H mice expressed the human transgene in skeletal (gastrocnemius) muscle and heart (Figs [1](#pone.0206801.g001){ref-type="fig"} and [2](#pone.0206801.g002){ref-type="fig"}).
![Myocilin tissue expression. GTEx data were obtained from the GTEx Portal (<https://www.gtexportal.org/home/>) in May 2017.](pone.0206801.g001){#pone.0206801.g001}
{#pone.0206801.g002}
Myocilin transcripts have been reported to be in adult mouse skeletal muscle and heart \[[@pone.0206801.ref004], [@pone.0206801.ref005]\] and a search of the Genotype-Tissue Expression (GTEx) database, which provides information regarding human tissue gene expression, supported this tissue expression finding ([Fig 1A](#pone.0206801.g001){ref-type="fig"}). Our RT-PCR results ([Fig 1B](#pone.0206801.g001){ref-type="fig"}) confirmed wt mouse Myoc transcript in mouse skeletal muscle (C~T~ = 25 ± 1.8) and heart (C~T~ = 26 ± 2.1). As we were unable to identify a commercial anti-MYOC antibody specific for mouse MYOC with no human MYOC cross-reactivity, we performed RT-PCR for mouse and human myocilin to estimate transcript levels in the CMV-MYOC-Y437H transgenic. The C~T~ value for human MYOC in the CMV-MYOC-Y437H transgenic skeletal muscle was found to be 19 ± 0.8 ([S4 Fig](#pone.0206801.s004){ref-type="supplementary-material"}). In contrast to the available anti-MYOC mouse tools, we were able to identify several commercial antibodies that work well to detect human MYOC by Western blot. As our transgenic mouse over-expressed human MYOC, we were able to identify in transgenic animals the expression of human mutant MYOC Y437H protein in both gastrocnemius muscle and heart ([Fig 2A](#pone.0206801.g002){ref-type="fig"}). In these tissues the human form of MYOC was found to migrate as a doublet ([Fig 2A](#pone.0206801.g002){ref-type="fig"}) and this doublet is due to partial *N-*glycosylation ([Fig 2B](#pone.0206801.g002){ref-type="fig"}). *N-*glycosylation is a post-translational modification for human MYOC \[[@pone.0206801.ref020]\] and is not a shared feature with mouse MYOC. The weight of the gastrocnemius muscle and the weight of the heart were found to be similar for control animals and CMV-MYOC-Y437H transgenic mice ([Fig 2C](#pone.0206801.g002){ref-type="fig"}), so this suggested no muscle hypertrophy.
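As a rough guide to interpreting these C~T~ values, each cycle corresponds to an approximately two-fold difference in starting template, assuming near-100% and comparable amplification efficiencies for both primer sets (a simplification; a rigorous comparison would use the ΔΔC~T~ method with a reference gene). A minimal sketch:

```python
def fold_difference(ct_target, ct_reference):
    """Approximate fold-abundance of the target relative to the reference,
    assuming ~100% PCR efficiency (one doubling per cycle)."""
    return 2.0 ** (ct_reference - ct_target)

# Human MYOC in transgenic muscle (Ct ~ 19) vs. endogenous mouse Myoc
# in muscle (Ct ~ 25): a lower Ct means more transcript.
print(fold_difference(19, 25))  # 64.0 -> roughly 64-fold more signal
```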
A potential concern when studying skeletal muscle of transgenic mice is their genetic background. The SJL strain develops spontaneous myopathy due to a splice-site mutation in the Dysferlin (*Dysf*) gene, which results in decreased levels of dysferlin protein, and this makes the SJL strain a good model for limb girdle muscular dystrophy \[[@pone.0206801.ref021], [@pone.0206801.ref022]\]. The progressive myopathy in the SJL mice can be detected within a few weeks of age and is characterized by a progressive loss of muscle mass and muscle strength \[[@pone.0206801.ref023]\]. Muscle fibers in the SJL mice are replaced by fat and skeletal muscle of the SJL mice show overt histopathological abnormalities \[[@pone.0206801.ref023]\]. B6.SJL wt and CMV-MYOC-Y437H transgenics utilized in our study: 1) were not albino; 2) did not have less *Dysf* transcript ([S5 Fig](#pone.0206801.s005){ref-type="supplementary-material"}) or less DYSF protein ([S6 Fig](#pone.0206801.s006){ref-type="supplementary-material"}) in comparison to C57BL/6J mice (abbreviated herein as C57); and 3) did not have body weight or gastrocnemius muscle weight profoundly different from each other ([Fig 2C](#pone.0206801.g002){ref-type="fig"}) nor greatly different from the C57BL/6J mice utilized in this study ([S7 Fig](#pone.0206801.s007){ref-type="supplementary-material"}). Additionally, our EM images herein did not show replacement of muscle fibers with fat. Hence, our results indicate that the MYOC Y437H transgene did not adversely impact *Dysf* gene expression and having the CMV-MYOC-Y437H transgenic mice established on the B6.SJL background circumvents some concerns associated with a pure SJL background.
Expression of mutant MYOC has been reported to produce a robust ER stress response *in vitro* \[[@pone.0206801.ref024], [@pone.0206801.ref025]\] and *in vivo* \[[@pone.0206801.ref014]\]. In muscle, an ER stress response will result in upregulation of numerous ER resident proteins \[[@pone.0206801.ref026]\]. Our Western blot analysis for ER proteins in the gastrocnemius muscle and heart lysates indicated a small increase in expression of GRP78 (BiP) in the transgenic, but no change was observed for CALR or GRP94 proteins ([Fig 3A](#pone.0206801.g003){ref-type="fig"}). This lack of uniform elevation of ER resident proteins in the transgenic does not support the concept that mutant MYOC is causing pronounced *in vivo* ER stress. Additional Western blots of the tissue lysates for pro-apoptotic proteins associated with ER stress ([Fig 3B](#pone.0206801.g003){ref-type="fig"}) showed no differences between the control and transgenic. As skeletal muscle is not considered a secretory tissue, we desired to know if MYOC Y437H mutant protein would show signs of severe ER stress in the form of ER expansion. To examine the ER directly, we performed electron microscopy of mouse gastrocnemius muscle samples. Electron micrographs of the mouse skeletal muscle showed no indication of ER expansion ([Fig 4A](#pone.0206801.g004){ref-type="fig"}). Therefore, we conclude that the skeletal muscle of the mutant MYOC Y437H transgenic mice showed no evidence indicative of ER stress.
{#pone.0206801.g003}
{#pone.0206801.g004}
Further examination of CMV-MYOC-Y437H transgenic mouse skeletal muscle EM images ([Fig 4A and 4B](#pone.0206801.g004){ref-type="fig"}) revealed an increase in the number of visible M-bands within the sarcomeres when compared to wt mouse muscle. This observation suggested that mutant MYOC protein expression in skeletal muscle may impact muscle ultrastructure. Mitochondria morphology is known to be highly diverse in skeletal muscle fibers \[[@pone.0206801.ref027], [@pone.0206801.ref028]\]. As the number of mitochondria and sarcomere size varied slightly among our EM images, we performed quantitative analysis. Mitochondrial number was found to be similar for both wt and CMV-MYOC-Y437H transgenics ([Fig 4C](#pone.0206801.g004){ref-type="fig"}).
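A per-image comparison of mitochondrial counts of this kind can be made with a two-sample test; the sketch below implements Welch's t statistic using only the standard library. The counts shown are illustrative placeholders, not the study's data, and the paper does not specify which test was used:

```python
import statistics as st

def welch_t(a, b):
    """Welch's t statistic and degrees of freedom for two samples."""
    ma, mb = st.mean(a), st.mean(b)
    va, vb = st.variance(a), st.variance(b)   # sample variances (n - 1)
    na, nb = len(a), len(b)
    se2 = va / na + vb / nb                   # squared standard error
    t = (ma - mb) / se2 ** 0.5
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical mitochondria counts per micrograph:
wt_counts = [14, 12, 15, 13, 14]
tg_counts = [13, 14, 12, 15, 13]
t, df = welch_t(wt_counts, tg_counts)
print(round(t, 2), round(df, 1))  # a small |t| -> no evidence of a difference
```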
To determine if our findings for mutant MYOC in skeletal muscle were consistent with other MYOC transgenics, we examined two human MYOC BAC transgenic mouse lines, one expressing wt MYOC and the other expressing mutant Q368X MYOC ([S4 Fig](#pone.0206801.s004){ref-type="supplementary-material"}). MYOC transcripts were observed in the heart and gastrocnemius muscle of both the wt MYOC BAC and mutant Q368X MYOC BAC mice ([S4 Fig](#pone.0206801.s004){ref-type="supplementary-material"}). Similar to the CMV-MYOC-Y437H transgenic, these BAC animals exhibited gastrocnemius muscle weights and heart weights comparable to control mice ([S7 Fig](#pone.0206801.s007){ref-type="supplementary-material"}). Electron micrographs of the gastrocnemius muscle indicated that the wt control and the wt MYOC BAC mice showed a very distinct and compact M-band. In comparison, the Q368X mutant MYOC BAC gastrocnemius muscle M-band was less defined than the controls ([S8 Fig](#pone.0206801.s008){ref-type="supplementary-material"}), appearing dispersed and more similar to that observed for the CMV-MYOC-Y437H transgenic. H&E staining of C57 control, wt MYOC BAC transgenic, and Q368X mutant MYOC BAC transgenic gastrocnemius muscles did not indicate hypertrophy in the BAC mice ([S9A and S9B Fig](#pone.0206801.s009){ref-type="supplementary-material"}).
M-band composition consists of myomesin (Myom) family members bridged by accessory proteins such as muscle creatine kinase \[[@pone.0206801.ref029], [@pone.0206801.ref030]\]. Western blot analysis of mouse skeletal muscle for proteins typically found within the M-band showed no differences in MYOM2 or MYOM3; however, lower levels of MYOM1 protein and CKM protein were observed for the mutant CMV-MYOC-Y437H transgenic mice ([Fig 5A and 5B](#pone.0206801.g005){ref-type="fig"}). A yeast-two-hybrid screen using wt MYOC and a skeletal muscle library had previously been completed and reported CKM as binding MYOC \[[@pone.0206801.ref031]\]. We confirmed the MYOC-CKM interaction by an immunoprecipitation (IP) experiment using CKM-FLAG-tagged protein as bait. Our IP results do support a physical interaction between MYOC and CKM protein ([Fig 6](#pone.0206801.g006){ref-type="fig"}).
{#pone.0206801.g005}
{#pone.0206801.g006}
In trabecular meshwork cells, mutant MYOC is misfolded \[[@pone.0206801.ref032]\] and MYOC mutant protein is reported not to be secreted \[[@pone.0206801.ref009]\]. In our NTM5 cells transfected with MYOC cDNAs, we did observe secretion of wt MYOC as well as non-secretion of the mutant protein ([Fig 7](#pone.0206801.g007){ref-type="fig"}). Normally, misfolded proteins in the ER are retro-translocated to the cytoplasm and efficiently cleared by a process known as ER-associated degradation (ERAD), which utilizes the proteasome. Ubiquitin (Ub) Western blot smearing observed for the CMV-MYOC-Y437H transgenics ([Fig 5A](#pone.0206801.g005){ref-type="fig"}) indicates more high-molecular weight (HMW) ubiquitinated proteins relative to the wt animal, which is suggestive of more misfolded protein in the transgenics. From our experiment with NTM5 cells, we did find that some of the MYOC Y437H mutant protein was detectable in the non-soluble cell fraction ([Fig 7](#pone.0206801.g007){ref-type="fig"}). Thus, pathology associated with mutant MYOC likely occurs due to non-secretion and aggregation of the mutant MYOC protein with itself and in complexes with other proteins.
{#pone.0206801.g007}
Discussion {#sec003}
==========
There is a strong genetic link between mutant *MYOC* and glaucoma. Despite almost 20 years of intensive effort, the function of wt MYOC protein is unknown, as is exactly how mutant MYOC protein contributes to disease. In the eye, MYOC protein is highly expressed in the trabecular meshwork, ciliary body, and retina, and Northern blots indicate myocilin transcripts in skeletal muscle and heart \[[@pone.0206801.ref001], [@pone.0206801.ref004], [@pone.0206801.ref005]\]. To date, there has been only one publication regarding MYOC in skeletal muscle, and that work was limited to wt mouse *Myoc* \[[@pone.0206801.ref016]\], so the impact of mutant myocilin on skeletal muscle had not been investigated.
The CMV-MYOC-Y437H mice \[[@pone.0206801.ref014]\] were acquired through a licensing agreement and the line was re-derived in accordance with CRO requirements. The data we collected from cohorts of these MYOC transgenic mice of different ages were compared to age-matched wt animals. A limitation of our work is that we did not include a control of transgenic mice expressing CMV-wt-MYOC to compare with the mutant CMV-MYOC-Y437H mice. However, we did compare the mutant MYOC mice against wt control mice based on the established precedent in the literature \[[@pone.0206801.ref011], [@pone.0206801.ref012], [@pone.0206801.ref014]\] that studying the mutant MYOC alone is sufficient to understand its biology. We found that the CMV-MYOC-Y437H mice did not have high IOP, nor did the aged transgenics have an abnormal axon number. Western blots of ocular tissue lysates showed no detectable expression of the human mutant MYOC protein, and this is the probable reason why these transgenics did not exhibit features of glaucoma. When we examined non-ocular tissues, we found that the MYOC-Y437H transgene was expressed in gastrocnemius muscle. To gain insights into mutant MYOC biology/pathology, we further studied this muscle group. The ending of the licensing agreement meant the re-derived mice had to be terminated, so we were unable to conduct related experiments that may have been informative (e.g., histological studies of additional muscle groups for fiber cross-sectional area).
The literature reports a C57BL/6J mouse line with over-expression of mouse BAC wt *Myoc* \[[@pone.0206801.ref033]\]. This mouse line was intercrossed to produce mice homozygous for the transgene, and these Tg/Tg mice exhibited a robust 15-fold over-expression of wt MYOC protein \[[@pone.0206801.ref033]\]. In this 2004 publication, the authors did not report any overt phenotype or weight differences for these wt *Myoc* BAC transgenic mice in comparison to the wt controls. Later, this transgenic mouse with over-expression of wt mouse BAC *Myoc* was reported to have skeletal muscle hypertrophy (\>40%) and a large (40%) increase in gastrocnemius muscle weight \[[@pone.0206801.ref016]\]. To put this muscle enlargement in perspective, myostatin-null mice have been reported to have a 47% increase in gastrocnemius muscle weight relative to wt animals \[[@pone.0206801.ref034]\]. In none of our MYOC transgenic mice did we observe this reported weight difference in gastrocnemius muscle. A reason for this discrepancy may be that the wt mouse *Myoc* BAC transgenic was homozygous for the transgene \[[@pone.0206801.ref016]\], which meant those mice had more robust over-expression of MYOC protein than any of our study animals.
MYOC is a secreted protein which is processed in the ER and is *N-*glycosylated. Pathologic MYOC mutant proteins are not secreted \[[@pone.0206801.ref009]\] and are retained within the cell. It has been proposed that mutant MYOC protein *in vivo* induces severe ER stress to cause pathology \[[@pone.0206801.ref014]\]. We found that mutant MYOC Y437H over-expressed in skeletal muscle of transgenic mice has no major impact on expression of ER proteins. We observed a slight increase in GRP78 (BiP) expression, but we did not see a collective upregulation of numerous ER-resident proteins, which would be expected for an ER stress response. Our Western blot data are predominantly qualitative, so minor differences in protein amounts might be resolved by methods more sensitive than Western blot and, if demonstrated, such minor protein differences could potentially have biological importance. As skeletal muscle is not considered a secretory tissue, it is expected to be very sensitive to retention of a secretory protein, resulting in SR/ER expansion. Our electron micrographs of CMV-MYOC-Y437H transgenic skeletal muscle did not show evidence of expanded SR/ER. Rather, we observed a novel M-band phenotype in these animals, with the MYOC Y437H mutant mice appearing to have multiple M-bands within their sarcomeres. Multiple M-bands can be seen in an EM image in the literature \[[@pone.0206801.ref035]\] but have not been suggested as a phenotype until now. Interestingly, in the literature the appearance of multiple M-bands is seen only in diseased tissue (i.e., dog myocardium).
The M-band is a distinct and dense protein structure at the center of the sarcomere of skeletal and cardiac muscle. The M-band is comprised of bridged M-lines and the ultrastructure correlates with contraction speed and muscle type suggesting that the M-band components have a significant physiological role \[[@pone.0206801.ref036]\]. Electron microscopy has shown that muscle fibers at high degrees of stretch can have the M-band become faint or undetectable within the sarcomere while in muscle fibers with extremely shortened sarcomeres the M-lines can have distinct subdivisions appearing as multiple M-bands \[[@pone.0206801.ref037]\]. M-bridges connect the M-lines to the myosin rods and serve to maintain thick filament alignment across the sarcomere during muscle contraction. In addition to myosin rods, numerous other proteins can be found within the M-band region and these proteins can contribute to numerous cellular activities including cytoskeletal remodeling, signal transduction, mechanosensing, metabolism, and proteasome degradation (for review see \[[@pone.0206801.ref038]\]). To date, the most well characterized M-band proteins are obscurin, myomesins, and muscle creatine kinase (CKM).
Obscurin contributes to the assembly and stabilization of the M-region by linking myomesin to the SR \[[@pone.0206801.ref038]\]. Obscurin (Unc-89 in Drosophila) is an M-line protein, and knockdown in a Drosophila model results in a missing sarcomere M-band and an inability of adults to fly \[[@pone.0206801.ref039]\]. Similarly, obscurin-deficient mice were found to lack the M-band and had compromised exercise endurance, with the diaphragm being the skeletal muscle most severely affected \[[@pone.0206801.ref040]\]. These results suggest that M-band proteins are necessary to maintain structural integrity of skeletal muscle fibers. We were not able to adequately detect obscurin by Western blot ([Fig 5](#pone.0206801.g005){ref-type="fig"}), so we focused on the other M-band proteins.
The major constituent of the M-band is myomesin. Myomesins are structural proteins of the M-line that stabilize the sarcomere; they form an elastic structure \[[@pone.0206801.ref041]\] that interacts with titin and myosin \[[@pone.0206801.ref029], [@pone.0206801.ref042], [@pone.0206801.ref043]\]. *Myomesin* genes are regulated by myocyte enhancer factor 2c (MEF2C), a muscle-specific transcription factor \[[@pone.0206801.ref044]\]. In skeletal muscle of *Mef2c*-null mice, the myomesin genes were found to be downregulated, and EM images from these mice suggested that skeletal muscle myofibers deteriorated due to a loss of M-line integrity \[[@pone.0206801.ref044]\]. An increase in M-region cross-sectional area is postulated to enhance stiffness of the M-bridges \[[@pone.0206801.ref036]\]. In the M-line, MYOC could be impacting the stiffness of the sarcomere, the packing of myomesin, and/or how the M-line associates with the cell cytoskeleton. CKM localizes at the M-band through interactions with myomesin \[[@pone.0206801.ref029], [@pone.0206801.ref045]\] and it is hypothesized that CKM may serve both structural and enzymatic functions \[[@pone.0206801.ref036]\]. *Ckm*-null mice exhibit decreased voluntary running ability and a decrease in force production due to an inadequate supply of local ATP \[[@pone.0206801.ref018], [@pone.0206801.ref046], [@pone.0206801.ref047], [@pone.0206801.ref048]\]. Electron micrographs of longitudinal sections through myofibers of *Ckm*-null mouse gastrocnemius muscle showed less intense M-band staining \[[@pone.0206801.ref018]\]; thus, CKM protein is essential for M-band cross-bridge formation.
We examined by Western blot the M-band proteins in the CMV-MYOC-Y437H mutant transgenic animals and found the transgenics to have less MYOM1 and less CKM protein in comparison to wt littermates ([Fig 5A and 5B](#pone.0206801.g005){ref-type="fig"}). A decrease in CKM activity has been suggested to be a contributor to the gradual loss of muscle function associated with aging \[[@pone.0206801.ref049]\]. These data suggest that the M-band phenotype may arise through diffusion of the original, compact M-band ([Fig 8](#pone.0206801.g008){ref-type="fig"}). It is possible that mutant MYOC Y437H contributes to this phenotype by: 1) binding CKM, thereby disrupting normal M-band bridging; and/or 2) aggregation of mutant MYOC protein with proteins found in skeletal muscle needed for normal sarcomere structure or maintenance. As this M-band phenotype appears in the literature only for diseased tissue \[[@pone.0206801.ref035]\], it is highly probable that the change in M-band architecture is a physiological adaptation and may have an adverse impact on normal/optimal skeletal muscle function. These differences in muscle ultrastructure could have functional consequences and impact the animals' responses and/or behavior. *In vivo*, 1) the MYOC Y437H transgenic animals were observed to be more irritable/aggressive than wt littermates; and 2) most transgenic mice did not recover from isoflurane exposure (Bing Li, observations). Tissue distribution of M-line proteins is limited to muscle, so the CKM and MYOC association is likely unique to muscle sarcomeres and would not be a factor in the pathology of trabecular meshwork cells.
{#pone.0206801.g008}
Skeletal muscle deteriorates in both size and strength with age \[[@pone.0206801.ref050], [@pone.0206801.ref051]\]. In rats with reduced physical activity, *Myoc* transcript is reported to increase in skeletal muscle with age \[[@pone.0206801.ref052]\]. As reduced exercise capacity can occur due to loss of CKM \[[@pone.0206801.ref018]\], it is likely that the CMV-MYOC-Y437H mutant mice also have a physiological phenotype. Future investigations to determine the potential impact of this ultrastructure change on muscle function could provide valuable insight into people who carry a pathological *MYOC* mutation. Do these people have diaphragm issues or complications associated with anesthetics? Do they exhibit modified exercise capacity or experience excessive muscle pain/stiffness? If people with a pathologic *MYOC* mutation have a skeletal muscle phenotype, this information may aid physicians in early identification of those at high risk for glaucoma.
In the cell, mutant MYOC can interact with other proteins, and these interactions could impact cell integrity as well as compromise cell structure and/or function. In humans, data from the GTEx database suggest that MYOC is expressed in non-ocular tissues. People carrying a pathologic *MYOC* mutation are at extremely high risk to develop glaucoma \[[@pone.0206801.ref053]\], and information that contributes to early identification of these individuals is essential for immediate intervention to help limit the impact of this devastating and blinding disease. The results we have presented in this report suggest that physicians should not only consider a patient's family glaucoma history, but also consider the patient's muscle ailments as a potential indication of a pathologic *MYOC* mutation and recognize the necessity for genotyping and consultation with ophthalmologists.
Materials and methods {#sec004}
=====================
CMV-MYOC-Y437H mice {#sec005}
-------------------
Mutant CMV-MYOC-Y437H mice have previously been described \[[@pone.0206801.ref014]\] and these animals were obtained through a licensing agreement. Preliminary experiments (data not shown) were completed using these animals. Maintenance of the line was transferred to a contract research organization (CRO) that re-derived the CMV-MYOC-Y437H mice in the B6.SJL background. All experiments were performed with F3 and later generations of intercrossed mice. Mice were genotyped as described \[[@pone.0206801.ref014]\] and wt littermates served as controls for the CMV-MYOC-Y437H transgenic mice. All animal experiments were performed in accordance with the Association for Research in Vision and Ophthalmology (ARVO) policy on the Use of Animals in Vision Research and all protocols were reviewed and approved by the Institutional Animal Care and Use Committee (IACUC) at Novartis Institutes for Biomedical Research. All animals were housed in rooms in which the temperature, humidity, and lighting (12h:12h light-dark cycle) were controlled, and water and food were available *ad libitum*. Harvested tissue samples were stored at --80°C until utilized.
MYOC wt BAC & MYOC Q368X BAC mice {#sec006}
---------------------------------
Bacterial artificial chromosome (BAC) containing the human *MYOC* gene (RP11-1152G22) was utilized to create the BAC transgenic mouse lines. This BAC contains the complete *MYOC* coding sequence, with 68 kb 5' and 51 kb 3' flanking sequences. A modification of the BAC to introduce a C\>T mutation at position 1102 of the coding sequence was done by GeneBridges (Heidelberg, Germany). Purified BAC constructs were injected into the pronuclei of C57BL/6J mice. The offspring were screened by PCR to identify transgenic founders and Western blot analysis confirmed expression of the human MYOC protein in these mouse lines. One founder each for the wt MYOC BAC and the mutant Q368X MYOC BAC was used to establish transgenic lines for study. No breeding complications or viability issues were noted for any of the BAC mice. All experiments were performed with F3 and later generations of intercrossed mice. All BAC mice were studied as hemizygotes with their wt littermates serving as controls.
Measurement of conscious IOP {#sec007}
----------------------------
Animals underwent training for conscious IOP measurement for more than three weeks. IOP was measured with a TonoLab rebound tonometer (Colonial Medical Supply, Franconia, NH) twice a week at the same times of day (8:00am to 11:00am and 3:30pm to 6:30pm). For each eye at each time point, ten measurements deemed reliable by the internal software were used to generate and display an average IOP reading. An average of these readings was then calculated and reported as the mean IOP. Animal cohorts for IOP typically contained \>10 animals per group (never fewer than N = 6/group) and multiple examiners performed the IOP measurements. Note that the normal variability expected for multiple examiners using tonometers is approximately 1 mm Hg \[[@pone.0206801.ref054]\].
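The two-stage averaging described above (the tonometer averages ten validated readings per session, and session averages are then averaged into the reported mean IOP) can be sketched as follows. All readings are hypothetical values for illustration only, not study data:

```python
from statistics import mean, stdev

# Hypothetical TonoLab data: each inner list is the ten software-validated
# rebound readings (mmHg) for one eye during one measurement session.
sessions = [
    [14.1, 14.3, 13.9, 14.0, 14.2, 14.4, 13.8, 14.1, 14.0, 14.2],
    [13.9, 14.0, 14.1, 13.8, 14.2, 14.0, 13.9, 14.1, 14.3, 13.7],
]

# Stage 1: average the ten readings of each session (done by the tonometer).
session_means = [mean(s) for s in sessions]

# Stage 2: average the session means collected over several weeks
# into the reported mean IOP for the animal.
mean_iop = mean(session_means)
print(f"mean IOP = {mean_iop:.2f} mmHg (session SD = {stdev(session_means):.2f})")
```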
*Optic nerve semi-thin cross-section and paraphenylenediamine (PPD) staining---*Mouse optic nerve samples were collected and fixed with half-strength Karnovsky's fixative (2% formaldehyde + 2.5% glutaraldehyde in 0.1M sodium cacodylate buffer, pH 7.4; Electron Microscopy Sciences) for a minimum of 48 hours. After fixation, samples were rinsed with 0.1M sodium cacodylate buffer, post-fixed with 2% osmium tetroxide in 0.1M sodium cacodylate buffer, then dehydrated with graded ethyl alcohol solutions, transitioned with propylene oxide, and infiltrated with tEPON-812 epoxy resin (Tousimis) using an automated EMS Lynx 2 EM tissue processor (Electron Microscopy Sciences). Processed tissues were oriented in tEPON-812 epoxy resin and polymerized in silicone molds in a 60°C oven for 48 hours. Cross-sections were cut at 1-micron with a Histo-diamond knife (Diatome) on a Leica UC-7 ultramicrotome (Leica Microsystems), collected on slides, and dried on a slide warmer. Cross-sections were collected \~1mm posterior to the optic nerve head. The slides were stained with 2% aqueous PPD (MP Biomedicals LLC) solution for 45 minutes at room temperature, rinsed in tap and deionized water, and air-dried; mounting medium and a glass coverslip were then applied over the sections for light microscopic analysis of myelinated axons.
Axon quantification {#sec008}
-------------------
Optic nerve cross sections (1μm thickness) were cut 1mm from the optic nerve head and processed for PPD staining of myelin as described above. Nine (110μm x 82μm) areas were sampled per nerve for axon quantification as depicted in the schematic ([S2 Fig](#pone.0206801.s002){ref-type="supplementary-material"}). The number of axons was counted using ImageJ \[ImageJ → Open image → Convert image type to 8-bit → Adjust image threshold (choose "Otsu", "B&W", "Dark background") → Analyze (choose "Analyze particles" and set show "Outlines", "Display results", "Exclude on Edges", "In situ show")\]. Mean axon density was obtained as the average of the axon counts of the nine sampled areas. The total area of the optic nerve cross section was measured in ImageJ. The total number of axons was calculated as the mean axon density multiplied by the total area. Although this axon quantification method is published \[[@pone.0206801.ref055]\] and is similar to other accepted quantification techniques \[[@pone.0206801.ref056]\], a limitation of this automated counting method is that we did not formally validate it against manual counts. As such, a technical glitch could in principle prevent us from detecting a loss of axons indicative of a glaucoma phenotype, but given the past success of this tool and the qualitative appearance of the nerves, we consider this unlikely.
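The density-based calculation above (mean axon density across the nine sampled areas, scaled by the total nerve cross-sectional area) can be sketched as follows. All counts and the nerve area are hypothetical values for illustration only:

```python
# Hypothetical axon counts from the nine sampled 110μm x 82μm areas of one nerve.
area_counts = [412, 398, 405, 420, 390, 415, 401, 408, 396]
sample_area_um2 = 110 * 82  # area of one sampled rectangle (μm^2)

# Mean axon density (axons per μm^2) across the nine sampled areas.
mean_density = (sum(area_counts) / len(area_counts)) / sample_area_um2

# Hypothetical total cross-sectional area of the nerve, as measured in ImageJ.
total_nerve_area_um2 = 80_000

# Total axon number = mean axon density multiplied by total area.
total_axons = round(mean_density * total_nerve_area_um2)
print(f"estimated total axons: {total_axons}")
```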
RNA isolation & RT-PCR {#sec009}
----------------------
Skeletal muscle (gastrocnemius muscle) harvested from 4-month-old wild-type (wt) mice was minced with scissors and homogenized (Omni Tissue Master 125 Homogenizer, Omni International), and RNA was isolated using an RNA isolation kit (Qiagen, 74704) according to manufacturer instructions. RNA concentration was determined using a ThermoScientific NanoDrop 2000. For real-time PCR (RT-PCR), an Applied Biosystems ViiA7 real-time PCR system (Life Technologies) was utilized. Additional materials were the TaqMan RNA-to-C~T~ 1-Step Kit (Applied Biosystems, 4392938) and TaqMan Gene Expression Assays (Applied Biosystems) for human MYOC (Hs00165345), mouse Myoc (Mm00447900_m1), mouse Gapdh (Mm99999915_g1), and mouse Dysf (Mm00458050_m1). Samples were analyzed in triplicate.
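Triplicate target C~T~ values normalized to the Gapdh reference are conventionally compared as a relative level of 2^-ΔCt^. A minimal sketch of that comparison follows; the C~T~ values are hypothetical, not the study's data:

```python
# Hypothetical triplicate Ct values for one target transcript and the Gapdh
# reference in a single gastrocnemius RNA sample (values invented for illustration).
target_ct = [24.1, 24.3, 24.2]
gapdh_ct = [18.0, 18.2, 18.1]

mean_target = sum(target_ct) / len(target_ct)
mean_gapdh = sum(gapdh_ct) / len(gapdh_ct)

# dCt = Ct(target) - Ct(reference); relative transcript level = 2^-dCt.
delta_ct = mean_target - mean_gapdh
relative_level = 2 ** (-delta_ct)
print(f"dCt = {delta_ct:.1f}, relative level = {relative_level:.4f}")
```

Note that each PCR cycle roughly doubles the product, which is why a C~T~ difference of one cycle corresponds to a two-fold difference in starting transcript.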
Reverse-transcriptase PCR for mouse Dysf was completed using SuperScript IV One-Step RT-PCR System (Invitrogen, 12594025) according to manufacturer instructions and a Bio-Rad C1000 Touch Thermal Cycler. PCR primers to distinguish wt and mutant mouse Dysf have previously been described \[[@pone.0206801.ref021]\] and were synthesized by ThermoFisher Scientific. A 10μL aliquot of the PCR product had 6X Gel Loading Dye added (Cell Signaling Technology, B7021S) and each sample was loaded into wells of a 0.8% Agarose / 1X TAE gel. DNA standard was 1 Kb Plus DNA Ladder (Invitrogen, 10787--018). Agarose gels were imaged using a Bio-Rad GelDoc XR+ instrument. The wt Dysf PCR product is 500bp while the Dysf of the pure background SJL mouse is 329bp \[[@pone.0206801.ref021]\].
Tissue culture and transfection conditions {#sec010}
------------------------------------------
A normal trabecular meshwork human cell line (NTM5) has previously been described \[[@pone.0206801.ref057]\] and was utilized in this study. NTM5 cells were grown in 10cm culture dishes in a 37°C incubator with 90% relative humidity and 5% CO~2~. Cell media was DMEM (Gibco, 11995--065) supplemented with 10% FBS (Gibco, 10082147) and 1% P/S (Gibco, 15140--122). NTM5 cells at 70 to 80% confluence were transiently-transfected with FLAG-tagged plasmids (8μg total cDNA per 10cm plate) encoding human MYOC (Origene, RC206556; Accession number NM_000261) or mouse Myoc (Origene, MR224777; Accession NM_010865) using FuGENE6 transfection reagent (Promega, E2691). All plasmids had been purified using a Qiagen plasmid maxi kit (Qiagen, 12163) and for transfections FuGENE6 was used at a 5:1 ratio with the cDNA. 48 hours post-transfection, cells were washed with 1X PBS (Gibco, 20012--027) and lysed on ice using a RIPA buffer (50mM Tris pH 7.5, 150mM NaCl, 1mM EDTA pH 8.0, 1mM EGTA pH 8.0, 0.1% SDS, 1% Triton X-100, 0.5% NaDOC, 1mM DTT) with Complete protease inhibitors (Roche, 11873580001). Cell debris was removed by centrifugation at 4°C (Eppendorf 5810R). Protein assay (Bio-Rad DC kit, 500--0113, 500--0114, 500--0115) was completed in accordance with the manufacturer instructions using a Tecan Infinite M1000 plate reader with Tecan i-Control 3.1.9.0 software.
HeLa cells utilized in this study were grown under the same growing conditions and using the same media as described for the NTM5 cells.
De-glycosylation experiment and Western blot conditions {#sec011}
-------------------------------------------------------
For Western blot, the doublet appearance of secreted human MYOC has been attributed to *N-*glycosylation \[[@pone.0206801.ref020]\]. PNGase F (Peptide-*N*-Glycosidase F) is an amidase that cleaves between GlcNAc and asparagine residues of *N*-linked glycoproteins and is the most effective method for removal of *N-*linked oligosaccharides from glycoproteins \[[@pone.0206801.ref058]\]. We wanted to determine whether mouse MYOC protein was glycosylated similarly to human MYOC. For this experiment, extracts from NTM5 cells were utilized. For the de-glycosylation experiment, 20μg cell lysates were treated with PNGase F (NEB, P07004S) according to manufacturer instructions. 5X SDS loading buffer (0.25M Tris, pH 7.0, 40% Glycerol, 8% SDS, 20% beta-mercaptoethanol, 0.1% Bromophenol Blue) was added to samples and Western blot analysis was completed using 10% SDS-PAGE gels (Bio-Rad Mini-PROTEAN TGX Gels, 456--1034) in the Bio-Rad Mini-Protean Tetra System followed by wet transfer to PVDF membranes (Millipore, IPVH00010). Membranes were blocked using a solution of 5% non-fat milk in 1X TBS-T. Membranes were treated with primary antibody (1:1000 in a solution of 1% non-fat milk and 1X TBS-T) with gentle rocking over-night at 4°C. Membranes were washed with 1X TBS-T followed by a 90-minute room temperature incubation with species-appropriate alkaline phosphatase (AP)-conjugated secondary antibody (Abcam). Following 1X TBS-T washes, membranes were exposed to ECF substrate (VWR, RPN5785) and imaged using a Bio-Rad GelDoc XR+ Imaging System with Image Lab 5.2.1 software. Densitometry for quantification of Western blots was performed using the GelDoc software.
As a loading control, membranes were stripped according to manufacturer instructions using 1X Strong Stripping Buffer (Millipore, 2504) and after blocking in a 5% non-fat milk 1X TBS-T solution the membranes were treated with anti-GAPDH.
Isolation of soluble, insoluble, and secreted protein fractions {#sec012}
---------------------------------------------------------------
NTM5 cells in 10cm culture dishes were transiently-transfected as described. The control vector was CMV-Tag1 (Agilent, 211170) and the CMV-MYOC plasmid cDNAs were constructed and sequenced by GeneWiz (Cambridge, MA). One plasmid contained cDNA for normal (wild-type) untagged human MYOC (Accession NM_000261) and the other plasmid had cDNA for MYOC with the Y437H mutation. 24 hours post-transfection, cells were washed five times with 1X PBS and 10mL serum-free DMEM was added to each plate. 48 hours post-transfection, cell fractions were collected/harvested. Serum-free media was centrifuged at room temperature for 5 minutes at 1000rpm to remove any debris and supernatants were concentrated 100X by cold centrifugation using Amicon Ultra-4 Centrifugal Filters (Millipore, UFC801008). Sample protein concentration was determined by the Bio-Rad DC protein assay and samples were analyzed by Western blot. For Western blot, 5X SDS loading buffer was added to each sample.
To isolate soluble protein fractions, a minimal amount of RIPA buffer with protease inhibitors was added to each plate of cells and the plates were placed on ice for ten minutes. Cells were scraped from the plates using a cell scraper (Corning, 3008) and samples were centrifuged at 4°C for 10 minutes at 7500rpm. The supernatant was collected as the soluble fraction. Bio-Rad DC protein assay of these samples was completed. 5X SDS loading buffer was added to each sample and samples were placed in a boiling water bath for five minutes.
To isolate insoluble cell fractions, 250μL RIPA buffer with protease inhibitors was added to each cell pellet and these samples were briefly sonicated three times (Misonix XL-2000, 7 volts). 100μL of 5X SDS loading buffer was added to each of these samples and the tubes were placed in a boiling water bath for 5 minutes. RIPA buffer was added to the insoluble samples to equalize the volume to that in the soluble fraction samples. Western blot analysis of all NTM5 sample fractions was completed as previously described and the Bio-Rad GelDoc XR+ image software utilized for densitometry of Western blot bands.
Isolation of *in vivo* protein {#sec013}
------------------------------
Tissue harvested from mice was minced with scissors and homogenized (Omni Tissue Master 125 Homogenizer, Omni International) in RIPA buffer with Roche Complete protease inhibitors. The tissue samples were sonicated using a Misonix Sonicator XL-2000 series with ultrasonic converter (Serial C6498) set at Power setting 1 (5 volts). Samples were cold centrifuged and supernatants saved. Sample protein concentration was determined by the Bio-Rad DC protein assay. For Western blot, 5X SDS loading buffer was added to each sample and samples were placed in a boiling water bath for five minutes before being loaded into wells of 10% SDS-PAGE gels.
Antibodies {#sec014}
----------
All primary antibodies were from commercial sources. With the exception of anti-GAPDH (1:10000) and anti-DYSF (1:250), all primary antibodies were utilized at a 1:1000 dilution in 1% non-fat milk and 1X TBS-T. All primary antibody incubations were over-night at 4°C with gentle rocking. Primary antibodies were: BiP (Cell Signaling Technology, 3183S), Caspase 3 (Abcam ab13847), Caspase 12 (Abcam, ab18766), CHOP (Abcam, ab11419), CALR (Cell Signaling Technology, 2891S), CKM (Abcam, ab174672), DYSF (Abcam, ab124684), FLAG (Sigma, F1804), GAPDH (Fitzgerald, 10R-G109a; 1:10000), GRP94 (Cell Signaling Technology, 2104S), MYOC (R&D Systems, AF2537; 1μg/μL), MYOC (Origene, TA323708), MYOC (Acris, AP10162PU-N), MYOM1 (Abcam, ab205618), MYOM2 (Abcam, ab93915), MYOM3 (Proteintech, 7692-I-AP), Obscurin (Millipore, ABT160), MURF1/TRIM63 (Abcam, ab172479), SQSTM/p62 (Cell Signaling Technology, 5114S), and Ubiquitin (Abcam, ab134953). All secondary antibodies were AP-conjugated and were utilized at a 1:2000 dilution in 1% non-fat milk and 1X TBS-T. All secondary antibodies were from Abcam (ab97107, ab97237, ab6722).
Electron microscopy {#sec015}
-------------------
Mouse gastrocnemius muscle was harvested from 4 to 6 month old mice and immediately placed in fresh 1/2 Karnovsky's fixative (2% paraformaldehyde + 2.5% glutaraldehyde in 0.08M sodium cacodylate buffer + CaCl~2~; pH 7.4) for two hours at room temperature. The samples were then moved to 4°C for 18 to 24 hours. After this time, each tissue was trimmed so that an approximately 2mm square piece of tissue was obtained. This small piece of tissue was placed in 0.1 M sodium cacodylate buffer (pH 7.4) in flat-bottom glass vials and stored at 4°C. Electron microscopy work was completed by an Electron Microscopy Specialist at Schepens Eye Research Institute (SERI) of Massachusetts Eye and Ear Infirmary.
Cross-sectional area of gastrocnemius muscle {#sec016}
--------------------------------------------
Mouse muscles harvested from 4 to 6 month old female mice were fixed in 10% neutral buffered formalin for 48 hours, dehydrated with graded ethanol, and embedded in paraffin by a Tissue-Tek VIP processor (Sakura). Sections of 5μm thickness were stained with Haematoxylin and Eosin (H&E) in Tissue-Tek Prisma (Sakura) and mounted in Tissue-Tek Glas (Sakura). Slides were scanned using an Aperio AT2 scanner (Leica). Images were collected and quantitatively analyzed using Halo v2.1 software (Indica Labs). The total number of muscle fibers in 1x10^5^μm^2^ cross-sectional areas of H&E-stained gastrocnemius muscle was counted.
Immunoprecipitation {#sec017}
-------------------
NTM5 cells in 10cm culture dishes were transiently-transfected as previously described. The control vector was CMV-Tag1 (Agilent, 211170) and the plasmid cDNA was for wild-type untagged MYOC (Accession NM_000261). For immunoprecipitation (IP), tubes were prepared containing 10μg CKM-FLAG protein (Origene, TP302721) along with IP buffer (20mM Tris-HCl pH 7.5, 150mM NaCl, 1mM EDTA, 1mM EGTA, 1% Triton, 2.5mM NaDOC) with Roche Complete protease inhibitors. 50μL of anti-FLAG antibody was added to each tube and tubes were incubated over-night at 4°C with gentle rotation. Later, 40μL Pierce Protein A/G Agarose (ThermoScientific, 20241) beads were added to each tube for a 90-minute incubation at 4°C with gentle rotation. Samples were centrifuged (1000xg at 4°C for 3 minutes) to pellet the A/G beads. Beads were washed five times with the IP buffer and then 15μL of 5X SDS Loading buffer was added. Sample tubes were placed in a boiling water bath for five minutes and then centrifuged at room temperature at 13000rpm for 3 minutes. The supernatant was retained for Western blot analysis. This IP protocol is a modified version of that provided by Cell Signaling Technology ([http://www.cellsignal.com](http://www.cellsignal.com/)) for native proteins.
Supporting information {#sec018}
======================
###### The CMV-MYOC-Y437H transgenic (Tg) mice were found not to have IOP different from the wt mice.
*Top---*IOP was monitored for equal numbers of male and female mice 4 to 8 months of age for several weeks in the AM and PM hours. IOP data are representative of several experiments for different aged cohorts of animals monitored over several weeks. Minimum N = 6 animals per group. *Bottom---*IOP data obtained over several weeks were averaged and summarized. SD is indicated; t-test, p\>0.1. Abbreviations--wild-type, wt; transgenic, Tg.
(PDF)
######
Click here for additional data file.
###### The CMV-MYOC-Y437H aged transgenic (Tg) mice were found not to have axon loss when compared to wt mice.
*Top---*Cartoon figure depicting the nine 110μm x 82μm rectangular areas in the optic nerve cross section sampled for axon quantification. *Middle and Bottom---*Representative images of the optic nerves used to determine axon numbers in wt and CMV-MYOC-Y437H transgenic (Tg) animals older than one year of age. Approximately equal numbers of males and females were included in each group, N = 5 wt and N = 12 MYOC Y437H transgenic; t-test, p = 0.98. Abbreviations--wild-type, wt; transgenic, Tg.
(PDF)
######
Click here for additional data file.
###### Western blots for MYOC and endoplasmic reticulum proteins in CMV-MYOC-Y437H transgenic mouse eye lysates.
(**A**) Western blot for human MYOC (using R&D Systems anti-MYOC antibody) in adult mouse whole eye lysates showed no expression of the transgene in the eye. Loading was 40μg tissue lysate per well of a 10% SDS-PAGE gel. In this western blot the total mouse N = 6 and each sample lane represents lysates from different animals. (**B**) Western blots for MYOC and ER proteins using lysates from pooled anterior eye tissue samples \[sclera and limbal ring/trabecular meshwork (TM)\] isolated from several wt and several CMV-Y437H-MYOC adult transgenic mice. This Western blot for MYOC used a combination of anti-MYOC antibodies \[1:500 each of Origene anti-MYOC (TA323708) and Acris anti-MYOC (AP10162PU-N)\] which cross-react with mouse and human MYOC. Abbreviations--wild-type, wt; transgenic, Tg; trabecular meshwork, TM.
(PDF)
###### RT-PCR for mouse Myoc and human MYOC using RNA isolated from wt, CMV-Y437H-MYOC, wt MYOC BAC, and mutant Q368X MYOC BAC gastrocnemius muscle.
Tissue samples were from female mice aged 4 to 6 months. Data have been normalized to mouse Gapdh and results indicate a similar transcript level of mouse Myoc for all the mice. The CMV-MYOC-Y437H transgenic had a C~T~ value for human MYOC six cycles earlier than that for mouse Myoc. +/- SD is indicated. Abbreviations--not detected, ND.
(PDF)
###### Dysf transcript level in wt and CMV-MYOC-Y437H transgenic mice was examined.
*Top---*Image is of a 0.8% agarose / 1X TAE gel loaded with PCR product samples. Reverse-transcriptase PCR for Dysf shows amplification of the wt Dysf band (\~500bp) while no PCR product is observed for the mutated form of Dysf (\~329bp). N = 6 different animals aged 4 to 6 months with a male and female representative for each of the three mouse lines. *Bottom---*Real-time PCR (RT-PCR) results using RNA isolated from wt and CMV-MYOC-Y437H transgenic mice. C~T~ values between the two groups were similar with no statistically significant differences. +/- SD; t-test p = 0.1. Abbreviations--wild-type, wt; transgenic, Tg.
(PDF)
###### DYSF protein expression in wt and CMV-MYOC-Y437H transgenic mice was examined.
Western blot showing DYSF protein expression in gastrocnemius muscle lysates from wt mice of two different backgrounds as well as from the CMV-MYOC-Y437H transgenics. Similar DYSF protein expression was observed for all animals. Arrow indicates predicted size (\~238kDa) of mouse DYSF protein. Western blots were stripped and probed with anti-GAPDH to serve as a loading control. N = 6 different animals aged 4 to 6 months with a male and female representative for each of the three mouse lines. Abbreviations--wild-type, wt; transgenic, Tg.
(PDF)
###### Skeletal muscles and hearts from C57 wt, wt MYOC BAC transgenic, and mutant Q368X MYOC BAC transgenics were harvested and weighed.
Weights of the gastrocnemius muscle and heart did not differ among the wt and BAC transgenic groups. The weight of the diaphragm of the mutant Q368X MYOC BAC transgenic was approximately 30% less than that of the other animals and \* represents t-test p\<0.001. All tissue samples were from female mice aged 4 to 6 months. N per group is ≥ 4 and +/- SD is indicated. Abbreviations--wild-type, wt; transgenic, Tg.
(PDF)
###### Electron micrographs of gastrocnemius muscle from 4 to 6 month old female C57 wt, wt MYOC BAC transgenic and mutant Q368X MYOC BAC transgenic mice.
Images show that the sarcomeres of C57 and wt MYOC BAC transgenic were very similar with a distinct and prominent M band. In comparison, the M-band in the mutant Q368X MYOC transgenic was faint and appeared dispersed. Direct magnification was 18500X.
(PDF)
###### Examination of cross-sections of gastrocnemius muscle from 4 to 6 month old female C57 wt mice, wt MYOC BAC mice, and mutant Q368X MYOC BAC mice did not show differences in muscle fiber size/number.
**(A)** Representative images from wt and BAC transgenic mice of 5μm gastrocnemius muscle cross-sections stained with H&E. Each mouse group had ≥ N = 3 mice per group. **(B)** The total number of muscle fibers in 1x10^5^μm^2^ cross-sectional areas of H&E stained gastrocnemius muscle were counted using Halo software. Data is shown +/- SD.
(PDF)
[^1]: **Competing Interests:**Authors are all employees of Novartis Institutes for BioMedical Research (NIBR) and receive salary. As NIBR is a publicly traded pharmaceutical company the authors may hold stock. This competing interest does not alter authors' adherence to PLOS ONE policies on sharing data and materials.
Understanding the Magnitude of the Viral Hepatitis Epidemics in the United States
In March I had the honor of meeting with an inspirational group of advocates, leaders and researchers who had come to Washington to educate lawmakers about viral hepatitis and its impact on our nation. Their bravery and authenticity were moving. They told their personal stories about living with viral hepatitis, the frustrating experiences with healthcare providers, the challenges and costs of getting appropriate treatment, the stresses of waiting to receive a life-saving liver, the loss of loved ones, and other deeply personal issues.
I was deeply moved by their experiences and their commitment to improving the health of people at-risk for and living with viral hepatitis. These leaders came not to get better care for themselves, but to make things better for their friends, their family members, for their communities, and for strangers that they'd never met. They came to stand up for the millions of Americans living with viral hepatitis.
How Many People Are Living With Viral Hepatitis in the U.S.?
I'm not sure that most people appreciate how many people are affected by viral hepatitis. The CDC website currently states that an estimated 2.7 million to 3.9 million people in the United States are living with chronic hepatitis C virus (HCV) infection. Another 700,000 to 1.4 million people are living with chronic hepatitis B virus (HBV) infection. If we add these together, ignoring the number of persons coinfected with HBV and HCV, this suggests that there are roughly 3.4 million to 5.3 million people living with viral hepatitis in the United States. That's a lot of people. These women, men, and children are at risk of developing severe liver disease if they do not get effective treatment, and of potentially transmitting the infection to others. Some estimates are even higher.
It's hard to appreciate just how many people this is. To get a better understanding of how many people this represents, I thought it would be useful to compare the number of people living with chronic viral hepatitis nationwide to the total number of people living in Washington, D.C., and other places. The Census Bureau estimates that there were 672,228 people living in Washington, D.C., in 2015. If we compare this population number to the estimates on the CDC website, there are 5 to 7 times more Americans living with viral hepatitis than there are people in the nation's capital. Now, Washington, D.C., is a big place, but most states are even larger. What if we compare the number of people living with viral hepatitis to the population of the 50 states and Puerto Rico? The results were surprising to me.
There May Be More People Living With Viral Hepatitis in the U.S. Than Live in Your State
Let's start with the lowest estimate from the CDC website (3.4 million people). Twenty-one states and D.C. all have total populations that are smaller than the estimated number of people living with HCV or HBV in the United States.
If we use the highest estimate from the CDC (5.3 million people), 28 states, D.C., and Puerto Rico, each have populations that are smaller than the number of people living with viral hepatitis. That's more than half of the states in the entire country. These states can be found all across the United States.
In fact, if you add up the populations of D.C. and the 6 least populous states (Alaska, Delaware, North Dakota, South Dakota, Vermont, and Wyoming), the total (5,184,139) is less than the CDC's highest estimate of the number of people living with viral hepatitis in the United States.
Tools Available to Reduce Viral Hepatitis Infections
These numbers of people living with HBV and HCV are much too large. We have an effective vaccine to prevent HBV infection. We have tests to diagnose HBV and HCV infection that are recommended by the U.S. Preventive Services Task Force and covered by most health insurances without extra cost to patients. We have effective treatments that cure HCV infection. And we have other effective strategies (including drug treatment and syringe services programs) that can prevent transmission of viral hepatitis.
We have the ability to reduce the number of persons living with chronic viral hepatitis in the United States by vaccinating people for hepatitis B, preventing new HBV and HCV infections, and curing people with HCV. We have the tools to make this happen starting today, if we scale up the use of effective prevention and treatment strategies. The Viral Hepatitis Action Plan sets goals for the nation and provides a framework for what needs to be done. We need to do better as a nation. We owe it to the millions of people living with viral hepatitis, their families, and their communities.
Richard J. Wolitski, Ph.D., is acting director of the Office of HIV/AIDS and Infectious Disease Policy, U.S. Department of Health and Human Services.
Introduction {#Sec1}
============
Total mesorectal excision (TME), as described by Heald et al. in 1982 \[[@CR1]\], has become the gold standard of rectal cancer surgery. When performed meticulously, it yields low rates of local recurrence (LR) and improved survival \[[@CR1]\]. In Norway, a national rectal cancer project was launched in 1993 in order to improve outcome for rectal cancer patients. Of several interventions, TME was introduced as the preferred surgical technique and an educational programme, which included training courses and master classes, was implemented throughout Norway \[[@CR2]\]. As a result, the risk of LR was decreased by 50% \[[@CR3]\].
At Levanger Hospital, the modern principles of rectal cancer surgery were introduced in 1980, and a prospective protocol for operative strategy, radiotherapy and surveillance was established. Although excellent results were reported \[[@CR4]\], an offer to participate in the TME educational programme was rejected by Levanger Hospital because it was a common belief among our staff that we had mastered this technique. However, the first biannual report from the Cancer Registry of Norway in 1997 revealed an unacceptably high rate of LR at Levanger hospital, and actions had to be taken. The protocol was regarded as sufficient and left unchanged, but the focus on adequate preoperative assessment to assign the right patients to neoadjuvant therapy was increased. In addition, the team of surgeons who performed TME surgery was strengthened by having specialists in colorectal surgery present during operation. In this study, a complete cohort of patients who were treated for rectal cancer at Levanger Hospital during 1980--2004 was analyzed to assess temporal trends in treatment and oncologic outcomes. We examined every case of LR in this time period in search of possible protocol violations.
Patients and methods {#Sec2}
====================
A complete cohort was assured by using data from the Norwegian Cancer Registry. The hospital health records of all patients treated for rectal cancer at Levanger Hospital from 1980 to the end of 2004 were reviewed. The patients were assigned into three separate periods, which depended on their date of surgery: 1980--1989, 1990--1999 and 2000--2004. The hospital served a defined patient catchment area with a population that increased slightly from 85,741 in 1980 to 88,858 in 2004.
The upper limit for rectal cancer was defined as 15 cm from the anal verge measured on rigid proctoscopy. The surgery was performed by sharp dissection under visual guidance in the avascular plane surrounding the mesorectal fascia. A major resection with curative intent implied resection of the tumour-bearing segment of rectum with no signs of metastases on preoperative investigations or by intraoperative examination, but included patients with microscopically involved margin and intraoperative perforations. The operation was considered curative if a microscopically free margin was confirmed and there was no bowel perforation. Circumferential resection margin (CRM) was defined as the shortest distance from the periphery of the tumour or tumour deposits to the resection margin. Residual tumour stage (R stage) was registered as follows: no local residual tumour was classified as R0 resection, a microscopically involved margin as R1 resection and macroscopically residual tumour as R2 resection. To assign cancer stage we used the *TNM Classification of Malignant Tumours*, sixth edition \[[@CR5]\]. A histological examination was missing in 17 patients, of which 12 had received best supportive care without any operation and five had received nonresective procedures.
The staff of specialist surgeons was stable and increased over the past 25 years. During the years 1990--1999, many undergraduates were responsible for the preoperative examinations, although they performed the surgical procedures together with a specialist in gastrointestinal surgery. The responsibility for treatment of rectal cancer was dispersed on more persons than during 1980--1989. Since 2000, rectal cancer surgery was no longer required for surgeons specializing in general surgery, and more dedicated teams were responsible for the preoperative examinations and the treatment.
During the operation, the routine was 5 cm distal margin for tumours in the upper and middle rectum, but a 2-cm margin was accepted in lower tumours in order to avoid abdominoperineal resection.
The clinical follow-up programme after resection with curative intent was 5 years (range of follow-up 0--28.7 years), and the median follow-up with regard to survival was 9.4 years (range 5.0--28.7). The surveillance programme was principally the same for all 25 years, based on symptoms, clinical examination including proctoscopy, measure of carcinoembryonic antigen at every visit, chest radiography and colonoscopy at intervals. Liver ultrasound and CT scan were performed at intervals as these modalities became available.
LR was defined as recurrent disease in the pelvis or the perineum, regardless of whether distant metastasis was also present. Mortality data were collected from the hospital patient administrative system. Postoperative mortality was defined as all deaths within 30 days after laparotomy or during the same hospital stay, regardless of time. Overall survival was estimated by inclusion of death from any cause. Relative survival was defined as the ratio of observed survival in rectal cancer patients to the expected survival of the general population of Norway.
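In symbols (notation ours, not from the original paper), the relative survival at follow-up time *t* is the ratio

```latex
S_{\mathrm{rel}}(t) = \frac{S_{\mathrm{obs}}(t)}{S_{\mathrm{exp}}(t)}
```

where \(S_{\mathrm{obs}}(t)\) is the observed survival of the rectal cancer cohort and \(S_{\mathrm{exp}}(t)\) the expected survival of an age- and sex-matched Norwegian general population.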
Perioperative radiotherapy and operative strategy {#Sec3}
-------------------------------------------------
Preoperative radiotherapy was recommended for patients who had been diagnosed with locally advanced tumours throughout the period, initially given as 46 Gy, and in later years as 2 Gy × 25 with concomitant tumour-sensitizing 5-Fluorouracil. In 1980--1999, before the era of MRI, a tumour was considered locally advanced when it was fixed in the pelvis, or if it could not be moved in two planes at the preoperative examination. The surgery was performed according to the principles described by Bjerkeset and Edna \[[@CR4]\]. Postoperative radiotherapy was recommended in cases of an R1 resection or perforation.
Statistical methods {#Sec4}
-------------------
Two by two tables were analyzed using the unconditional *z*-pooled test, which is the unconditional version of Pearson's *χ*^2^ test \[[@CR6]\]. The exact Cochran--Armitage test was used for testing trends in proportions. The medians of three samples were analyzed using the Kruskal--Wallis test. To analyze the association between the period of treatment as an explanatory variable and the dependent variables tumour stage, ASA score and the number of specialists in colorectal surgery, we used a proportional odds logistic regression model, also called ordinal logit model, which has been recommended by Agresti \[[@CR7]\]. Kaplan--Meier survivor functions, with corresponding estimates and 95% confidence intervals (CI), were calculated. The survival functions were compared using the log-rank test.
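To make the Kaplan--Meier product-limit estimate concrete, here is a minimal sketch of the calculation on a toy cohort (illustrative only; the study's analyses were run in SPSS and STATA, and the function name below is ours):

```python
# Kaplan-Meier product-limit estimator, pure-Python sketch.
# times: follow-up times (e.g. years); events: 1 = death observed, 0 = censored.

def kaplan_meier(times, events):
    """Return a list of (time, survival) pairs at each observed event time."""
    data = sorted(zip(times, events))          # order subjects by follow-up time
    n_at_risk = len(data)                      # everyone is at risk at time 0
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        # deaths at this exact time; censored subjects do not reduce survival
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        if deaths > 0:
            surv *= 1.0 - deaths / n_at_risk   # product-limit step
            curve.append((t, surv))
        removed = sum(1 for tt, _ in data if tt == t)
        n_at_risk -= removed                   # deaths and censorings leave the risk set
        i += removed
    return curve

# Toy cohort of 3 patients: deaths at 1 and 2 years, one censored at 3 years.
curve = kaplan_meier([1, 2, 3], [1, 1, 0])
# S(1) = 2/3, S(2) = 2/3 * 1/2 = 1/3; the censored patient leaves S unchanged.
```

The key point the sketch shows is that censored observations reduce the number at risk but never the survival estimate itself, which is why incomplete follow-up can be handled without discarding those patients.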
Relative survival was estimated using actuarial methods and analyzed with STATA \[[@CR8]\]. Significance tests of excess mortality were done using a full likelihood approach. Norwegian population survival probabilities for every year from 1980, by sex and age, were downloaded from the Human Mortality Database \[[@CR9]\]. Data were not available for 2009, and according to standard practice, we made the assumption that the probabilities for 2009 were the same as for 2008.
Two-sided *p* values \<0.05 were considered significant. The analyses were performed using SPSS 15.0, STATA 10.0 and StatXact 8.0.
The study was approved by the Regional Committee of Ethics and performed according to the Helsinki declaration.
Results {#Sec5}
=======
A total of 394 patients, 247 males and 147 females, were treated between 1980 and 2004. The median age at diagnosis was 69.9 years (range 37--90) during 1980--1989 and 70.1 years in 1990--1999 and 2000--2004 (range 35--93 and 44--91, respectively).
Location {#Sec6}
--------
A total of 125 tumours (32%) were located 12--15 cm from the anal verge, 175 (44%) were 6--11 cm from the anal verge and 94 (24%) were 0--5 cm from the anal verge. The percentage of patients with a low rectal cancer was the same during the three periods studied.
Presentation {#Sec7}
------------
The type of presentation by time period is shown in Table [1](#Tab1){ref-type="table"}. The number of patients admitted as emergency cases with either obstruction or spontaneous tumour perforation was 4% (5/127) during 1980--1989, 6% (10/177) in 1990--1999 and 4% (4/90) in 2000--2004. The stage at diagnosis was similar for all three periods (Table [2](#Tab2){ref-type="table"}). Poor differentiation was reported in 13% of all rectal cancer cases in 1980--1989 and 14% in both 1990--1999 and 2000--2004.

Table 1 Type of presentation in relation to period (%)

| Type of presentation | 1980--1989 | 1990--1999 | 2000--2004 | Total |
|---|---|---|---|---|
| Obstruction without perforation | 0 (0) | 7 (4) | 2 (2) | 9 (2) |
| Spontaneous perforation | 5 (4) | 3 (2) | 2 (2) | 10 (3) |
| Elective presentation | 122 (96) | 167 (94) | 86 (96) | 375 (95) |
| Total | 127 (100) | 177 (100) | 90 (100) | 394 (100) |

Table 2 Stage in relation to period (%)

| Stage | 1980--1989 | 1990--1999 | 2000--2004 | Total |
|---|---|---|---|---|
| I | 26 (21) | 25 (14) | 16 (18) | 67 (17) |
| II | 32 (25) | 53 (30) | 23 (26) | 108 (27) |
| III | 32 (25) | 44 (25) | 19 (21) | 95 (24) |
| IV | 22 (17) | 34 (19) | 24 (27) | 80 (20) |
| Unknown | 15 (12) | 21 (12) | 8 (9) | 44 (11) |
| Total | 127 (100) | 177 (100) | 90 (100) | 394 (100) |

Association between known stage and period of treatment: OR = 1.17 (0.90--1.51), *p* = 0.24
Treatment {#Sec8}
---------
Type of treatment in relation to time period is shown in Table [3](#Tab3){ref-type="table"}. Resection with curative intent was performed in 63.8% (81/127) of all rectal cancer cases during 1980--1989, 64.9% (115/177) in 1990--1999, and 62.2% (56/90) in 2000--2004 (n.s.). For patients who had a resection with curative intent, radiotherapy was given preoperatively or postoperatively as presented in Table [4](#Tab4){ref-type="table"}. Significantly more patients received preoperative radiotherapy during 2000--2004.

Table 3 Type of treatment in relation to period (%)

| Type of treatment | 1980--1989 | 1990--1999 | 2000--2004 | Total |
|---|---|---|---|---|
| Best supportive care, no operation | 12 (9) | 22 (12) | 10 (11) | 44 (11) |
| Nonresective procedure: stoma, bypass, explorative laparotomy, or laparoscopy | 7 (6) | 14 (8) | 9 (10) | 30 (8) |
| Palliative resections | 17 (13) | 22 (12) | 11 (12) | 50 (13) |
| Local resection (trans-anal/endoscopic) | 10 (8) | 4 (2) | 4 (4) | 18 (5) |
| Major resection with curative intent: | | | | |
| Involved CRM without bowel perforation | 0 (0) | 8 (5) | 0 (0) | 8 (2) |
| R0 resection and bowel or tumour perforation | 4 (3) | 5 (3) | 2 (2) | 11 (3) |
| Involved CRM and perforation | 2 (2) | 1 (1) | 0 (0) | 3 (1) |
| Curative resection | 75 (59) | 101 (57) | 54 (60) | 230 (58) |
| Total | 127 (100) | 177 (100) | 90 (100) | 394 (100) |

Table 4 Radiotherapy for patients treated with curative intent (%)

| | 1980--1989 | 1990--1999 | 2000--2004 | Total |
|---|---|---|---|---|
| No radiotherapy | 73 (90.1) | 111 (96.5) | 41 (73.2) | 225 (89.3) |
| Preoperative radiotherapy | 5 (6.2) | 1 (0.9) | 12 (21.4) | 18 (7.1) |
| Postoperative radiotherapy | 2 (2.5) | 3 (2.6) | 3 (5.4) | 8 (3.2) |
| Preoperative and postoperative radiotherapy | 1 (1.2) | 0 (0) | 0 (0) | 1 (0.4) |
| Total | 81 (100) | 115 (100) | 56 (100) | 252 (100) |

*p* = 0.008 (Cochran--Armitage trend test of radiotherapy vs. no radiotherapy)
The rate of resections with curative intent performed with a specialist in colorectal surgery present increased significantly from the first to the last time period (Table [5](#Tab5){ref-type="table"}). A surgical trainee performed the operation in 37% (30/81) of all cases during 1980--1989, 44% (51/115) in 1990--1999 and 12.5% (7/56) in 2000--2004 (*p* = 0.011).

Table 5 Number of specialist surgeons (%) attending resections with curative intent

| Number of specialist surgeons | 1980--1989 | 1990--1999 | 2000--2004 | Total |
|---|---|---|---|---|
| 0 | 17 (21) | 1 (0.9) | 0 (0) | 18 (7.1) |
| 1 | 60 (74.1) | 91 (79.1) | 20 (35.7) | 171 (67.9) |
| 2--3 | 4 (4.9) | 23 (20) | 36 (64.3) | 63 (25) |
| Total | 81 (100) | 115 (100) | 56 (100) | 252 (100) |

Association between number of specialist surgeons and period of treatment: OR = 7.7 (95% CI 4.7--12.9), *p* \< 0.001
The median operative time in operations with curative intent was 200 min (range 125--410) during 1980--1989, 165 min (84--370) during 1990--1999 and 150 min (67--310) during the last period (*p* = 0.002).
The median blood loss during operations with curative intent was 1,200 ml (300--9,000) during 1980--1989, 900 ml (200--10,000) during 1990--1999 and 650 ml (50--3,000) during the last period (*p* \< 0.001).
The proportion with a distal resection margin of 2 cm or more in operations with curative intent was 68% (55/81) during 1980--1989, 70% (81/115) during 1990--1999 and 84% (47/56) during 2000--2004 (*p* = 0.056).
Sphincter-sparing surgery was performed in 64% (52/81) of resections with curative intent during 1980--1989, 73% (84/115) in 1990--1999 and 66% (37/56) in 2000--2004. Anastomotic leakage was diagnosed in 10% (5/52) during 1980--1989, 6% (5/84) in 1990--1999 and 8% (3/37) in 2000--2004.
Postoperative mortality {#Sec9}
-----------------------
After resection with curative intent, the postoperative mortality rate was 7.4% (6/81) during 1980--1989, 4.3% (5/115) in 1990--1999 and 3.6% (2/56) in 2000--2004 (*p* = 0.34). For patients treated with palliative intent, the corresponding numbers were 8.3% (2/24), 19.4% (7/36) and 0% (0/20). The overall postoperative mortality was 24% in patients presenting with spontaneous perforation or bowel obstruction.
Local recurrence {#Sec10}
----------------
The 5-year estimated LR rate after resection with curative intent was 4.5% (0--9.7), 18.7% (10.3--27.1) and 2.2% (0--6.7) in 1980--1989, 1990--1999 and 2000--2004 (*p* = 0.006), respectively. Out of 11 resections with curative intent, classified as R1 resections, ten patients developed LR (Table [6](#Tab6){ref-type="table"}). After curative resection with a distal clearance of less than 2 cm, LR developed in 11% (10/92) compared to only 3% (4/138) when the distal clearance was \>2 cm (*p* = 0.014). Out of ten patients with R1 resections and four patients with intraoperative tumour perforation who later developed LR, only one patient received postoperative RT. Radiotherapy was only given to one of the 23 patients who later developed an LR. When no obvious risk factor was present (T1--3 cancer, no perforation, R0 resection and distal clearance \>2 cm), only four patients developed an LR.

Table 6 Operative characteristics and radiotherapy in patients with local recurrence after resection with curative intent

| Sex and age | Year | T-stage | + Nodes | CRM \>2 mm | Level of tumour | Distal margin \>2 cm | R status | Perforation of tumour | Preoperative radiotherapy | Postoperative radiotherapy |
|---|---|---|---|---|---|---|---|---|---|---|
| ♀ 84 | 1984 | 4 | Y | N | Middle | Y | 1 | Y | N | N |
| ♀ 58 | 1984 | 4 | Y | N | Middle | Y | 1 | Y | N | Y |
| ♀ 83 | 1985 | 3 | Y | UK | Lower | Y | 0 | N | N | N |
| ♂ 62 | 1989 | 2 | N | UK | Middle | N | 0 | N | N | N |
| ♀ 75 | 1990 | 3 | Y | UK | Middle | N | 0 | N | N | N |
| ♂ 72 | 1991 | 3 | N | UK | Upper | N | 0 | N | N | N |
| ♀ 73 | 1991 | 3 | N | UK | Lower | Y | 0 | N | N | N |
| ♀ 79 | 1991 | 4 | Y | N | Middle | Y | 1 | N | N | N |
| ♂ 74 | 1992 | 3 | N | UK | Middle | N | 0 | N | N | N |
| ♂ 72 | 1993 | 3 | Y | N | Upper | N | 1 | N | N | N |
| ♀ 70 | 1993 | 2 | N | Y | Middle | N | 0 | N | N | N |
| ♀ 76 | 1993 | 3 | Y | N | Upper | Y | 1 | N | N | N |
| ♂ 73 | 1994 | 3 | N | N | Middle | N | 1 | N | N | N |
| ♂ 81 | 1995 | 1 | N | UK | Upper | N | 0 | Y | N | N |
| ♂ 52 | 1997 | 3 | N | N | Upper | N | 1 | N | N | N |
| ♀ 79 | 1997 | 4 | Y | N | Lower | N | 1 | N | N | N |
| ♂ 84 | 1998 | 3 | Y | Y | Lower | Y | 0 | N | N | N |
| ♀ 62 | 1998 | 3 | Y | Y | Middle | Y | 0 | N | N | N |
| ♂ 59 | 1999 | 2 | N | UK | Middle | N | 0 | N | N | N |
| ♂ 75 | 1999 | 3 | N | N | Middle | N | 1 | N | N | N |
| ♂ 74 | 1999 | 3 | Y | Y | Middle | N | 0 | N | N | N |
| ♀ 78 | 1999 | 3 | N | N | Middle | N | 1 | N | N | N |
| ♂ 85 | 2003 | 3 | Y | Y | Upper | N | 0 | Y | N | N |

Level of tumour: distance from the anal verge: 12--15 cm, upper rectum; 6--11 cm, middle rectum; \<6 cm, lower rectum. *Y* yes, *N* no, *UK* unknown
Long-term survival {#Sec11}
------------------
For all stages together, the estimated 5-year overall survival was 48% (95% CI 40--58) during 1980--1989, 40% (33--48) in 1990--1999 and 50% (40--61) in 2000--2004 (n.s). The corresponding estimated 5-year overall survival after resection with curative intent was 65% (95% CI 55--76) during 1980--1989, 58% (49--68) in 1990--1999 and 71% (59--83) in 2000--2004 (n.s). For all stages together, the estimated 5-year relative survival was 63% (95% CI 51--73) during 1980--1989, 50% (41--58) in 1990--1999 and 59% (46--71) in 2000--2004 (n.s.). The corresponding estimated 5-year relative survival after resection with curative intent was 83% (95% CI 69--95) during 1980--1989, 71% (59--81) in 1990--1999 and 84% (68--97) in 2000--2004 (n.s).
Discussion {#Sec12}
==========
Excellent overall survival, relative survival and LR rates were achieved in 1980--1989 and 2000--2004. However, during 1990--1999, the LR rate was unacceptably high and survival was correspondingly low. In almost all cases of LR, violation of treatment guidelines could be identified.
This paper presents one of the longest running experiences with TME for rectal cancer. A complete cohort of patients was obtained, thereby avoiding selection bias. A protocol for treatment and surveillance was established in 1980 and was unchanged throughout the study period.
Although the protocol was implemented for prospective registration, this study still has the limitations of a retrospective study in that the data were analyzed retrospectively. This is a minor concern considering that overall survival and relative survival are robust parameters \[[@CR10]\], although this might have led to underestimation of LR rates. For instance, old and fragile patients received less follow-up, which could result in LR being undisclosed. However, half of all cases with LR were diagnosed in patients older than 75 years.
The results were significantly worse during 1990--1999, as can be seen by the estimated 5-year LR rates: 4.5% during 1980--1989, 18.7% in 1990--1999 and 2.2% in 2000--2004 after resections with curative intent. In order to understand why LR occurred, all cases of LR were analyzed with particular emphasis on whether the protocol outlined in 1980 had been followed. Preoperative radiotherapy was recommended in locally advanced cases to achieve R0 resections. However, none of the four patients with a stage T4 cancer who later developed LR received preoperative radiotherapy. During 2000--2004, significantly more patients received preoperative radiotherapy, and excellent results concerning LR were achieved.
The CRM is a strong predictor of LR, distant metastasis and survival \[[@CR11]--[@CR13]\] in rectal cancer. The main reason why TME has been so successful in lowering LR rates compared to traditional rectal resections is its ability to achieve R0 resections \[[@CR14]\]. In a recent publication \[[@CR15]\], a CRM \<2 mm was associated with poorer prognosis. In this study, we found that in a total of 11 R1 resections with curative intent, out of which nine where operated on during 1990--1999, ten patients later developed LR. Only one of these ten patients received postoperative radiotherapy. Although recommended in our protocol, Marijnen et al. \[[@CR16]\] found little evidence to support postoperative radiotherapy as this was of no benefit for patients receiving non-radical resections. On the other hand, preoperative radiotherapy was effective in cases of narrow margins (1.1--2 mm) and wider margins. Knowing this, selecting patients at risk for R1 resections for preoperative radiotherapy to achieve R0 resections is far more important than offering postoperative radiotherapy to patients with microscopically involved margin.
Intraoperative perforation during resection of rectal cancer increases the LR rate and reduces survival \[[@CR17], [@CR18]\]. Although this remains to be proven effective in this setting \[[@CR17]\], postoperative radiotherapy was recommended in our protocol and has since been implemented in both Norwegian and American guidelines \[[@CR19], [@CR20]\]. There is no upper age limit for adjuvant radiotherapy in Norway, but individual considerations must be taken for those aged above 75 years as radiotherapy substantially increases the risk of death from causes unrelated to rectal cancer in this age group \[[@CR21]\]. Out of four patients with tumour perforation, three did not receive postoperative radiotherapy, all aged above 80 years.
Although there is conflicting evidence concerning how long the distal margin should be in resections of rectal cancers, many researchers \[[@CR20], [@CR22]\] recommend a distal clearance of \>2 cm in AR, which is recommended in our protocol. The rationale underlying this is the finding that intramural spread from rectal cancers \>1 cm from the primary lesion is uncommon \[[@CR23]\] and even more so if the patient has been subject to preoperative RT \[[@CR24]\]. In the present study, significantly more patients developed LR with a distal clearance of less than 2 cm. Furthermore, none of these patients had received neoadjuvant RT. In a total of 15 patients with distal clearance \<2 cm and LR, 13 underwent operation during 1990--1999. In surgery of low rectal cancers, there is often a dilemma concerning oncologic radicality and the avoidance of stomas. This study supports the view that a distal clearance of at least 2 cm should be given priority and in cases where this is hard to achieve due to sphincter-preserving surgery, preoperative RT should be considered.
The number of specialists in colorectal surgery present at operation increased during the years of the present study. After the year 2000, rectal cancer surgery was no longer required for surgeons who are specialists in general surgery in Norway, and the number of operations performed by a trainee was significantly reduced. The current policy at Levanger Hospital is that rectal cancer resections should be performed by or under the supervision of a specialist surgeon and having two specialists present is recommended. Borowski et al. \[[@CR25]\] found no difference in anastomotic leak rates, operative mortality or survival between unsupervised trainees, supervised trainees and consultants. We still believe that the introduction of more competent teams performing the surgery has contributed to better results during 2000--2004. The better results during 1980--1989 compared to 1990--1999 are perhaps dependent on there being far more experienced trainees working at the hospital during the first period. During the 1980s, the trainees were more experienced in general surgery (6--11 years experience) than the surgeons in the 1990s (3--6 years experience).
Preoperative radiotherapy at biologically effective doses ≥30 Gy has been shown to reduce the risk of LR and death from rectal cancer, and postoperative radiotherapy has been shown to reduce the risk of LR \[[@CR21]\]. Preoperative chemoradiation is even more effective in lowering the LR rate, and an additional effect on survival is possible \[[@CR26]\]. At Levanger Hospital, adjuvant radiotherapy has been recommended since 1980, but was barely used during 1990--1999, which may explain why the rate of LR was high and survival correspondingly low during these years. For the years 2000--2004, when the LR rate and survival were excellent, 21% received preoperative radiotherapy. If the quality of surgery was evaluated by operative time, blood loss, the proportion with sphincter-sparing surgery, the proportion without postoperative anastomotic leakage and the proportion with a resection margin of at least 2 cm, the surgery performed for curative intent did not seem to be inferior during 1990--1999 compared with 1980--1989. Even if experienced surgeons assisted the many surgical trainees during the operations, and the operations were adequate, the ultimate outcome was inferior for some patients during 1990--1999, and this was probably mainly due to a lack of referral for radiotherapy. In many cases, the preoperative judgment concerning radiotherapy failed during 1990--1999. From the year 2000, dedicated teams took care of preoperative evaluation and surgery, and the introduction of preoperative MRI for all patients with rectal cancer from this time on made a much better and more objective selection of those who needed preoperative radiotherapy possible.
In the present study, the estimated 5-year overall survival after resection with curative intent was 65% (95% CI 55--76) during 1980--1989, 58% (49--68) in 1990--1999 and 71% (59--83) in 2000--2004 (n.s.). A higher rate of LR during 1990--1999 was accompanied by a lower survival rate. The negative prognostic effect of LR on survival is well documented \[[@CR2]\].
Conclusions {#Sec13}
===========
Excellent results were seen during the 1980s due to the implementation of modern principles of rectal cancer treatment at Levanger Hospital in 1980. However, reports from the national rectal cancer registry revealed poor results for patients treated in Levanger during the 1990s, and the current paper discloses that violations of the treatment guidelines, particularly with respect to radiotherapy for patients with advanced stages, had serious effects on patient prognosis during these years. Actions were taken to improve compliance with the treatment guidelines for rectal cancer and to strengthen the surgical team that took care of the patients preoperatively as well as performing the TME surgery. During 2000--2004, the results were once again excellent. The present study illustrates that although treatment guidelines and surgical technique may be adequate, a continuous focus on quality assurance and the collective efforts of the members of the multidisciplinary team are mandatory to maintain optimized outcomes for rectal cancer patients.
**Open Access** This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
Eivind Jullumstrø died of cancer on 20 December 2010. We dedicate this article in thankful memory to him.
|
On this day in 1977, the worst air disaster in aviation history* took place at Tenerife in the Canary Islands. The appropriately named Tenerife Airport Disaster occurred when two Boeing 747 passenger jets (then the largest airliners in the world) collided with each other on a runway at Los Rodeos Airport. 583 people were killed and only 61 survived. The sequence of events which led to the accident demonstrates how a catastrophe can occur when a seemingly unrelated set of decisions comes together at the worst possible time. The accident and the subsequent investigation fundamentally changed how airline crews and air traffic control (or ATC) communicate and how airline cockpit crews interact with each other.
In the early afternoon of March 27, Pan Am Flight 1736 and KLM Flight 4805 were both making normal landing approaches to Gran Canaria Airport on the island of Gran Canaria. At 1:15pm local time, a bomb planted by the local insurgent group Fuerzas Armadas Guanches exploded in a flower shop in the terminal of Gran Canaria airport, injuring one person. An anonymous phone call had warned of the first bomb before it went off, and subsequently another anonymous caller warned that a second bomb was planted at the airport and would soon explode as well. Airport and municipal officials immediately decided to close the airport and reroute all incoming flights. Both Pan Am 1736 and KLM 4805 were waved off from their approaches to Gran Canaria and ordered to land, along with five other large airliners, at the nearby Los Rodeos Airport.
Los Rodeos was, at the time, a much smaller regional airport that was unaccustomed to and unprepared for large airliners, let alone several of them landing in succession. There was limited apron space to park all of the airliners, and the two massive Boeings could not be parked near the terminal. Instead, ATC ordered both onto the taxiway that ran parallel to the airport's single runway. This prevented any other aircraft from using the taxiway; the two 747s would therefore have to taxi along the edge of the main runway and take off first, allowing the smaller airliners parked on the apron to use the taxiway for their own takeoffs.
After a short delay at Los Rodeos, officials at Gran Canaria reported that the airport had been swept and no further bombs discovered; flights were cleared to begin landing at Gran Canaria again. Pan Am 1736 reported to ATC that they were immediately ready for takeoff, but they were hemmed in by KLM 4805 and a refueling truck. The captain of the KLM flight, who was the airline's chief flight instructor at the time, had decided to refuel while on the ground in Tenerife, a process which took 35 minutes. After KLM 4805 was refueled and all of its passengers re-boarded (except for one tour guide who lived on Tenerife and decided not to take the next leg to Gran Canaria), ATC ordered it to taxi down the main runway and then position itself for takeoff. Pan Am 1736 was to follow and then exit the runway at the third exit to the left in order to allow KLM 4805 to take off.
While both planes taxied, an immensely dense fog swept over the airport. ATC lost visual contact with both aircraft and, having no ground radar to track them electronically, was forced to rely on radio calls from each cockpit in order to plot where they were on the runway. Once positioned at the end of the runway, KLM 4805 was ready for takeoff but, following standard procedure, waited for official clearance from ATC. In the confused exchange which followed, the Dutch captain throttled up his four engines in preparation for takeoff while the co-pilot radioed their readiness to ATC. In response, ATC issued instructions for the route KLM 4805 was to take in order to reach Gran Canaria, but this instruction included the word "takeoff", which the Dutch crew interpreted as clearance to begin their takeoff roll. Meanwhile, Pan Am 1736 reported to ATC that they would radio when they were clear of the runway.
Neither KLM 4805, Pan Am 1736, nor ATC were in visual contact with one another due to the fog. But upon hearing Pan Am's communication, which indicated that they had not yet cleared the runway, the KLM crew, already moving down the runway at full throttle, became immediately concerned. The co-pilot of Pan Am 1736 spotted KLM 4805's landing lights breaking through the fog as it roared down the runway. The Pan Am pilot immediately throttled up his engines and tried to steer into the grassy area next to the runway while the KLM pilot, realizing his mistake, attempted a premature rotation in order to clear the Pan Am aircraft. The actions of both pilots were to no avail: KLM 4805 suffered a severe tailstrike which dragged it along the runway for 72 feet before it collided with Pan Am 1736. The engines, lower fuselage, and main landing gear of the KLM airliner struck the upper right fuselage of Pan Am 1736 at 140 knots, ripping both aircraft apart before the KLM plane slammed into the ground and exploded into a fireball.
All 234 passengers and 14 crew of KLM 4805 perished in the accident, along with 326 passengers and 9 crew on Pan Am 1736. Only 61 people survived, all on Pan Am 1736: 54 passengers and 7 crew, including all 3 cockpit crew. Passengers on the left side of the aircraft, which was not struck, were able to simply walk out onto the wing of the crippled airplane and then onto the ground, where they awaited rescuers.
Since the accident, communications between airliner crews and ATC have been more thoroughly standardized, and airlines have trained their crews to communicate more clearly with one another; e.g., controllers no longer use the word "takeoff" unless a takeoff is specifically being cleared or cancelled. The installation of ground radar at all major airports has also become standard. Los Rodeos Airport remained closed until April 3, after all of the wreckage was finally cleared by the Spanish Army.
*The four airplanes involved in the September 11, 2001 attacks constitute the worst disaster in aviation history if you count those killed on the ground. The Tenerife Airport Disaster’s fatalities were solely from the two aircraft involved.
|
Okamule
Okamule is a village in Oshakati West constituency, in the Oshana region of northern Namibia. It was named, long ago, after the death of Comrade Kamule. Its headman is Mr Abner Shilenga. The village lies in a remote area, approximately 20 km north of the town of Oshakati. There are many houses, and about 45% of them are built of sticks and mahangu straw.
The population of the village is about 1,500 people. The residents survive by growing crops, especially mahangu and sorghum, and by keeping domestic animals such as cattle, goats, sheep and donkeys, which they use to plough their fields. They depend mostly on rainfall, and during the rainy season they go to the oshanas and pans to fish. They dig wells and store rain water in them, using it mostly in the spring season, when the oshanas and pans are dry, to water their animals.
The village was electrified in 2008.
References
Category:Populated places in the Oshana Region |
Q:
Google Chrome: Address bar search in Amazon.com searches Super User instead
I often use Chrome's "press Tab to search this website" feature in the address bar.
Somehow, Chrome is searching Super User when I press tab for amazon.com.
Is there a way to correct a mis-directed address bar search in Chrome?
A:
Try the following process to remove and re-add the search methods for amazon.com and/or superuser.com.
Go to the wrench menu, Options, Basics tab. In the section for Default search, click the Manage button.* In the list of search engines, remove amazon.com and/or superuser.com.
To re-add the search items, simply visit superuser.com (Chrome finds the search link in the head section of the HTML code) or perform a blank search on amazon.com (Chrome remembers the search URL). Once the search items reappear in the list, you can use the "press tab to search" functionality again.
(* Just found a shortcut: Right-click the address box and choose "Edit search engines".)
A:
You can right click inside the search box of a site, and you should see an option called "Add as search engine..."
Click that option, then you can edit the name of the entry on the search engines list, the keyword used to activate it using the Tab key, and Chrome will automatically fill in the search URL for you. If there are any conflicts with your entries, you'll see a validation icon to the right of each field; if they're all green check marks, you're good to go.
As noted above, right clicking the address box itself gives you the option to bring up the search engine manager, where you can delete and alter any entries. I just had to delete my Amazon entry and recreate it using the method I just described to get it working again. Right clicked inside the search box out of curiosity and found this nifty little trick.
|
Category Archives: Personal Brand -Preparation-
I took an Online Marketing class with Nate Riggs, and learned that knowing, understanding, and using social media takes hard work. In Japan, most social media services are relatively new, and while I was living there I didn't know which ones were popular.
"Unlike Western countries, no one social-media service is dominating the Japanese Web, and I think that may continue. Users here are using many services in parallel and switching between social networks all the time." (from The Japan Times)
"... as they call it, has improved after the horrific earthquake and tsunami of two years ago. Since the telephone networks were not working, people turned to Twitter and Facebook to communicate. On the Twitter blog, they said that there was a 500% increase in Tweets from Japan when the earthquake hit." (Dr. Leslie Gaines-Ross)
So at the beginning it was hard to figure out what I was supposed to do. However, with Nate's help and advice I gradually got ideas about what to do. It would therefore be nice if the early weeks' assignments offered more guidance about what students are expected to do, and the later weeks' assignments less and less, to make them think for themselves. As this class is a little different from usual academic classes (I think of it more as a training session), it would be nice to let students, including me, notice the difference step by step.
For 15 weeks I took an Online Marketing class with Nate Riggs. The most important thing I gained from that class is a thinking process for figuring out a company's strategies and tactics, and how they actually lead customers.
This time I'm writing about social media marketing reactions to the Boston tragedy, so if you are not happy to see anyone write about that, please go back from this page. (If you happened to find this page anyway, I'm very sorry.)
They explained that the poster was made by four companies, New Balance, Nike, Puma, and Adidas, in honor of the victims of the Boston tragedy.
Before I took the class, I would not have cared much about one little post. But this time it made me think: "How could this poster affect the images of these four companies, and in what media did they post it?"
So I checked the Nike and Puma websites and Facebook pages, but I didn't see this poster. Several days ago I posted the question to the Facebook group I Love Marketing (アイラブマーケティング), asking, "Did the four companies really make this poster? I couldn't find it on the Nike and Puma websites." Today I got a reply: according to them, "When we asked Adidas Japan, we were told the four companies officially made this."
That brings me back to my original question: how could this poster affect the images of these four companies, and in what media did they post it? While I was checking whether the poster was official or unofficial, I found several comments arguing about ethics and marketing (the responsibility of a company), some positive and some negative.
Not sure if they are actually trying to do communicate a positive message or just using a tragedy for marketing purposes.
Nike, Adidas, New Balance & Puma, where are you everday in Syria, Iraq, Mali, Afghanistan etc.? Not enough people in those countries who could actually buy your products?
I say companies should stay away from campaigns like this. The purposes are just too unclear and often reveal the uglier face of capitalism.-Com_Truise
I think it was done in relation to the fact that it happened during a running race. It’s in their ball park, so it’s in their best interests. These aren’t humanitarian companies, they’re shoe manufacturers; and they’ll capitalize where they can. Having said that, it is a nice sentiment.-Oscare Wafield
This is one of the most difficult cases to simply judge as ethical or unethical. As Oscare Wafield mentioned, sometimes companies may feel they have to act in these situations. This made me consider how a company can react in these situations without being unethical or hurting anyone's feelings.
3 steps to improve companies' use of social media in an ethical way, from Augie Ray:
Read and understand the regulations and guidelines for marketing ethics.
Be aware that measuring engagement through fun is not the only thing that matters in building a brand.
Always be true.
I agree with his idea that gaining "Likes", "Follows", and "Retweets" is not the only thing marketers should consider; they also need to consider responsibility and ethics when they make a marketing strategy.
How about you? Why do you think these four companies made this poster? What is your opinion of their actions?
We pride ourselves on the enrichment offered through a diverse educational experience. What better way to prepare our students to go out in the world to make a difference than offering them a transformative learning experience that is more reflective of our diverse global community? Diversity in all of its forms serves to enrich the distinct educational experience of our students, faculty, and staff.
This book also explains frameworks and tips for marketing "in the round": how a company and/or non-profit organization should tell its story in a marketing campaign. (Like the picture below, which is from Beth's Blog.)
Top Down Influence Approaches:
In this approach, companies mainly pay to spread their own message. This allows companies to maintain more control.
The Groundswell:
In this approach, customers spread the word about companies without payment. This allows companies little or no control.
Flanking Techniques:
In this approach, companies pay to promote themselves in a way that makes their customers want to spread the word. (In other words, companies pay for marketing that gets their customers excited enough, and loving the brand enough, to talk about it.)
Direct:
Telling people about the company directly, using e-mail, mail, and so on.
(Again, you can check Beth's Blog.
I really think her explanation is easy to understand, even for a non-native English speaker like me.)
I think this concept is a bit hard to apply wholesale to building a personal brand, since I don't usually pay anyone to talk about me. However, I can still use these concepts to build my personal brand: if I think of my time and effort as something I pay instead of money, there are things I can apply.
Top Down:
Sending my resume, and writing/posting on this blog, Facebook, Pinterest, and Twitter.
(I have control over what I do with them.)
Groundswell:
This is how most people come to know about me:
what my friends write/post online (I don't have any control).
Flank:
I can't pay my friends to encourage or stop what they write/post about me online, but I think I can spend my time adjusting my privacy settings on Facebook and Twitter.
Direct:
Basically almost everything I do in my daily life: talking with friends, posting online, sending e-mail.
At this point, to be honest, I'm getting confused about how I can create my personal brand on the professional side. It is very hard to create a personal brand without bringing in personal sides of myself.
This concept leaves me with 3 questions. Without thinking them through, it is impossible and scary to start building my personal brand. So, like last time, I will leave them as future assignments.
This book tells how companies and non-profit organizations create their marketing strategies to build campaigns in the age of social media while keeping one united core message.
(I found one blogger, Beth Kanter, who explained more about this book,
so you can check her blog too.)
I would like to write about how I can use the concepts from this book to build my personal brand.
I think I can divide my takeaways into two concepts:
What I want to campaign.
How I want to campaign.
On this page, I will only focus on "What I Want to Campaign."
According to the book, it is very important to maintain one integrated message among all departments in a marketing campaign, like a wheel (as in the picture below, which is from Beth's Blog).
For a personal brand this is important because most people (including me) have many aspects; in daily life we have several roles we are responsible for.
Also, people have backgrounds: things they like, are interested in, hate, have tried, have failed at, and so on. In many cases, we don't really tell everyone about all of them.
Therefore,
it is a good start for me to look at myself as a company and put the different parts of me into departments, following the ideas of Marketing in the Round.
Advertising:
my resume (it could be).
Public Relations:
things I do for the Japanese Student Association, and did for my work (what people tell others and post online).
Corporate Communication:
I don't have to communicate in order to build my personal brand. However, I should sometimes check whether what I write online actually reflects me. (Because it is very easy to make stories bigger than they actually were, or mislead the audience by choosing the wrong words.)
Web/Digital:
Search Engine Optimization:
having a few connections through Facebook and LinkedIn.
* Other than by my name, people can't find me with a Google search.
Content:
things written online by myself (blog, Facebook, Twitter, Pinterest).
This comes from knowledge from my work experience in sales and the Japanese Student Association, from learning as a marketing major at Ohio University and from books, and from seeing different places while traveling.
Direct Mail:
applying for jobs; it could also be posting on someone's site (if I leave a contact address).
Social Media:
same as Web/Digital.
Search Engine Marketing:
same as Search Engine Optimization.
At this point, approaching this concept for my personal brand leaves me with 3 questions:
How easy do I want to make my name to search for online?
What do I want to express online?
To whom do I want to express it online?
I should really think about these 3 questions and find ways to avoid the problems that might occur by doing something wrong. Since I'm not a company, putting out too much personal information could harm my life. Companies can move and change their names, logos, characters (if they have one), and locations to start over, but I can't do that. So I will leave these as future assignments.
When I was a freshman at Ohio University, my marketing career began.
I became friends with someone who has a talent for talking. (I believe he could sell umbrellas in a desert.) He told me how interesting and fantastic marketing is.
At that time I was looking for a subject I could get an advantage from learning in the U.S.,
so I thought marketing was the perfect subject to learn. Moreover, he told me that if I like socializing and organizing events, then marketing is the perfect major.
So I chose marketing as my major.
In my second year, I became the publicity officer of the Japanese Student Association, and my friend became president. At this time I could apply the knowledge I had learned in business classes, so I was happy. However, the other officers and I struggled to attract members to our events. My main focus at that time became advertising. I tried several approaches: attending social events to announce our events, sending e-mails, asking friends to tell their friends. But we didn't succeed very much.
This experience made me think about good ways to announce events and attract people.
In my third year, I had to leave Ohio University, and I started working as a salesperson for advertisements (this was my first job) in Japan. My main clients were small businesses or small factories, and a few quite large companies. I was working for a publisher that makes advertising mailers; the main selling point of our service was collecting prospective customers' information: when readers sent the mailers back to request product catalogs, they provided their contact information.
It was interesting to sell this service at that time, because a lot of companies said they wanted to use the internet effectively but were not sure how. I explained to them that managers in companies are usually older people who don't use the internet very much, so if you want to sell your products to them, online advertising may not be the perfect way. At that time, many people agreed with my point and made advertising mailers. However, some companies said that since they kept their customers' e-mail addresses, they didn't need our service; sending e-mail is a less expensive way. I thought that was true, so I asked other clients how they kept their customer information (including prospective customers: people interested in the products but who had not yet bought anything). Most companies told me that after making a few phone calls, they did nothing more, so I always advised them to follow up. But that made me think about the usefulness of our advertisements: if companies could use their websites or search-engine services effectively, they could gain more customer information.
So my interest moved toward combining online advertising and paper advertising. (My company wasn't very interested in starting online advertising at that time, so I left.)
1. The company has quite a good website. (As a customer, I thought their website looked neat.)
2. They send paper advertisements to announce sales information to their customers.
3. The company is famous for trying new things.
4. Having worked in B2B business, I wanted to work in B2C business. (The last job made me realize that even in B2B we have to attract managers, so it might be interesting to know how retail stores attract their customers.)
I liked seeing what they do.
Things I like about their advertising (or marketing):
1. To advertise, they often use not only actors and actresses but also alpinists, movie directors, and so on (people who are not well known but have their dreams).
(I thought this makes their brand image more relevant to customers.)
2. The store has sales even on weekends, so customers come without flyer advertisements.
3. They have tried several ways of providing coupons via cell phone, exploring the use of new technology (such as becoming a member, buying certain products to get codes, or special cell phone backgrounds).
(It was very interesting to see how these new experiments could leave the store confused. This taught me the importance of operations.)
Things I dislike:
1. They change their website too often, and it is hard to find anything.
2. The posters and web pictures mainly use people aged 20-30 as models.
3. To display the clothes the company most wants to sell, we had to change the placement every day, so if customers returned to the store after several days, the placement of the clothes had changed a great deal.
Because:
1. The company keeps saying it wants to make clothes for all ages, but my store's main customers are mostly aged 30-40 (so many people were disappointed by the image the posters and website presented).
2. Most customers complained that the website is not customer-friendly. They often said they could never find anything.
They need to consider the age segments more.
3. Because the placement changed too often, customers had problems when they came back to a store to buy specific items. It took them a long time to find things, or they had to find someone to ask.
This is not something I dislike or like; I just thought it was interesting.
Even though the company tries to make clothes for everyone, young people (aged 10-20) say our clothes are for older people, and older people (30-80) say our clothes are for younger people.
My store's sales weren't very good, so I started checking the sales data of my store and other stores, hoping to find a solution.
With that data, I suggested several ideas, but I couldn't figure out how well they actually worked. I didn't know how to measure the effectiveness of my suggestions, or even the weaknesses of my store. The store manager was the only teacher who could help me, but she was busy and didn't know much either.
So I decided to come back to Ohio University to learn more about marketing.
I really enjoyed my Marketing Research classes, but Ohio University didn't offer any other related classes, so I talked with the professor of that class. I also told her about my experiences in the business field. She recommended that I take MKT4900 to learn about internet marketing. She told me this class could provide knowledge of internet advertising, and that it involved numbers, so I might find it interesting. (As I am not sure whether I want to be a marketing researcher, she also told me it might be good to learn about many marketing methods to expand my knowledge.) |
Q:
How to find maximum without local maxima
I want to maximize a function that has no local maximum in a certain interval.
For instance, consider the function $f(x)=x$ over the interval $[0,1]$. There is no local maximum, but the maximum value of the function is clearly $1$. Here is the function that I would like to maximize:
$$f(x_1,x_2,...,x_n)=\frac{1}{\Big(1-\sum\limits_{k=1}^{n}x_k\Big)^{1-\sum\limits_{k=1}^{n}x_k}\prod\limits_{k=1}^{n}x_k^{x_k}}$$
For instance, if $k=3$, we have
$$f(x_1,x_2,x_3)=\frac{1}{(1-x_1-x_2-x_3)^{1-x_1-x_2-x_3}x_1^{x_1}x_2^{x_2}x_3^{x_3}}$$
I want to maximize this function with the following constraints:
$$\sum\limits_{k=1}^{n}kx_k\leq 1$$
$$0\leq x_i\leq 1$$
This family of functions seems to have no local maxima when the constraints are applied. How can I work around this and find the maximal value(s)?
A:
Have you tried using NMaximize? For instance:
f[x1_, x2_, x3_] := 1/(Abs[1-x1-x2-x3]^(1-x1-x2-x3) x1^x1 x2^x2 x3^x3)
NMaximize[
{
f[x1, x2, x3],
0<x1<1 && 0<x2<1 && 0<x3<1 && x1 + 2 x2 + 3 x3 < 1
},
{x1, x2, x3}
]
{3.61072, {x1 -> 0.276953, x2 -> 0.182041, x3 -> 0.119655}}
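As a sanity check from outside Mathematica: taking logarithms turns the problem into minimizing $g(x) = (1-s)\log(1-s) + \sum_k x_k \log x_k$ with $s = \sum_k x_k$, which is convex, so any constrained local solver should land on the same global optimum. Below is a hypothetical port to Python with SciPy (SciPy and the variable layout are my assumptions, not part of the original answer):

```python
# Hypothetical cross-check of the NMaximize result using SciPy.
# Maximizing f = 1 / ((1-s)^(1-s) * prod x_k^x_k), s = sum(x_k),
# is equivalent to minimizing the convex function
#     g(x) = (1-s)*log(1-s) + sum x_k*log(x_k),
# so a local solver such as SLSQP finds the global constrained optimum.
import numpy as np
from scipy.optimize import minimize

def g(x):
    s = x.sum()
    t = max(1.0 - s, 1e-12)  # guard against infeasible trial points
    return t * np.log(t) + np.sum(x * np.log(x))

n = 3
# constraint x1 + 2*x2 + 3*x3 <= 1, written as 1 - k.x >= 0
cons = [{"type": "ineq", "fun": lambda x: 1 - np.dot(np.arange(1, n + 1), x)}]
bounds = [(1e-9, 1 - 1e-9)] * n
res = minimize(g, x0=np.full(n, 0.1), bounds=bounds, constraints=cons)

print(np.exp(-res.fun))  # maximum of f, approx. 3.6107
print(res.x)             # approx. [0.2770, 0.1820, 0.1197]
```

Both solvers put the optimum on the boundary x1 + 2 x2 + 3 x3 = 1, which is consistent with the observation in the question that the function has no interior local maximum once the constraints are applied.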
|
In the south of Spain, a few miles east of Almería, there is a delightful area that offers miles of unspoilt beaches with secluded coves, sand dunes and much more within a protected coastal reserve. It is the Cabo de Gata, a natural park that I think is quite splendid. It is one of my favourite areas in the province of Almería.
It is a nature lover’s delight. There are thousands of different species there including the pink flamingo and the rare Italian wall lizard. There are eagles, kestrels, puffins, cormorants, oystercatchers and storks. The extraordinary wealth of wildlife is unbelievable. There are some species that are unique to the park. This includes the dragoncillo del Cabo, which flowers all the year round. Europe’s only native palm tree – the dwarf fan – is to be found here. In the sea, there are bream, grouper, prawn and squid. There are hundreds of species of seaweed, which are home to the many varieties of crustacean, mollusc and fish.
Perhaps the reason for the great variation in wildlife is due to the diverse habitats in this natural park. The 71,500 acres of the Cabo de Gata is volcanic in origin and comprises coastal dunes, steep cliffs, spectacular beaches, salt marshes, saltpans, arid steppe, dry riverbeds and a substantial marine zone. It is probably this ecological diversity that has led to the park being designated a UNESCO biosphere reserve. |
Redox regulation of the G1 to S phase transition in the mouse embryo fibroblast cell cycle.
The hypothesis that intracellular oxidation/reduction (redox) reactions regulate the G(0)-G(1) to S-phase transition in the mouse embryonic fibroblast cell cycle was investigated. Intracellular redox state was modulated with a thiol-antioxidant, N-acetyl-L-cysteine (NAC), and cell cycle progression was measured using BrdUrd pulse-chase and flow cytometric analysis. Treatment with NAC for 12 h resulted in an approximately 6-fold increase in intracellular low-molecular-weight thiols and a decrease in the mean fluorescence intensity (MFI) of an oxidation-sensitive probe, dihydrofluorescein diacetate, indicating a shift in the intracellular redox state toward a more reducing environment. NAC-induced alterations in redox state caused selective delays in progression from G(0)-G(1) to S phase in serum-starved cells that were serum stimulated to reenter the cell cycle, as well as inhibiting progression from G(1) to S phase in asynchronous cultures, with no significant alterations in S-phase and G(2)+M transits. NAC treatment also showed a 70% decrease in cyclin D1 protein levels and a 3-4-fold increase in p27 protein levels, which correlated with decreased retinoblastoma protein phosphorylation. Cells released from the NAC treatment showed a transient increase in dihydrofluorescein fluorescence and oxidized glutathione content between 0 and 8 h after release, indicating a shift in intracellular redox state to a more oxidizing environment. These changes in redox state were followed by an increase in cyclin D1, a decrease in p27, retinoblastoma protein hyperphosphorylation and subsequent entry into S phase by 8-12 h after the removal of NAC. These results support the hypothesis that a redox cycle within the mammalian cell cycle might provide a mechanistic link between the metabolic processes early in G(1) and the activation of G(1)-regulatory proteins in preparation for the entry of cells into S phase.
The downside: The Sony Pictures release sucked a lot of air from the second weekends of Universal’s Palm Beach and Transmission Films’ Danger Close: The Battle of Long Tan.
The top 20 titles raked in $14.4 million, 3 per cent up on the previous frame, according to Numero. Mind Blowing Films’ Bollywood film Mission Mangal and Magnum Films’ Hong Kong thriller Line Walker 2 had buoyant launches while Universal’s A Dog’s Journey opened with neither bark nor bite, mirroring its US fate.
The lurid tale of a TV actor (DiCaprio) who wants to break into films and his stuntman/sidekick (Pitt), Hollywood rang up $6.7 million on 624 screens, outgunning the US debut of $41.1 million.
The film featuring Robbie as Sharon Tate and Herriman as Charles Manson has pocketed $114.3 million after four weekends in the US and $66.2 million in the rest of the world after launching in 46 overseas markets last weekend, No. 1 in 28.
The Australian bow eclipsed Django Unchained, which took $3.8 million in its first weekend and finished with $16 million, and Inglourious Basterds, which did $3 million/$13.8 million.
However Palm Beach eased by just 15 per cent at Cinema Nova, prompting general manager Kristian Connelly to observe: “That suggests word-of-mouth among the target audience is positive.”
Kriv Stenders’ Danger Close: The Battle of Long Tan declined by 42 per cent to $451,000, lifting the total to $1.57 million; the film is resonating more strongly in rural and regional locations than in the capitals.
Wallis Cinemas’ programming manager Sasha Close says: “Danger Close isn’t performing as well as expected, despite strong WOM and good reviews. I strongly suspect the MA rating affected the box office, given the main demographic we are seeing is over 50s, equally split between male and female.”
Connelly theorizes that Australia’s wartime past has never really resonated with urban audiences, pointing to Jeremy Sims’ Beneath Hill 60, which earned most of its business in rural venues, and Alister Grierson’s Kokoda, which did not connect with mainstream cinemagoers.
Meanwhile Disney’s The Lion King now ranks as the ninth biggest blockbuster of all time globally, banking $1.435 billion. Here, the Jon Favreau-directed musical fantasy adventure stands at $58.3 million after collecting $2.2 million in its fifth outing.
The sequel to the 2017 hit A Dog’s Purpose directed by Gail Mancuso, A Dog’s Journey fetched $631,000, no surprise considering the family film featuring Josh Gad, Dennis Quaid, Marg Helgenberger, Betty Gilpin and Kathryn Prescott ran out of puff after making $22.5 million in the US.
The Nisha Ganatra-directed comedy Late Night, created by and starring Mindy Kaling, plunged by 58 per cent to $320,000 after an uninspiring debut, delivering $1.6 million for Roadshow.
Directed by Jagan Shakti and starring Akshay Kumar, Mission Mangal, a drama loosely based on the scientists at the Indian Space Research Organisation who took part in India’s first interplanetary expedition to Mars, blasted off with $309,000 on 50 screens.
Sony/Marvel’s Spider-Man: Far From Home zoomed past Skyfall to rank as the studio’s biggest film ever worldwide, grossing $1.1 billion. In Australia the Jon Watts-directed sequel cruised to $37 million after bagging $175,000 in its seventh frame.
Disney/Pixar’s Toy Story 4 drew $173,000 in its ninth, climbing to $41.1 million as the worldwide tally topped $1 billion. That’s the Walt Disney Studios’ fifth billion dollar release this year and the eighth biggest animated title of all time.
Line Walker 2: Invisible Spy directed by Jazz Boon, which sees three cops team up to track down an international terrorist syndicate that kidnaps children, grabbed $106,000 including previews on just 15 screens. |
Q:
STM Nucleo I2C not sending all data
Edit:
The solution was to turn down the I2C clock in the initialization block. Although the STM could handle it, the data sheet for the LCD stated it could handle only up to 100kHz.
For the DMA there is an IRQ that must be enabled and set up in the CubeMX software to enable the DMA TX/RX lines.
I'm using an STM32 - Nucleo-F401RE board with freeRTOS. I've used freeRTOS a bit recently, but I have never really used I2C.
Trying to setup a simple LCD display using the I2C drivers that CubeMX generates.
So far it only sends about half the data I request to send.
I've tested it with the simple "show firmware" command and this works. So I can verify that the I2C instance is setup correctly.
/* I2C1 init function */
static void MX_I2C1_Init(void)
{
hi2c1.Instance = I2C1;
hi2c1.Init.ClockSpeed = 100000;
hi2c1.Init.DutyCycle = I2C_DUTYCYCLE_2;
hi2c1.Init.OwnAddress1 = 0;
hi2c1.Init.AddressingMode = I2C_ADDRESSINGMODE_7BIT;
hi2c1.Init.DualAddressMode = I2C_DUALADDRESS_DISABLE;
hi2c1.Init.OwnAddress2 = 0;
hi2c1.Init.GeneralCallMode = I2C_GENERALCALL_DISABLE;
hi2c1.Init.NoStretchMode = I2C_NOSTRETCH_DISABLE;
if (HAL_I2C_Init(&hi2c1) != HAL_OK)
{
Error_Handler();
}
}
It has internal pull-up resistors set as well.
/* vdebugTask function */
void vdebugTask(void const * argument)
{
/* USER CODE BEGIN vdebugTask */
/* Infinite loop */
for(;;)
{
char hi[6];
hi[0] = 'h';
hi[1] = 'e';
hi[2] = 'y';
hi[3] = 'a';
hi[4] = 'h';
hi[5] = '!';
HAL_I2C_Master_Transmit_DMA(&hi2c1, (0x50), hi, 6);
vTaskDelay(10);
}
/* USER CODE END vdebugTask */
}
This is the code I am trying to run; I have not changed the HAL function at all. I don't think it could be any simpler than this, however this is all that happens.
I followed the timing constraints in the data sheet for the LCD, and the CubeMX software didn't warn or state anywhere that their I2C drivers had any special requirements. Am I doing something wrong with the program?
I have also tried using the non-DMA blocking mode polling transfer function that was also created by CubeMX
HAL_I2C_Master_Transmit(&hi2c1, (0x50), hi, 6, 1000);
This is even worse and just continuously spams the screen with unintelligible text.
A:
The solution was to turn down the I2C clock in the initialization block. Although the STM could handle it, the data sheet for the LCD stated it could handle only up to 100kHz.
For the DMA there is an IRQ that must be enabled and set up in the CubeMX software to enable the DMA TX/RX lines.
Note that the clock must still adhere to the hardware limitations; roughly 20% less than the maximum stated in the data sheet should suffice. For my LCD this means I am going to use 80kHz.
First go to Configuration:
And then click on DMA to setup a DMA request. I've only selected TX as I don't care about an RX DMA from the LCD.
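As a sketch of the fix, the only field that changes in the CubeMX-generated init from the question is ClockSpeed (80000 here is an assumption, following the rough 20%-below-datasheet-maximum rule of thumb above; use whatever your LCD's data sheet allows):

```c
/* I2C1 init function -- identical to the generated code in the question,
   with only the bus clock lowered (sketch; the 80 kHz value is an
   assumption based on the ~20%-below-maximum rule of thumb). */
static void MX_I2C1_Init(void)
{
  hi2c1.Instance = I2C1;
  hi2c1.Init.ClockSpeed = 80000;               /* was 100000 - too fast for this LCD */
  hi2c1.Init.DutyCycle = I2C_DUTYCYCLE_2;
  hi2c1.Init.OwnAddress1 = 0;
  hi2c1.Init.AddressingMode = I2C_ADDRESSINGMODE_7BIT;
  hi2c1.Init.DualAddressMode = I2C_DUALADDRESS_DISABLE;
  hi2c1.Init.OwnAddress2 = 0;
  hi2c1.Init.GeneralCallMode = I2C_GENERALCALL_DISABLE;
  hi2c1.Init.NoStretchMode = I2C_NOSTRETCH_DISABLE;
  if (HAL_I2C_Init(&hi2c1) != HAL_OK)
  {
    Error_Handler();
  }
}
```

Remember that CubeMX regenerates this function, so change the clock speed in the CubeMX project rather than by hand-editing, or the edit will be overwritten.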
|
/**
******************************************************************************
* @file startup_stm32f334x8.s
* @author MCD Application Team
* @brief STM32F334x4/STM32F334x6/STM32F334x8 devices vector table for GCC toolchain.
* This module performs:
* - Set the initial SP
* - Set the initial PC == Reset_Handler,
* - Set the vector table entries with the exceptions ISR address,
* - Configure the clock system
* - Branches to main in the C library (which eventually
* calls main()).
* After Reset the Cortex-M4 processor is in Thread mode,
* priority is Privileged, and the Stack is set to Main.
******************************************************************************
*
* Redistribution and use in source and binary forms, with or without modification,
* are permitted provided that the following conditions are met:
* 1. Redistributions of source code must retain the above copyright notice,
* this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright notice,
* this list of conditions and the following disclaimer in the documentation
* and/or other materials provided with the distribution.
* 3. Neither the name of STMicroelectronics nor the names of its contributors
* may be used to endorse or promote products derived from this software
* without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
* DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
* SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
* CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
* OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
******************************************************************************
*/
.syntax unified
.cpu cortex-m4
.fpu softvfp
.thumb
.global g_pfnVectors
.global Default_Handler
/* start address for the initialization values of the .data section.
defined in linker script */
.word _sidata
/* start address for the .data section. defined in linker script */
.word _sdata
/* end address for the .data section. defined in linker script */
.word _edata
/* start address for the .bss section. defined in linker script */
.word _sbss
/* end address for the .bss section. defined in linker script */
.word _ebss
.equ BootRAM, 0xF1E0F85F
/**
* @brief This is the code that gets called when the processor first
* starts execution following a reset event. Only the absolutely
* necessary set is performed, after which the application
* supplied main() routine is called.
* @param None
* @retval : None
*/
.section .text.Reset_Handler
.weak Reset_Handler
.type Reset_Handler, %function
Reset_Handler:
ldr sp, =_estack /* Atollic update: set stack pointer */
/* Copy the data segment initializers from flash to SRAM */
movs r1, #0
b LoopCopyDataInit
CopyDataInit:
ldr r3, =_sidata
ldr r3, [r3, r1]
str r3, [r0, r1]
adds r1, r1, #4
LoopCopyDataInit:
ldr r0, =_sdata
ldr r3, =_edata
adds r2, r0, r1
cmp r2, r3
bcc CopyDataInit
ldr r2, =_sbss
b LoopFillZerobss
/* Zero fill the bss segment. */
FillZerobss:
movs r3, #0
str r3, [r2], #4
LoopFillZerobss:
ldr r3, = _ebss
cmp r2, r3
bcc FillZerobss
/* Call the clock system initialization function.*/
bl SystemInit
/* Call static constructors */
bl __libc_init_array
/* Call the application's entry point.*/
bl main
LoopForever:
b LoopForever
.size Reset_Handler, .-Reset_Handler
/**
* @brief This is the code that gets called when the processor receives an
* unexpected interrupt. This simply enters an infinite loop, preserving
* the system state for examination by a debugger.
*
* @param None
* @retval : None
*/
.section .text.Default_Handler,"ax",%progbits
Default_Handler:
Infinite_Loop:
b Infinite_Loop
.size Default_Handler, .-Default_Handler
/******************************************************************************
*
* The minimal vector table for a Cortex-M4. Note that the proper constructs
* must be placed on this to ensure that it ends up at physical address
* 0x0000.0000.
*
******************************************************************************/
.section .isr_vector,"a",%progbits
.type g_pfnVectors, %object
.size g_pfnVectors, .-g_pfnVectors
g_pfnVectors:
.word _estack
.word Reset_Handler
.word NMI_Handler
.word HardFault_Handler
.word MemManage_Handler
.word BusFault_Handler
.word UsageFault_Handler
.word 0
.word 0
.word 0
.word 0
.word SVC_Handler
.word DebugMon_Handler
.word 0
.word PendSV_Handler
.word SysTick_Handler
.word WWDG_IRQHandler
.word PVD_IRQHandler
.word TAMP_STAMP_IRQHandler
.word RTC_WKUP_IRQHandler
.word FLASH_IRQHandler
.word RCC_IRQHandler
.word EXTI0_IRQHandler
.word EXTI1_IRQHandler
.word EXTI2_TSC_IRQHandler
.word EXTI3_IRQHandler
.word EXTI4_IRQHandler
.word DMA1_Channel1_IRQHandler
.word DMA1_Channel2_IRQHandler
.word DMA1_Channel3_IRQHandler
.word DMA1_Channel4_IRQHandler
.word DMA1_Channel5_IRQHandler
.word DMA1_Channel6_IRQHandler
.word DMA1_Channel7_IRQHandler
.word ADC1_2_IRQHandler
.word CAN_TX_IRQHandler
.word CAN_RX0_IRQHandler
.word CAN_RX1_IRQHandler
.word CAN_SCE_IRQHandler
.word EXTI9_5_IRQHandler
.word TIM1_BRK_TIM15_IRQHandler
.word TIM1_UP_TIM16_IRQHandler
.word TIM1_TRG_COM_TIM17_IRQHandler
.word TIM1_CC_IRQHandler
.word TIM2_IRQHandler
.word TIM3_IRQHandler
.word 0
.word I2C1_EV_IRQHandler
.word I2C1_ER_IRQHandler
.word 0
.word 0
.word SPI1_IRQHandler
.word 0
.word USART1_IRQHandler
.word USART2_IRQHandler
.word USART3_IRQHandler
.word EXTI15_10_IRQHandler
.word RTC_Alarm_IRQHandler
.word 0
.word 0
.word 0
.word 0
.word 0
.word 0
.word 0
.word 0
.word 0
.word 0
.word 0
.word 0
.word TIM6_DAC1_IRQHandler
.word TIM7_DAC2_IRQHandler
.word 0
.word 0
.word 0
.word 0
.word 0
.word 0
.word 0
.word 0
.word COMP2_IRQHandler
.word COMP4_6_IRQHandler
.word 0
.word HRTIM1_Master_IRQHandler
.word HRTIM1_TIMA_IRQHandler
.word HRTIM1_TIMB_IRQHandler
.word HRTIM1_TIMC_IRQHandler
.word HRTIM1_TIMD_IRQHandler
.word HRTIM1_TIME_IRQHandler
.word HRTIM1_FLT_IRQHandler
.word 0
.word 0
.word 0
.word 0
.word 0
.word 0
.word 0
.word FPU_IRQHandler
/*******************************************************************************
*
* Provide weak aliases for each Exception handler to the Default_Handler.
* As they are weak aliases, any function with the same name will override
* this definition.
*
*******************************************************************************/
.weak NMI_Handler
.thumb_set NMI_Handler,Default_Handler
.weak HardFault_Handler
.thumb_set HardFault_Handler,Default_Handler
.weak MemManage_Handler
.thumb_set MemManage_Handler,Default_Handler
.weak BusFault_Handler
.thumb_set BusFault_Handler,Default_Handler
.weak UsageFault_Handler
.thumb_set UsageFault_Handler,Default_Handler
.weak SVC_Handler
.thumb_set SVC_Handler,Default_Handler
.weak DebugMon_Handler
.thumb_set DebugMon_Handler,Default_Handler
.weak PendSV_Handler
.thumb_set PendSV_Handler,Default_Handler
.weak SysTick_Handler
.thumb_set SysTick_Handler,Default_Handler
.weak WWDG_IRQHandler
.thumb_set WWDG_IRQHandler,Default_Handler
.weak PVD_IRQHandler
.thumb_set PVD_IRQHandler,Default_Handler
.weak TAMP_STAMP_IRQHandler
.thumb_set TAMP_STAMP_IRQHandler,Default_Handler
.weak RTC_WKUP_IRQHandler
.thumb_set RTC_WKUP_IRQHandler,Default_Handler
.weak FLASH_IRQHandler
.thumb_set FLASH_IRQHandler,Default_Handler
.weak RCC_IRQHandler
.thumb_set RCC_IRQHandler,Default_Handler
.weak EXTI0_IRQHandler
.thumb_set EXTI0_IRQHandler,Default_Handler
.weak EXTI1_IRQHandler
.thumb_set EXTI1_IRQHandler,Default_Handler
.weak EXTI2_TSC_IRQHandler
.thumb_set EXTI2_TSC_IRQHandler,Default_Handler
.weak EXTI3_IRQHandler
.thumb_set EXTI3_IRQHandler,Default_Handler
.weak EXTI4_IRQHandler
.thumb_set EXTI4_IRQHandler,Default_Handler
.weak DMA1_Channel1_IRQHandler
.thumb_set DMA1_Channel1_IRQHandler,Default_Handler
.weak DMA1_Channel2_IRQHandler
.thumb_set DMA1_Channel2_IRQHandler,Default_Handler
.weak DMA1_Channel3_IRQHandler
.thumb_set DMA1_Channel3_IRQHandler,Default_Handler
.weak DMA1_Channel4_IRQHandler
.thumb_set DMA1_Channel4_IRQHandler,Default_Handler
.weak DMA1_Channel5_IRQHandler
.thumb_set DMA1_Channel5_IRQHandler,Default_Handler
.weak DMA1_Channel6_IRQHandler
.thumb_set DMA1_Channel6_IRQHandler,Default_Handler
.weak DMA1_Channel7_IRQHandler
.thumb_set DMA1_Channel7_IRQHandler,Default_Handler
.weak ADC1_2_IRQHandler
.thumb_set ADC1_2_IRQHandler,Default_Handler
.weak CAN_TX_IRQHandler
.thumb_set CAN_TX_IRQHandler,Default_Handler
.weak CAN_RX0_IRQHandler
.thumb_set CAN_RX0_IRQHandler,Default_Handler
.weak CAN_RX1_IRQHandler
.thumb_set CAN_RX1_IRQHandler,Default_Handler
.weak CAN_SCE_IRQHandler
.thumb_set CAN_SCE_IRQHandler,Default_Handler
.weak EXTI9_5_IRQHandler
.thumb_set EXTI9_5_IRQHandler,Default_Handler
.weak TIM1_BRK_TIM15_IRQHandler
.thumb_set TIM1_BRK_TIM15_IRQHandler,Default_Handler
.weak TIM1_UP_TIM16_IRQHandler
.thumb_set TIM1_UP_TIM16_IRQHandler,Default_Handler
.weak TIM1_TRG_COM_TIM17_IRQHandler
.thumb_set TIM1_TRG_COM_TIM17_IRQHandler,Default_Handler
.weak TIM1_CC_IRQHandler
.thumb_set TIM1_CC_IRQHandler,Default_Handler
.weak TIM2_IRQHandler
.thumb_set TIM2_IRQHandler,Default_Handler
.weak TIM3_IRQHandler
.thumb_set TIM3_IRQHandler,Default_Handler
.weak I2C1_EV_IRQHandler
.thumb_set I2C1_EV_IRQHandler,Default_Handler
.weak I2C1_ER_IRQHandler
.thumb_set I2C1_ER_IRQHandler,Default_Handler
.weak SPI1_IRQHandler
.thumb_set SPI1_IRQHandler,Default_Handler
.weak USART1_IRQHandler
.thumb_set USART1_IRQHandler,Default_Handler
.weak USART2_IRQHandler
.thumb_set USART2_IRQHandler,Default_Handler
.weak USART3_IRQHandler
.thumb_set USART3_IRQHandler,Default_Handler
.weak EXTI15_10_IRQHandler
.thumb_set EXTI15_10_IRQHandler,Default_Handler
.weak RTC_Alarm_IRQHandler
.thumb_set RTC_Alarm_IRQHandler,Default_Handler
.weak TIM6_DAC1_IRQHandler
.thumb_set TIM6_DAC1_IRQHandler,Default_Handler
.weak TIM7_DAC2_IRQHandler
.thumb_set TIM7_DAC2_IRQHandler,Default_Handler
.weak COMP2_IRQHandler
.thumb_set COMP2_IRQHandler,Default_Handler
.weak COMP4_6_IRQHandler
.thumb_set COMP4_6_IRQHandler,Default_Handler
.weak HRTIM1_Master_IRQHandler
.thumb_set HRTIM1_Master_IRQHandler,Default_Handler
.weak HRTIM1_TIMA_IRQHandler
.thumb_set HRTIM1_TIMA_IRQHandler,Default_Handler
.weak HRTIM1_TIMB_IRQHandler
.thumb_set HRTIM1_TIMB_IRQHandler,Default_Handler
.weak HRTIM1_TIMC_IRQHandler
.thumb_set HRTIM1_TIMC_IRQHandler,Default_Handler
.weak HRTIM1_TIMD_IRQHandler
.thumb_set HRTIM1_TIMD_IRQHandler,Default_Handler
.weak HRTIM1_TIME_IRQHandler
.thumb_set HRTIM1_TIME_IRQHandler,Default_Handler
.weak HRTIM1_FLT_IRQHandler
.thumb_set HRTIM1_FLT_IRQHandler,Default_Handler
.weak FPU_IRQHandler
.thumb_set FPU_IRQHandler,Default_Handler
/************************ (C) COPYRIGHT STMicroelectronics *****END OF FILE****/
|
Q:
make acts-as-taggable gem case-sensitive
I am using the acts-as-taggable-on gem and it is forcing some tags with capital letters to be all lowercase. For example, when I try to add 'Computer Science', it adds 'computer science' instead, and the server logs show this:
ActsAsTaggableOn::Tag Load (0.6ms) SELECT "tags".* FROM "tags" INNER JOIN "taggings" ON "tags"."id" = "taggings"."tag_id" WHERE "taggings"."taggable_id" = $1 AND "taggings"."taggable_type" = $2 AND (taggings.context = 'tags' AND taggings.tagger_id IS NULL) [["taggable_id", 12], ["taggable_type", "Project"]]
=> ["computer science", "Computer Science"]
I do not want this. The acts-as-taggable-on GitHub page says to add this:
ActsAsTaggableOn.strict_case_match = true
I have added that line to the application.rb file but it is still not working. How do I make acts-as-taggable-on case-sensitive?
A:
I have tested this and it works for me. Make sure you don't have the following written in your application.rb:
ActsAsTaggableOn.force_lowercase = true
If that doesn't solve it add more info.
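For reference, these settings are commonly placed in a dedicated initializer rather than application.rb (the file path below is a convention, not a gem requirement), and both flags interact:

```ruby
# config/initializers/acts_as_taggable_on.rb
# Keep tag casing exactly as entered, so "Computer Science" and
# "computer science" are stored as two distinct tags.
ActsAsTaggableOn.strict_case_match = true

# Make sure lowercasing is NOT forced, or strict matching has no visible effect.
ActsAsTaggableOn.force_lowercase = false
```

Restart the Rails server after changing initializers, since they are only loaded at boot.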
Having said that, you might want to consider keeping your tags lowercase for two reasons:
1.) Clean URLs - you want to avoid uppercase letters in your URLs. They are not case-sensitive, but lowercase is simply prettier. If people other than yourself are allowed to tag, they could come up with strings such as "hEll0PeEpS", and you don't want that in clean URLs, right?
2.) Control over your design. This relates to the first point - if someone uses fancy tags mixing upper and lowercase randomly, they will be displayed like that wherever you list your tags.
Save them lowercase instead and use .capitalize when displaying them.
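A plain-Ruby sketch of that display-time approach (no gem required; note that String#capitalize only uppercases the first word, so multi-word tags need a per-word pass):

```ruby
# Store tags lowercase, capitalize each word only when displaying them.
tags = ["computer science", "ruby on rails"]
display = tags.map { |t| t.split.map(&:capitalize).join(" ") }
# display == ["Computer Science", "Ruby On Rails"]
```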
However, if your requirements are different and require free choice of upper- and lowercase letters, then disregard my additional thoughts :)
|
Parents
School Times
We closely monitor attendance, punctuality and its impact on children’s learning. Your child must be in class by 8.50a.m. to register. The class doors and registers shut at 9.00a.m. and for the staff and children’s safety the door remains closed. If you arrive after 9.00a.m. please bring your child to the main entrance and sign them in.
Appointments
If your child has a dentist, doctor or hospital appointment he/she must be brought to school as normal. You will need to collect and return your child from the office, and sign our appointments book. Please bring your appointment card/letter with you so we can take a copy for our files.
Attendance & Holidays in Term Times
Children of school-age who are registered at a school must, by law, attend school every day. Attendance is important, not just because the law requires it, but because it is the best way of ensuring that a child makes the most of the educational opportunities which are available to him or her. There may be occasions when a child has to miss school – for example, if he or she is very unwell.
In particular, parents should avoid taking their children out of school during term-time in order to go on holiday.
Assemblies/Performances
School celebrates birthdays and achievements every week – please look at the newsletter to see if your child has been awarded!
We hold traditional and/or modern Christmas performances in December to which parents are invited.
In the Summer Term all classes from Nursery to Year 5 will hold a class assembly for parents. To celebrate their final weeks at Broom Barns, Year 6 will perform a production to the school and parents.
Child Protection
Schools have a duty to safeguard and promote the welfare of children. This duty requires schools to have a Child Protection Policy and to cooperate with local and national safeguarding procedures. Schools may need to share information and work in partnership with other agencies when there are concerns about a child’s welfare.
Broom Barns School has a Child Protection Policy which is available for parents/carers to see. The policy is also available at https://broombarns.herts.sch.uk/our-school/policies/
We work in cooperation with the Hertfordshire Safeguarding Children Board Inter-agency Child Protection and Safeguarding Children Procedures which are available at www.hertssafeguarding.org.uk
The Headteacher is the Designated Senior Person for Child Protection. She will be happy to discuss any questions or concerns parents/carers may have about Child Protection policies and practice.
Consultation Evenings
These are held in the Autumn and Summer Terms and are an opportunity to share and discuss your child’s or children’s progress and achievements.
Contact with Family Support Worker
Mrs Taylor is available to meet with parents to discuss any non-educational concerns and offer support and guidance for parents.
Contact with Class Teachers
Parents may approach the Class Teacher to discuss any concerns or share news and information about their child/ren, please try to arrange to meet with them at the end of the day when the teacher will have more time.
Contact with Head Teacher
If you would like to speak to the Headteacher about any suggestions to improve the school or any concerns, please contact the school office to make an appointment.
Medicine
If your child is prescribed emergency, life saving medication such as asthma inhalers or epi-pens please bring the medication in to school in the original container with the prescription label attached. You will be asked to complete a ‘Request for School to Administer Medication’ form before we can administer any medication. Please note we cannot administer other medication or over the counter medications including cough lozenges and painkillers.
Parent Partners
Parents who would like to volunteer regularly are always welcome. We do have to run a Disclosure & Barring Service check (formerly known as Criminal Records Bureau check). Please contact Mrs Pomroy for further information.
Parking
The school car park is for staff and agreed visitors only. Please walk to school if you can. If you have to drive please observe the traffic regulations and park safely away from the school, avoiding congestion near the school so that children can cross safely. Do not park, or wait, on the yellow ‘KEEP CLEAR’ markings. These markings are self-explanatory and are there to keep children safe. Do not block other people’s driveways or garages with your vehicle. This is classed as an unnecessary obstruction and could result in your vehicle being ticketed and even removed. Do not park blocking dropped kerbs. These are in place to allow access to wheelchairs/prams. Blocking these will result in your vehicle being ticketed.
The safety of the children is our priority. Thank you to everyone that parks in a safe and considerate fashion.
Written Reports
These are issued in the Spring Term with achievement and progress to date and include targets for further development.
Dolce Ltd is the new school meals provider at Broom Barns. All dinners must be ordered through their website ( School Grid ) and all dinner money must be paid online directly to Dolce. Unfortunately, the school cannot take cash or order meals for pupils from September 2019.
The cost of the meals is as follows:
Infant (Reception, Year 1 & Year 2): free under the government initiative ‘Universal Infant Free School Meals’.
Junior: £2.40 per day, £12.00 per week (as of 1st September 2019).
Free School Meals
Families on Income Support may be eligible for free meals. The school is NOT able to allow free meals to be taken without authorisation from the Education Office, so it is important to make applications prior to the start of the new school year.
Your child could get a free school meal if you receive any of the following:
Income Support
Income-based Jobseekers Allowance
Income-related Employment and Support Allowance
Support under Part VI of the Immigration and Asylum Act 1999
The guaranteed element of State Pension Credit
Child Tax Credit (as long as you’re not also entitled to Working Tax Credit and you don’t get more than £16,190 a year)
Working Tax Credit run-on (paid for 4 weeks after you stop qualifying for Working Tax Credit)
Universal Credit (with annual earned income of no more than £7,400 after tax and not including any benefits you get).
Apply today if you receive any of these benefits. It takes 5 mins and in most cases we can tell you straightaway if your child can get free meals at school. Application forms are available at: Free School Meals Application Form
DOLCE F.A.Q.s
Will there be a charge if I pay by card?
There is no charge for making payments by either credit or debit card to your account.
Is there a limit to the amount I can pay?
No, you can choose to pay as little or as much as you would like. Your current balance can be checked at any time in ‘Account History’.
How often do I need to top up my account?
Payment by card is required in advance and you can top up your account as often as you like to ensure that your account is always in credit.
I am entitled to Free School Meals for my child – what do I do now?
If you have been awarded Free School Meal entitlement please contact the school office to ensure that your account has been updated.
How will my direct debit be collected?
Direct Debit payments are collected every month on the payment date you select. Dolce will send you a statement prior to collection, listing all of the meals that your child has eaten, the total amount to be taken and payment date reminder.
The school uniform is based on the school colours of grey and red. Good quality red sweatshirts, polo shirts together with grey fleeces are available for sale from most supermarkets and chain stores. School uniform makes an important contribution to the ethos of the school and the work and attitude of the children. Wearing a school uniform reinforces a sense of community belonging and avoids their best clothes being spoiled. Please ensure that your child is dressed in clothes that he/she can easily manage. Younger children benefit from pull-up trousers and skirts and non laced shoes.
Your co-operation in ensuring that your child wears uniform to school is essential.
The children will need red, black or white shorts and a white T-shirt for P.E. and games. P.E. lessons in the hall are usually done in bare feet, but trainers are required for outdoor lessons. Please ensure your child is provided with suitable kit. It is a county health and safety rule that jewellery is NEVER worn whilst doing physical activities. Children will be requested to remove earrings or they will be required to wear tape over earrings – Parents/Carers may be asked to provide tape.
Since many of the areas are carpeted, children are expected to have indoor shoes (plimsolls) to change into on arrival at school. Black slip-on plimsolls are ideal. A drawstring shoe bag to keep them in would be most helpful. A shoe bag is available for sale from the school office.
School Uniform Expectations
We insist on the following simple, but smart, uniform for all students at the school and rely on parents for their full support;
Mobile phones, tablets, iPods, MP3 players and earphones should not be brought into school.
ONLY Year 6 pupils have the privilege of bringing in a mobile phone on the understanding;
1. They hand it to the School Office where it will be kept and will only be returned at the end of the day.
2. The school will not accept responsibility for these items so please ensure you have appropriate insurance cover as the school’s insurance does not cover loss or damage.
If you would like to book any of the following clubs please do so through the School Gateway App.
The charge for Sports Clubs is £1 per child, per session and £2 per session for Art & Games Clubs. We are sure you will agree that these charges are amazingly low for any club. For families claiming free school meals (FSM), not Universal Infant Free School Meals, the clubs are free. Children will only be offered a place at a club once payment has been received or the school has checked FSM allocations. If you wish to apply for FSM please apply online at: https://www.hertfordshire.gov.uk/home.aspx
All clubs are charged in advance.
WRAP & Sports Clubs are non-refundable if a child misses a session or drops out. If for any reason the school has to cancel a club you will be refunded for that session.
Due to rising numbers, WRAP Breakfast and WRAP Arts & Crafts Club will not charge if a child is booked in but is off sick or at least 48 hours notice has been given to the school either by email, text or phone call for any other cancellations.
Sports Clubs - Mr Kalaiarasu / Mr Barrow (£1 per session):
Club | Times | Year | Cost | Dates
Boys Football & Girls Football | 3.20-4.20pm | 5 & 6 | £6.00 | 24th Feb to 30th Mar
Curling | 3.20-4.20pm | 3 & 4 | £6.00 | 25th Feb to 31st Mar
Archery | 3.20-4.20pm | 5 & 6 | £6.00 | 25th Feb to 31st Mar
Let’s Get Active | 3.20-4.20pm | Rec, 1 & 2 | £6.00 | 26th Feb to 1st Apr
Please check when individual clubs stop running.
Clubs run by Miss Harris, 7th Jan to 16th July 2020:
Club | Times | Year | Cost
Monday – Games | 3.20-4.20pm | Rec to Y6 | £2 per session
Tuesday – Arts & Crafts | 3.20-4.20pm | Rec to Y6 | £2 per session
Wednesday – Toys | 3.20-4.20pm | Rec to Y6 | £2 per session
Thursday – Arts & Crafts | 3.20-4.20pm | Rec to Y6 | £2 per session
Friday – Computer & Books | 3.20-4.20pm | Rec to Y6 | £2 per session
Important please note:
A Late Collection Fee will be charged if your child/ren are not collected from clubs at 4.20pm. There will be a charge of £5 for every 5 minutes per child.
All children must be collected and signed out of the clubs by a known adult between October half term and February half term. Year 5 and Year 6 pupils will only be allowed to walk home from clubs between February half term and October half term, and only with written parental permission.
If your child is receiving free school meals (not Universal Infant Free School Meals) the clubs are free. Please book through the office. Please note if your child misses three sessions in a row, they may lose their place.
Children will only be offered a place at a club once payment has been received.
All clubs are charged in advance and are non-refundable if a child misses a session or drops out. If for any reason the school has to cancel a club you will be refunded for that session.
Nurture Club
At Broom Barns we offer a lunchtime nurture club. All sorts of children join nurture clubs and groups, for all sorts of reasons. The purpose of our nurture club is to encourage the child to want to come to school and feel that they are able to take part in all school activities; to take pride in their learning and to learn to grow at a pace that is appropriate to them. The nurture club also offers children a fantastic opportunity to learn broader life lessons in addition to their school work – for example learning to join in with their class, making new friends, discovering their own talents and baking cakes!
Stevenage Sporting Futures Team
With the collaborative partnership of our school and Stevenage Sporting Futures Team we are able to offer the children a fully inclusive variety of sporting experiences, festivals and competitions, along with support and training for our staff.
We are proud to be a Sports Premium Plus school of the Stevenage Sporting Futures Team.
The club is run by trained staff who plan the activities for children to have fun and learn whilst playing.
Activities include games, dance and painting. During the summer we offer outdoor activities. There is also a quiet space for reading and study, with a staff member to help and encourage the children.
Please note bookings need to be paid for at the time of booking. Bookings and payments are made through the School Gateway app.
Bookings and payments can be made quickly and easily via the School Gateway app below after you have registered.
Breakfast
When a decision is made to close the school in severe weather, it is to ensure the safety of all pupils, parents and staff. The decision is usually made in the morning and depends on local factors. You will be informed that the school is closed in the following ways:
Everbridge – Hertfordshire County Council now uses the Everbridge website for its dedicated page listing the status (open or closed) of every school in the county. Log onto Everbridge by clicking HERE and click on the ‘Sign Up’ link. Once you have completed the registration for all the schools of your choice, you will receive notification messages during severe weather.
Local Radio Stations – Three Counties Radio station will make regular announcements.
Schoolcomms – an e-mail/text message will be sent via Schoolcomms as soon as possible.
Please be aware the internet and mobile services get very busy at these times and you may need to check them on a regular basis. Please ensure we always have up-to-date contact numbers so we can text & email if necessary. Thank you.
Homework Help
A service for students from KS2, KS3, GCSE and AS/A2 Level to ask a teacher a question on homework, coursework or revision (replies in 24 hours) and to search the database of over 15,000 previously asked questions.
(Newser) – "I was unaware of everything," says Noor Salman in her first interview since her husband killed 49 people at Orlando's Pulse nightclub. As the FBI considers pressing charges against her, including lying to authorities, Salman tells the dark tale of her marriage to Omar Mateen. Six months after they were married, while Salman was pregnant with their son, Mateen began beating her, Salman tells the New York Times. It started with a punch to the shoulder; soon, Mateen was pulling her hair, choking her, and threatening to kill her. "He had no remorse," says Salman, 30. Scared of her husband, Salman says she had no idea what Mateen was planning, though she saw him buy ammunition and once drove with him to Orlando from their home in Fort Pierce, Fla. Salman says she had no idea Mateen was canvassing Pulse during their visit; Mateen said only that he wanted to take "a drive."
The ammunition was no surprise, Salman says, since Mateen often went to a shooting range. Though Mateen watched jihadist videos, Salman says she didn't believe he was a threat because he'd been cleared by the FBI after telling colleagues he was a member of Hezbollah. A professor who studies women's roles in terror groups is skeptical of Salman's professed ignorance: "It's possible she didn't know because he was not confiding in her, but she does have every incentive in the world to retell this story as a different kind of victim," she says. In fact, Salman says she only learned about the Pulse shooting hours afterward, though she'd called looking for her husband around 4am. In a text message, he wrote, "I love you babe." "I don't condone what he has done. I am very sorry for what has happened," says Salman. "I just want people to know that I am human." Read the full interview here. (Read more Pulse Orlando shooting stories.)
// OJ: https://leetcode.com/problems/sum-of-mutated-array-closest-to-target/
// Author: github.com/lzl124631x
// Time: O(NlogN)
// Space: O(1)
// Ref: https://leetcode.com/problems/sum-of-mutated-array-closest-to-target/discuss/463306/JavaC%2B%2BPython-Just-Sort-O(nlogn)
class Solution {
public:
    int findBestValue(vector<int>& A, int target) {
        sort(begin(A), end(A));
        int N = A.size(), i = 0;
        // While even capping every remaining element at A[i] cannot reach the
        // target, A[i] must stay at full value: commit it and shrink the target.
        while (i < N && target > A[i] * (N - i)) target -= A[i++];
        // If every element was committed, the best cap is the maximum element.
        // Otherwise the ideal cap is target / (N - i); subtracting a small
        // epsilon before rounding breaks ties toward the smaller value.
        return i == N ? A[N - 1] : round((target - 0.0001) / (N - i));
    }
};
Christmas Dinner Menu
On Tuesday 11th December we are bringing you a special Christmas dinner. The cost will be £2 per meal (reception, year 1 and year 2 are free). You will need to order your meal via parent pay before Friday 7th December.
1. Context {#sec80504}
==========
Bacterial pathogens and their toxins cause illnesses that spread through populations. Some bacteria produce enterotoxins, such as cholera toxin and the heat-labile and heat-stable enterotoxins of *Escherichia coli*. Others produce cytotoxins, such as the shiga toxins of *Shigella*, which damage cells. Both classes can cause diarrheal disease ([@A17473R1], [@A17473R2]). Diarrhea develops when pathogenic bacteria overcome the host's normal microbial flora ([@A17473R3]). In the 1980s, diarrhea caused approximately 4.6 million deaths per year; that figure has since decreased to 1.6-2.1 million. Most of these deaths occur in infants and young children under the age of 5 years in developing countries. Diarrhea presents with symptoms such as nausea, vomiting, fever, and abdominal pain; important risk factors include age, gastric acidity, antibiotic use, immunosuppression, and poor sanitation ([@A17473R4]). One form of the syndrome is travelers' diarrhea (TD), the most common cause of disability among international travelers to developing countries. Infectious diarrhea has become one of the main health problems worldwide.
2. Evidence Acquisition {#sec80505}
=======================
A rapid detection method, including identification of the pathogens circulating in a population, is critical for disease control ([@A17473R5]). The major causes of TD are *E. coli*, *Shigella* spp., *Campylobacter* spp., *Salmonella* spp., *Aeromonas* spp., *Plesiomonas* spp., and non-cholera *Vibrio* species. Since the 1970s, enterotoxigenic *E. coli* (ETEC) has been the most important pathogen responsible for TD ([@A17473R6]). A key objective in the identification of enteric bacteria is the development of efficient, rapid, and simple detection methods ([@A17473R7], [@A17473R8]). Rapid methods can be classified into modified conventional methods, biosensors, immunological methods, and nucleic acid-based assays, which are described in this article.
3. Results {#sec80533}
==========
3.1. Conventional Detection Methods {#sec80507}
-----------------------------------
In these methods, detection of bacteria and viruses mainly depends on culture of the food sample (using microbiological media), biochemical identification of bacterial genera, or cell culture techniques ([@A17473R9]). These methods are sensitive and inexpensive, but they are both time- and material-consuming because of the initial enrichment step: a minimum of 5-7 days is required to identify an isolated colony. This can delay proper diagnosis and treatment, resulting in longer hospital stays ([@A17473R10]). The term culture describes the biological amplification of viable, cultivable bacteria on manufactured growth media. Isolating a specific bacterial species from a mixed culture without pre-enrichment is difficult; one option is a magnetic separation assay using a magnetic separator ([@A17473R11]). To improve conventional methods and reduce costs, several modifications have been made to sample preparation, plating, and colony counting to provide faster and easier methods.
### 3.1.1. The Analytical Profile Index {#sec80506}
The Analytical Profile Index (API) system is a refinement of the conventional method, developed for quick identification of members of the Enterobacteriaceae family and other Gram-negative bacteria. The system consists of a plastic strip with 20 small reaction tubes containing separate compartments. The API test system is manufactured by bioMerieux Corp., Marcy l'Etoile, France. The assay is considered the "gold standard", with an overall sensitivity of 79%; reactions develop within 24 hours. The system is very useful for identifying pathogenic *Yersinia* isolates and has the highest sensitivity at both the genus and the species level ([@A17473R12]).
3.2. Immunological-Based Methods {#sec80508}
--------------------------------
Immunodetection has become a widely used approach for enteric bacteria because it permits sensitive and specific detection. Antibody-based immunological assays are employed for the detection of bacterial cells, spores, viruses, and toxins ([@A17473R13]). Methods based on the antigen-antibody interaction, using polyclonal or monoclonal antibodies, are applied to the detection of food-borne pathogens. Although immunological methods are not as specific and sensitive as nucleic acid-based detection, they are faster, more robust, and able to detect both the contaminating organisms and their toxins, which may not be detectable from the organism's genome alone. Some of these methods are described below ([@A17473R13]).
3.3. Enzyme-Linked Immunosorbent Assay {#sec80513}
--------------------------------------
This method is based purely on immunological recognition and belongs to the heterogeneous assays. The enzyme-linked immunosorbent assay (ELISA) combines the specificity of antibodies with the sensitivity of simple enzyme assays by using antibodies or antigens coupled to an easily assayed enzyme. ELISA is similar to radioimmunoassay (RIA), but uses an enzyme, rather than a radioactive isotope, coupled to the antigen or antibody. Several variants exist, including direct ELISA, indirect ELISA, and sandwich ELISA ([@A17473R14]).
### 3.3.1. Indirect Enzyme-Linked Immunosorbent Assay {#sec80509}
In this format, the target antigen is coated onto the solid phase of an ELISA plate. When serum samples are added, specific antibodies bind the coated antigen. The plates are washed to remove unbound antibodies, and anti-immunoglobulin antisera conjugated to a peroxidase enzyme are then added. When the substrate buffer is added, positive samples produce a color change. The color is measured at a defined wavelength with a spectrophotometer and is proportional to the level of antibodies present in the sample ([@A17473R14]).
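Since the measured color is proportional to the antibody level, an unknown sample is usually read off a standard curve of known concentrations. A minimal sketch of that step, using linear interpolation between bracketing standards (all concentrations and absorbance values below are hypothetical, not from the cited work):

```python
# Sketch: estimating concentration from an indirect-ELISA absorbance reading
# by linear interpolation on a standard curve. All values are hypothetical.

def interpolate_concentration(absorbance, standards):
    """standards: list of (concentration, absorbance) pairs."""
    pts = sorted(standards, key=lambda p: p[1])
    for (c1, a1), (c2, a2) in zip(pts, pts[1:]):
        if a1 <= absorbance <= a2:
            # linear interpolation between the two bracketing standards
            return c1 + (c2 - c1) * (absorbance - a1) / (a2 - a1)
    raise ValueError("absorbance outside the standard curve")

standards = [(1.0, 0.10), (10.0, 0.40), (100.0, 1.20)]  # (ng/mL, OD450)
print(interpolate_concentration(0.80, standards))  # falls between 10 and 100 ng/mL
```

In practice a four-parameter logistic fit replaces plain interpolation, but the reading principle is the same.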
### 3.3.2. Competitive Enzyme-Linked Immunosorbent Assay {#sec80510}
The competitive ELISA (cELISA) can be used to detect and quantify antibody or antigen by a competitive format. For the detection of specific antibodies, the cELISA has largely replaced the iELISA in large-scale screening and serosurveillance. It offers significant advantages: samples from many species may be tested without species-specific enzyme-labeled conjugates, and relatively crude antigens may be used, provided the 'detecting antibody' has the desired specificity (many antigens are extremely difficult or time-consuming to purify, and in an indirect assay crude preparations produce high background through nonspecific binding). The principle of the competitive assay (for antibody detection) is competition between the test serum and the detecting antibody; specific binding of the detecting antibody is revealed with an appropriate anti-species conjugate. Binding of antibodies in the test serum to the antigen prevents binding of the specific detecting antibody, so a reduction in the expected color indicates a positive sample ([Figure 1](#fig15617){ref-type="fig"}) ([@A17473R14]).
Figure 1. Principle of the competitive ELISA (image not preserved in this version). {#fig15617}
### 3.3.3. Immunofluorescence Assay {#sec80511}
In this method, antibodies are labeled with a fluorescent reporter molecule, fluorescein isothiocyanate (FITC). The fluorescent antibody is used to detect bacteria directly in clinical specimens and has been applied to rapid detection of bacteria in foods ([@A17473R15], [@A17473R16]). Note that the polyclonal antibodies used in the procedure lack specificity, and the test requires a well-trained microbiologist. A combination of the fluorescent antibody with DEFT was used to detect *E. coli* O157:H7 in milk and apple juice ([@A17473R16]). This assay had a sensitivity of about 10^3^ cells/mL.
### 3.3.4. Immunomagnetic Separation {#sec80512}
Immunomagnetic separation (IMS) utilizes paramagnetic beads (about 2-3 μm in size, at about 10^6^-10^8^/mL), which are surface-activated and can be coated with antibody by incubation in the refrigerator for varying periods of time. Unattached antibody is removed by washing. The coated beads are then added to a semi-liquid food mixture containing the antigen (toxin, or whole cells of Gram-negative bacteria), mixed thoroughly, and incubated for a few minutes to several hours so that the antigen reacts with the antibody-coated beads. The assay is thus used to isolate biological targets from samples and has been successful in many fields, including molecular biology, immunology, and microbiology. Cells, nucleic acids, proteins, and other biomolecules can serve as magnetic targets. The method saves time and is useful for large numbers of samples ([@A17473R17], [@A17473R18]).
3.4. Molecular-Based Methods {#sec80517}
----------------------------
### 3.4.1. Polymerase Chain Reaction {#sec80514}
The polymerase chain reaction (PCR) was invented in 1983. The assay can detect a single copy of a target DNA sequence and amplifies a desired region of the genome into billions of copies from among a complex mixture of heterogeneous sequences ([@A17473R19]). PCR is used for nucleic acid-based detection of pathogenic microorganisms in food. It has advantages over culture and other detection methods, including specificity, sensitivity, rapidity, accuracy, and the capacity to detect small amounts of target nucleic acid in a sample ([@A17473R18], [@A17473R20]). PCR-based methods are used to detect a broad range of pathogens, such as *Staphylococcus aureus* ([@A17473R21]), *Listeria monocytogenes* ([@A17473R22]), *Salmonella* spp. ([@A17473R23], [@A17473R24]), *Bacillus cereus* ([@A17473R24]), and *Campylobacter jejuni* ([@A17473R25]). Variants of PCR include real-time PCR ([@A17473R25]-[@A17473R28]), multiplex PCR, and reverse transcriptase PCR (RT-PCR) ([@A17473R23]); RT-PCR is also performed as multiplex RT-PCR ([@A17473R29]-[@A17473R31]) and real-time RT-PCR ([@A17473R29], [@A17473R32]-[@A17473R34]).
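The "single copy to billions" claim follows from the exponential nature of amplification: each cycle can at most double the template, so an ideal run yields N₀·2^cycles copies. A back-of-envelope sketch (the efficiency parameter is an illustrative simplification; real runs fall below 100%):

```python
# Sketch: ideal PCR amplification doubles the target each cycle, so
# N_final = N_0 * 2**cycles at 100% efficiency. The efficiency parameter
# models the fraction of templates duplicated per cycle.

def amplified_copies(initial_copies, cycles, efficiency=1.0):
    return initial_copies * (1 + efficiency) ** cycles

print(amplified_copies(1, 30))       # a single template after 30 ideal cycles
print(amplified_copies(1, 30, 0.9))  # the same run at 90% per-cycle efficiency
```

Thirty ideal cycles already exceed a billion copies from one template, which is why PCR can reach single-copy sensitivity.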
### 3.4.2. Real-Time PCR (Kinetic PCR or Quantitative real Time PCR) {#sec80515}
Despite the development of alternative amplification technologies, PCR remains the most widely used method for the research, detection, and diagnosis of pathogens. Real-time PCR combines PCR chemistry with fluorescent-probe detection of the amplified product and provides an opportunity for rapid detection of pathogens in food ([@A17473R35]-[@A17473R38]). The method is simpler to carry out than conventional PCR, and results are available much sooner ([@A17473R39], [@A17473R40]). Two kinds of chemistries are available for detecting real-time PCR products: fluorescent probes that bind specifically to defined DNA sequences, and fluorescent dyes that intercalate into any dsDNA ([@A17473R41]). The simplest and most cost-effective are the sequence-independent DNA-binding dyes, such as SYBR Green I and SYBR Gold, which bind dsDNA ([@A17473R42]). Sensitivity and specificity, low contamination risk, ease of performance, and speed have made real-time PCR an appealing alternative to conventional culture-based or immunoassay-based testing ([@A17473R39]). TaqMan PCR (fluorescent-probe-based real-time PCR) amplifies target nucleic acid sequences of selected microbes in samples collected from complex biological environments ([@A17473R39], [@A17473R42]).
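Real-time PCR is quantitative because the cycle at which fluorescence crosses a threshold (Ct) is linear in the log of the starting copy number. A sketch of the standard-curve arithmetic, Ct = slope·log10(N₀) + intercept (the slope and intercept values below are hypothetical, not from the cited studies):

```python
# Sketch: quantification from a real-time PCR standard curve.
# Slope and intercept are hypothetical illustration values.

def efficiency_from_slope(slope):
    # 100% efficiency corresponds to a slope near -3.32,
    # i.e. a tenfold dilution shifts Ct by about 3.32 cycles
    return 10 ** (-1.0 / slope) - 1.0

def copies_from_ct(ct, slope, intercept):
    return 10 ** ((ct - intercept) / slope)

slope, intercept = -3.32, 38.0
print(round(efficiency_from_slope(slope), 3))   # close to 1.0, i.e. ~100%
print(copies_from_ct(24.72, slope, intercept))  # roughly 10^4 starting copies
```

In a real instrument the slope and intercept come from fitting Ct against a dilution series of known standards.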
### 3.4.3. Multiplex PCR {#sec80516}
Multiplex PCR (simultaneous amplification of multiple gene targets) uses two or more primer pairs directed at pathogen-specific unique sequences within a single reaction, allowing the simultaneous amplification of several target sequences ([@A17473R43]). The method is applied to the simultaneous detection of several food-borne pathogens, for example *E. coli* O157:H7, *Salmonella* spp., and *S. aureus*. Its advantage is that multiple targets are amplified without extra time, cost, or sample volume; its disadvantage is that competition between the oligonucleotide pairs can reduce sensitivity compared with single reactions ([@A17473R44]).
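One practical requirement for running several primer pairs in one tube is that their melting temperatures be close, so all pairs anneal at the same cycling conditions. A minimal sketch of such a compatibility check using the simple Wallace rule, Tm = 2(A+T) + 4(G+C); the primer sequences are invented, and real design tools use nearest-neighbor thermodynamics instead:

```python
# Sketch: a quick multiplex-PCR primer compatibility check using the Wallace
# rule Tm = 2*(A+T) + 4*(G+C). Primer sequences here are made up.

def wallace_tm(primer):
    p = primer.upper()
    return 2 * (p.count("A") + p.count("T")) + 4 * (p.count("G") + p.count("C"))

def compatible(primers, max_spread=5):
    # all primers in the multiplex should melt within a few degrees of each other
    tms = [wallace_tm(p) for p in primers]
    return max(tms) - min(tms) <= max_spread

primers = ["AGCTGACCTGAAGTCAGCT", "TGCATGGCAATCCGTAGCT"]
print([wallace_tm(p) for p in primers])
print(compatible(primers))
```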
3.5. Microarrays {#sec80518}
----------------
Microarrays are used as large-scale screening systems for simultaneous identification and are very powerful, with far greater capacity (100-1000x) than other molecular methods (e.g., real-time PCR) that can analyze only a small number of targets. The assay is also used for simultaneous diagnosis and detection ([@A17473R45]). A simple microarray consists of a solid surface (such as a nylon membrane, glass slide, or silicon chip) onto which small quantities of ssDNA from different known bacterial species are attached. When ssDNA from unknown species is exposed to this array (the DNA chip), each strain binds to its individual site on the chip.
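The binding step is Watson-Crick hybridization: a fixed single-stranded probe captures sample strands that contain its reverse complement. A toy sketch of that matching logic (probe and sample sequences are invented for illustration):

```python
# Sketch: a microarray spot "lights up" when a sample strand contains the
# reverse complement of its immobilized probe. Sequences are toy examples.

COMP = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq):
    return seq.translate(COMP)[::-1]

def hybridizes(probe, sample_strand):
    # the chip-bound probe pairs with any fragment carrying its reverse complement
    return reverse_complement(probe) in sample_strand

probe = "GATTACA"
sample = "CCTGTAATCGG"   # contains TGTAATC, the reverse complement of GATTACA
print(hybridizes(probe, sample))
```

Real arrays score fluorescence intensity per spot rather than exact string matches, but the complementarity principle is the same.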
3.6. Detection Based on Fluorescent In Situ Hybridization Assay {#sec80519}
---------------------------------------------------------------
Fluorescent in situ hybridization (FISH) is a sensitive and rapid molecular technique useful for phylogenetic, ecologic, diagnostic, and environmental studies in all fields of microbiology. A specific oligonucleotide probe is labeled with a fluorochrome for identification of pathogens by fluorescence microscopy. FISH has advantages over conventional culture methods, including avoidance of inhibitory substances; identification of viable but non-cultivable cells (VBNC); rapid availability of quantitative results; simultaneous identification of different species in the same sample; relatively low cost; and ease of use. PCR is more sensitive than FISH, but the sensitivity of FISH can be increased considerably by an enrichment step. The technique is applied to food samples for the detection of food-borne pathogens such as *Staphylococcus* spp., *E. coli*, *Salmonella* spp., *Campylobacter* spp., and *L. monocytogenes* ([@A17473R46]).
3.7. Detection Based on Loop-Mediated Isothermal Amplification (LAMP) Assay {#sec80520}
---------------------------------------------------------------------------
Traditional diagnosis is carried out by culturing bacteria on agar plates followed by examination of phenotypic and serological properties, or by histological examination ([@A17473R47]). These techniques have disadvantages, such as the need for prior isolation of the pathogen and insufficient sensitivity to detect low levels of pathogen ([@A17473R48]). Molecular techniques such as the polymerase chain reaction (PCR) solve these problems and increase the sensitivity and specificity of pathogen detection ([@A17473R49]-[@A17473R51]). Although PCR techniques are very sensitive, the need for a high-precision thermal cycler has prevented these powerful methods from being widely used in the field or by private clinics as routine diagnostic tools. Alternative isothermal nucleic acid amplification methods, such as nucleic acid sequence-based amplification (NASBA) and loop-mediated isothermal amplification (LAMP), which require only a simple heating device, have been developed for rapid and sensitive detection of target nucleic acids ([@A17473R52]-[@A17473R54]).
The LAMP method can produce a tremendous amount of DNA from a few template copies in less than an hour, using only one type of enzyme and 4-6 specific primers, with no special reagents required. One advantage of LAMP over PCR is reduced risk of contamination: all steps from amplification to detection are conducted within one reaction tube under isothermal conditions. The assay is therefore easy to perform, requiring only a water bath or heating block to maintain a constant temperature while amplification proceeds.
3.8. Detection Based on Metagenomics Assay {#sec80521}
------------------------------------------
Metagenomics is the culture-independent study of microbial populations (the microbiome) by analyzing a sample's nucleotide sequence content ([@A17473R55]). The method amplifies and sequences the whole DNA and RNA content of a sample, followed by extensive filtering of the data with dedicated software, and is useful for open-ended detection of existing or new pathogens. Important factors are the ratio of target to total amplified sequences ([@A17473R56]), sample selection (the amount of pathogen in the targeted sample), and the time required for data acquisition and analysis. Two limitations are currently of major concern: because the method relies on similarity to known pathogens, unmatched sequences cannot be assigned; and software solutions are needed to facilitate interpretation of the results. To minimize these disadvantages, three approaches are currently considered: increasing the pathogen load (targeting samples with a high probability of pathogen multiplication), reducing the resulting data sets by limiting the number of targeted pathogens, and excluding host reference sequences from the analysis while optimizing the bioinformatics for the application at hand.
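The host-read exclusion step mentioned above can be illustrated with a toy filter that discards reads sharing too many exact k-mers with a host reference. This is only a sketch of the idea: production pipelines use real read mappers or k-mer classifiers, and the sequences and threshold below are invented:

```python
# Sketch: excluding host reference sequences before pathogen classification,
# using exact k-mer overlap as a stand-in for a real read mapper.

def kmers(seq, k):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def filter_host_reads(reads, host_reference, k=8, threshold=0.5):
    """Keep reads whose k-mer overlap with the host reference is below threshold."""
    host = kmers(host_reference, k)
    kept = []
    for read in reads:
        rk = kmers(read, k)
        overlap = len(rk & host) / len(rk) if rk else 0.0
        if overlap < threshold:
            kept.append(read)
    return kept

host = "ATGGCGTACGTTAGCCTAGGATCC"
reads = ["ATGGCGTACGTTAG",      # matches the host reference -> discarded
         "TTTTCCCCAAAAGGGG"]    # shares no host k-mers -> kept for classification
print(filter_host_reads(reads, host))
```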
3.9. Detection Based on Pulsed Field Gel Electrophoresis {#sec80522}
--------------------------------------------------------
Pulsed-field gel electrophoresis (PFGE) is a useful gold-standard method for typing food-borne zoonotic bacteria, the most important of which are *S. enterica*, *Campylobacter* spp., *E. coli*, *Shigella* spp., *Vibrio cholerae*, and *L. monocytogenes* ([@A17473R57], [@A17473R58]). The technique is based on molecular analysis following culture and isolation of the bacterial strain from the food product. PFGE can interrogate the full genome; however, small DNA elements such as plasmids are not visible on PFGE. Microbiological culture and isolation are therefore needed before the PFGE assay ([@A17473R59]).
3.10. Sensor-Based Pathogen Detection Systems {#sec80531}
---------------------------------------------
### 3.10.1. Biosensor {#sec80523}
A biosensor is an analytical device that converts a biological response into an electrical signal. Biosensor technology is the fastest-growing approach to pathogen detection compared with PCR, immunological methods, culture, and gel electrophoresis. A biosensor includes a bioreceptor element, such as a microorganism, tissue, cell, enzyme, antibody, nucleic acid, biomimic, or bacteriophage (phage), which recognizes the target analyte, and a transducer based on optical, acoustic, or electrochemical signal detection, which converts the recognition event into a measurable electrical signal.
### 3.10.2. Phage-Based Pathogen Detection Systems {#sec80524}
A bacteriophage is a virus that infects specific strains of bacteria. Enzymes, antibodies, nucleic acids, and biomimetic materials are all used as biomolecular detection agents, each with advantages and disadvantages; bacteriophages serve as biorecognition elements for the detection of different pathogenic microorganisms. Phages attach to specific receptors on the bacterial surface and inject their genetic material into the cell. These particles are 20-200 nm in size and recognize bacterial receptors by means of their tail-spike proteins (e.g., the tail-spike protein of *Salmonella* phage P22). Because this recognition is highly specific, it can be used for typing bacteria and for developing pathogen-specific detection technologies ([@A17473R18], [@A17473R60], [@A17473R61]). Recognition of antigens on the bacterial surface by specific antibodies is an alternative approach that needs no time-consuming initial sample preparation; antibodies, however, are costly and cumbersome to prepare.
Their limited shelf life also affects performance. It has therefore been demonstrated that antibodies can be substituted with bacteriophages for bacterial detection. Phages offer several advantages in this setting, including long shelf life, stability, and ease of isolation ([@A17473R62], [@A17473R63]). Bacteriophages have been used in ELISA-based assays to detect bacterial strains; with this approach, specific strains of *S. enterica* and *E. coli* could be detected. The sensitivity of the assay was about 10^5^ bacterial cells/well (10^6^/mL), comparable with other ELISA tests that detect intact bacterial cells without an enrichment step. The specificity of the assay depends on the bacteriophage used. Bacteriophages are abundant in the environment, and their preparation is simple, rapid, cheap, and easy ([@A17473R63]).
### 3.10.3. Surface Plasmon Resonance {#sec80525}
Surface plasmon resonance (SPR) is a common method that uses reflectance spectroscopy for pathogen detection. Like ELISA, it allows rapid screening of samples and offers selectivity, sensitivity, ease of performance, and simultaneous detection; the two methods can also be combined. A biosensor here is an analytical tool composed of an immobilized biological ligand that 'feels' the analyte and a physical transducer that translates this event into an electronic signal. SPR-based biosensors detect very small changes at the sensor surface and have been used for the detection of food-borne pathogens such as *L. monocytogenes*, *Salmonella* spp., *E. coli* O157:H7, and *C. jejuni*; multi-channel SPR biosensors have demonstrated the simultaneous detection of multiple target analytes from complex mixtures ([@A17473R64]-[@A17473R66]).
The assay can detect concentrations in the picomolar range and allows interactions to be studied in real time without labeling, making it suitable for quick toxin detection. SPR biosensors use a thin metal film (gold is most suitable) between two transparent media of different refractive index, for example a glass prism and the sample solution. When one molecule binds its partner attached to the gold film, the refractive index of the solution near the surface changes, and the angle of minimum reflected intensity shifts. SPR can directly determine bacterial and plant toxins of large molecular weight. In a comparison of ELISA and SPR, ELISA was more sensitive, but sample treatment with ELISA took six hours, whereas with SPR it took only 20 minutes ([Table 1](#tbl20458){ref-type="table"}).
###### Bacterial Toxin Detection in Milk, Seawater Sample ([@A17473R67])
Toxin MW (Da) Type of Detection Detection Limit
------------------- --------- ------------------- -----------------
**Enterotoxin B** 28,400 direct 1.96 ng/mL
**Enterotoxin B** 28,400 direct not determined
**Enterotoxin B** 28,400 direct 10 ng/mL
**Enterotoxin B** 28,400 direct 10 ng/mL
**β-toxin** 35,000 direct not determined
**Tetanus toxin** 150,000 direct 0.028 Lf/mL
### 3.10.4. Detection Based on an Electrochemical Biosensor {#sec80526}
This is a rapid, novel electrochemical biosensor method in which polypropylene microfiber membranes are coated with conductive polypyrrole and functionalized with antibody for biological capture and detection of enteric bacteria. Glutaraldehyde is used as the coupling chemistry: a pathogen-specific antibody is covalently bound to the conductive microfiber membranes, which are then blocked with bovine serum albumin solution. Antibodies are useful biosensor elements because the antibody-antigen reaction offers high binding efficiency and specificity, which makes them especially marketable for use with food materials. The membranes are exposed to pathogen cells, washed in Butterfield's phosphate buffer, and placed in a phosphate-buffer electrolyte solution. Pathogen captured on the fiber surface increases the resistance at the electro-textile electrode surface, converting the biological recognition event into a measurable electrical signal and indicating a positive result. The method is generally less expensive than optical detection and is easier to use with turbid samples ([@A17473R68]).
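The readout reduces to comparing the electrode's resistance before and after exposure. A minimal sketch of that decision rule, with an entirely made-up 10% threshold and made-up resistance values, just to show the shape of the calculation:

```python
# Sketch: calling a sample positive when the electrode resistance rises by more
# than a fixed fraction after pathogen capture. Threshold and values are made up.

def is_positive(baseline_ohms, measured_ohms, threshold=0.10):
    relative_change = (measured_ohms - baseline_ohms) / baseline_ohms
    return relative_change > threshold

print(is_positive(1000.0, 1250.0))  # 25% rise -> positive
print(is_positive(1000.0, 1020.0))  # 2% rise -> negative
```

A deployed sensor would calibrate the threshold against known negative controls rather than fix it a priori.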
### 3.10.5. Detection Based on Evanescent Wave Fiber-Optic Biosensors {#sec80527}
The development of biosensors has greatly improved the sensitivity, selectivity, and speed of microbial pathogen and biological toxin detection. Biosensors are detection devices that use living organisms or biological molecules, such as antibodies, nucleic acids, or enzymes, to recognize and bind target analytes in the sample matrix. After binding, the presence of the target analyte is reported by an electrical signal, a colorimetric or fluorescent indicator reaction, or some other recognition response. Because the detection of microbial pathogens and biological toxins in food, water, and human specimens is difficult, this assay relies on immunological reactions for their capture and detection. It can identify such target analytes in minutes rather than days, directly from complex matrix samples, using antibody-based assays, thereby significantly improving detection sensitivity, selectivity, and speed. In addition, live target organisms can be recovered from fiber-optic waveguides to determine viability, confirm identification, and be preserved as evidence.
This technology has the potential for rapid detection of microorganisms, toxins, and other analytes. Evanescent wave fiber-optic biosensors exploit evanescent wave detection: electromagnetic waves propagate within an optical fiber by total internal reflection at the exposed surface, inducing an evanescent electromagnetic field in the surrounding dielectric medium that decays exponentially with distance from the surface. When fluorescent probes are used, bound fluorophore molecules immediately adjacent to the fiber surface are strongly excited, and some of the fluorescent signal is coupled back into the optical fiber ([@A17473R69], [@A17473R70]).
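The depth of that exponentially decaying field is characterized by the penetration depth, d_p = λ / (4π·sqrt(n₁²·sin²θ − n₂²)). A sketch of the computation; the wavelength, refractive indices, and launch angle below are representative values for a silica fiber in an aqueous sample, not figures from the cited studies:

```python
import math

# Sketch: penetration depth of the evanescent field at a fiber/sample interface,
# d_p = wavelength / (4*pi*sqrt(n1^2 * sin^2(theta) - n2^2)).
# Indices and angle are representative values, not from the cited work.

def penetration_depth(wavelength_nm, n_fiber, n_sample, theta_deg):
    theta = math.radians(theta_deg)
    term = n_fiber**2 * math.sin(theta)**2 - n_sample**2
    if term <= 0:
        raise ValueError("angle below critical angle: no total internal reflection")
    return wavelength_nm / (4 * math.pi * math.sqrt(term))

# 635 nm light in a silica fiber (n ~ 1.46) against an aqueous sample (n ~ 1.33)
print(round(penetration_depth(635, 1.46, 1.33, 75), 1))  # depth in nm
```

The depth comes out on the order of 100 nm, which is why only fluorophores immediately adjacent to the fiber surface are excited while bulk sample fluorescence is suppressed.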
The remaining fluorescent signal is scattered and absorbed before it can pass through the sample. Unbound fluorophores further from the fiber surface encounter lower field strength and are not effectively excited, which provides considerable protection from bulk sample fluorescence. Microorganisms and toxins such as *Yersinia pestis* ([@A17473R69]), *E. coli* lipopolysaccharide endotoxin ([@A17473R70]), pseudexin toxin ([@A17473R71]), *Clostridium botulinum* toxin A ([@A17473R72]), staphylococcal enterotoxin B ([@A17473R73]), ricin ([@A17473R74]), *Bacillus anthracis*, *Francisella tularensis*, *Escherichia coli* O157:H7, and *S. typhimurium* ([@A17473R75]) have been successfully detected with evanescent wave fiber-optic biosensors (see [Table 2](#tbl20459){ref-type="table"}).
###### Examples of Analytes Detected by Evanescent Wave Fiber-Optic biosensors ([@A17473R76])
Target Detection limit
------------------------------------------------------------- -----------------
  ***Bacillus anthracis,*** **colony-forming units/mL**       10^5^
***Francisella tularensis,*** **colony-forming units/mL** 10^5^
***Salmonella typhimurium,*** **colony-forming units/mL** 10^5^
***Escherichia coli*** **O157:H7, colony-forming units/mL** 10^5^
  ***Yersinia pestis*** **F1 antigen, ng/mL**                 50
**Staphylococcal enterotoxin B, pg/mL** 10
**Cholera toxin, ng/mL** 100
  ***E. coli*** **lipopolysaccharide endotoxin, ng/mL**       10
**Ricin, ng/mL** 50
### 3.10.6. Detection Based on Rapid Bioluminescent Methods {#sec80528}
These techniques are divided into two classes:
1. methods based on bioluminescent adenosine triphosphate (ATP) assay
2. methods based on bacterial bioluminescence.
These methods are useful in the food industry and provide results in a very short time.
### 3.10.7. Bioluminescent Adenosine Triphosphate (ATP) Assay {#sec80529}
Intracellular ATP is present in all living cells, which utilize it for many processes during all phases of growth. After cell death, ATP is destroyed within a few minutes; therefore, ATP content can be used as a measure of living microbial biomass. A rapid ATP assay based on the firefly (*Photinus pyralis*) was developed as a replacement for conventional plate count methods in the microbiological analysis of food. Firefly (*P. pyralis*) luciferase produces light from ATP and luciferin (LH~2~) according to this reaction: LH~2~ + ATP + O~2~ → (Mg^2+^) P (oxyluciferin) + AMP + PP~i~ + CO~2~ + hν.
Commercially available manual or automated luminometers can detect less than 0.1 pg of ATP (around 100 bacterial cells) ([@A17473R77]). This technique can be used for milk and milk products, meat and meat products, carbonated beverages, and fruit juices.
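A quick back-of-the-envelope check of the figures above, assuming on the order of 1 fg of ATP per bacterial cell (an illustrative assumption, not a value from the text):

```python
# Back-of-the-envelope check of the stated ATP detection limit.
# Assumption (not from the text): an average bacterial cell contains
# on the order of 1 fg (1e-15 g) of ATP.

ATP_PER_CELL_G = 1e-15        # ~1 fg ATP per cell (assumed)
DETECTION_LIMIT_G = 0.1e-12   # 0.1 pg, the quoted luminometer sensitivity

cells = DETECTION_LIMIT_G / ATP_PER_CELL_G
print(f"detectable biomass ~ {cells:.0f} cells")
```

This reproduces the "around 100 bacterial cells" figure quoted for the 0.1-pg sensitivity.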
### 3.10.8. Bacterial Bioluminescence {#sec80530}
*Vibrio*, *Photobacterium*, *Alteromonas*, and *Xenorhabdus* are the major genera of bioluminescent bacteria (those capable of emitting light). In these bacteria, the bioluminescent reaction is carried out by the enzyme luciferase. This process involves the oxidation of a long-chain aldehyde and of reduced riboflavin phosphate (FMNH~2~), which results in the emission of blue-green light.
FMNH~2~ + O~2~ + RCHO → (luciferase) FMN + RCOOH + H~2~O + light (490 nm).
These properties can be used in the food industry, including the detection of specific bacterial pathogens and indicator microorganisms, spore-forming organisms, and lactic acid bacteria; monitoring starter culture integrity; biocide and virucide testing; and recovery of sublethally injured cells. Determination of ATP by firefly luciferase is a rapid technique and is useful to detect and enumerate cells, but this assay is unable to identify bacteria. *lux* genes from luminescent bacteria can be transferred into specific bacteria by their bacteriophages; the resulting light emission allows detection of strains such as *S. typhimurium*, *Campylobacter* spp., and *L. monocytogenes*. The sensitivity of this method is as few as 100 cells (100 per mL). It can also be used to detect Gram-positive organisms: if spores of *Bacillus* spp. receive the *lux* gene, light emission occurs after germination and growth and can be monitored ([@A17473R78]).
3.11. Partial List of Commercially-Available Rapid Detection Kits {#sec80532}
-----------------------------------------------------------------
The following text and tables list many of the commercially available rapid detection kits; they are classified by the principles underlying the procedures used ([Tables 3](#tbl20460){ref-type="table"}, [4](#tbl20461){ref-type="table"}, [5](#tbl20462){ref-type="table"}).
###### Partial list of Commercially-Available, Nucleic acid-Based Assays Used in the Detection of Food-borne Bacterial Pathogens ([@A17473R79], [@A17473R80]) ^[a](#fn18099){ref-type="table-fn"}^
Organism Trade Name Assay Format Manufacturer
------------------------------ --------------------------------------------------------------- ------------------------ ----------------------------------------
***Campylobacter*** GENE-TRAK probe Neogen
***Escherichia coli*** GENE-TRAK probe Neogen
***E. coli*** **O157:H7** Probelia PCR BioControl
  ***Listeria***                 BAX; GENE-TRAK^[b](#fn18100){ref-type="table-fn"}^; AccuProbe   PCR; probe; probe        Qualicon; Neogen; GEN-PROBE
  ***Salmonella***               GENE-TRAK; BAX; Probelia; BIND                                  Probe; PCR; PCR; phage   Neogen; Qualicon; BioControl; BioControl
  ***Yersinia enterocolitica***   GENE-TRAK                                                      Probe                    Neogen
^a^Abbreviations: PCR, polymerase chain reaction; BIND, bacterial ice nucleation diagnostic.
^b^Adopted AOAC Official First or Final Action.
###### Partial list of Commercially-Available, Antibody-Based Assays for the Detection of Food-borne Pathogens and Toxins ([@A17473R79], [@A17473R80]) ^[a](#fn18101){ref-type="table-fn"}^
Organism/toxin Assay Format Manufacturer
--------------------------------------- --------------- -----------------------
  ***Bacillus cereus* diarrhoeal toxin**
TECRA; BCET ELISA; RPLA TECRA; Unipath
*Campylobacter*
Campyslide LA Becton Dickinson
Meritec-campy LA Meridian
MicroScreen LA Mercia
VIDAS ELFA bioMerieux
TECRA ELISA TECRA
  ***C. perfringens*** **enterotoxin**
PET RPLA Unipath
**EHEC O157:H7**
RIM LA REMEL
*E. coli* O157 LA Unipath
Prolex LA PRO-LAB
Ecolex O157 LA Orion Diagnostica
Wellcolex O157 LA Murex
*E. coli* O157 LA TechLab
O157&H7 Sera Difco
Petrifilm HEC Ab-blot 3M
Dynabeads Ab-beads Dynal
EHEC-TEK ELISA Organon Teknika
Assurance ELISA BioControl
*E. coli* O157 ELISA LMD Lab
Premier O157 ELISA Meridian
*E. coli* O157 EIA/capture TECRA
Quix Rapid O157 Ab-ppt Universal HealthWatch
VIDAS ELFA bioMerieux
**Shiga toxin, Stx**
VEROTEST ELISA MicroCarb
Premier EHEC ELISA Meridian
Verotox-F RPLA Denka Seiken
**ETEC**
Labile toxin, LT RPLA Oxoid
Stabile toxin, ST ELISA Oxoid
***Salmonella***
Bactigen LA Wampole Labs
Spectate LA Rhone-Poulenc
Dynabeads Ab-beads Dynal
CHECKPOINT Ab-blot KPL
1-2 Test Diffusion BioControl
Salmonella-TEK ELISA Organon Teknika
Salmonella ELISA GEM Biomedical
Transia Plate Salmonella Gold ELISA Diffchamb
PATH-STIK Ab-ppt LUMAC
Clearview Ab-ppt Unipath
UNIQUE Capture-EIA TECRA
***Shigella***
Bactigen LA Wampole Labs
Enterotoxin
SET-EIA ELISA Toxin Technology
SET-RPLA RPLA Unipath
TECRA ELISA TECRA
VIDAS ELFA bioMerieux
  ***Vibrio cholerae***
Cholera SMART Ab-ppt New Horizon
Cholera Screen Agglutination New Horizon
**Enterotoxin**
VET-RPLA RPLA Unipath
^a^ Abbreviations: ELFA, enzyme-linked fluorescent assay; ELISA, enzyme-linked immunosorbent assay; EHEC, enterohemorrhagic *E. coli*; ETEC, enterotoxigenic *E. coli*; LA, latex agglutination; RPLA, reversed passive latex agglutination.
###### Partial List of Other Commercially Available Rapid Methods and Specialty Substrate Media for Detection of Food-borne Bacteria ([@A17473R79]-[@A17473R81])^[a](#fn18102){ref-type="table-fn"}^
Organism Assay Format Manufacturer
-------------------- ------------------ -----------------
Isogrid HGMF/MUG QA Labs
Petrifilm media-film 3M
SimPlate media Idexx
Redigel Media RCR Scientific
ColiQuik MUG/ONPG Hach
LST-MUG MPN media Difco & GIBCO
CHROMagar Medium CHROMagar
***E. coli***
MUG disc MUG REMEL
CHROMagar Medium CHROMagar
**EHEC**
Rainbow Agar Medium Biolog
BCMO157:H7 Medium Biosynth
Fluorocult O157:H7 Medium Merck
***Salmonella***
Isogrid HGMF QA Labs
OSRT Medium/ motility Unipath (Oxoid)
Rambach Medium CHROMagar
MUCAP C8esterase Biolife
XLT-4 Medium Difco
^a^Abbreviations: HGMF/MUG, hydrophobic grid membrane filter/4-methylumbelliferyl-β-D-glucuronide; ONPG, O-nitrophenyl-β-D-galactoside; MPN, most probable number.
4. Conclusions {#sec80534}
==============
Bacterial pathogens and their toxins can cause illnesses such as diarrhea and can spread through populations. These pathogens cause more and more outbreaks of disease every year; therefore, rapid and reliable detection methods are needed. Conventional methods for the detection of enteric pathogenic bacteria are sensitive, but traditional standard culture methods require a long turnaround time for enrichment and confirmation of presumptive isolates and may take several days to yield results. Rapid methods based on immunochemical and nucleic acid technologies are alternatives to conventional methods because they can provide results within hours. DNA microarrays can facilitate whole-genome comparisons among diverse strains and the identification of strain-specific and lineage-specific sequences.
Conventional PCR methods and automated fluorogenic, quantitative real-time PCR kits have been developed and are available on the market, but for laboratories with lower sample throughput, commercial automated immunoassay-based methods are less expensive. Optical techniques such as SPR have better sensitivity, but they are expensive and complicated. Biosensor-based methods can detect microbial pathogens within hours or even minutes. LAMP assays, peptide nucleic acid probes, DNA microarrays, and DNA chips are more advanced, potentially new rapid methods for enteric pathogen detection. These rapid methods have become increasingly popular among laboratories and could be accepted as cost-effective, standard methods for pathogen detection in the future.
All co-authors have read and agreed upon the contents of the manuscript and there was no financial interest to report. We certify that the submission is not under review at any other publication.
**Authors' Contributions:**Jafar Amani: wrote and revised the paper; Seyed Ali Mirhosseini: wrote the paper; and Abbas Ali Imani Fool: revised the paper.
|
[06/14/07 - 10:13 AM]FOX Reality Presents Its New Original Series ''The Search for the Next Elvira'' Coming to Life on October 13 The series consists of three hour-long episodes during which contestants will compete to see if they "can look the part, and present the same wit, poise and courage as the real Elvira in hopes of becoming one of Halloween's most sinful icons."
[via press release from Fox Reality]
Fox Reality Presents Its New Original Series ''The Search for the Next Elvira'' Coming to Life on October 13
Open "Casket Call" to Be Held on Friday, July 13th at the Queen Mary in Long Beach, CA
LOS ANGELES -- Elvira, Mistress of the Dark, is searching for an evil handmaiden to assist with her Halloween hosting duties on the Fox Reality Original "The Search for the Next Elvira," which debuts on Fox Reality, the only all-reality, all-the-time cable and satellite network, on Saturday, October 13 at 9:00 PM PT / 12:00 AM ET. Natural 9 Entertainment ("Reality Remix," "Fox Reality Really Awards") is producing the series exclusively for Fox Reality.
On Friday, July 13, Fox Reality will host an open "Casket Call" at the Queen Mary in Long Beach, CA for their original series "The Search for the Next Elvira." From the hundreds of horrific hopefuls, Elvira will cut the group down to the unlucky thirteen participants, while the others will be told to "Rest in Peace." Later that evening, Elvira will be crowned with her new title "Queen of Halloween." Instructions for entering the Fox Reality Original "The Search for the Next Elvira" through the open "Casket Call" are posted on www.foxreality.com.
"We are thrilled to hold the 'Casket Call' on Friday the 13th at the Queen Mary, a notoriously haunted LA landmark. All walks of life are welcome to see if they have what it takes to become the Next Elvira," said Carol Sherman, Executive Producer, Natural 9 Entertainment.
"The Search for the Next Elvira" is a reality series consisting of three hour-long episodes during which contestants will compete to see if they can look the part, and present the same wit, poise and courage as the real Elvira in hopes of becoming one of Halloween's most sinful icons. Following the second episode on October 20, audience members can vote for their favorite aspiring Elvira, who will be crowned as the next "Mistress of the Dark" during the LIVE finale on Halloween.
"There are simply too many ghastly engagements for one 'Mistress of the Dark' to entertain," said Elvira. "I am searching for someone to share my tricks with, someone to help spread the Halloween spirit."
ABOUT FOX REALITY
Fox Reality launched May 24, 2005 to become the first destination for lovers of unscripted programming. The channel offers major US network favorites, exclusive international reality programming, Original Series and Specials. Fox Reality offers reality viewers more of their favorite reality programming with RealityRevealed in Primetime with never-before-seen footage, exclusive interviews, behind the scenes secrets and more reality fun. Fox Reality is currently seen in approximately 35 million households. To get more information on Fox Reality programs and schedules, please visit www.foxreality.com.
ABOUT NATURAL 9 ENTERTAINMENT
Natural 9 Entertainment is an independent production company based in Burbank, California and currently produces a daily entertainment recap Fox Reality Original series, "Reality Remix." Natural 9 has produced a wide array of daily series with Aaron Spelling Productions, weekly series and specials for ABC, NBC, FOX, FOX Sports and MTV, among many others. Natural 9 has also produced the Los Angeles Area Emmy Awards for the Academy of Television Arts & Sciences as both a live event and television special. Before founding Natural 9, Carol Sherman was previously Head of Finance & Business Affairs for KABC. Jeff Androsky, original director at "Eye on LA," serves as President of Production & Development. |
Rather than have two concurrent Mario RPG series, Nintendo has kept most of that genre’s trappings confined to the Mario & Luigi series for over a decade. Paper Mario may have taken the torch from Super Mario RPG with its first two entries, but later titles strayed further and further from the formula. Super Paper Mario was a platformer for all intents and purposes, and Sticker Star took a different approach altogether. The 3DS title eliminated XP and leveling, severely handicapping any sense of progression. In addition, combat was regulated by a finite collection of stickers that Mario would collect in the world. As polarizing as Sticker Star was for fans of the series, Paper Mario: Color Splash doubles down on its most frustrating elements and makes them even worse.
Some stages make novel use of the craftwork aesthetic.
What makes Color Splash such a tremendous disappointment is the fact that so much of it is great. Throughout the game’s lengthy story, it consistently made me laugh with its clever writing and numerous nods to Mario history. Prism Island plays host to a wide variety of locations and activities, and I was always curious what the game would be having me do next. Restoring color to the world is Mario’s goal, and doing so tasks him with appearing on a game show, assembling a train, organizing a tea party at a haunted hotel, and a ton more. It even manages to sneak in some great parodies and references that rarely seem forced.
Just about everything in Color Splash is instantly likable except for the thing that you spend the most time doing. Each time I encountered an enemy, it felt like a punch to the gut. I’d often be walking around, admiring the game’s gorgeous visuals and wondering what it would be having me do next. Then, I’d encounter an area filled with enemies and I’d be reminded of how thoroughly Nintendo dropped the ball with this game.
Numerous things are terrible about the combat system, and any one of them is bad enough to bring down the quality of the game as a whole. Together, they have the ability to make the experience miserable at times.
Like Sticker Star, combat is regulated by single-use cards that Mario can buy or find in the environment. Since there isn’t any kind of infinite base level attack that can be pulled out at any point, I was frequently required to waste powerful cards on enemies that were already near death. This system can back you into a corner. If you’ve run out of hammers and all you have are a bunch of jump cards, good luck trying to take out that Shy Guy with a spiked helmet on his head.
Oftentimes, powerful cards will just be taken from you without warning. At random points, Kamek will fly by at the beginning of standard battles and turn all of your cards over. You’re forced to blindly choose cards to play, meaning that you could easily waste one of your most powerful attacks on a weak enemy. Some fights even feature enemies that hop onto the playing field and eat your cards before you have a chance to use them.
Go into the settings menu ASAP to remove this screen.
This is especially infuriating if it’s a Thing card. These are special cards that transform the battlefield into a photorealistic environment, and often do massive damage to your enemies. More often than not, these rare items are required to finish off a boss or advance the story. If you lose it in one of several random ways, you’re forced to exit the area you’re in and head back to the main hub world to buy another.
As boneheaded as the entirety of the combat system is, it’s made even worse thanks to the method in which you attack. It’s insane that GamePad functionality has been so clumsily incorporated this late in the Wii U’s lifecycle. Each time you want to attack, you have to scroll through a giant deck of cards on the GamePad screen with the stylus. You then slide the cards that you want to use up to the top of the screen. Once your cards are in place, you confirm that they are the cards that you wish to attack with. The GamePad takes you to another screen that has you tap and hold on each individual card to determine how much paint you want to put into them (paint increases attack damage). When your paint levels are where you want them to be, you hit confirm again. At the next screen, you flick the cards up with the stylus to actually attack. This song and dance happens every single time that it’s your turn during combat. There is an option in the settings menu that allows you to eliminate one of the “confirm” screens, but the process remains painfully slow.
Be prepared to see a lot of this screen.
This is all the more maddening when you realize how fruitless combat is to begin with. Sticker Star’s dumbed-down progression system is even more severely neutered in Color Splash. Mario can expand his paint reserves by collecting hammers after fights, and his HP goes up by 25 at six predetermined points in the story. Outside of a few upgrades that increase the number of cards that Mario can play in one turn, there is nothing else that you can do to feel more powerful.
Let’s break this down. You fight by playing single-use cards. If you win, you’re rewarded with coins. You use coins to...buy more cards. With that system in place, why would anyone ever want to encounter an enemy in the field? I never once felt like any of the standard fights were doing anything to progress the story or my character’s abilities. It’s maddening. I got to a point in which I started trying to flee from every fight. This works on occasion, but it’s terrible when Mario falls flat on his face while attempting to flee and you’re forced to go through another awful round of card-based combat.
There are other unfortunate elements in play that aren’t tied to the combat. Several stages require you to play through their entirety two or more times. At five different points in the story, progress is halted unless you’ve found an entire “rescue squad” of Toads that are spread throughout the world. It’s discouraging to think that you’re about to enter a new area, only to be told that you can’t continue without finding five or six Toads that are hiding in unspecified locations in previous levels.
The Magma Burger is one of the only important items you can buy with coins.
I changed my tune on one of my favorite areas by the end of it. The haunted hotel isn’t combat-heavy, and focuses more on puzzle solving. I enjoyed trying to hunt down a collection of Toad ghosts so that they could organize a tea party. This area has several clever puzzles, and the reduced focus on combat was really helping me spend time with the things I liked about the game. When I was down to the last Toad that I had to collect, a grandfather clock rang and I was met with a game over screen. It had failed to adequately explain to me that there was a time limit for this area, and I was forced to start over from the beginning.
Even the sidequests feel useless. The biggest one involves temples in which you compete in rock-paper-scissors. Your prize for winning? Coins that you use to buy cards, and cards that you use to win fights that give you coins.
Every level has blank spots for Mario to fill in with paint. I initially enjoyed this side activity and shot for 100-percent “colorization” on every stage. This pursuit stopped once I realized that a character called the Shy Bandit pops up randomly to suck the color out of levels with a straw. If you don’t catch him in time on the world map, your 100-percent colorization can go down to next to nothing. Even if you do get full colorization in an area, your reward is just unlockable music tracks.
Toads Toads Toads Toads Toads
Often, the method to advance the story will be completely unclear. Your talking paint can named Huey is supposed to help point you in the right direction if you press up on the d-pad, but he frequently has no advice beyond “Hey, maybe you should talk to some Toads around town!”
That’s never hard to do, because everything is a goddamn Toad in this game. Previous Paper Mario games have featured a wide variety of NPCs, complete with tons of different looks and personalities. In Color Splash, it’s just a bunch of Toads of different colors. Sometimes they’ll have scarves. A couple of them had pirate hats. In the end, they’re all just Toads. Oh, you need to climb a mountain to talk to a wise old sage? Just a Toad. He doesn’t even have a beard. Ghosts are all over this hotel? They’re just Toads with an aura effect around them. I think one of them had glasses.
I can’t remember the last time I’ve been so thoroughly divided on a game. One part of me loves it. It’s genuinely funny, and the writing and locations are fantastic. Prism Island is gorgeous, and the soundtrack meets the high bar of quality that Mario games are known for. In the end, though, I spent most of this game trying to avoid playing the biggest part of it. Every combat encounter reminds you of how broken a critical element of the game is, and they happen frequently. It’s staggering how much this one system routinely destroyed my enthusiasm for the game.
With more traditional RPG mechanics and a real progression system, Paper Mario: Color Splash could have been one of the best games in the series. Because of some unfathomably ill-conceived decisions during the development process, it’s one of the very worst. |
Q:
Download SQL database from Microsoft SQL Server Management Studio?
I have a database on a Microsoft SQL Server instance (which I manage through SQL Server Management Studio) that I need to download in order to work on it locally.
When I right-click on the database and go to Tasks -- Export Data I get the SQL Server Import Export Wizard. I am able to pick a source, but I can't find the right destination to allow me to download the file locally.
I don't want to transfer the files to another server; I just want a local file to work with.
Is this the right approach? Or is there a better way to handle this task?
A:
I don't want to transfer the files to another server....
A SQL Server database is a complex binary structure. To read it or work with it, you need a copy of SQL Server on the machine you want to work with it on. The Developer edition would be a good option for a local machine, or you could install the free Express edition and export the data into a local database.
If what you are trying to do is just to see the data without any of the SQL Server functionality, then you can export it to different types of files through the import/export functionality. However, unless the amount of data involved is quite small, I would really recommend against this. The organization and cross-referencing of data can be quite extensive depending on how the database was designed.
These are about your only two options. What you end up doing may depend on what you are planning to do with the data.
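For the common case of moving a whole database rather than exporting rows, the usual route is a full backup (`.bak`) copied to the local machine and restored there. A minimal sketch that just composes the two T-SQL statements (database name and paths are placeholder assumptions, not from the question); you would run the output through SSMS or `sqlcmd`:

```python
# Sketch: compose T-SQL for a full backup on the server and a restore
# on the local instance. "MyDb" and the paths are placeholders.

def backup_statement(database: str, bak_path: str) -> str:
    """Full database backup to a .bak file on the server's disk."""
    return (
        f"BACKUP DATABASE [{database}] "
        f"TO DISK = N'{bak_path}' "
        f"WITH FORMAT, INIT, NAME = N'{database}-full';"
    )

def restore_statement(database: str, bak_path: str) -> str:
    """Restore that .bak on the local instance, overwriting if present."""
    return (
        f"RESTORE DATABASE [{database}] "
        f"FROM DISK = N'{bak_path}' WITH REPLACE;"
    )

print(backup_statement("MyDb", r"C:\backups\MyDb.bak"))
print(restore_statement("MyDb", r"C:\local\MyDb.bak"))
```

Restoring still requires a SQL Server instance on the local machine, per the answer above.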
|
Pages
Saturday, May 7, 2011
The time has finally come when I was roped into writing a blog post. It's not that I am opposed to blogs, it's just more of a Julie thing than me. Anyways, on to the pictures. This year the Middle School took a trip south to Qingdao 青岛 and the area around the city. We were able to visit the Confucius Temple, climb Taishan mountain, see Eric Liddell's memorial, and look through an old German bunker from the 20th century. It was a very good trip.
The following are some pictures from Confucius temple.
This is a stone slab with some of Confucius' sayings inscribed on it.
This is the teaching area - Confucius would sit in the middle and his students would sit around the outside and listen to the lesson.
This is the main temple at the park. It is ironic that Confucius never wanted to be worshiped or idolized; he only wanted his teachings to be heard. Yet people still offer prayers and incense to him. In this picture you can see people burning incense to Confucius.
There are 11 dragon pillars on the front side of the main building. Apparently there are only 35 or so in the whole country, and 30 of them are here at Confucius temple.
The day after we visited Confucius temple we were able to climb Taishan mountain. This is the 3rd largest mountain in China - about 6,666 steps up. The mountains in China often have steps going up the entire path, which is kind of a bummer cuz stairs get really old to climb over and over again. After this day whenever we climbed stairs some of the students would always refer to "Taishan".
I took a picture of this guy shortly after we started climbing. He was carrying this load up, as were some other people. There was a midway point on the climb with some restaurants and things, but all the way up there were little shops selling all sorts of things. Most people would take the cable car up and supply their shops that way, but not this guy. He took his supplies up on his back the whole way. When I got to the top I was able to look back and see him coming.
This is a picture from the midpoint area of the mountain. It was a little bit of a bummer having to look up and see how much farther we had to go.
This is a picture from the top, you can see in the distance some of the stairs we had to climb on the way up. Needless to say it was a long hike up.
Cable car on the way down.
These are some pictures of the kite museum we went to. They only let us take pictures of the entrance way.
This guy is one of the kite makers; he builds them all by hand. He let us come into his shop and take pictures of him working as well as some of the kites he made. He was excited to see us there and he showed us his photo album. He had been to quite a few places: DC, San Francisco, Vegas, Paris, even London.
We also went to Eric Liddell's memorial. We were able to see the internment camp where he was held by the Japanese. (Now it's a Chinese middle school, except for a small building with some information as well as propaganda.)
Thursday we went to an old German bunker. The Germans invaded Qingdao around 1890 after the death of 2 German missionaries. They stayed in the city until the Japanese took over in the 1920's. After the Germans' defeat in WWI they pulled out of the city and the Japanese occupied it until their defeat in WWII. The Chinese military used the bunker until the mid 80's, when they turned it over to their historical society.
The main turret of the bunker. Obviously the cannon has been removed, but it still rotates.
Some picture of the city from the top of the hill.
This is the oldest castle in the city.
Some pictures of a Protestant Church in Qingdao - the church was built in 1909 and is still used for service today.
Finally some pictures of our very small hotel room.
Over all the trip was a great experience. We had some discipline problems after the 2nd day, actually had to send 4 boys home for leaving their rooms after curfew. Needless to say no one will do that again any time soon. But it's always a lot of fun to spend time with the kids and get out of the classroom. You get to see the kids in a different way and they see you not always being a teacher.
Johnson Family
Follow Us!
this is [me]
I live in Shenyang, China. I'm married to Adam, the love of my life. Our daughter, Willow Rose, was born on July 8, 2011. She is an energetic and loving girl. Our son, Noah Silas was born on August 7, 2013. He is a cool dude, with a peaceful vibe. I used to teach 3rd grade and direct secondary drama productions at Shenyang International School. Now, I enjoy the role of a stay-at-home-mom. Adam teaches middle school science and sponsors student council at SYIS. Here is one place where our life can be shared and remembered. |
This application pertains to generation of radiation, especially extreme ultraviolet (EUV) radiation, from a laser-produced plasma (LPP), for the purpose of illuminating a reflective photomask in a lithography or mask inspection process. The application relates specifically to spectral purity filtering (eliminating unwanted out-of-band radiation wavelengths in the illumination) and power recycling (returning some of the out-of-band radiation to the plasma to enhance generation of usable, in-band radiation).
Current-generation EUV lithography systems use an LPP illumination source, which generates EUV radiation from laser-irradiated tin droplets. In this process a high-power, CO2 laser pulse (at a 10.6-μm wavelength) heats a small, molten tin target to form an ionized plasma, which generates EUV radiation from decay of tin ions to their neutral state. The optimum target size is much smaller than the diffraction-limited laser beam, so the target is typically first vaporized by a shorter-wavelength (e.g., 1-μm) pre-pulse laser to expand its size before it is irradiated and ionized by the main CO2 laser pulse. [Hori et al.]
LPP sources are also useful for EUV inspection and metrology, which do not need as much power as lithography, but which require a very small, high-brightness plasma source. For these applications, a relatively short-wavelength laser (e.g., 1-μm) can be used to ionize the target without pre-pulse irradiation. [Rollinger et al.]
FIG. 1 illustrates the primary components of a prior-art EUV lithography system. [Hori et al.; Migura et al.] The LPP source 101 comprises apparatus for generating the ionized plasma 102 (including the drive laser, pre-pulse laser, and tin droplet generator—not shown), and a collection mirror 103. The collection mirror focuses plasma-generated EUV radiation to an intermediate focus (IF) 104, where it is spatially filtered by an intermediate-focus aperture (IF aperture) 105. The aperture-transmitted radiation is conveyed by illumination optics 106 to a reflective photomask 107 at object plane 108. The illumination optics control characteristics of the illumination such as its spatial profile on the photomask, the illumination's numerical aperture, and the coherence factor. The EUV-illuminated photomask is imaged by projection optics 109, at reduced magnification, onto a semiconductor wafer 110 in image plane 111. The illumination optics typically expand the illumination to a ring field on the photomask, and the photomask and wafer are mechanically scanned in tandem to effect full-field exposure.
An EUV inspection or metrology system could be similar to the lithography system of FIG. 1, but it would not need a pre-pulse laser and the wafer would be replaced by an image sensor.
The collection mirror uses a multilayer reflective coating, typically comprising about 40 or more Mo/Si bilayers of approximate thickness 7 nm per bilayer, to reflect plasma-generated radiation. The collection mirror reflects useful “in-band” EUV within a 2% wavelength band centered at 13.5 nm. (The band is limited by the multiple EUV reflections between the plasma and the image plane.) But the mirror also reflects a large amount of plasma-generated “out-of-band” radiation from the deep ultraviolet (DUV) to long-wave infrared (IR), which can be detrimental to lithography processes. [Park et al.] A variety of prior-art techniques have been developed or proposed for reducing the undesired out-of-band radiation in the LPP source output.
Current-generation LPP sources reject the IR via diffractive scattering from a surface-relief grating on the collection mirror, as illustrated in FIGS. 2 and 3. [van den Boogaard et al. (2012); Medvedev et al. (2013); Trost et al.; Kriese et al.; Feigl et al.] A CO2 laser 201 irradiates the plasma 102 with IR radiation 202 (10.6-μm wavelength), and the plasma emits in-band EUV radiation 203 and out-of-band radiation including the 10.6-μm laser wavelength 204. A lamellar (rectangular-profile) diffraction grating on the collection mirror 103 separates the IR 205 from the EUV 206 in the reflected radiation.
An enlarged view of the mirror surface, illustrating the grating 301, is shown in FIG. 3. The grating comprises annular grooves, shown in cross-section. (The grating is axially symmetric around an optical axis through the plasma 102 and IF 104.) The grating is configured to extinguish zero-order (i.e., undiffracted) IR radiation at the 10.6-μm drive-laser wavelength, scattering the reflected IR into first (±1) and higher diffraction orders.
FIGS. 2 and 3 illustrate a light cone 207 converging from the plasma to a particular mirror point 208. The grating structure near this point diffracts 10.6-μm IR into ±1-order diffracted beams with light cones 209 and 210. The grating period Λ is too long to significantly affect the EUV (13.5-nm) radiation, which is substantially undeviated from the zero order beam, indicated as light cone 211. The collection mirror has an ellipsoidal substrate shape with foci at the plasma 102 and the IF 104 so that zero-order reflected EUV radiation is focused toward the IF and through the IF aperture 105. The IR is diffractively scattered out of the IF aperture.
For near-normal incidence the zero-order IR is extinguished by making the grating height h (FIG. 3) approximately one-quarter of the wavelength (i.e., 2.65 μm to achieve zero-order extinction of the 10.6-μm laser wavelength). The angular deviation θ between the zero and first diffraction orders is roughly equal to the wavelength-to-period ratio (at the laser wavelength); e.g., for a typical grating period of 1 mm the IR laser wavelength is diffractively deviated by approximately (10.6 μm)/(1 mm), or 10 mrad. By comparison, the plasma source's angular subtense δ at the grating is typically of order 1 mrad (e.g., for a 200-μm plasma diameter and a 200-mm collection mirror focal length). All of the light cones 207, 209, 210, and 211 have an angular extent of roughly 1 mrad, so the 10-mrad IR scatter angle θ is more than sufficient to separate the first-order IR and zero-order EUV beams.
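The quarter-wave and small-angle arithmetic in this paragraph can be verified in a few lines (Python used purely as a calculator; all values are the ones quoted above):

```python
# Values quoted in the text.
wavelength_ir = 10.6e-6     # m, CO2 drive-laser wavelength
grating_period = 1.0e-3     # m, typical grating period

# Quarter-wave groove height for zero-order extinction at near-normal incidence.
groove_height = wavelength_ir / 4                 # 2.65 um

# Small-angle deviation between zero and first orders: theta ~ lambda / period.
theta = wavelength_ir / grating_period            # ~10.6 mrad

# Plasma angular subtense at the grating: delta ~ plasma diameter / focal length.
plasma_diameter = 200e-6    # m
focal_length = 0.2          # m
delta = plasma_diameter / focal_length            # ~1 mrad

# theta exceeds delta by an order of magnitude, so the first-order IR
# light cones clear the zero-order EUV cone.
```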
The grating also induces some diffractive scatter in the EUV, but the scatter angle is only of order (13.5 nm)/(1 mm), i.e., 13.5 μrad, which is insignificant in comparison to the plasma's 1-mrad angular extent.
A limitation of these types of systems is that they are generally designed to extinguish the zero order at only one wavelength (10.6 μm), so they do not achieve full rejection of all out-of-band radiation. Feigl et al. describe a two-level grating structure that rejects two wavelengths (the drive laser's 10.6-μm wavelength and the pre-pulse laser's 1.06-μm wavelength). But it does not fully exclude other wavelengths, including the DUV spectrum.
The grating 301 typically has the form illustrated in FIG. 4A. A lamellar, surface-relief structure is patterned in a substrate 401, and a multilayer reflective film 402 is then deposited on the grating structure. But van den Boogaard et al. (2012) use a different approach, as illustrated in FIG. 4B: The multilayer reflective coating is deposited on a smooth substrate, which does not have a grating topography, and the lamellar grating structure is patterned directly in the multilayer film.
Moriya et al. (U.S. Pat. No. 8,592,787) similarly disclose a spectral-filter grating structure patterned in a multilayer film on a smooth substrate, but the structure is non-lamellar. For example, the illustrated “Embodiment 1” grating in FIG. 3 of Moriya et al., shown as FIG. 4C herein, comprises a blazed, sawtooth profile, which diffracts the drive-laser (10.6-μm) radiation out of the IF aperture. The grating operates functionally as illustrated in FIG. 2, although its structure differs from the lamellar grating illustrated in FIG. 3. The reflected in-band EUV is concentrated in or near the zero order, which intercepts the IF aperture, and the out-of-band radiation is diffractively diverted out of the IF aperture. (See Moriya et al. at 13:16-14:6 and 14:57-63.)
A drawback of the Moriya et al. design is that it requires many layers in the reflective film. For example, the exemplary “Embodiment 1” has 300 bilayers including an unpatterned, 50-bilayer base structure and a patterned, 250-bilayer grating structure. (See Moriya et al. at 11:44-61 and 13:4-10.) A conventional multilayer reflective film can achieve high EUV efficiency with only approximately 50 bilayers, but Moriya et al. note (at 12:8-12) that “If the number of pair layers is less than 100, then . . . it is not possible sufficiently to separate the EUV radiation from the radiation of other wavelengths.”
The Embodiment 1 structure of Moriya et al. is specified as having 250 patterned bilayers with a bilayer thickness of 6.9 nm, implying a grating height of 1.7 μm. This is only about 16% of the 10.6-μm laser wavelength, but the height would actually need to be approximately 5.3 μm (one-half wavelength) to achieve first-order blazing and zero-order extinction at the laser wavelength. This would require approximately 768 patterned bilayers. With 250 patterned bilayers only a minor portion of the 10.6-μm IR would be diverted out of the IF aperture.
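The layer-count arithmetic here is easy to check with the numbers stated in the text:

```python
bilayer_thickness = 6.9e-9      # m, Mo/Si bilayer pitch per Moriya et al.
patterned_bilayers = 250

grating_height = patterned_bilayers * bilayer_thickness   # ~1.7 um
wavelength_ir = 10.6e-6                                   # m
fraction = grating_height / wavelength_ir                 # ~16% of a wavelength

# First-order blazing with zero-order extinction needs roughly a
# half-wave profile height at near-normal incidence.
required_height = wavelength_ir / 2                       # 5.3 um
required_bilayers = round(required_height / bilayer_thickness)  # ~768
```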
Moriya et al. cite prior-art proposals for spectral filters that use a blazed diffraction grating to separate the EUV from out-of-band radiation by diffracting the in-band EUV, rather than IR. [Chapman (U.S. Pat. No. 7,050,237); Bristol (U.S. Pat. No. 6,809,327); Kierey et al.; Sweatt et al. (U.S. Pat. No. 6,469,827)]
Chapman discloses an EUV-diffracting grating formed by cutting a thick, multilayer EUV-reflection coating at an inclined angle. A disadvantage of this type of grating is that it requires a very large number of Mo/Si bilayers (“at least two thousand” as recited in Chapman's claim 1).
Bristol and Kierey et al. disclose an EUV-diffracting grating disposed in a converging beam to separate the EUV from out-of-band radiation on a focal plane. Two disadvantages of this type of system are that it requires a separate optical element for spectral filtering, and the additional element significantly reduces EUV throughput.
Sweatt et al. disclose an alternative spectral filtering method that also uses a blazed diffraction grating to diffract the EUV and separate the EUV from out-of-band radiation. In some embodiments the grating is a near-normal-incidence reflective element, but the grating fabrication process differs from that of Moriya et al. Sweatt et al. note that “the blazed grating is preferably constructed on a substrate before a reflective multilayer, e.g., alternating Si and Mo layers, is deposited over the grating.” Gratings of this type are described in Voronov et al. FIG. 5 illustrates the grating structure in cross section. A surface-relief structure having a blazed, sawtooth profile is patterned in a substrate 501, and a multilayer reflective film 502 is then deposited on the structure.
The condenser mirror disclosed by Sweatt et al. does not focus the EUV radiation through an intermediate focus and illumination optics as in FIG. 1. Instead, it focuses the plasma source onto a ring image, which is projected directly onto the photomask. The system has limited practical utility because it lacks the illumination control capabilities of the illumination optics 106 in FIG. 1. Also, the system uses a filtering aperture in close proximity to the photomask (element 124 in FIG. 8 of Sweatt et al.), which could create problems with heat dissipation, optical back-scatter of out-of-band radiation, and mechanical clearance (e.g., interference with a photomask pellicle and wafer loading mechanics). These limitations do not exist with the prior art represented in FIG. 1. Chapman describes other limitations of the Sweatt et al. system, as understood in the prior art. (See Chapman at 2:25-43.)
Blazed EUV reflection gratings operating at near-normal incidence have been researched by Liddle et al. and by van den Boogaard et al. (2009), although it is unclear from these publications how such gratings might be incorporated into an LPP collector for spectral filtering.
Other spectral filtering methods that do not use grating diffraction have also been proposed. Chkhalo et al. and Suzuki et al. disclose free-standing transmission films that transmit EUV and reflect IR, but the fragility of the film and its EUV transmission loss make such films impractical. The collection mirror's multilayer reflective film can be designed to reflect EUV and suppress IR. [Medvedev et al. (2012)] This avoids the need for a separate, fragile transmission film, but the EUV reflection efficiency is significantly compromised.
In most prior-art spectral filtering systems the rejected out-of-band radiation is eliminated as waste heat. But Bayraktar et al. disclose an IR-diffracting grating that is similar to FIG. 2, except that one of the first diffraction orders at 10.6 μm is directed back onto the plasma to enhance generation of in-band EUV radiation by the plasma. This “power recycling” capability could help boost EUV in-band power at intermediate focus to 250 W, the industry target level at which EUV lithography can become commercially viable for high-volume semiconductor manufacture. (Current state-of-the-art LPP sources achieve about 100 W.) But the Bayraktar et al. power recycling method has several practical limitations: It is only able to recycle out-of-band radiation at one wavelength (the 10.6-μm drive-laser wavelength); it can only recycle radiation that intercepts the collection mirror; and the grating's diffraction efficiency at 10.6 μm is only about 37%. |
Does “Real Housewives of Beverly Hills” star Camille Grammer admit that she used to shave the body hair off of her ex-husband, Kelsey Grammer? Plus, what do her “Housewives” co-stars, Kyle Richards and Lisa Vanderpump, think of manscaping? |
Adsorption kinetics of a fluorescent dye in a long chain fatty acid matrix.
This work reports the adsorption kinetics of a highly fluorescent laser dye, rhodamine B (RhB), in a preformed stearic acid (SA) Langmuir monolayer. The reaction kinetics was studied by the surface pressure-time (π-t) curve at constant area and by in situ fluorescence imaging microscopy (FIM). The increase in surface pressure (at constant area) with time, as well as the increase in surface coverage of the monolayer film at the air-water interface, provides direct evidence for the interaction. ATR-FTIR spectra also supported the interaction and consequent complexation in the complex films. UV-vis absorption and fluorescence spectra of the complex Langmuir-Blodgett (LB) films confirm the presence of RhB molecules in the complex films transferred onto solid substrates. The outcome of this work clearly shows successful incorporation of RhB molecules into the SA matrix without changing the photophysical characteristics of the dye, thus making the dye LB-compatible.
Q:
Is there an Int.isWholeNumber() function or something similar?
I need to check if an input is an int or not. Is there a function similar to String.IsNullOrEmpty(), like an Int.isWholeNumber() function?
Is there a way to validate this inside the if() statement only, without having to declare an int before? (As you need to do with TryParse())
EDIT
I need to validate an area code (five numbers)
A:
I gather from your comments that your actual problem is "how do I determine if this string contains a valid Swedish postal code?" A Swedish postal code is a five digit number not beginning with a zero. If that's the problem you actually have to solve, then solve that problem. Rather than trying to convert the string to an integer and then check the integer, I would simply write checks that say:
is the string five characters long? If not, reject it.
is the first character of the string 1, 2, 3, 4, 5, 6, 7, 8 or 9? If not, reject it.
are the second, third, fourth and fifth characters of the string 0, 1, 2, 3, 4, 5, 6, 7, 8 or 9? If not, reject it.
Simple as that. If you're never going to do math on it, don't convert it to an integer in the first place.
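The three checks translate directly into code. Here is the same logic sketched in Python (the function name is mine; in the question's C#, `s.Length == 5` and `char.IsDigit` would play the same roles):

```python
def looks_like_swedish_postal_code(s: str) -> bool:
    # 1. exactly five characters,
    # 2. first character is 1-9,
    # 3. remaining four characters are 0-9.
    return (
        len(s) == 5
        and s[0] in "123456789"
        and all(c in "0123456789" for c in s[1:])
    )
```

Note that there is no integer conversion anywhere, which is the point of the answer.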
This approach will further generalize to more complex forms. Swedish postal codes, I gather, are often written in the form "SE-12 345", that is, with the prefix "SE-" and a space between digits two and three. It's going to be awfully hard to write an integer-validating routine that deals with that format, but writing a string-validating routine is straightforward.
More generally, this illustrates some good advice for writing questions. Ask a question about the problem you actually must solve. You assumed a solution -- parse the string as an integer -- and then started asking questions about your assumed solution. That automatically precludes anyone from giving advice that is specific to your real problem. Maybe someone reading this has already developed a library of postal-code validating software; if they have, they'd never know to tell you about it from your original question.
A:
I don't believe there's any such method within the BCL, but it's easy to write one (I'm assuming you're really talking about whether a string can be parsed as an integer):
public static bool CanParse(string text)
{
    int ignored;
    return int.TryParse(text, out ignored);
}
Add overloads accepting IFormatProvider values etc as you require them. Assuming this is for validation, you might want to expand it to allow a range of valid values to be specified as well...
If you're doing a lot of custom validation, you may well want to look at the Fluent Validation project.
|
Top Medical & Recreational Dispensary Alleghany
You Just Found The Best Alleghany Pot Shop
Lots of people may now shop at a pot shop in their city. In a few states you will need a medical card, and in others you can purchase recreational marijuana if you wish. Below are a few ways to find and make the most of the best Alleghany pot shop.
You're probably going to need to bring cash with you to many shops. That's why you should call ahead before going, to ask what they take and what it will cost for what you want. There are menus online, but even then that won't tell you if there are any restrictions. It may be wise, if you don't have the time to call around, to just get some cash out to take with you when you visit the Alleghany pot shop. Some places may have an ATM, so consider this too if you don't have a lot of time.
The shops all have different strains. Although some may carry the more common ones that are popular in the region, there are going to be more obscure ones that you really should try. Looking around is usually pretty easy: you can simply check out a website that lists the shops and see what is on the menu. Some places have their own websites that they update, too. Either way, you have to learn what you want and enjoy, or else test a few different Alleghany pot shops to get a feel for the strains.
You're going to need to have your ID along, since if you're not old enough the Alleghany dispensary won't serve you. Most of the time, anyone under 21, or whatever the drinking age is where you are, won't be able to buy any marijuana. Keep in mind that you need a driver's license or an ID that is current or they won't work with you. Also, they can be cautious, so don't try to pass off a fake ID, because that will get you banned. If you want shops to stay around, stick to the rules.
Do you know the main difference between sativa, indica, and hybrid plants? People claim that sativa plants give more of a head high while indica gives more of a body high. The hybrids are a mixture of both kinds, with one of them sometimes more prominent than the other. There are different smells, too, that mean different things. It's important to start out testing what you can, because some people get anxiety from certain kinds, and some kinds leave people unable to get much done because they are too tired. It's very much like a medicine in that you need to know what to expect when it comes to unwanted effects.
You will find new shops opening all the time, especially as the laws change. Where recreational options are available, you can expect there to be more strains for everyone. Otherwise, you really do need to get a card. In some places, if you want to buy edibles, you can't really get much THC in them because of strict limits. Try to call a store and ask what they have if you're looking for something specific, because you may need a card to get it.
People in the United States and in different countries all over the world are opening pot shops. As it becomes more and more legal, you can expect it eventually to be as easy to buy as anything else from stores. Just know what you're buying so you don't waste your cash.
What To Look For Inside The Best Alleghany Medical Cannabis Dispensary
As more states legalize the use of cannabis, more retailers are setting up dispensaries focused on serving an ever-growing market. Medical cannabis users now have a wide variety of options to choose from as far as sourcing their marijuana products goes.
Read on to learn more about how you can locate the best medical cannabis dispensary from the available options.
Legal Operation
As more states legalize the use of medical marijuana, more dispensaries are opening their doors to serve the needs of the growing client base. Each state that legalizes the sale and use of medical marijuana also puts forth various rules and regulations meant to ensure responsible use of the substance.
When searching for the very best medical cannabis dispensary in your locality, it is important that you consider how well the stores under consideration follow the applicable laws. This is because any store found to be flouting these rules is liable to closure, and with it goes your source of medical marijuana.
To find out whether an Alleghany dispensary adheres to the law, simply assess whether they check who they sell to, or simply grant admission to everyone. Medical marijuana customers should have the essential documentation checked at the entrance. Keep in mind that recreational users must also come in with their photo ID as evidence of age.
Quality
Since you intend to use the products you buy to treat your symptoms/condition, you don't want to risk suffering the nasty effects of consuming contaminated cannabis. As a result, it is important that you find a cannabis dispensary dealing in quality products. You can ascertain the quality of the products for sale at the dispensaries under consideration by asking about their quality control measures.
Edibles must be prepared in a hygienic environment where food safety codes are adhered to. Be sure to ask about the origin of the cannabis, as this will also give you an indication of its quality.
Selection
There is a huge variety of cannabis strains, each with its own associated high. To ensure that you discover the perfect product to treat your condition, it's best that you find a cannabis dispensary stocking numerous products. This gives you the freedom to experiment and discover a strain which perfectly meets your needs.
To help you find the perfect strain, some stores even offer free samples that customers can try out, on the premises of course, before choosing an ideal fit.
It is also worth considering the availability of a range of different marijuana products. Medical cannabis can be smoked, vaped, applied topically or ingested as edibles. Finding a dispensary with a wide variety of products ensures that you can enjoy your favored method of consumption as well.
Professionalism
Buying medical marijuana is the same as visiting a pharmacy and buying your prescription medicine. The surroundings, staff and conduct in the cannabis dispensary you find yourself choosing ought to be nothing short of professional.
Search for a dispensary with knowledgeable staff, presentable and informative displays in addition to a professional environment. Smoking inside the premises ought to be a warning sign that demonstrates a lack of professionalism.
Remember To Be Open Minded
The legalization of medical cannabis has gained momentum over recent years. Consequently, the retail industry is in its infancy. When you go out looking for the best medical cannabis dispensary, it is recommended that you keep an open mind. It is common for customers to write off great dispensaries that don't conform to the picture they have in mind.
Keep an open mind, and judge each Alleghany dispensary objectively by considering the key elements mentioned above.
|
1L6
The 1L6 is a 7-pin miniature vacuum tube of the pentagrid converter type. It was developed in the United States by Sylvania. It is very similar electrically to its predecessors, the Loktal-based 1LA6 and 1LC6. Released in 1949 for the Zenith Trans-Oceanic shortwave portable radio, this tube was in commercial production until the early 1960s.
The 1L6 was to be a specialty tube, produced in small quantities by very few manufacturers, mostly Sylvania, for use by just a few makers of shortwave portables, such as Zenith - in their Trans-Oceanics - and its short-lived rivals, such as the Hallicrafters TW-1000 and the RCA Strat-O-World. In fact, however, 1L6-based multi-band radios were made by Zenith, Crosley, Airline (Montgomery Ward house brand), Silvertone (Sears house brand), Hallicrafters, FADA, and several others. When the US military commissioned two versions of the Trans-Oceanics, they stockpiled 1L6s in the uncounted thousands, some of which still show up at surplus sales.
It was offered to Zenith by Sylvania in place of the larger 1LA6, for which Zenith made production-line changes as the first miniature-tube Trans-Oceanic was starting production. The original G500 chassis was punched for a Loktal socket; Zenith changed the phenolic wafer socket to accommodate the smaller tube. Note: a 1LA6 (or a 1LC6) will work as a near drop-in replacement for the 1L6 with the use of an adapter socket.
The closest European analog to the 1L6 is the DK92.
See also
List of vacuum tubes
References
External links
1L6 Tube at the Radio Museum
Category:Vacuum tubes |
TAMPA, Fla. -- The Tampa Bay Buccaneers lost a very winnable home opener against the L.A. Rams Sunday -- a game they needed entering the toughest portion of their schedule -- and head coach Dirk Koetter believes it boils down to a losing culture.
"There’s something in our culture -- and it’s my job to fix it, along with the coaches -- of letting games like this get away," Koetter said. "I wish I could grab it. I’ve been on teams that have had it, and you don’t want to let go of it. But when you don’t have it, it's hard to figure out what it is."
He added that this was no insult to the Rams. "I am concerned with what our team does," Koetter said. "We just have to get over that hump, and we’re not there.”
The Bucs grabbed the lead early in the second quarter on Charles Sims' 1-yard touchdown run, and followed that up with the first of two touchdowns to Cameron Brate, making it a 10-point game.
Then Roberto Aguayo missed a 41-yard field goal attempt with 4:46 to go in the third quarter with the Bucs ahead 20-17, and the Rams scored the next 21 points while the Bucs fell apart.
Jameis Winston fumbled the ball, with Robert Quinn knocking it from his hands, setting up Ethan Westbrooks' 77-yard scoop and score that gave the Rams a 31-20 lead.
Koetter said this falls squarely on his shoulders as head coach. "The lesson our team needs to learn is that every week is a battle and it doesn’t matter who the other team is," Koetter said. "Our culture is not where it needs to be, and that starts with me."
The Bucs have been trying to 'fix' their culture ever since Jon Gruden was fired and Raheem Morris and Mark Dominik were promoted in 2009.
After three seasons and a 10-game losing streak under Morris, the organization realized things had gotten too lax, so they brought in head coach Greg Schiano, a disciplinarian. With ownership's approval, Dominik armed him with the free agents Morris could only dream of having. But Schiano only made it through two seasons. His teams were competitive, but any team damn-well better be with Darrelle Revis, Dashon Goldson and Vincent Jackson. Schiano went 4-12 in his second season, and both he and Dominik were fired.
In came head coach Lovie Smith, who was supposed to restore a team that lost sight of its identity -- a defense that would make teams pay, especially at home, rather than rolling out the welcome mat. They brought in GM Jason Licht, a bright football mind who had a winning pedigree from his time with the Cardinals, Patriots and Eagles. Surely this tandem could get them back on track.
Patience wore thin quickly for Smith though, even as the Bucs jumped from 2-14 to 6-10 in his second season. The defense wasn't showing enough, but ownership was thrilled with what they saw from Dirk Koetter's offense and Licht's draft picks, including a 2015 draft that produced four starters. The team promoted Koetter to head coach in 2016, and he's continued to take this offense to new heights.
The Bucs have had just one winning season in the past seven, and each loss gets a little more deflating than the last despite a growing list of budding young stars who, at times, electrified on Sunday. Mike Evans and Adam Humphries had at least 100 receiving yards. Cameron Brate replaced tight end Austin Sefarian-Jenkins and hauled in two touchdown passes. Kwon Alexander had a pick-six. Lavonte David forced a fumble and Chris Conte recovered it. Usually those plays are more than enough.
But for every step forward, the Bucs took three steps back, stumbling. It was like seeing a horse at a distance that looks like a stallion, only to realize it's a wobbly colt still growing into its legs. That's what the Bucs are right now. They are not ready to be in the race with the big horses, and until they shed that losing mentality or whatever it is that has ailed them since Jon Gruden, Monte Kiffin and the organization's pillars like Derrick Brooks left, it's going to stay that way. |
# --------------------------------------------------------
# Decoupled Classification Refinement
# Copyright (c) 2018 University of Illinois
# Licensed under The MIT License [see LICENSE for details]
# Modified by Bowen Cheng
# --------------------------------------------------------
# Based on:
# Deformable Convolutional Networks
# Copyright (c) 2017 by Microsoft
# Licensed under The MIT License
# https://github.com/msracver/Deformable-ConvNets
# --------------------------------------------------------
import yaml
import numpy as np
from easydict import EasyDict as edict
config = edict()
config.MXNET_VERSION = ''
config.output_path = ''
config.symbol = ''
config.gpus = ''
config.CLASS_AGNOSTIC = True
config.SCALES = [(600, 1000)] # first is scale (the shorter side); second is max size
config.TEST_SCALES = [(600, 1000)]
# default training
config.default = edict()
config.default.frequent = 20
config.default.kvstore = 'device'
# network related params
config.network = edict()
config.network.pretrained = ''
config.network.pretrained_epoch = 0
config.network.PIXEL_MEANS = np.array([0, 0, 0])
config.network.IMAGE_STRIDE = 0
config.network.RPN_FEAT_STRIDE = 16
config.network.RCNN_FEAT_STRIDE = 16
config.network.FIXED_PARAMS = ['gamma', 'beta']
config.network.FIXED_PARAMS_SHARED = ['gamma', 'beta']
config.network.ANCHOR_SCALES = (8, 16, 32)
config.network.ANCHOR_RATIOS = (0.5, 1, 2)
config.network.NUM_ANCHORS = len(config.network.ANCHOR_SCALES) * len(config.network.ANCHOR_RATIOS)
# dataset related params
config.dataset = edict()
config.dataset.dataset = 'PascalVOC'
config.dataset.image_set = '2007_trainval'
config.dataset.test_image_set = '2007_test'
config.dataset.root_path = './data'
config.dataset.dataset_path = './data/VOCdevkit'
config.dataset.NUM_CLASSES = 21
config.DCR = edict()
config.DCR.hard_fp_score = 0.3
config.DCR.sample_per_img = -1
config.DCR.top = 1.0
config.DCR.sample = 'DCRV1' # choose from DCRV1, RANDOM, RCNN
config.TRAIN = edict()
config.TRAIN.lr = 0
config.TRAIN.lr_step = ''
config.TRAIN.lr_factor = 0.1
config.TRAIN.warmup = False
config.TRAIN.warmup_lr = 0
config.TRAIN.warmup_step = 0
config.TRAIN.momentum = 0.9
config.TRAIN.wd = 0.0005
config.TRAIN.begin_epoch = 0
config.TRAIN.end_epoch = 0
config.TRAIN.model_prefix = ''
config.TRAIN.ALTERNATE = edict()
config.TRAIN.ALTERNATE.RPN_BATCH_IMAGES = 0
config.TRAIN.ALTERNATE.RCNN_BATCH_IMAGES = 0
config.TRAIN.ALTERNATE.rpn1_lr = 0
config.TRAIN.ALTERNATE.rpn1_lr_step = '' # recommend '2'
config.TRAIN.ALTERNATE.rpn1_epoch = 0 # recommend 3
config.TRAIN.ALTERNATE.rfcn1_lr = 0
config.TRAIN.ALTERNATE.rfcn1_lr_step = '' # recommend '5'
config.TRAIN.ALTERNATE.rfcn1_epoch = 0 # recommend 8
config.TRAIN.ALTERNATE.rpn2_lr = 0
config.TRAIN.ALTERNATE.rpn2_lr_step = '' # recommend '2'
config.TRAIN.ALTERNATE.rpn2_epoch = 0 # recommend 3
config.TRAIN.ALTERNATE.rfcn2_lr = 0
config.TRAIN.ALTERNATE.rfcn2_lr_step = '' # recommend '5'
config.TRAIN.ALTERNATE.rfcn2_epoch = 0 # recommend 8
# optional
config.TRAIN.ALTERNATE.rpn3_lr = 0
config.TRAIN.ALTERNATE.rpn3_lr_step = '' # recommend '2'
config.TRAIN.ALTERNATE.rpn3_epoch = 0 # recommend 3
# whether resume training
config.TRAIN.RESUME = False
# whether flip image
config.TRAIN.FLIP = True
# whether shuffle image
config.TRAIN.SHUFFLE = True
# whether use OHEM
config.TRAIN.ENABLE_OHEM = False
# size of images for each device, 2 for rcnn, 1 for rpn and e2e
config.TRAIN.BATCH_IMAGES = 2
# e2e changes behavior of anchor loader and metric
config.TRAIN.END2END = False
# group images with similar aspect ratio
config.TRAIN.ASPECT_GROUPING = True
# R-CNN
# rcnn rois batch size
config.TRAIN.BATCH_ROIS = 128
config.TRAIN.BATCH_ROIS_OHEM = 128
# rcnn rois sampling params
config.TRAIN.FG_FRACTION = 0.25
config.TRAIN.FG_THRESH = 0.5
config.TRAIN.BG_THRESH_HI = 0.5
config.TRAIN.BG_THRESH_LO = 0.0
# rcnn bounding box regression params
config.TRAIN.BBOX_REGRESSION_THRESH = 0.5
config.TRAIN.BBOX_WEIGHTS = np.array([1.0, 1.0, 1.0, 1.0])
# RPN anchor loader
# rpn anchors batch size
config.TRAIN.RPN_BATCH_SIZE = 256
# rpn anchors sampling params
config.TRAIN.RPN_FG_FRACTION = 0.5
config.TRAIN.RPN_POSITIVE_OVERLAP = 0.7
config.TRAIN.RPN_NEGATIVE_OVERLAP = 0.3
config.TRAIN.RPN_CLOBBER_POSITIVES = False
# rpn bounding box regression params
config.TRAIN.RPN_BBOX_WEIGHTS = (1.0, 1.0, 1.0, 1.0)
config.TRAIN.RPN_POSITIVE_WEIGHT = -1.0
# used for end2end training
# RPN proposal
config.TRAIN.CXX_PROPOSAL = True
config.TRAIN.RPN_NMS_THRESH = 0.7
config.TRAIN.RPN_PRE_NMS_TOP_N = 12000
config.TRAIN.RPN_POST_NMS_TOP_N = 2000
config.TRAIN.RPN_MIN_SIZE = config.network.RPN_FEAT_STRIDE
# approximate bounding box regression
config.TRAIN.BBOX_NORMALIZATION_PRECOMPUTED = False
config.TRAIN.BBOX_MEANS = (0.0, 0.0, 0.0, 0.0)
config.TRAIN.BBOX_STDS = (0.1, 0.1, 0.2, 0.2)
config.TEST = edict()
# R-CNN testing
# use rpn to generate proposal
config.TEST.HAS_RPN = False
# size of images for each device
config.TEST.BATCH_IMAGES = 1
# RPN proposal
config.TEST.CXX_PROPOSAL = True
config.TEST.RPN_NMS_THRESH = 0.7
config.TEST.RPN_PRE_NMS_TOP_N = 6000
config.TEST.RPN_POST_NMS_TOP_N = 300
config.TEST.RPN_MIN_SIZE = config.network.RPN_FEAT_STRIDE
# RPN generate proposal
config.TEST.PROPOSAL_NMS_THRESH = 0.7
config.TEST.PROPOSAL_PRE_NMS_TOP_N = 20000
config.TEST.PROPOSAL_POST_NMS_TOP_N = 2000
config.TEST.PROPOSAL_MIN_SIZE = config.network.RPN_FEAT_STRIDE
# RCNN nms
config.TEST.NMS = 0.3
config.TEST.max_per_image = 300
# Test Model Epoch
config.TEST.test_epoch = 0
config.TEST.USE_SOFTNMS = False

def update_config(config_file):
    exp_config = None
    with open(config_file) as f:
        # safe_load avoids executing arbitrary YAML tags; yaml.load without
        # an explicit Loader is deprecated and unsafe
        exp_config = edict(yaml.safe_load(f))
        for k, v in exp_config.items():
            if k in config:
                if isinstance(v, dict):
                    # convert list-valued entries to the numpy arrays the
                    # training code expects
                    if k == 'TRAIN':
                        if 'BBOX_WEIGHTS' in v:
                            v['BBOX_WEIGHTS'] = np.array(v['BBOX_WEIGHTS'])
                    elif k == 'network':
                        if 'PIXEL_MEANS' in v:
                            v['PIXEL_MEANS'] = np.array(v['PIXEL_MEANS'])
                    for vk, vv in v.items():
                        config[k][vk] = vv
                else:
                    if k == 'SCALES':
                        config[k][0] = tuple(v)
                    else:
                        config[k] = v
            else:
                raise ValueError("key must exist in config.py: %s" % k)
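For illustration, here is a minimal, self-contained sketch of `update_config`'s merge semantics using plain dicts. The real function additionally wraps the YAML in an easydict and special-cases BBOX_WEIGHTS, PIXEL_MEANS, and SCALES; those details are omitted here.

```python
# Minimal sketch of the override-merge behaviour (plain dicts, no YAML).
def merge_overrides(config, overrides):
    for key, value in overrides.items():
        if key not in config:
            raise ValueError("key must exist in config.py: %s" % key)
        if isinstance(value, dict):
            # Nested section (e.g. TRAIN or TEST): update key by key so
            # defaults that the override file omits are preserved.
            for sub_key, sub_value in value.items():
                config[key][sub_key] = sub_value
        else:
            config[key] = value
    return config

base = {"TRAIN": {"BATCH_IMAGES": 2, "FLIP": True}, "TEST": {"NMS": 0.3}}
merged = merge_overrides(base, {"TRAIN": {"BATCH_IMAGES": 1}})
# BATCH_IMAGES is overridden while FLIP keeps its default value
```

Note that unknown top-level keys raise immediately, which catches typos in the experiment YAML early.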
A Marcal Paper Mills machine operator had his lunch cut short as fire alarms blared and black smoke began to fill the building Wednesday afternoon during the devastating eight-alarm fire.
Samuel Hall, who has been a machine operator at the paper factory for three years, recalled the moment he heard the alarms while on his lunch break. The response from supervisors was delayed, as they were all in a meeting, he said.
Hall finished his lunch and saw other workers coming to the front of the inside of the building. Then supervisors gathered everyone in the cafeteria to do a head count, he said.
This recording of a live video broadcast by NorthJersey.com shows portions of the famous and historic Marcal Paper Products building complex burning down on Jan. 30, 2019.
Michael V. Pettigano and Paul Wood Jr., North Jersey Record
As Hall and the other workers evacuated the building, they could see the fire and smoke building, he said.
"[The fire] got worse and worse," Hall said.
After everyone was outside, the fire grew too large, and they realized they had to leave the compound.
The fire tore through the 45,000-square-foot warehouse building, which acts as a distribution center for Marcal paper products. The flames consumed the building, and firefighters battled the intense heat amid the bitter cold of the night.
The warehouse held large paper rolls that are used to make toilet paper, paper towels and tissues.
The eight-alarm fire would eventually engulf the building and consume the entire site.
At their Number 11 paper machine, Marcal Chairman Robert Marcalus is flanked by son Peter Marcalus and grandson Michael Bonin with a freshly made jumbo roll of tissue, pictured in 2002. Photo courtesy of Peter Marcalus
export * from './baAmChart.component';
"We're taking a look at the life of the world's biggest movie star, Tom Cruise, and how he's moving on with life as a single dad," PEOPLE senior writer Jennifer Garcia says, of this week's cover story. "When you read this week's PEOPLE magazine, you're also going to get a glimpse into Katie's life, and how she's moving on as a single woman."
"While Tom and Katie haven't spoken yet, both parties realize they are going to have to be in touch for many, many years – and this is, of course, for the sake of their daughter Suri," Garcia says. "This week's issue will detail how these stars are moving on."
For much more on this story, including details on Tom's trusted inner circle, how he's coping and Holmes's new life, pick up the new issue of PEOPLE, on newsstands now.
Q:
Android Emulator 2.3.3 with No Internet Connection
Possible Duplicate:
Upgraded to SDK 2.3 - now no emulators have connectivity
I just updated my Android SDK to latest version which has come up as 2.3.3 and now when I visit any website using its Emulator browser, it say "Browser can not load the web page because there is no internet connection." Is this a known issue?
A:
Yes, it is; see the linked duplicate question.
Q:
Is there a way to add a path to all src attributes of image tags?
I would like to add a path "images/" to all img tags in my HTML file. Is that possible, and if so, how can I achieve it?
A:
I think this will work well:
$("img").each(function() {
    $(this).attr("src", "images/" + $(this).attr("src"));
});
Note that this only makes sense if your images already use relative paths.
Hypothermia among resort skiers: 19 cases from the Snowy Mountains.
Even in relatively temperate environments, accidental hypothermia is a potentially lethal complication of exposure. We have reviewed our experience of accidental hypothermia among recreational alpine skiers at an Australian resort during the 1983 and 1984 seasons. There were 19 cases of accidental hypothermia, which occurred in 10 men and nine women who were aged between six and 47 years (mean age, 15.9 years) and who had rectal temperatures that ranged from less than 35 degrees C to 36 degrees C. The temperature at presentation to the Ski Injury Clinic was less than 35 degrees C in seven cases. One patient presented to the Clinic with a gastrointestinal haemorrhage in addition to hypothermia, and one was initially thought to be suffering from alcohol intoxication. Two patients were lost in the snow overnight. All patients were removed from the snow, changed into warm dry clothes where necessary, and their body temperatures allowed to return to normal spontaneously (17 patients), or were exposed to heat actively by means of inhaled, heated, humidified air (two severely obtunded patients). All patients responded satisfactorily. There were no deaths and no sequelae. We conclude that all skiers should be advised to wear effective thermal insulation, and to ski with a partner to ensure that adequate care is taken to prevent accidental hypothermia. Inhalational "warming" is effective in the treatment of hypothermia in obtunded patients.
I have a nice cordless drill that came with two rechargeable batteries. However, there are times when life gets in the way and I'm not using the drill at all over the course of several months. When I return to the drill, both of the batteries have died.
How should I store the batteries when they are not going to be used for several months? Is there a way to do so such that they'll have a charge when I need them next? Are there any steps I should be taking when storing the batteries to lengthen their overall lifetime?
4 Answers
4
Batteries in general have several points worth knowing about their usage. Yes, you are asking about storage, but it is worth knowing that a battery's storage strategy is influenced by its usage frequency as well. Which type you have is important; there is not one overall method.
Li-Ion (Lithium Ion)
Like most batteries, they should be stored at room temperature and not in direct sunlight. Some sources mention that storing them slightly warmer can help them perform better, but the consensus is that storing them at 15°C (59°F) is ideal. A dry area is also an important factor.
In general we are advised to store the battery in a partial discharge state. Storing the battery in either a fully charged or discharged state can actually harm the battery. Most manufacturers recommend storing these batteries in a 30-40% charge level. Charge level can be determined usually by battery temperature. The better you take care of it the longer it will last. I don't see myself ever testing the charge level but since I use my drill a lot I get a feel for when its getting down in charge since it starts to lose its torque.
The above tactic is sound, but it is important to fully discharge the battery every 30 or so cycles of use to reset the battery's digital memory.
The digital memory effect is a failure mode whose effect results in the transmission of improper calibrations of the battery’s fuel gauge to a device.
To that effect the article goes on to say that
To correct the digital memory effect and properly re-calibrate the fuel gauge circuitry simply do a full cycle discharge/recharge every several dozen charges. There is no real hard number.
NiCd/NiCad (Nickel-Cadmium)
Similar temperature and moisture suggestions exist for NiCd: they should be stored in a cool and dry location. Between −20°C and 45°C (−4°F to 113°F) is recommended, but I have seen several sites suggesting that freezing should be avoided (guessing mostly due to potential ice crystal build-up; this could be avoided by putting the battery in an air-tight bag).
The batteries themselves should be stored either fully charged or fully discharged. Note that they have a higher discharge rate when compared to Li-Ion batteries but are not permanently affected by this discharge.
If you plan on storing these batteries long term (more than a couple of months) it is important to use them periodically to prevent crystals from forming and shorting the cells. The crystals can lower battery performance and in the extreme can cause damage that is irreversible. While modern NiCds don't have a true internal memory problem, crystal formation can affect the battery in a similar way.
NiMH (Nickel Metal Hydride)
Depending on what you read these batteries had a rough start when used for power tools. They have been getting better but were not as widely accepted until more recently due to their disadvantages.
As far as temperature and periodic use the NiMH and NiCd have the same approach. The important difference is that NiMH have the highest discharge rate and it is more important to periodically use it to prevent damage. Some people have made special cradles to allow a trickle charge to prevent this effect although that is more for your standard AA,AAA batteries.
Refer to your manual
Assuming you still have it there could be more specific instructions included with your device.
Awesome resource
I have been reading a lot about this and this website has information that covers a broad overview and more in-depth coverage of batteries if you so chose.
Good answer (upvoted), but one minor point: battery temperature is a very inaccurate way of determining charge level. A multimeter will tell you voltage. Halfway between "dead" and "fully charged" is probably optimum for Li-ion.
– Aloysius Defenestrate Jun 13 '15 at 4:07
It sounds like your drill has Nickel Cadmium (also called NiCd or NiCad) batteries. This type of battery self-discharges very rapidly and will not hold a charge as long as a Nickel-Metal Hydride (NiMH) or Lithium-based battery (such as Lithium-ion or Lithium-polymer).
Matt's answer includes a lot of technical detail on how to best store and maintain your batteries. I won't repeat that information but I will mention a few very noteworthy points regarding each of the three battery technologies commonly used for cordless tools.
When not under load or charge, a Ni–Cd battery will self-discharge
approximately 10% per month at 20°C, ranging up to 20% per month at
higher temperatures. It is possible to perform a trickle charge at
current levels just high enough to offset this discharge rate; to keep
a battery fully charged. However, if the battery is going to be stored
unused for a long period of time, it should be discharged down to at
most 40% of capacity (some manufacturers recommend fully discharging
and even short-circuiting once fully discharged), and stored in a
cool, dry environment.
You'll get more recharge cycles out of Lithium-ion (Li-ion) batteries if you recharge them once they drop to 20% capacity, and as Matt mentioned, you should store them at 30-40% charge.
NiMH batteries for tools are not very common in the US. Older NiMH technology had an internal discharge rate of about 3% per month, which is better than NiCd but not as good as Li-ion. Newer NiMH batteries have a very low internal discharge rate and can retain a charge better during long-term storage.
What does this mean in practical, real-world terms?
All this talk about storing batteries at 40% charge or topping off Li-ion batteries at 20% charge is almost purely academic for various reasons. You can have the best intentions of following the guidelines, but it's a pain in the neck to try to adapt your own behavior to suit the battery technology. The best advice I can give is to just buy a battery technology that doesn't require a babysitter. In the US, that's probably Li-ion. Outside the US, it may be Li-ion or NiMH.
As you said in your question, "there are times when life gets in the way...." Unless you're able to dedicate significant time to woodworking on an ongoing basis, you may not know when you'll use a tool again so it's impossible to know whether you should prep your batteries for long-term storage.
Some batteries or tools do have integrated meters which show you the battery's current charge level. One of my Li-ion drills has a 3-bar indicator but it isn't particularly accurate--by the time the tool reaches 1 bar, the battery is practically dead. If your tools or batteries don't have accurate charge meters or don't have any meters at all, good luck figuring out if you're charging your batteries at the right time or storing your batteries at the optimal storage capacity.
I've owned drills and other types of devices with both types of batteries that are common in the US, NiCd and Li-ion.
Every time I've gone to use my NiCd drill, it has been dead or it didn't last long enough to finish the job. I basically had to charge it every time I wanted to use it, and that takes several hours. Usually I would end up with an unplanned window of time during which I want to get something done, and that window of availability was used up just waiting for the battery to charge. When I did get to use the drill, I sometimes would deplete one battery and put it on the charger while I switched to the other battery and continued working. By the time I drained the second battery, the first one was still barely charged. Granted, there are more advanced NiCd batteries and chargers that allow for faster charging, but they are not always available for a given line of cordless tools. If they are available, they often aren't included and have to be purchased separately--in which case, you perhaps could have just chosen a more hassle-free type of battery.
In contrast, when I go to use my Li-ion drill, the battery loaded in it is always ready to go and has at least some charge, and the second battery is ready to go as soon as the first one runs out of juice. The batteries only take about 30 minutes to charge and last a long time. As soon as the first one is drained, I can put it on the charger and I know it'll be fully-charged by the time the second one runs out of juice. I've had the drill (and batteries) for several years now and haven't noticed any degradation of the batteries.
Summary
In just a few words, NiCd batteries require a lot of babysitting, NiMH batteries are better, and Li-ion are the lowest-maintenance.
No batteries do very well in extreme heat or extreme cold. During the summer, I store my batteries in my garage shop and try not to leave them in my car if I need to take them elsewhere. If your shop gets extremely hot, you may want to store your batteries someplace cooler. During the winter, I store my batteries inside my house and don't leave them in the garage or in my car to prevent them from freezing.
If you use your tools every day or even every week, NiCd might work for you. If you regularly go more than a month without using your tools, it's probably safe to say you sometimes go several months without using your tools, and you'd often need to charge your NiCd batteries before you can use them. In that case, you're probably best served by Li-ion or NiMH batteries.
Fortunately, many lines of tools now offer compatible batteries in both NiCd and Li-ion types. If you already have a tool with NiCd batteries but NiCd is not suitable for your work patterns, you may be able to "upgrade" to a Li-ion battery by buying a compatible replacement Li-ion battery (and Li-ion charger), or by buying a new tool whose batteries are compatible with your existing tool.
Do you think it might be better to compare the batteries in another question. This one is getting a little long. I was trying to avoid comparing them. Extra information is always good but a question about "What are the different types of batteries for cordless tools?" might serve better. What do you think?
– Matt Apr 13 '15 at 20:20
Ha, I didn't add that much between your "Great answer" comment and the "hold on, this is getting too long" one! But you may have a point; let me think about it. My intention was to answer the question from a practical angle but incidentally I added the comparisons for clarification.
– rob♦ Apr 14 '15 at 4:55
Your addition did not influence the second comment. I was thinking about my quarter-sawn answer and how one question was asked but i started to answer another. It made me reread this and think about the scope of the OP.
– Matt Apr 14 '15 at 14:24
Welcome to SE. If you read the original Question fully, the OP specifically mentions that he might not come back to the tool for several months. It's neither practical nor safe to leave many batteries in their chargers for extended periods, not even a few days.
– Graphus Aug 1 '18 at 11:37
Can you explain why the garage would be worst? Is your garage heated?
– Maxime Morin Apr 13 '15 at 17:10
It has to do with the temperature fluctuation. I do have a heater in the garage but run it only when I'm in there. It's attached so it's warmer than outside but still drops pretty low. There is no method for cooling.
– Dano0430 Apr 13 '15 at 18:56
Q:
Can I use two guards within one model?
I need to authenticate users in two different ways using one model. Is it possible to define two guards and choose the preferable one, e.g. at the controller level?
Also, maybe there is a better way to implement that using Laravel? Would appreciate any thoughts.
A:
Yes, it is possible. You should create two different LoginControllers with assigned routes, create two different Auth middlewares, and probably also change the RedirectIfAuthenticated middleware a little bit.
In both LoginControllers you should define your guard like so:
protected function guard()
{
    return Auth::guard('admin');
}
And if you want separate routes for your guards, then you should also define redirects for both guards in the RedirectIfAuthenticated middleware:
public function handle($request, Closure $next, $guard = null)
{
    if (Auth::guard($guard)->check()) {
        if ($guard == 'admin') {
            return redirect('/admin');
        }

        return redirect('/');
    }

    return $next($request);
}
THE PATRIOTS DIDN’T WIN THE SUPER BOWL. THE FALCONS LOST IT. BAD PLAY-CALLING COST THEM THE GAME.
The worst example of play-calling for the Falcons came after quarterback Matt Ryan had moved his team deep into Patriots territory. According to a report on the Internet, “Atlanta was already in field-goal position at the Patriots’ 23-yard line. An 11-point Falcons lead might have been insurmountable for New England, with fewer than four minutes left to play.” But Atlanta’s offensive coordinator Kyle Shanahan chose to pass instead of running the ball and “Ryan was sacked for a loss of 12 yards. Still, the Falcons were looking at a long but still makable field goal of about 53 yards. Shanahan opted for another pass, and on the play, a tackle, Jake Matthews, was penalized for holding. Pushed out of field-goal range entirely, the Falcons were forced to punt. On their next drive, the Patriots tied the game with another touchdown and a second 2-point conversion.” One has to wonder if the 49ers are making a good choice in hiring Kyle Shanahan as their head coach.
Welcome to Outdoor Wilderness Adventures
If you are interested in booking a hunting or fishing trip anywhere in the world, with over 800 destinations to choose from, contact Marvin Fremerman at [email protected] or call 417-773-2695. We will put you in direct contact with outfitters we recommend.
If you would like to review a list of our more than 800 outfitter destinations, click through the bear that appears below.
Hunting & Fishing Trips
Click Here
Personalized Counseling
Self-esteem building workshops and positive visualization seminars for athletes, sports teams, cancer patients and at-risk youth. Also available for speaking engagements.
Q:
Value not defined when inside a function
When I attempt to print a value I get an error stating that the value is not defined.
def myFunc():
    myValue = "Hello World!"

myFunc()
print(myValue)
I was expecting this to print "Hello World!", however that is not the case.
A:
Some doc (Programming FAQ, Python 3.7.4) says:
In Python, variables that are only referenced inside a function are
implicitly global. If a variable is assigned a value anywhere within
the function's body, it's assumed to be a local unless explicitly
declared as global.
Your variable is therefore local to the function, which means it doesn't exist outside of it. So if you really need to access it outside, declare it as global:
def myFunc():
    global myValue
    myValue = "Hello World!"

myFunc()  # run the function so the global is actually assigned
print(myValue)
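As a side note (not part of the original answer): if the value only needs to reach the caller, returning it is usually cleaner than declaring it global. A minimal sketch:

```python
def my_func():
    # Return the value instead of mutating global state.
    my_value = "Hello World!"
    return my_value

result = my_func()
print(result)  # Hello World!
```

This keeps the function free of hidden side effects and makes it easier to test.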
On the use of musculoskeletal models to interpret motor control strategies from performance data.
The intrinsic viscoelastic properties of muscle are central to many theories of motor control. Much of the debate over these theories hinges on varying interpretations of these muscle properties. In the present study, we describe methods whereby a comprehensive musculoskeletal model can be used to make inferences about motor control strategies that would account for behavioral data. Muscle activity and kinematic data from a monkey were recorded while the animal performed a single degree-of-freedom pointing task in the presence of pseudo-random torque perturbations. The monkey's movements were simulated by a musculoskeletal model with accurate representations of musculotendon morphometry and contractile properties. The model was used to quantify the impedance of the limb while moving rapidly, the differential action of synergistic muscles, the relative contribution of reflexes to task performance and the completeness of recorded EMG signals. Current methods to address these issues in the absence of musculoskeletal models were compared with the methods used in the present study. We conclude that musculoskeletal models and kinetic analysis can improve the interpretation of kinematic and electrophysiological data, in some cases by illuminating shortcomings of the experimental methods or underlying assumptions that may otherwise escape notice.
SnapShot: The Bacterial Cytoskeleton.
Most bacteria and archaea contain filamentous proteins and filament systems that are collectively known as the bacterial cytoskeleton, though not all of them are cytoskeletal, affect cell shape, or maintain intracellular organization. To view this SnapShot, open or download the PDF.
{
  "images" : [
    {
      "idiom" : "universal",
      "filename" : "UndoIcon.pdf"
    }
  ],
  "info" : {
    "version" : 1,
    "author" : "xcode"
  },
  "properties" : {
    "template-rendering-intent" : "template"
  }
}
Details
The Gate Towers are set to be an Al Reem Island landmark, with their beautiful and unique structure adding another element to Abu Dhabi's skyline.
The Gate Towers are a mixed-use development comprising three towers and The Arc tower, as well as a retail and leisure podium. The three towers are topped by the penthouse bridge, believed to be the highest of its kind in the world for a residential development.
Unit types: Apartments
Ownership: Freehold for locals and expats
Residential Amenities
Gate Towers have 24-hour access to a swimming pool, tennis courts, a children's playground and an exclusive parking space.
I just started storing user uploaded images on Amazon's S3. It's pretty nice because it took care of my storage problem. However, I am struggling when it comes to having the browser cache the images.
I am using django-storages. In their docs they specify that you can set request headers for an image by setting the AWS_HEADERS var in your settings. I am doing that and getting no results.
Basically when the app requests the image(s), I get a 200 EVERY TIME. ARG... when I take the browser straight to the image (copy and paste the link into a new window) I get a 200 then a 304 every time after that.
It's very frustrating because it re downloads the image every time. Some pages have up to 25 small thumbnails on them and it's redownloading everything every time the page is reloaded.
I am serving my static files using djangos staticfiles and they are working properly. I get a 200, then 304 after the file is cached.
here are my AWS settings in settings.py
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
AWS_ACCESS_KEY_ID = '***'
AWS_SECRET_ACCESS_KEY = '***'
AWS_STORAGE_BUCKET_NAME = 'foobar_uploads'
AWS_HEADERS = {
    'Expires': 'Thu, 15 Apr 2020 20:00:00 GMT',
    'Cache-Control': 'max-age=86400',
}
AWS_CALLING_FORMAT = CallingFormat.SUBDOMAIN
here are the request and response headers for when the app requests the image: (i've replaced what i feel might be sensitive information with '*')
##request##
GET /user_uploads/*****/2012/3/17/14/46/thumb_a_28_DSC_0472.jpg?Signature=FVR6T%2BXFwHMmdQ9K3n7Ppp7QxoY%3D&Expires=1332023525&AWSAccessKeyId=***** HTTP/1.1
Host: *****_user_uploads_sandbox.s3.amazonaws.com
Connection: keep-alive
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.79 Safari/535.11
Accept: */*
Referer: http://localhost:8000/m/my-photos/
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

##response##
HTTP/1.1 200 OK
x-amz-id-2: Hn3S+3gmeLHIjKCpz+2ocE6aPsLCVHh56jJYTsPHwxU98y89x+9X1Ml202evBUHT
x-amz-request-id: 528CEB880CA89AD3
Date: Sat, 17 Mar 2012 21:32:06 GMT
Cache-Control: max-age=86400
Expires: Thu, 15 Apr 2020 20:00:00 GMT
Last-Modified: Sat, 17 Mar 2012 20:46:29 GMT
ETag: "a3bc70e0c3fc0deb974edf95668e9030"
Accept-Ranges: bytes
Content-Type: image/jpeg
Content-Length: 8608
Server: AmazonS3
here are the request/response headers for when i manually request the image by copy and pasting link to the image:
##request##
GET /user_uploads/*****/2012/3/17/14/46/thumb_a_28_DSC_0472.jpg?Signature=FVR6T%2BXFwHMmdQ9K3n7Ppp7QxoY%3D&Expires=1332023525&AWSAccessKeyId=***** HTTP/1.1
Host: porlio_user_uploads_sandbox.s3.amazonaws.com
Connection: keep-alive
Cache-Control: max-age=0
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.79 Safari/535.11
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
If-None-Match: "a3bc70e0c3fc0deb974edf95668e9030"
If-Modified-Since: Sat, 17 Mar 2012 20:46:29 GMT

##response##
HTTP/1.1 304 Not Modified
x-amz-id-2: FZH0imrbNxziMznhl5zAoo38CaM7Z+TFnd8R6HtTYB3eTmVpCih+1IniKaliRo18
x-amz-request-id: 3CACF77FBB39D088
Date: Sat, 17 Mar 2012 21:33:22 GMT
Last-Modified: Sat, 17 Mar 2012 20:46:29 GMT
ETag: "a3bc70e0c3fc0deb974edf95668e9030"
Server: AmazonS3
I see there are a few differences such as the "If-None-Match:" or the "If-Modified-Since:" . I think that if I were to set those, then it should work like I'd like.
Is there an easy way to do this?
Thanks for any help!
EDIT 1: I read this article and couldn't translate it very well: http://coder.cl/2012/01/django-and-amazon-s3/comment-page-1/
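One likely culprit, though this is an assumption rather than something confirmed in the post: django-storages signs each URL with a fresh Signature/Expires query string, so the browser sees a different URL on every page load and can never reuse its cache. A hedged sketch of settings that are commonly suggested for producing stable, cacheable URLs (verify these names against your installed django-storages version):

```python
# settings.py sketch (assumes django-storages' boto backend).
from datetime import datetime, timedelta, timezone
from email.utils import format_datetime

# Signed URLs carry a fresh Signature/Expires query string on every
# request, so each one looks like a brand-new URL to the browser.
# Disabling querystring auth keeps URLs stable (objects must be public).
AWS_QUERYSTRING_AUTH = False

AWS_HEADERS = {
    # RFC 1123 date one year out, computed instead of hard-coded
    "Expires": format_datetime(datetime.now(timezone.utc) + timedelta(days=365), usegmt=True),
    "Cache-Control": "public, max-age=86400",
}
```

With stable URLs, the ETag/Last-Modified headers already being sent should let the browser get 304s on repeat requests, as it does when the image URL is pasted directly.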
Vacation Review
Reboot? Remake? Rehash? Nostalgic trip down movie memory lane? Call it what you will, there’s no denying that Vacation, the latest film in the National Lampoon’s Vacation movie series, is eager to remind us that it isn’t just a brand spanking new comedy, but in actuality part of an established series of much-loved films from the 80’s and 90’s.
Ed Helms takes the lead as Rusty Griswold, the son from the original film, now grown up and eager to take his family on the same cross-country road trip to Walley World he was subjected to in his youth. From then on, the family find themselves in all manner of awkward and embarrassing situations, as Rusty’s attempts to bond with them lead to all sorts of trouble. Pretty much in the same way as the original film!
The sense of nostalgic whoring is pretty much on display from the get-go, from the use of the iconic song ‘Holiday Road’ from the original 1983 film, to a funny yet unsubtle fourth wall breaking discussion about the ‘original Vacation’. There’s repeated jokes from the 80’s film (the most iconic being the attractive lady in the Ferrari), cameos, and pretty much little to no original plot ideas to be had. Sure the humour is darker, more un-PC and more gross, but ultimately the writers/directors are certainly eager to cash-in on the nostalgic elements!
The real question though is whether the film is funny, nostalgic or not? The answer is yes, but it certainly takes a long time to kick into top gear. There’s some chuckles to be elicited early on, but ultimately the four main characters and their comedic foibles fail to remain as funny over the course of the film. It falls to the cameos and additional cast-members to provide the bigger laughs, from Chris Hemsworth’s cringe-inducing cameo to Charlie Day’s hilarious appearance as a Grand Canyon River Rafter, which provides the biggest highlight of the film. There’s a couple of additional cameos best kept secret for the sake of laughs as well. However, like most recent american comedies, it does often feel like the film is working from a check-list of required elements to make a modern US comedy, as opposed to trying out anything really new or original.
Ultimately, regardless of the amount of nostalgic elements put on-screen, the film will stand or fall based on its gag rate. With such a slow start, an over-reliance on repeated jokes, and a lack of heart or genuine pathos for the most part of its running time, Vacation does at times drag, but occasionally rewards with some solid laughs.
From an early age, Matt Dennis dreamt of one day becoming a Power Ranger. Having achieved that dream back in the noughties, he’s now turned his hand to journalism and broadcasting.
A pop-culture fanatic, Matt also writes regularly for The Hollywood News, CultBox, and The News Hub, whilst also co-presenting the Geek Cubed podcast, which you can download from iTunes. It’s quite good. |
Mucormycosis is a life-threatening infection that occurs in patients immunocompromised by diabetic ketoacidosis, neutropenia, steroid use, and/or increased serum iron. Because of the rising prevalence of risk factors, the incidence of mucormycosis has dramatically increased (1300% over 15 years according to one source). Despite disfiguring surgery and aggressive antifungal therapy, the mortality of mucormycosis remains >50%, and approaches 100% in patients with disseminated disease. Clearly new strategies to prevent and treat mucormycosis are urgently needed. Clinical hallmarks of infection by Rhizopus oryzae, the most common cause of mucormycosis, include the unique susceptibility of patients with increased available serum iron, the propensity of the organism to invade blood vessels, and defective phagocytic function, which we hypothesize to be, at least in part, a result of iron toxicity. These clinical hallmarks underscore the critical role of iron metabolism, as well as interactions with endothelial cells lining blood vessels, in the organism's virulence strategy. We have found that R. oryzae damages endothelial cells in vitro and this process is dependent on iron. Additionally, we have cloned the R. oryzae high affinity iron permease (rFTR1) which scavenges iron from iron-depleted environments such as is found in the host. Finally we have developed clinically relevant models of infection in diabetic ketoacidotic mice. We hypothesize that iron uptake, and specifically rFTR1, is essential for R. oryzae to cause infection. To test this hypothesis, we propose to: 1) characterize the mechanism(s) by which iron regulates R. oryzae-induced endothelial cell injury; 2) construct an isogenic rftr1 null mutant and its corresponding rFTR1 complemented strain in R. oryzae by site directed mutagenesis; 3) compare the pathogenicity of the generated rftr1 to that of the wild-type and rFTR1 complemented strains in our in vitro and in vivo models of infection; and 4) elucidate the role of iron in regulating the innate host response to R. oryzae. Accomplishing these specific aims will define the role of the central elements affecting the establishment and progression of mucormycosis as it relates to iron uptake. Ultimately, a superior understanding of the pathogenesis of mucormycosis will enable development of novel therapies for this disease. Completion of the proposed studies will enable investigation of treatments that block R. oryzae uptake of iron.
Are you still struggling to get hitters on-time without losing swing performance?
Force Production Hitting Engineer Reveals How To Teach Hitters How To Predictably Get On-Time, While Maintaining High Ball Exit Speeds In A Reasonably Short Period Of Time Regardless Of Hitter Age, Gender, Or Size In Just 3 Weeks
Get Hitters ON-TIME without Losing Swing Performance with a Simple but Revolutionary Online Video Mini-Course
CLICK "Buy Now!" to put Proven Human Movement Science to Work in YOUR Swing Today...For a ONE-TIME Payment of $19.99
Buy now using our secure order form
Trying to end your struggle with getting hitters on-time, balanced, and keeping high Ball Exit Speeds, especially with off-speed and breaking pitches isn't nearly as easy as they make it seem, is it?
Stress has gotten the best of you. You have:
Tried YouTube drills and advice from "gurus" on Social Media,
Subscribed to hitting website after website, and/or
Purchased programs and books on hitting.
... and they all left you with frustrating results.
But There is Good News!
Hey there,
...my name is Joey Myers.
And yes, I know exactly how you're feeling.
These days, I've managed to achieve quite a bit of success. I'm able to teach hitters how to get on-time more often using the science of successful learning; I can apply human movement and psychological principles validated by REAL science, NOT "because-I-said-so 'bro-science,'" to hitting a ball; and I'm very lucky in that I can simplify the coaching cues so that even a 4-year-old can understand and execute them.
But I'll be honest...
Things Definitely Didn't Start Out This Way...
Before I ever achieved even a tiny taste of success, there were a lot of roadblocks. So many times, I was ready to throw in the towel.
When I first started out, I didn't know how to teach hitters to get on-time without losing swing performance on a predictable basis, I was afraid I would be made fun of if I stepped out of conventional baseball and softball knowledge circles, and I couldn't bring myself to abandon what I thought I knew about hitting because I had a "Fixed" Mindset. None of it worked out for me and I just ended up ready to quit, thinking I would never be able to get hitters on-time without losing swing performance.
Of course, you know exactly what I'm talking about, right?
Does Any of This Sound Familiar?
You've tried YouTube drills and advice from "gurus" on Social Media, which just resulted in wasted time and effort, in addition to a loss in self confidence that maybe you don't have the "magic touch".
You've subscribed to hitting website after website, which led to high frustration, unpredictable results, and annoying emails loaded with useless crud that doesn't work.
You've purchased programs and books on hitting, which just ended up in feeling like the "gurus" are just out to pull one over on you and take your money.
And now you're still at square one. I understand just how you feel.
Just Before I Was Ready to Give Up, A Stroke of Luck Changed Everything.
You can only run into so many walls before you get so frustrated that the only option left seems to be quitting. I was just about at that point when I was doing research to fix chronic tightness on the inside of my right knee, thanks to this one-sided dominant sport stemming from decades of hitting and throwing right-handed.
As soon as I discovered this, everything instantly got easier.
With this amount of success, I knew I couldn't keep it on lockdown.
Since then, I've run into multiple others who encountered the same, never-ending battle I went through. And it seemed pretty unfair to keep it to myself... especially since it's been such a huge stepping stone in my success.
So, I'd like to let you in on the "secret."
Introducing...
On-Time Hitter 2.0: Engineering The Alpha
The Special Online Video Mini-Course that will Help you get Hitters On-Time without Losing Swing Performance in the next 3 Weeks.
So what exactly is this going to do for you? Can a simple online video mini-course really turn everything around for you?
A step by step solution to pitch recognition training: This is good for you because hitters will learn how to pick up pitches sooner (first 30-feet of ball flight), so they can get on base more frequently, swing at better pitches, and get ON-TIME more often.
Value: $100.00
Effective Velocity expert Perry Husband's pitch tunnels secret: This is important because pitchers will be using the same EV system to break hitters' timing down, and your hitters will know how to counter this pitcher's weapon by getting ON-TIME and "hunting" pitches properly.
The answer to WHY I use the fine art of "variance" (and you should too!!): This is a good thing because studies reveal this simple science of successful learning principle is fundamental to building ON-TIME game-ready swings.
Value: $100.00
Insight into WHY your hitters don't have to be a pro to stride like a pro: This is great because learning the proper elements of a good stride like width, direction, and stride open or closed will help your hitters get ON-TIME more often without losing swing performance.
Value: $75.00
You Don't Have to Take My Word For It...
I've already slipped the secret to a select group of people. Truth be told, I wanted to guarantee that this would actually work time and time again.
More importantly, I wanted to make sure you'd achieve the same results I have.
And well... I'll let them speak for themselves...
"13-year pro ball player here. I just wanted to say how much I enjoy your videos/emails and I couldn't agree more with the mechanics and techniques that you teach. There is so many concepts to hitting out there that are being taught, yet I feel like you're on the right track to unlocking what it takes physically & mechanically to translate what the best in the world are doing every day. Unfortunately for me the science has come too late in my career as I approach the light at the end of my "tunnel". Keep up the great work." -TGARCIA247 (YouTube username)
"Joey...4 things: 1) Enthralled w/ your stuff. Can't even watch football in peace for watching/reading/thinking hitting. Can't wait to share in the new year with the fighting Titans of Trinity Episcopal. 2) Learning so much. Especially as it relates to and supports what I cut my teeth on from baseball rebellion. 3) Given Black Friday deal I got, think I'm going to have to send you more money. And, 4) Any chance you're going to ABCA in Nashville? Love to meet you and at least hug your neck. Best wishes to you for 2016. Keep up the awesome stuff! Now your #1 fan. Tim" -Timothy Merry
"It may take some time but the rest of the world will eventually come around to understanding that with hitting the simple movements we are trying to explore and teach are the best. I love the fact that Hitting Performance Lab uses great MLB hitters, past and present, to prove that these ideas will work, are working, and have always worked. Keep it up Joey, you have me completely converted. The proof is in the hitters we work with. Many of mine are starting achieve exit velocities that they never thought possible. Averages are up, power is up. I have three girls who will be playing D1, 6 playing D2, 2 going to Juco's this next fall and one younger girl attending several D1 camps this month. Thanks to you for doing the science and providing convincing evidence. Keep up all the good work." -Ronny Weber
Yes! I'd Love to Get in On This, But What's the Catch?
You've heard what I have to say. This online video mini-course is wonderful. But what's the catch?
I'll be honest, you could skip over this offer today and always travel to ANOTHER guru's hometown to work with them one-on-one.
But you'd have to shell out at least $350.00 per day.
But don't worry.
You won't have to pay anywhere near that amount today.
In fact, I'm actually going to sweeten the deal for you right now.
When You Sign Up Today, You'll Also Get FREE Access To...
The "Open, Closed, Or Square Stance?" Video -Value: $50.00
This will help you optimize directional force in the stance, which will help your hitters maximize all 90-degrees of "fair" territory.
The "Making Adjustments to Pitch Location" Video - Value: $50.00
This will help you apply a high pitch power approach, low pitch power approach, and how-to practice these.
"Matt Nokes Back Foot Sideways Contact Swings Drill" Video - Value: $50.00
This will help you fix hitters over-rotating the back foot, optimizing directional force, focusing on intention: "hit the ball as hard and as far as you can", and how Catapult Loading System principles help with this.
You'll Get $150.00 Worth of Great Products Completely Free!
But there is one thing...
This FLASH SALE is only good for 4 days. Once I close down the offer, we won't release it again at this price. I'm doing this because I want action-taking coaches who are ready to put this information to use.
Not to worry though.
To make your decision extremely easy, I'm going to remove all risk!
Yup, that's right. I want to guarantee you take advantage of this offer today and feel good about it.
You're protected by our 60-day money-back guarantee. If for any reason at all you're not completely satisfied, get in touch with our team and we will give you a complete refund. It's that simple.
"Yes, I Want In! How Much Will This Cost Me?"
I'm stoked for you to jump in and get started. Even more so, I can't wait for you to see the results that are waiting for you on the other side.
Here's a quick recap of everything you'll receive when you secure your copy right now:
The answer to WHY I use the fine art of "variance" (and you should too!!) - $75.00
Insight into WHY your hitters don't have to be a pro to stride like a pro - $75.00
And You're also Getting:
The "Open, Closed, Or Square Stance?" Video - $50.00
The "Making Adjustments To Pitch Location" Video - $50.00
"Matt Nokes Back Foot Sideways Contact Swings Drill" Video - $50.00
When You Secure Your Copy of On-Time Hitter 2.0: Engineering The Alpha Today, You'll Get a Total Value of $625.00 For ONLY...
CLICK "Buy Now!" to put Proven Human Movement Science to Work in YOUR Swing Today...For a ONE-TIME Payment of $19.99
Buy now using our secure order form
Before I let you go, I wanted to send out a big thank you for reading this letter.
I'm truly excited for you to get started with On-Time Hitter 2.0: Engineering The Alpha and see the results.
Talk to you soon,
Joey Myers
P.S. You could skip over this offer, but then you'll stay right where you are now. Let me help you get out of the rut you've been in. Start achieving the results you deserve right now. Grab On-Time Hitter 2.0: Engineering The Alpha by clicking the buy button above.
P.P.S. Just a reminder, this FLASH SALE is for 4 days only. But don't worry. You're protected by our money back guarantee. So, you can try it out today and enjoy peace of mind. All you have to do is click the buy button above to get started. |
1799 in Sweden
Events from the year 1799 in Sweden
Incumbents
Monarch – Gustav IV Adolf
Events
- Coffee is banned; due to the opposition, this unpopular law is abolished again in 1802.
- Maximum seu archimetria by Thomas Thorild
Births
13 March - Maria Dorothea Dunckel, playwright (died 1878)
22 March - Fredrik Vilhelm August Argelander, astronomer (died 1875)
24 March - Nils Almlöf, actor (died 1875)
31 October - Maria Fredrica von Stedingk, composer (died 1868)
9 November - Gustav, Prince of Vasa, prince (died 1877)
11 December - Charlotte Thitz, educator (died 1889)
Helena Larsdotter Westerlund, educator (died 1865)
Deaths
2 February - Dorothea Maria Lösch, war heroine (born 1730)
11 March - Anna Lisa Jermen, entrepreneur (born 1770)
25 May - Barbara Ekenberg, entrepreneur (born 1717)
Anna Elisabeth Baer, ship owner (born 1722)
Anna Maria Brandel, industrialist (born 1725)
References
Category:Years of the 18th century in Sweden |
Harold W. McGraw Prize in Education
The Harold W. McGraw, Jr. Prize in Education is awarded annually by McGraw-Hill Education, the McGraw Foundations, and Arizona State University to recognize outstanding individuals who have dedicated themselves to improving education through new approaches and whose accomplishments are making a difference in Pre-K-12 education, higher education, and learning science research around the world. The McGraw Prize was established in 1988 to honor the company founder James H. McGraw's lifelong commitment to education and to mark the corporation's 100th anniversary. In 2015, McGraw-Hill Education formed an alliance with Arizona State University to manage the annual McGraw Prize program.
As of 2015, McGraw Prize winners are determined through an open nomination process and an assessment, by a distinguished committee of jurors, of an individual's career-long impact on the field of education. The Prize includes three categories: Pre-K-12, higher education, and learning science research. Honorees receive an award of $50,000 and a bronze sculpture designed by students from Arizona State University.
Past honorees include:
CEO of EdX Anant Agarwal;
superintendent of Miami-Dade school district Alberto M. Carvalho;
CEO of the Afghan Institute of Learning Sakena Yacoobi;
founder of Khan Academy Sal Khan;
former U.S. Secretary of Education Richard Riley;
former U.S. Secretary of Education Rod Paige;
James B. Hunt, Jr., former Governor of North Carolina;
Ellen Moir, co-founder and executive director, New Teacher Center at the University of California, Santa Cruz;
James P. Comer, M.D., Maurice Falk Professor of Child Psychiatry, Yale University Child Study Center;
Mary E. Diaz, Ph.D., Dean of Education, Alverno College;
Christopher Cerf, a key creative force behind Sesame Street;
Dennis Littky, co-founder and co-director of The Big Picture Company, The Met School and College Unbound;
and Barbara Bush, founder of the Barbara Bush Foundation for Family Literacy and former First Lady.
External links
Harold W. McGraw, Jr. Prize in Education Website
McGraw Prize Categories
Nominations Being Accepted for the 2017 McGraw Prize
Category:Awards established in 1988
Category:American education awards
Harold W. McGraw, Jr. Prize in Education Winners |
<?xml version="1.0" encoding="utf-8"?>
<shape xmlns:android="http://schemas.android.com/apk/res/android"
android:sha<caret>/> |
No. 119,147
IN THE COURT OF APPEALS OF THE STATE OF KANSAS
PATTI MORGAN, Individually and as Natural Mother and Heir-at-Law of
ROBERT DOUGLAS COOK, and the Estate of ROBERT DOUGLAS COOK,
By and Through PATTI MORGAN as Special Administrator of the Estate,
Appellants,
v.
HEALING HANDS HOME HEALTH CARE, LLC,
Appellee.
SYLLABUS BY THE COURT
1.
A plaintiff may use a statute to establish a duty of care in a simple negligence case
even if that statute does not provide a private right of action.
2.
In a simple negligence suit, a plaintiff may use a statute to establish a duty of care
and violation of a statutory requirement to establish breach of that duty so long as the
injured party was a member of the class the statute sought to protect and the injury was of
the character the Legislature sought to protect the public against.
3.
A person need not have been previously declared incompetent, appointed a
guardian, or appointed a conservator to qualify as an "adult" within the meaning of
K.S.A. 39-1430(a) and K.S.A. 39-1431(a).
4.
Under the facts of this case, it was error for the trial court to grant partial summary
judgment in favor of defendant home healthcare company in ruling that K.S.A. 39-
1431(a) was inapplicable as a matter of law when establishing a duty in a negligence
action for the death of a person diagnosed with schizophrenia and diabetes who was
receiving twice daily in-home nursing visits from a home healthcare company.
Appeal from Sedgwick District Court; WARREN M. WILBERT, judge. Opinion filed October 11,
2019. Reversed and case remanded for new trial.
Thomas M. Warner, Jr., of Warner Law Offices, P.A., of Wichita, for appellants.
Stephen H. Netherton and Don D. Gribble II, of Hite, Fanning & Honeyman L.L.P., of Wichita,
for appellee.
Before BUSER, P.J., GREEN and MALONE, JJ.
GREEN, J.: Robert Cook had diabetes and "chronic, severe" paranoid
schizophrenia, and received twice daily in-home nursing visits from Healing Hands
Home Healthcare ("Healing Hands") nurses. He died of hyperthermia in June 2013. After
Cook died, his mother Patti Morgan sued Healing Hands for negligence. The trial court
granted Healing Hands' partial summary judgment motion, ruling that Kansas' mandatory
reporter statute, K.S.A. 39-1431(a), could not serve as a basis for a duty Morgan alleged
Healing Hands owed Cook and breached. After a trial, the jury found Healing Hands bore
no fault for Cook's death. Morgan now appeals. She argues that the trial court erred by
granting Healing Hands' partial summary judgment motion and erroneously failed to give
one of her requested jury instructions. For the reasons stated later, we reverse the trial
court's partial summary judgment ruling, but we find no error in the trial court's jury
instructions ruling.
Cook's schizophrenia made him "forgetful" and gave him "daily auditory
hallucinations and delusions." Cook was prescribed multiple medications for his
condition, including clozapine for his schizophrenia. Clozapine's side effects include
increased heart rate and a decreased capacity to tolerate heat. Cook's primary physician
ordered that Cook receive home healthcare; Healing Hands was Cook's home healthcare
provider. Nurses from Healing Hands were supposed to visit Cook twice every day.
Cook's psychiatrist signed off on a care plan for Cook every two months. The care
plans were to be facilitated by Healing Hands. Cook's final care plan was issued on May
27, 2013. The care plan noted that Cook had a potential for "self harm." It stated that
Cook "is alert and oriented but forgetful cont[inue]s to be delusional and hallucinate has
poor personal hygiene." It listed that for the last 60 days, Cook's vitals were as follows:
systolic blood pressure of 100-140; diastolic blood pressure of 62-84; pulse rate of 70-80;
respirations of 18-22; and blood sugar ranges between 80 and 220. The pulse rate listed in
this care plan did not match Cook's actual pulse readings for the past 60 days.
The care plan ordered that Healing Hands do the following:
- Evaluate Cook's cardiopulmonary status daily.
- Evaluate Cook's eating, hydration, and restroom habits as needed.
- Evaluate for infection as needed.
- Set up Cook's medications weekly and remind him to take them.
- Evaluate Cook's blood sugar on Monday, Wednesday, and Friday.
- Draw labs as needed.
- Evaluate and teach Cook about diabetic diet habits and care.
Beginning on May 20, 2013, Cook's nurses noted that the temperature in Cook's
apartment was very warm. The nurses noted the heat, Cook's hygiene, and Cook's
continued failure to turn on his air conditioning as follows:
- May 20, 2013: "apt. very warm & seems as if client may not use deodorant. Has a
potent smell to him."
- May 30, 2013: "Instructed client on personal hygiene."
- June 1, 2013: "house is very warm."
- June 4, 2013: "Pt [patient] very unkempt."
- June 5, 2013 morning: "Instructed client on personal hygiene. . . . Warm in his apt.
States he is ok, will turn on A/C later."
- June 5, 2013 evening: "very warm in apt. States he is comfortable."
- June 7, 2013: "Much better hygiene this AM. Not so warm in his apt."
- June 9, 2013: "unkempt odor of pt. house hombly [sic] warm."
- June 11, 2013: "very warm in his apt. States he is ok."
- June 12, 2013: "Strong B.O. Very warm in apt. [Instructed to [increase] fluid
intake."
- June 15, 2013: "Apt very warm P.T. [patient] states he's comfortable."
- June 17, 2013: "Talked to client about his apt being so warm. States it's ok for
himself."
- June 18, 2013: "Pt very unkept. Strong B.O."
- June 19, 2013: "Very warm in his apartment. States he is fine [with] it."
- June 20, 2013: "Has very poor hygiene, strong body odor. Keeps house very
warm. States he is ok, going to take a shower."
- June 21, 2013: "Very warm in apt. States he will use AC later. Informed him about
Red Cross giving out fans."
- June 22, 2013: "Pt [patient] appears anxious and house was hot."
- June 23, 2013: "Very warm in his apt. No AC on. Sweating heavily. Poor
hygiene. . . . Instructed client about it being so hot. States it's ok."
- June 24, 2013 morning: "Very poor hygiene. Sleeps in his clothes. Client states he
is ok with his apt being warm."
- June 24, 2013 evening: "Pt [patient] very unkept. Strong B.O."
- June 25, 2013 morning: "Instructed client is not going to use A/C. Sweating
heavy. Needs to replenish his fluids. Drink some Gatorade. Very bad body odor.
Sleeps in his clothes."
- June 25, 2013 evening: "[Instructed] to [increase] fluid intake to prevent
dehydration. Denies being overly warm."
- June 26, 2013 morning: "His apt is very warm inside. He smells really foul.
Instructed about heat [and] his sweating. Client states he is comfortable in his apt."
- June 26, 2013 evening: "Pt [patient] very unkept. Strong B.O."
- June 27, 2013 morning: "Very warm in his apt. Not using his A/C. States he's
ok. . . . Instructed client it's too warm in here, use you're A/C. States he will later."
- June 27, 2013 evening: "Apt very warm. Pt [patient] states he is comfortable.
Discussed drinking Gatorade or similar fluids."
Additionally, while Cook's most recent care plan listed his typical pulse range in the
previous 60 days as 70-80 beats per minute, the nurses' notes show that during June 2013,
his pulse was significantly higher. The nurses recorded his pulse as follows:
- May 30, 2013. Morning: 119; Evening: 100.
- May 31, 2013. Morning: no visit; Evening: 120.
- June 1, 2013. Morning: 119; Evening: 118.
- June 2, 2013. Morning: 110; Evening: 114.
- June 3, 2013. Morning: 113; Evening: no visit.
- June 4, 2013. Morning: 121; Evening: 123.
- June 5, 2013. Morning: 122; Evening: 109.
- June 6, 2013. Morning: 112; Evening: 117.
- June 7, 2013. Morning: 117; Evening: 106.
- June 8, 2013. Morning: 114; Evening: 118.
- June 9, 2013. Morning: 113; Evening: 117.
- June 10, 2013. Morning: 122; Evening: 116.
- June 11, 2013. Morning: 117; Evening: no visit.
- June 12, 2013. Morning: 100; Evening: 133.
- June 13, 2013. Morning: 126; Evening: 118.
- June 14, 2013. Morning: 122; Evening: 106.
- June 15, 2013. Morning: 120; Evening: 112.
- June 16, 2013. Morning: 102; Evening: 109.
- June 17, 2013. Morning: 112; Evening: No pulse listed.
- June 18, 2013. Morning: 117; Evening: 122.
- June 19, 2013. Morning: 113; Evening: 113.
- June 20, 2013. Morning: 126; Evening: No pulse listed.
- June 21, 2013. Morning: 130; Evening: 111.
- June 22, 2013. Morning: 119; Evening: No pulse listed.
- June 23, 2013. Morning: 130; Evening: 114.
- June 24, 2013. Morning: 69; Evening: 114.
- June 25, 2013. Morning: 130; Evening: No pulse listed.
- June 26, 2013. Morning: 125; Evening: No pulse listed.
- June 27, 2013. Morning: 120; Evening: 146.
On the morning of June 27, 2013, however, the pulse listed in Cook's blood
pressure log, which is also supposed to be taken by the nurse at the same time, was 150,
not 120. According to Debra Mann, the nurse who performed the June 27, 2013 morning
visit, this discrepancy is because she did not have her notes with her during the morning
visit.
On June 26 and 27, 2013, Sedgwick County, Kansas, was under a heat advisory.
On June 26, 2013, the high temperature was 101. On June 27, 2013, the high temperature
was 104 and the heat index reached 117 for several hours. Cook died sometime in the
night between June 27 and June 28, 2013. Healing Hands' report on Cook's death stated:
"[D]ue to extreme heat and no air client died in night at home, family found him." Cook's
death certificate listed "probable hyperthermia" as his cause of death.
After Cook's death, his apartment complex discovered that the air conditioner in
his apartment would not have worked even if he turned it on because "the disconnect was
removed" and additional parts needed to be replaced.
On June 24, 2015, Morgan brought a wrongful death and survival action against
Healing Hands, alleging that Healing Hands' negligence caused Cook's death.
"To recover for negligence, the plaintiff must prove the existence of a duty,
breach of that duty, injury, and a causal connection between the duty breached and the
injury suffered. Whether a duty exists is a question of law. Whether the duty has been
breached is a question of fact." Reynolds v. Kansas Dept. of Transportation, 273 Kan.
261, Syl. ¶ 1, 43 P.3d 799 (2002).
Did the Trial Court Err in Granting Partial Summary Judgment on the Mandatory
Reporter Statute?
On November 1, 2017, Healing Hands moved for partial summary judgment.
Healing Hands sought partial summary judgment on two issues: (1) that it legally had no
duty to notify Morgan about Cook's condition, and (2) that Kansas' mandatory reporter
statute, K.S.A. 39-1431(a), did not require Healing Hands or its employees to report
Cook's condition to law enforcement or state authorities.
The relevant portions of K.S.A. 39-1431 read as follows:
"(a) Any person who is . . . a licensed professional nurse, a licensed practical
nurse, . . . [or] the chief administrative officer of a licensed home health agency . . . who
has reasonable cause to believe that an adult is being or has been abused, neglected or
exploited or is in need of protective services shall report, immediately from receipt of the
information, such information or cause a report of such information to be made in any
reasonable manner. . . ."
Under this statute, reports must be made to the Kansas Department for Children and
Families or law enforcement. As a result, Morgan is incorrect when she asserts that
Healing Hands had a statutory duty to include her in the reporting requirement of K.S.A.
39-1431.
Moreover, it is a class B misdemeanor for a mandatory reporter to fail to make a
report when they have reasonable cause to believe an adult is abused, neglected,
exploited, or in need of protective services. K.S.A. 39-1431(e).
Healing Hands argued that it had no duty to report under the mandatory reporter
statute of K.S.A. 39-1431 because Cook lived independently and managed his own care;
he did not have a guardian or conservator, nor did Morgan act as his power of attorney.
Further, Healing Hands argued that the information about Cook's apartment and his
behavior was readily available to Morgan because she lived in Wichita and could visit
and call him.
Healing Hands then argued that the mandatory reporter statute did not apply to the
facts of the case because Cook was not an "adult" about whom reporting was mandated
within the meaning of the statute.
K.S.A. 39-1430(a) is a stipulative definition regarding the term "adult" within the
mandatory reporter statute of K.S.A. 39-1431(a):
"'Adult' means an individual 18 years of age or older alleged to be unable to
protect their own interest and who is harmed or threatened with harm through action or
inaction by either another individual or through their own action or inaction when (1)
such person is residing in such person's own home, the home of a family member or the
home of a friend, (2) such person resides in an adult family home as defined in K.S.A.
39-1501 and amendments thereto, or (3) such person is receiving services through a
provider of community services and affiliates thereof operated or funded by the
department of social and rehabilitation services or the department on aging or a
residential facility licensed pursuant to K.S.A. 75-3307b and amendments thereto. Such
term shall not include persons to whom K.S.A. 39-1401 et seq. and amendments thereto
apply."
Healing Hands argued that Cook was not "alleged to be unable to protect [his]
own interests" because there had been no allegations that Cook could not protect his own
interests before the events at issue in the suit. Healing Hands contended that in order for
the mandated reporter statute to apply, allegations that the adult was unable to protect his
or her own interests must come "prior to, or no later than at, the time that a harmful event
occurred." Healing Hands reiterated that because Cook lived independently and had no
conservator or guardian, he had not previously been alleged to be unable to protect his
own interests.
Further, Healing Hands argued that Cook's condition before his death did not
render him in need of protective services. K.S.A. 39-1430(f) states, "'[i]n need of
protective services' means that an adult is unable to provide for or obtain services which
are necessary to maintain physical or mental health or both." Subsection (g) of the same
statute states the following:
"'Services which are necessary to maintain physical or mental health or both'
include, but are not limited to, the provision of medical care for physical and mental
health needs, the relocation of an adult to a facility or institution able to offer such care,
assistance in personal hygiene, food, clothing, adequately heated and ventilated shelter,
protection from health and safety hazards, protection from maltreatment the result of
which includes, but is not limited to, malnutrition, deprivation of necessities or physical
punishment and transportation necessary to secure any of the above stated needs, except
that this term shall not include taking such person into custody without consent except as
provided in this act." K.S.A. 39-1430(g).
Healing Hands argued that Cook was not "unable to provide for or obtain"
"adequately . . . ventilated shelter" because he had managed his utilities appropriately
while living independently for the past decade. Healing Hands contended that "[t]he
reason why Mr. Cook's residence was warm was because Mr. Cook elected not to turn on
his air conditioner, despite defendant's nurses telling him to do so."
Morgan opposed Healing Hands' motion for partial summary judgment. First,
Morgan clarified that she was not arguing that Healing Hands owed a duty to her. Rather,
she argued that one of the "interventions" available to Healing Hands to alleviate the
dangerous conditions in Cook's apartment was to call her so that she could intervene, and
that failure to utilize this possible intervention constituted part of Healing Hands' breach
of its duty to Cook.
Next, Morgan argued that Cook was an adult covered by the mandatory reporter
statute of K.S.A. 39-1431(a). She argued that K.S.A. 39-1430 et seq. did not require that
an adult first be declared incompetent or appointed a guardian or conservator before they
were covered by K.S.A. 39-1431(a).
Morgan also argued that she alleged facts sufficient to establish both neglect and a
need for protective services in her pretrial questionnaire wherein she alleged the
following:
"Robert was an adult who was harmed or threatened with harm which was mental and
physical in nature, through action or inaction of either another individual, in this case
agents, employees and officers of Defendant, or through his own action or inaction . . . .
By staying in an apartment that was unventilated and extremely hot with an outside heat
index of 117 degrees during a dangerous heat advisory that had been disseminated to the
public by the National Weather Service, Robert unknowingly put himself at great risk for
harm or death."
Finally, Morgan argued that K.S.A. 39-1430 et seq. could serve as a basis for
establishing a duty.
The trial court apparently held a hearing on the partial summary judgment motion;
no transcript of this hearing appears in the record. The trial court ruled that because
Morgan had agreed that Healing Hands owed her no duty, that issue was moot. With
respect to the second issue, the trial court ruled that "the defendant in this case, though a
mandatory reporter, had no duty to report the decedent's specific circumstances to DCF."
The trial court ruled as follows:
"The evidence does not support the plaintiff's contention that Mr. Cook's
situation met the requirements of K.S.A. 39-1430, et. seq., thereby imposing a duty on
the defendant to report decedent's living situation to Law Enforcement or DCF. In this
case, the hazard for which Mr. Cook was alleged to be in need of protection was his
decision to stay in a hot apartment during an unusually hot June. Mr. Cook had no
caretaker. He was an adult living independently. Evidence has been provided by both
sides that he was capable of taking care of himself and had done so successfully for 16
years prior to his death. He had managed his diabetes, his finances etc. with no
assistance. Also, there was no indication that on that day the air conditioning was known
by the defendant to be inoperable or that the hot environment was not of Mr. Cook's own
choice.
"Therefore, the Court finds that Mr. Cook's situation did not require the
defendant to comply with the mandatory reporting requirements of K.S.A. 39-1431 on
June 27, 2013."
On January 23, 2018, a jury trial began. During the trial, Morgan moved for the
trial court to reconsider its ruling granting Healing Hands partial summary judgment. She
argued that the trial court erred when it stated, "[t]he parties have agreed there is no duty
by the Defendant to report to Patti Morgan." Morgan again clarified she asserted that
Healing Hands had a duty to call her as part of its duty to Cook. She also argued that
"[t]he only reasonable conclusion that can be drawn from the plain, easy to understand
language of K.S.A. 39-1430 is that the act covers Robert Cook as he is an 'adult.'"
Further, Morgan argued that the trial court's partial summary judgment ruling "did not
properly apply the facts of this case to K.S.A. 39-1430 et. seq."
The trial court heard arguments on the motion to reconsider between testimony
from defense witnesses. The trial court and the parties agreed that the earlier order
misstated Morgan's argument that Healing Hands should have called her out of a duty of
care to Cook, and that Morgan could still make this argument.
With respect to her second argument, Morgan contended that under the plain
meaning of the statute, K.S.A. 39-1430 et seq. applied to the case. Healing Hands argued
that it was unfair for Morgan to move for reconsideration so far into trial. On the merits,
Healing Hands stated it incorporated all of its arguments from the initial partial summary
judgment motion and hearing. The trial court denied Morgan's motion for reconsideration
for two reasons. First, the trial court stated that the prejudice caused to Healing Hands by
reversing the ruling so late in the trial outweighed the benefits. Second, the trial court
stated the following:
"Because, again, if he was an adult as defined, mandatory—there's no doubt in my mind
as a matter of law they're mandatory reporters and they're required. But the context of
[the trial court judge who heard the original motion's] ruling is at summary judgment it
wasn't demonstrated that he was an adult as defined by the facts of the case, and that's
where alleged comes in.
"So I'm not saying that at some point it could have been proven that he was a
mandatory reporter and I'm not saying it could or could not. I'm just saying at this point
we need to go forward with the case."
After denying Morgan's motion for reconsideration, the trial court heard testimony
from Morgan's remaining witnesses and a single defense witness. After both parties'
closing statements, the case went to the jury. The jury found that Healing Hands bore no
fault for Cook's death. Morgan timely appealed.
On appeal, Morgan argues that the trial court made an error of law by reading
requirements into the statute that are not there. Morgan also contends that the trial court
erroneously resolved genuine issues of material fact in favor of Healing Hands.
Our standard of review is familiar. Summary judgment may be granted only if no
genuine issue of material fact exists:
"'"Summary judgment is appropriate when the pleadings, depositions, answers to
interrogatories, and admissions on file, together with the affidavits, show that there is no
genuine issue as to any material fact and that the moving party is entitled to judgment as
a matter of law. The trial court is required to resolve all facts and inferences which may
reasonably be drawn from the evidence in favor of the party against whom the ruling is
sought. When opposing a motion for summary judgment, an adverse party must come
forward with evidence to establish a dispute as to a material fact. In order to preclude
summary judgment, the facts subject to the dispute must be material to the conclusive
issues in the case. On appeal, we apply the same rules and when we find reasonable
minds could differ as to the conclusions drawn from the evidence, summary judgment
must be denied."' [Citation omitted.]" Patterson v. Cowley County, Kansas, 307 Kan.
616, 621, 413 P.3d 432 (2018).
To the extent that Morgan alleges the trial court "read in" nonstatutory
requirements to the mandatory reporter statute, this summary judgment argument is an
issue of statutory interpretation. Appellate courts have unlimited review over questions of
statutory interpretation. Neighbor v. Westar Energy, Inc., 301 Kan. 916, 918, 349 P.3d
469 (2015). When interpreting statutes, appellate courts first attempt to ascertain the
Legislature's intent through the enacted statutory language, giving common words their
ordinary meanings. Ullery v. Othick, 304 Kan. 405, 409, 372 P.3d 1135 (2016).
K.S.A. 39-1430
Chapter 39 of the Kansas Statutes Annotated deals with the social welfare of
mentally ill, incapacitated, and dependent persons. Article 14 of this chapter dictates
reporting requirements in the event of "abuse, neglect or exploitation of certain persons."
K.S.A. 39-1431(a) identifies the mandatory reporters:
"Any person who is licensed to practice any branch of the healing arts, . . . a
licensed professional nurse, a licensed practical nurse, . . . an independent living
counselor and the chief administrative officer of a licensed home health agency . . . who
has reasonable cause to believe that an adult is being or has been abused, neglected or
exploited or is in need of protective services shall report, immediately from receipt of the
information, such information or cause a report of such information to be made in any
reasonable manner."
K.S.A. 39-1430(a) defines the term "adult" within the mandatory reporter statute
of K.S.A. 39-1431(a):
"'Adult' means an individual 18 years of age or older alleged to be unable to
protect their own interest and who is harmed or threatened with harm through action or
inaction by either another individual or through their own action or inaction when (1)
such person is residing in such person's own home, the home of a family member or the
home of a friend, (2) such person resides in an adult family home as defined in K.S.A.
39-1501 and amendments thereto, or (3) such person is receiving services through a
provider of community services and affiliates thereof operated or funded by the
department of social and rehabilitation services or the department on aging or a
residential facility licensed pursuant to K.S.A. 75-3307b and amendments thereto. Such
term shall not include persons to whom K.S.A. 39-1401 et seq. and amendments thereto
apply."
K.S.A. 39-1430(g) is an enlarging definition of the services that are needed to
maintain physical or mental health or both of an adult as defined under K.S.A. 39-
1430(a):
"'Services which are necessary to maintain physical or mental health or both'
include, but are not limited to, the provision of medical care for physical and mental
health needs, the relocation of an adult to a facility or institution able to offer such care,
assistance in personal hygiene, food, clothing, adequately heated and ventilated shelter,
protection from health and safety hazards, protection from maltreatment the result of
which includes, but is not limited to, malnutrition, deprivation of necessities or physical
punishment and transportation necessary to secure any of the above stated needs, except
that this term shall not include taking such person into custody without consent except as
provided in this act."
Healing Hands argued and the trial court agreed that under K.S.A. 39-1430(a),
"alleged to be unable to protect their own interest" meant allegations that Cook was
unable to protect his own interests must have come "prior to, or no later than at, the time
that a harmful event occurred." Healing Hands reiterated that because Cook lived
independently and had no conservator or guardian, he had not previously been alleged to
be unable to protect his own interests. Healing Hands reiterates those arguments on
appeal and argues that the past tense of the word "alleged" means that such allegations
must occur before the conduct at the heart of a mandatory report.
On the other hand, Morgan takes issue with this interpretation of the word
"alleged." She argues the following: "It is non-sensical to suggest that the Act only
applies to those individuals who have been previously determined to be incompetent.
Nowhere in the definition section of the Act or anywhere else in the Act does it state
this."
First, it is worth noting that K.S.A. 39-1430(a) is a stipulative definition.
Stipulative definitions are custom tailored to the particular needs of the document in
which they appear. Because a stipulative definition is both complete and exclusive, it
must contain all the possibilities the drafter has in mind. Here, K.S.A. 39-1430(a) is a stipulative
definition because it uses the verb "means." See Child, Drafting Legal Documents:
Materials and Problems, Stipulative Definitions, p. 237 (1988).
For example, K.S.A. 39-1430(a) states the following:
"'Adult' means an individual 18 years of age or older alleged to be unable to
protect their own interest and who is harmed or threatened with harm through action or
inaction by either another individual or through their own action or inaction when (1)
such person is residing in such person's own home, the home of a family member or the
home of a friend, (2) such person resides in an adult family home as defined in K.S.A.
39-1501 and amendments thereto, or (3) such person is receiving services through a
provider of community services and affiliates thereof operated or funded by the
department of social and rehabilitative services or the department on aging or a
residential facility licensed pursuant to K.S.A. 75-3307b and amendments thereto. Such
term shall not include persons to whom K.S.A. 39-1401 et seq. and amendments thereto
apply."
This stipulative definition gives the term "adult" a particular and a restrictive meaning.
The stipulative definition also sets out what the State would have to allege
according to K.S.A. 39-1431(e) to establish a prima facie case against "[a]ny person
required to report information or cause a report of information to be made under [K.S.A.
39-1431(a)] who knowingly fails to make such report or cause[s] such report not to be
made . . . ." Let us assume that the State wanted to file an action against a mandatory
reporter for knowingly failing to report abuse, neglect, or need of protective services
information under K.S.A. 39-1431(a). The stipulative definition of the term "adult" under
K.S.A. 39-1430(a) would require the State to file a complaint alleging that the adult is an
individual 18 years or older who is alleged to be unable to protect his or her own
interests.
On the other hand, Healing Hands' argument and the trial court's ruling set up a
condition precedent to the State proceeding with a viable action against a mandatory
reporter. For example, under Healing Hands' contention and the trial court's ruling, it
would require the State to show that Cook was unable to protect his own interests before
this "harmful event occurred" to maintain an action under K.S.A. 39-1431. If Healing
Hands' and the trial court's interpretation of the term "alleged" in K.S.A. 39-1430(a) is
correct, this would mean that all initial mandatory reporters could not be prosecuted
under K.S.A. 39-1431(e) for knowingly failing to report abuse, neglect, or need of
protective services information unless the State could allege the following: That the
individual is someone who has been alleged to be unable to protect his or her own
interests, which occurred before the conduct at the heart of this mandatory report. We do
not believe our Legislature would have enacted such legislation that gives all initial
mandatory reporters, to borrow a golf phrase, a mulligan (a free shot to a golfer whose
previous shot was poorly played) in situations similar to what we have in this case.
Moreover, the language at issue—"alleged to be unable to protect their own
interest"—is used in K.S.A. 39-1430(a) to modify the noun phrase "an individual 18 years
of age or older." In addition to describing the noun "individual," the word "alleged" also
describes the subject adult and completes the meaning of the verb "means": "an
individual 18 years of age or older alleged to be unable to protect their own interest . . . ."
(Emphasis added.) A word that is used in this way is called a predicate adjective.
Unlike the perfect tenses (have, has, or had), the passive voice sometimes gives no
indication of the timing of events. Here, the statutory language in K.S.A. 39-1430(a) does
not say much about the timing of events. Nevertheless, the use of the word "alleged" as a
predicate adjective strongly supports the conclusion that it can refer to events occurring in the future. For
example, the United States Supreme Court has stated that past participles "describe the
present state of a thing," just the way "adjectives [] describe the present state of the nouns
they modify." Henson v. Santander Consumer USA, Inc., 582 U.S. ___, 137 S. Ct. 1718,
1722, 198 L. Ed. 2d 177 (2017). In addition, other courts have concluded that past
participles can refer to future events. For example, in Lang v. United States, 133 F. 201,
204 (7th Cir. 1904), the court stated that the past participle "begun" in the phrase
"prosecution . . . begun under any existing [a]ct" does not "express[] that verb in its past
tense." Indeed, the court held it "perform[s] solely the function of a . . . verbal adjective,
qualifying any prosecutions in mind, pending or future." (Emphasis added.) 133 F. at
204.
K.S.A. 39-1430(a)'s statutory purpose is plainly apparent on its face. It seeks to
protect adult individuals, 18 years of age or older, who are "unable to protect their own
interest and who [are] harmed or threatened with harm through action or inaction by
either another individual or through their own action or inaction . . . ."
We must ask this question: How would Healing Hands' and the trial court's
temporal limitation further the statutory objectives of K.S.A. 39-1430(a) and 39-1431(a)
and (e)? It would not do so in any way. Moreover, what reason would our Legislature
have for silently writing into K.S.A. 39-1430(a) language a temporal distinction with its
devastating consequences? Absolutely none.
Finally, another way to ascertain the Legislature's intent when it used the word
"alleged" is to look at how the word is used in other portions of the same Act. Alleged
appears in two other places in Article 14: K.S.A. 39-1411(b) and K.S.A. 39-1433.
K.S.A. 39-1411(b) states:
"(b) The secretary of health and environment shall forward any finding of abuse,
neglect or exploitation alleged to be committed by a provider of services licensed,
registered or otherwise authorized to provide services in this state to the appropriate state
authority which regulates such provider. The appropriate state regulatory authority, after
notice to the alleged perpetrator and a hearing on such matter if requested by the alleged
perpetrator, may consider the finding in any disciplinary action taken with respect to the
provider of services under the jurisdiction of such authority. The secretary of health and
environment may consider the finding of abuse, neglect or exploitation in any licensing
action taken with respect to any adult care home or medical care facility under the
jurisdiction of the secretary."
It is highly unlikely that the Legislature would have intended the phrase "alleged
to be committed by a provider of services" to mean allegations predating those central to
the secretary's finding. Thus, we conclude that when the Legislature similarly used the
word "alleged" in K.S.A. 39-1430(a), it did not impose a temporal requirement that the
allegations that a person was unable to protect his or her own interests must occur before
the conduct to be reported. Moreover, we conclude that the trial court committed an error
of law when it granted a partial summary judgment to Healing Hands with respect to its
interpretation of K.S.A. 39-1430 because Cook had never previously been adjudicated
incompetent or appointed a guardian or conservator. The Act itself, by its plain language,
imposes no such temporal requirement.
Additionally, Healing Hands argues that the mandatory reporter statute does not
apply as a matter of law because only "a subjective belief by the mandated reporter"
triggers a duty to report and Healing Hands' nurses did not believe Cook was abused,
neglected, or needed protective services. The plain language of K.S.A. 39-1431(a) refutes
this argument. It clearly states that any mandatory reporter "who has reasonable cause to
believe that an adult is being or has been abused, neglected or exploited or is in need of
protective services shall report, immediately from receipt of the information, such
information or cause a report of such information to be made in any reasonable manner."
(Emphasis added.) Thus, the duty to report is triggered when a mandatory reporter has
"reasonable cause to believe" a covered adult is being abused, neglected, or needs
services, not just when a mandatory reporter subjectively believes a report is called for.
This is a controverted fact question best left to the jury.
Genuine Issues of Material Fact
Morgan contends that "[n]othing in the Act excepts its application simply because
there is evidence that the 'Adult' had been able to take care of him or herself earlier in
their life." Further, while the trial court ruled that "[e]vidence has been provided by both
sides that he was capable of taking care of himself and had done so successfully for 16
years prior to his death[; h]e had managed his diabetes, his finances etc. with no
assistance," Morgan contends that these were not uncontroverted facts. Rather, she put
forth evidence that Cook was not capable of taking care of himself: "Someone who
requires nurses to see him two times a day, 7 days a week, 365 days per year due to his
schizophrenia, clozapine use and other medical conditions is not a normal 'adult'." On
appeal, Healing Hands still maintains that "the undisputed facts show that Mr. Cook
could protect his own interests."
Morgan's argument is compelling. Morgan submitted evidence that Cook had
severe schizophrenia since at least 1988, and had an outstanding care plan for twice daily
nurse visits to ensure he took his medications, monitor his cardiopulmonary status, and
ensure Cook avoided self-harm. The trial court held that it was an uncontroverted fact
that Cook was "capable of taking care of himself and had done so successfully for 16
years" and "managed his diabetes, his finances etc. with no assistance." This was, in
actuality, a controverted fact because of the evidence Morgan put forward.
Healing Hands next argues that even if Cook was an adult within the meaning of
the mandatory reporter statute, his circumstances nevertheless did not trigger a duty to
report as a matter of law. Healing Hands argues that Cook was not "unable to provide for
or obtain . . . adequately heated and ventilated shelter" so as to render him in need of
protective services.
Morgan, however, presented evidence controverting Healing Hands' claims on this
issue. A repairman for Cook's apartment complex testified that multiple parts on Cook's
air conditioner were broken and the unit would not cool air. Additionally, with respect to
whether Cook was "unable to provide for or obtain" well-ventilated shelter, Morgan
presented evidence about Cook's severe schizophrenia which included delusions and
hallucinations. This created a controverted fact as to whether, when his death occurred,
Cook could adequately provide for himself.
Statutes May Serve as the Basis for a Duty of Care Even If They Do Not Include a
Private Right of Action
Healing Hands also argues that the mandatory reporter statute cannot be used to
establish a standard of care in a negligence case. Nevertheless, Morgan cites Shirley v.
Glass, 297 Kan. 888, 308 P.3d 1 (2013), as relevant support for her claim that the
mandatory reporter statute can serve as the basis of a duty. In Shirley, our Supreme Court
held that a statute may serve as the source of a legal duty in a negligence action. For example, a
plaintiff in a negligence action must show four things: duty by the defendant, breach of
that duty, causation between the breach and the plaintiff's injury, and damages suffered
by the plaintiff. 297 Kan. at 894. In Shirley, our Supreme Court held that statutes can
serve as a duty in a negligence case so long as the plaintiff is a member of the class the
Legislature sought to protect with the statute and the injury is the kind the Legislature
sought to prevent. 297 Kan. at 895-97.
Healing Hands nevertheless argues that the mandatory reporter statute did not
create a duty here as a matter of law because the nurses "did not observe an affected adult
with reportable circumstances." As a result, Healing Hands maintains that the Shirley
holding is distinguishable from this case. We disagree. Healing Hands analogizes this
case to Hackler v. U.S.D. No. 500, 245 Kan. 295, 777 P.2d 839 (1989). In Hackler, a
child was hit by a car as he crossed the street to his home after he was dropped off by his
school bus on the side of the street opposite his home. His parent sued the school district
on his behalf, arguing that a regulation promulgated by the secretary of transportation
imposed a duty for the bus driver to require the child to cross the street in front of the bus
while the bus was stopped. The trial court granted the school district summary judgment,
finding that the school district did not owe the child such a duty. Our Supreme Court
affirmed the summary judgment on appeal, finding that any duty arising under this
regulation "clearly applies only to those students who must cross the street. So far as the
bus driver was aware, none of the children whom she transported crossed [the busy street
the child was hit on]." 245 Kan. at 299.
Nevertheless, the Hackler holding is distinguishable from this case and the Shirley
holding. For example, in this case as well as the Shirley decision, the statutes involved in
those cases clearly defined a duty of care which the defendants owed to the plaintiffs.
In Shirley, the plaintiff appealed the trial court's order denying her negligence per
se claim. Plaintiff's petition alleged a negligence action against a pawn shop and its
owners "based on their act of selling a firearm while knowing that the purchaser intended
that another individual would take possession of that firearm and without performing a
background check on the intended recipient of the firearm." 297 Kan. at 893. In her
answers to interrogatories and her response to the defendant's motion for summary
judgment, plaintiff inserted a negligence per se theory, but she was inconsistent in the
way she presented that theory: "At times, she presented negligence per se as a statutorily
created private cause of action, but at other times she argued that negligence per se
statutorily defines the standard of care in a negligence action." 297 Kan. at 893. Our
Supreme Court, however, concluded that plaintiff had not pleaded a negligence per se
claim as a separate cause of action created by statute but, instead, she was alleging only a
claim of "simple negligence." 297 Kan. at 894. Thus, in actuality, she was relying on
federal and Kansas statutes prohibiting the distribution of firearms to felons to define the
standard of care.
As a result, our Supreme Court held that it was "irrelevant" whether the statutes
gave rise to a private cause of action since the statutory violation was not the basis for her
claim. 297 Kan. at 894. Our Supreme Court then focused on whether plaintiff could use
the firearm-transfer statutes to establish a duty of care in a negligence action. The court
ultimately concluded that she could under the facts of her case. 297 Kan. at 895-97.
In so holding, our Supreme Court considered if the firearm-transfer statutes were
"intend[ed] to protect the class [of persons], even if it includes all members of society,
from a particular kind of harm." 297 Kan. at 896. In answering this question in the
affirmative, the court concluded that "the Kansas statute prohibiting the sale of firearms
to certain convicted felons is intended to protect the citizens of this state from violent
crimes committed by those felons." 297 Kan. at 897. As a result, the court held that
plaintiff could use the violation of the firearm-transfer statutes to establish a duty and
breach of duty to support her negligence claim. 297 Kan. at 897.
Applying the Shirley holding to this case, we must consider if the purpose of
K.S.A. 39-1430 and K.S.A. 39-1431 includes protecting Cook and other persons like him
from a particular kind of harm. Under K.S.A. 39-1430(a), there is a clear legislative
intent to promote the prevention of abuse of individuals who are "unable to protect their
own interest . . . through action or inaction by either another individual or through their
own action or inaction . . . ." To achieve this purpose, the Kansas Legislature has created
both mandatory and permissive reporting of suspected cases of abuse to the proper
authorities. K.S.A. 39-1431. Moreover, in the event of a report by a mandatory reporter,
"[no] employer shall terminate the employment of . . . any employee solely for the reason
that such employee made or caused to be made a report" under K.S.A. 39-1432(b).
Here, as Morgan argues in her brief, Cook was "tailor made for the mandatory
reporting statute." For example, Cook was cared for in his home by registered and
licensed practical nurses. Moreover, Cook was vulnerable because of his severe mental
illness. And he died from hyperthermia because of a lack of ventilation in his home
during a dangerous heat wave. Evidence showed that Cook neglected his hygiene and his
physical and mental health because of his severe mental illness. As a result, Cook needed
proper care services to maintain his physical and mental health. Thus, we conclude that
Cook belonged to a class of members that K.S.A. 39-1430 and K.S.A. 39-1431 intended
to protect. As a result, we hold that these statutes established a duty of care and the
violation of these statutes may be used by Morgan to establish a breach of duty.
Finally, Healing Hands argues that the mandatory reporter statute cannot serve as
the basis of a duty because the similar mandatory child abuse reporter statute does not
provide a private right of action. This is unpersuasive. As explained earlier, our Supreme
Court noted in Shirley that there is a difference between a private right of action created
by a statute and the use of a statute to establish a duty in a simple negligence case like the
one here. Shirley, 297 Kan. at 894 ("Whether these statutes give rise to an independent
private cause of action is irrelevant in the present case, however, because Shirley did not
plead a statutory violation as the grounds for her suit. She instead presented a case based
on simple negligence.").
We reverse the trial court's partial summary judgment ruling because it
erroneously ruled that it was an uncontroverted fact that Cook could care for himself, and
because it made an error of law by ruling that Cook must have been previously alleged to
be incompetent in order for the mandatory reporter statute to apply. We therefore remand
for a new trial where Morgan can argue that K.S.A. 39-1430 et seq. can serve as the basis
of Healing Hands' duty.
Did the Trial Court Err by Declining to Instruct the Jury on Healing Hands' Duty?
The trial court held its instruction conference after both parties rested. Morgan
submitted her first amended set of jury instructions. One of her proposed instructions was
derived from PIK Civ. 4th 123.02, which reads:
"A hospital's duty to a patient is to use the degree of reasonable care required by
that patient's known physical and mental condition. On medical or scientific matters, a
hospital's standard of reasonable care is the same care, skill, and diligence used by
hospitals in the same or similar communities and circumstances. A violation of this duty
is negligence."
Morgan's proposed instruction read:
"A home health care provider's duty to a patient is to use the degree of reasonable
care required by that patient's known physical and mental condition. On medical or
scientific matters, a home health care provider's standard of reasonable care is the same
care, skill, and diligence used by home health care provider's [sic] in the same or similar
circumstances. A violation of this duty is negligence."
Morgan stated that the instruction was proper because "we have to instruct on the
duty of the principal and we have to instruct on the duty of the agent." Healing Hands
opposed the instruction, arguing that the court addressed an agency relationship in a
separate instruction and that 123.02 refers to hospitals, nursing homes, and other inpatient
care facilities, not home healthcare agencies. The trial court excluded the proposed
instruction, and Morgan objected. Morgan also objected again to the trial court's
exclusion of an instruction on K.S.A. 39-1430 et seq.
Appellate courts address jury instruction challenges using a four-step process as
follows:
"'For jury instruction issues, the progression of analysis and corresponding
standards of review on appeal are: (1) First, the appellate court should consider the
reviewability of the issue from both jurisdiction and preservation viewpoints, exercising
an unlimited standard of review; (2) next, the court should use an unlimited review to
determine whether the instruction was legally appropriate; (3) then, the court should
determine whether there was sufficient evidence, viewed in the light most favorable to
the defendant or the requesting party, that would have supported the instruction; and (4)
finally, if the district court erred, the appellate court must determine whether the error
was harmless, utilizing the test and degree of certainty set forth in State v. Ward, 292
Kan. 541, 256 P.3d 801 (2011) . . . .' [Citation omitted.]
"In addressing an instructional error, an appellate court examines
'"jury instructions as a whole, without focusing on any single instruction, in order to
determine whether they properly and fairly state the applicable law or whether it is
reasonable to conclude that they could have misled the jury."' State v. Hilt, 299 Kan. 176,
184, 322 P.3d 367 (2014) (quoting State v. Williams, 42 Kan. App. 2d 725, Syl. ¶ 1, 216
P.3d 707 [2009] )." Biglow v. Eidenberg, 308 Kan. 873, 880-81, 424 P.3d 515 (2018).
Morgan contends that the trial court's failure to give her requested instruction was
error. Below, Morgan argued that "we have to instruct on the duty of the principal and we
have to instruct on the duty of the agent." Healing Hands opposed the instruction, arguing
that the court addressed an agency relationship in a separate instruction and that 123.02
refers to hospitals, nursing homes, and other inpatient care facilities, not home healthcare
agencies. The trial court excluded the proposed instruction and Morgan objected.
On appeal, Morgan argues that by failing to give her proposed instruction on a
home healthcare agency's duty, the trial court failed to properly instruct the jury on her
theory of the case. She further argues that this was likely confusing to the jury because
"[t]he trial was about the Defendant's breach of its duty to provide the degree of care
required by Robert Cook's known physical and mental condition. But, when it came time
to instruct the jury on the law as it applied to the Defendant, all they received was the
instruction on a nurse's duty of care."
Finally, she argues that "the jury was not instructed that Robert Cook's known physical
and mental condition drives the standard of care."
On appeal, Healing Hands argues that clear error review applies because Morgan
raises different arguments on appeal than she did below. See State v. Ellmaker, 289 Kan.
1132, 1138-39, 221 P.3d 1105 (2009) (applying clear error analysis when party objects to
instruction on one ground at trial but separate ground on appeal). Clear error analysis
would apply to any arguments beyond the scope of what Morgan argued below.
Nevertheless, here, Morgan consistently argued below and on appeal that the requested
instruction was necessary because the jury needed to be instructed about both a nurse's
duty and a home healthcare agency's duty. We disagree because Morgan's petition and
pretrial conference questionnaire lacked any independent claim of Healing Hands'
negligence except vicariously through the negligence of its nurses. As a result, Morgan's
sole theory of recovery against Healing Hands was based on a respondeat superior claim.
The trial court gave the jury a respondeat superior instruction. Thus, we conclude that
there was no error committed in the trial court's instructions to the jury.
Assuming for sake of argument that the instruction was legally and factually
appropriate, Healing Hands convincingly demonstrates that any error was harmless.
Under State v. Ward, 292 Kan. 541, 565, 256 P.3d 801 (2011), the appropriate harmless
error test for nonconstitutional errors, like the one here, is whether there is a "reasonable
probability that the error did or will affect the outcome of the trial in light of the entire
record." 292 Kan. 541, Syl. ¶ 6.
Morgan makes much of the trial court's failure to instruct on a separate standard of
care for a home healthcare agency, but in a different instruction given to the jury, the trial
court stated:
"Healing Hands Home Health Care, LLC is responsible for any negligent act or
omission of its employees, Lori Ford, Debra Mann, Rebecca Baca and Francis Smith.
"If you find Lori Ford, Debra Mann, Rebecca Baca or Francis Smith was
negligent, then you must find that the defendant Healing Hands Home Health Care, LLC
was negligent.
"But if you find Lori Ford, Debra Mann, Rebecca Baca and Francis Smith were
not negligent, then you must find that the defendant Healing Hands Home Health Care,
LLC was not negligent."
Here, the trial court expressly conflated Healing Hands' negligence with that of the
nurses and stated that if the jury did not find the nurses were negligent, it could not find
that Healing Hands was negligent. Morgan did not object to this instruction. Indeed,
Morgan's proposed jury instructions included two instructions similarly conflating
Healing Hands' negligence with that of its nurses.
Because Morgan assented to the trial court's express conflation of Healing Hands'
negligence with that of its employees, any error in the failure to give Morgan's proposed
instruction above was harmless. Thus, the trial court did not commit reversible error by
declining to give the requested instruction.
Reversed and case remanded for a new trial.
***
MALONE, J., concurring: I concur in the result.
Revision as of 23:45, 28 August 2013
MythTV is primarily written in C++. Feel free to talk to us about your code and experiments in our mailing lists and chat groups or code something and submit it as a patch or feature to us.
Getting started
The simplest and fastest way to get started coding on MythTV is by downloading the current development code, compiling it, and running it on your machine. To do this, simply follow any installation manual for MythTV. We suggest you use:
After this you can experiment with the code, recompiling and testing it on your machine. When you're familiar with the code, and after reading the Coding Standards Guidance, you could try to pick up a ticket to solve a bug, or you could fix a small bug that you have encountered.
Try not to start with a big project; small but valuable patches are the best way to learn the code and to familiarise yourself with the process of submitting your work for inclusion. You can build up to bigger patches later on.
If you wish to add a new feature, it is strongly recommended that you consult with the developers on the Developer mailing list since you may be duplicating work, or they may have guidance that would save you wasting time on a patch that won't be accepted.
Things you can work on
You can work on MythTV by picking up a ticket or by implementing a new feature. Before starting on either one, you should let others know about your intentions by posting this to the Developer mailing list. Visit our tracker to find out what you can work on.
The following is a quote from a mailing list post by Stuart Morgan:
"We will generally accept patches against code written by MythTV developers. I stress that last part because the example you chose, RTjpegN.cpp, is in fact third party code that we've included into our code base, we normally don't accept code formatting fixes for those because it would make re-syncing them harder. If you do supply patches, please split them up into bite-size chunks, preferably not touching multiple libs, or too many files at once. This will make them easier to review and easier to apply, especially if one of the files changes before we can get to that point."
When your code is ready
When your code is ready for the public, check the Submitting Bug Fixes page for information on how this is done.
How we work together
Almost all of the development activity goes through the mythtv-dev mailing list. When you have an interesting idea, subscribe to mythtv-dev, review its archive, and ask if anybody else is working on it. Some people might want to help you with figuring things out, and on the other hand, you might also be of assistance to existing efforts.
There is also the possibility to chat in realtime with other developers, as you can find out here.
Plugins
Users have the possibility to extend their system with extra functionality by simply adding plugins. Users and developers can write their own plugins that run within MythTV and can be controlled by remote. A fast and easy way to get started is by taking a small existing plugin to start with. You can learn more about this here.
Themes
If you are more into design and graphics, you will like MythTV's theming capabilities, which let you create new looks for users.
Theming can be as simple or as complex as you want it to be. The best way to get started building your own theme is by experimenting with an existing theme. Learn more about this in the MythUI Theme Development guide.
Translations
If you have language skills, you can also help out by translating MythTV into other languages. Read more about it here and join the mailing list here.
Still not sure how to help?
Visit our IRC-based development discussions or send us a message through the mailing list. You can find more about this here.
I've got this great idea...
Every project has a leader; MythTV is led by its steering committee. But some users have taken the time to suggest items to go on a Feature Wishlist. If you are a developer looking for a project, this might be a place to look for ideas proposed by the user community.
Understand that these features may not represent what the project maintainers are looking for. A quick note to the mythtv-dev mailing list with your intentions will often result in feedback on the viability of your ideas in the project.
Your idea might already exist
There is a chance your idea already exists -- it's a big project, and the features are sometimes a little tricky to find. Before spending time developing a new concept, try these steps:
Look through all of the settings pages in mythfrontend and setup. Sometimes the feature you want isn't in the place you'd expect, or it's using a different name.
Read the keys.txt file in the MythTV distribution. Many options are documented here, and you may not know of their existence.
Search Trac reports to see if someone's submitted a bug report or feature request. You may be able to help by adding more information and clarifying the original report.
Search the mythtv-dev archive to see if someone's already discussed the feature. It may have been rejected because of some unforeseen consequence, or someone may already be working on it.
Some helpful pages to keep close
If you are new to developing MythTV, these pages can help you through the process:
---
abstract: 'For robots acting in the presence of observers, we examine the information that is divulged if the observer is party to the robot’s plan. Privacy constraints are specified as the stipulations on what can be inferred during plan execution. We imagine a case in which the robot’s plan is divulged beforehand, so that the observer can use this [*a priori*]{} information along with the disclosed executions. The divulged plan, which can be represented by a procrustean graph, is shown to undermine privacy precisely to the extent that it can eliminate action-observation sequences that will never appear in the plan. Future work will consider how the divulged plan might be sought as the output of a planning procedure.'
author:
-
-
-
bibliography:
- 'mybib.bib'
title: 'What does my knowing your plans tell me? '
---
Introduction
============
Autonomous robots are beginning to be part of our everyday lives. Robots may need to collect information to function properly, but this information can be sensitive if leaked. In the future, robots will not only need to ensure physical safety for humans in shared workspaces, but also to guarantee their information security. But information leakage can occur in a variety of ways, including through logged data, robot’s status display, actions, or, as we examine, through provision of prior information about a robot’s plan.
Established algorithmic approaches for the design and implementation of planners may succeed at selecting actions to accomplish goals, but they fail to consider what information is divulged along the way. While several models for privacy exist, they have tended to be either abstract definitions applicable to data rather than an agent operating autonomously in the world (such as encryption [@menezes96crypto], data synthesis [@rubin93synth], anonymization [@Dwork2008], or opacity [@jacob16opacity] mechanisms) or are focussed on a particular robotic scenario (such as robot division of labor [@prorok2016macroscopic] or tracking [@OKa08; @zhang18complete]).
Figure \[fig:wheelchair\] illustrates a scenario where the information divulged is subtle and important. It considers an autonomous wheelchair that helps a patient who has difficulty navigating by himself. The user controls the wheelchair by giving voice commands: once the user states a destination, the wheelchair navigates there autonomously. While moving through the house, the wheelchair should avoid entering any occupied bedrooms, making use of information from motion sensors installed inside each bedroom. We are interested in stipulating the information divulged during the plan execution:
: A therapist monitors the user, ensuring that he adheres to his daily regimen of activity, including getting some fresh air everyday (by visiting the front yard or back yard).
g: However, if there is a guest in one of the bedrooms, the user does not want to disclose the guest’s location.
Actions, observations, and other information (such as the robot's planned motion) may need to be divulged to satisfy the first (positive) stipulation. The challenge is to satisfy both stipulations simultaneously. Suppose the robot executes the plan shown on the right of Figure \[fig:wheelchair\], and that this plan is public knowledge. If, as it moves about, the robot's observations (or actions) are disclosed to an observer, then we know that the robot will attempt to see if $M$ is occupied. Hence, on some executions, a third party, knowing there is a guest, would be able to infer that the guest is in the master bedroom.
This paper examines in detail how divulging the plan, as above, provides information that permits one to draw inferences. In particular, we are interested in how this plan information might cause privacy violations. As we will see, the divulged plan need not be the same as the plan being executed, but they must agree in a certain way. In our future work, we hope to answer the question of how to find pairs of plans (one be to executed and one to divulged), where there is some *gap* between the two, so that information stipulations are always satisfied.
![An autonomous wheelchair navigates in a home. A plan, on the right, generates actions that depend on perception of the pink star (denoting that the bedroom is occupied). \[fig:wheelchair\]](scenario)

Problem Description
===================
In this problem, there are three entities: a *world*, a *robot*, and an *observer*. The robot interacts with the world by taking observations from the world as input and outputting an action to influence the world state. This interaction generates a stream of actions and observations, which may be perceived by the observer, though potentially only in partial or diminished form. We model the stream as passing through a function which, via conflation, turns the stream generated by the world–robot interaction into the one perceived by the observer: the disclosed action-observation stream. As a consequence of real-world imperfections (possible omission, corruption, or degradation), or due to explicit design, the observer may thus receive less information. For this reason, the function is viewed as a sort of barrier, and we term it an *information disclosure policy*.
The observer is assumed to be unable to take actions to interact with the world directly—a model that is plausible if the observer is remote, say a person or service on the other side of a camera or other Internet of Things device. Given its perception of the interaction, the observer estimates the plausible action-observation streams, consistent with the disclosed action-observation stream. This estimate can be made ‘tighter’ by leveraging prior knowledge about the robot’s plan. The observer’s estimate is in terms of world states, so the notion of tightness is just a subset relation. In this paper, we will introduce stipulations on these estimated world states and our main contribution will be in examining how the divulged plan could affect the satisfaction of these stipulations.
Representation
--------------
To formalize this problem, we represent these elements with the p-graph formalism and label maps [@saberifar18pgraph]. The world is formalized as a planning problem $(W, V_{\goal})$, where $W$ is a p-graph in state-determined form (see the definition of state-determined in [@saberifar18pgraph Def. 3.7]) and $V_{\goal}$ is the set of goal states. The robot is modeled as a plan $(P, V_{\term})$, where $P$ is a p-graph and $V_{\term}$ specifies the set of plan states where the plan could terminate. The plan solves the planning problem when the plan can always safely terminate at the goal region in a finite number of steps (see the definition of solves in [@saberifar18pgraph Def. 6.3]). The information disclosure policy is represented by a label map $h$, which maps the actions and observations from $W$ and $D$ to an image space $X$. The observer is modeled as a tuple $(I, D)$, where $I$ is a filter represented by a p-graph with edge labels from $X$, and $D$ is the p-graph representing the divulged plan with actions and observations labeled in the domain of $h$. The plan in $D$ might be less specific than the actual plan $P$, representing ‘diluted’ knowledge of the plan; to capture this, we require that the set of all possible action-observation sequences (called executions for short) in $D$ be a superset of those in $P$, denoted $\Language{D}\supseteq \Language{P}$ (the set of executions is called the language, see [@saberifar18pgraph Def. 3.5], hence the symbol $\Language{\cdot}$).
The observer’s estimation of world states
-----------------------------------------
Given any set of filter states $B$ from filter $I$, the observer obtains an estimate of the executions that could have occurred to reach $B$, through a combination of the following sources of information [@zhang18planning Def. 13]:
1. The observer can ask: What are all the possible executions whose images reach exactly $B$ in the filter? The set of executions reaching exactly $B$ is represented as $\exactreachings{I}{B}$. The preimages of $\exactreachings{I}{B}$, which we denote $h^{-1}[\exactreachings{I}{B}]$, are the executions which could be responsible for arriving at $B$ in $I$.
2. The observer can narrow down the estimated executions to the ones that only appear in the divulged plan $D$. The set of all executions in $D$ are represented by its language $\Language{D}$.
3. Finally, the estimated executions can be further refined by considering those that appear in the world, i.e., $\Language{W}$.
Hence, $h^{-1}[\exactreachings{I}{B}]\cap \Language{W}\cap \Language{D}$ represents a tight estimate of the executions that may have happened. This allows us to find the estimated world states, denoted $\compatablew{D}{B}$, by taking the tensor product $T$ of the graphs $W$, $D$, and $h^{-1}\langle I\rangle$, where $h^{-1}\langle I\rangle$ is obtained by replacing each action or observation $\ell$ with its preimage $h^{-1}(\ell)$ on the edges of the p-graph $I$. For any vertex $(w, d, i)$ of the product graph $T$, we have: $$\compatablew{D}{B}=\compatablew{D}{B}\cup \{w\}, \quad{\rm if}~ i\in B.$$
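To make the estimate concrete, the following is a minimal sketch (not the paper's implementation) of the intersection $h^{-1}[\exactreachings{I}{B}]\cap \Language{W}\cap \Language{D}$, under the simplifying assumption that languages are finite sets of label tuples; real p-graph languages are generally infinite and are intersected via product constructions instead. All names here are illustrative.

```python
def h_image(execution, h):
    """Apply the information disclosure policy label map elementwise."""
    return tuple(h[label] for label in execution)

def preimage(disclosed_execs, universe, h):
    """All executions in `universe` whose image lies in disclosed_execs."""
    return {e for e in universe if h_image(e, h) in disclosed_execs}

def estimate(disclosed_execs, lang_W, lang_D, h):
    """h^{-1}[R_I(B)] intersected with L(W) and L(D): the tight estimate."""
    return preimage(disclosed_execs, lang_W, h) & lang_W & lang_D

# Toy example: actions conflated to a single image 'a', observations kept.
h = {'u': 'a', 'v': 'a', 'o1': 'o1', 'o2': 'o2'}
lang_W = {('u', 'o1'), ('u', 'o2'), ('v', 'o1')}
lang_D = {('u', 'o1'), ('u', 'o2')}   # divulged plan: only action u occurs
disclosed = {('a', 'o1')}             # what the observer perceived
print(estimate(disclosed, lang_W, lang_D, h))  # {('u', 'o1')}
```

Note how the divulged plan eliminates `('v', 'o1')`, which the disclosure policy alone could not distinguish from `('u', 'o1')`: this is exactly the sense in which divulging the plan tightens the observer's estimate.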
Information stipulations on the estimated world states
------------------------------------------------------
Information stipulations are written as propositional formulas over the estimated world states $\compatablew{D}{B}$. First, we define a symbol $\mathpzc{w}$ for each world state $w$ in $W$. Then we can use the connectives $\neg$, $\land$, $\lor$ to form composite expressions $\form{\Phi}$, involving these symbols, that stipulate conditions on the estimated world states. The propositional formulas are evaluated according to the following definition: $$\mathpzc{w}=\True\quad\text{if and only if} \quad w\in \compatablew{D}{B}.$$
With all the elements defined above, we are able to check whether the stipulation $\form{\Phi}$ is satisfied on every estimate $\compatablew{D}{B}$, given the world graph $W$, information disclosure policy $h$, and the observer $(I, D)$.
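The evaluation rule above can be sketched in a few lines. This is a hedged illustration, not the paper's code: formulas are encoded as nested tuples `('not', f)`, `('and', f, g)`, `('or', f, g)`, or a bare state-name string, an encoding that is our assumption rather than the paper's syntax.

```python
def holds(formula, estimate):
    """Evaluate a stipulation: a symbol is True iff its state is in the estimate."""
    if isinstance(formula, str):          # atomic symbol for a world state
        return formula in estimate
    op = formula[0]
    if op == 'not':
        return not holds(formula[1], estimate)
    if op == 'and':
        return holds(formula[1], estimate) and holds(formula[2], estimate)
    if op == 'or':
        return holds(formula[1], estimate) or holds(formula[2], estimate)
    raise ValueError(op)

# Hypothetical guest-privacy stipulation: the estimate must never pin the
# guest down to the master bedroom alone ('M_occupied', 'G_occupied' are
# made-up state names for illustration).
phi = ('or', ('not', 'M_occupied'), 'G_occupied')
print(holds(phi, {'M_occupied'}))                 # False: violated
print(holds(phi, {'M_occupied', 'G_occupied'}))   # True: still ambiguous
```

Checking the stipulation over every estimate $\compatablew{D}{B}$ then amounts to calling `holds` once per reachable filter-state set $B$.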
The observer’s prior knowledge of the robot’s plan
==================================================
The divulged plan $D$ is known to the observer prior to monitoring the disclosed action-observation stream. Depending on how much the observer knows, there are four possibilities, from most- to least-informed:
1. The observer knows the exact plan $P$ to be executed.\[item:exactplan\]
2. The plan to be executed can be hidden among a (non-empty) finite set of plans $\{P_1,P_2, \dots, P_n\}$.\[item:setplan\]
3. The observer may only know that the robot is executing *some* plan, that is, the robot is goal directed and aims to achieve some state in $V_{\goal}$. \[item:someplan\]
4. The observer knows nothing about the robot’s execution other than that it is on $W$. \[item:wanderingrobot\]
A p-graph exists whose language expresses knowledge for each of these cases:
Case \[item:exactplan\]. When $D=P$, the interpretation is straightforward: the observer tracks the states of the plan given the stream of observations (as best as possible, as the operation is under $h$).
Case \[item:setplan\]. If instead a set of plans $\{P_1,P_2, \dots, P_n\}$ is given, we must construct a single p-graph, $D$, so that $\Language{D} = \Language{P_1} \cup \dots \cup \Language{P_n}$. This is achieved via the union of p-graphs $D = P_1 \uplus P_2 \uplus \dots \uplus P_n$, cf. [@saberifar18pgraph Def. 3.6, pg. 18].
Case \[item:someplan\]. If the robot is known only to be executing *some* plan, we must consider the set of all plans, ${P^{\infty} \defeq \{P_1,P_2, P_3, \dots \}}$. As the notation hints, there can be an infinite number of such plans, so the approach of unioning plans won't work. Fortunately, another structure, $P^{*}$, exists such that $\Language{D}=\Language{P^{*}}=\Language{P^{\infty}}$, as we prove below. Here $P^{*}$, a finite p-graph, is called the *plan closure*.
Case \[item:wanderingrobot\]. When taking $D=W$, the executions are, again, intersected with $\Language{D}$; but as they already came from $\Language{W}$, the intersection eliminates nothing, which shows why this observer is the least informed in the hierarchy.
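The four cases form a nested family of languages, which we can sketch with finite sets (an assumption for illustration; the concrete executions below are made up and do not come from the wheelchair example):

```python
# Languages of two hypothetical plans and the world, as finite sets.
lang_P1 = {('u', 'o1')}
lang_P2 = {('u', 'o2')}
lang_W  = {('u', 'o1'), ('u', 'o2'), ('v', 'o1'), ('v', 'o2')}

case1 = lang_P1                               # exact plan P divulged
case2 = lang_P1 | lang_P2                     # hidden among a finite set of plans
case3 = lang_P1 | lang_P2 | {('v', 'o1')}     # plan closure P*: all plans
case4 = lang_W                                # only W is known

# Most- to least-informed: each case's language contains the previous one,
# so the observer's estimate (an intersection with this language) can only
# grow as less is divulged.
print(case1 <= case2 <= case3 <= case4)       # True
```

Since the observer intersects its execution estimate with $\Language{D}$, a larger divulged language can only yield a looser (larger) estimate, matching the hierarchy above.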
Next, we will show the construction of the plan closure $P^{*}$ and prove that $\Language{P^{*}}=\Language{P^{\infty}}$.
To start, we describe the construction of $P^{*}$. The initial step is to convert $W$ to its state-determined form $W'=\sde{W}$ (an operation described in [@saberifar18pgraph Algorithm 2, pg. 30]). Then, to decide whether a vertex in $W'$ exists in some plan, we iteratively color each vertex green, red, or gray. Being colored green means that the vertex exists in some plan, red means that the vertex does not exist in any plan, and gray indicates that its status has yet to be decided. Initially, we color the goal vertices green and the non-goal leaf vertices (those with no edges to other vertices) red. Using the iconography of [@saberifar18pgraph], we show action vertices as squares and observation vertices as circles. Gray vertices of each type then change their color by iterating the following steps:
- An action vertex turns green if $\exists$ some action $a$ reaching a green observation vertex ($\mycircle{gre}$), which is not an initial state.
- An action vertex turns red if $\forall$ actions $a$ reach red observation vertices ($\mycircle{re}$).
- An observation vertex turns green if $\forall$ observations $o$ reach green action vertices ($\mysquare{gre}$), which are not initial states.
- An observation vertex turns red if $\exists$ some observation $o$ reaching a red action vertex ($\mysquare{re}$).
The iteration ends when no vertex changes its color. The subgraph consisting of only the green vertices and their corresponding edges is $P^{*}$, which then contains only the vertices that exist in some plan leading to the goal states. For further detail of this algorithm for building $P^{*}$, we refer the reader to Algorithm \[alg:pstar\].
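The fixed-point coloring can be sketched as follows. This is an illustrative simplification, not the paper's implementation: the graph encoding (`kind`, `succ`) is our assumption, and the "not an initial state" caveat from the rules above is omitted for brevity.

```python
def plan_closure_colors(vertices, kind, succ, goals):
    """Color vertices of a state-determined graph green/red/gray.

    kind[v] is 'action' (the plan chooses an edge) or 'obs' (nature
    chooses); succ[v] maps each label to the vertex it reaches.
    """
    color = {v: 'gray' for v in vertices}
    for v in goals:
        color[v] = 'green'
    for v in vertices:
        if color[v] == 'gray' and not succ[v]:   # non-goal leaf
            color[v] = 'red'
    changed = True
    while changed:                               # iterate to a fixed point
        changed = False
        for v in vertices:
            if color[v] != 'gray':
                continue
            targets = [color[u] for u in succ[v].values()]
            if kind[v] == 'action':
                # the plan picks one action: a single green out-edge suffices
                new = 'green' if 'green' in targets else (
                      'red' if all(c == 'red' for c in targets) else 'gray')
            else:
                # nature picks the observation: every out-edge must be green
                new = 'green' if all(c == 'green' for c in targets) else (
                      'red' if 'red' in targets else 'gray')
            if new != 'gray':
                color[v] = new
                changed = True
    return color   # P* is the subgraph induced by the green vertices

# Tiny chain: start --a--> obs --o--> goal
vertices = ['start', 'obs', 'goal']
kind = {'start': 'action', 'obs': 'obs', 'goal': 'action'}
succ = {'start': {'a': 'obs'}, 'obs': {'o': 'goal'}, 'goal': {}}
colors = plan_closure_colors(vertices, kind, succ, {'goal'})
print(colors)   # {'start': 'green', 'obs': 'green', 'goal': 'green'}
```

Every vertex of the chain is green, so in this toy case $P^{*}$ coincides with the whole graph; adding a dead-end observation edge would propagate red backwards instead.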
![The construction of a plan generating execution $s$ using $\pi$, computed as part of Algorithm \[alg:pstar\]. []{data-label="fig:s-pi"}](s-pi.pdf){width="0.8\linewidth"}
Next, we prove that the $P^{*}$ constructed by this procedure has the same language as $P^{\infty}$. The proof shows that any green vertex is on some plan, by showing that we can construct a plan $\pi$ that will lead to a goal state within a finite number of steps from any such vertex.
\[lem:pinfinity\] $\Language{P^{*}}=\Language{P^{\infty}}$.
$\supseteq$: For any $s=s_0s_1s_2\dots s_k\in \Language{P^{\infty}}$, according to the definition of $P^{\infty}$, $s$ is an execution of some plan $P'$. Though $s_k$ may not be a goal, using $P'$, $s$ can be extended: $\exists s'=s_0s_1\dots s_k t_0 t_1\dots t_n\in \Language{P'}$, $k > 0,n\geq 0$, reaching an element of $V_\goal$. Then $\reachedv{P'}{s'}$ comprises vertices associated with those in $W'$ marked green in $V'_\goal$. And, tracing the execution $s'$ on $P'$ backwards on $W'$, we find every vertex green back to a start vertex. But this means they are in $P^*$, hence $s' \in \Language{P^{*}}$, which means $s \in \Language{P^{*}}$ as well.
$\subseteq$: For any execution $s=s_0s_1s_2\dots s_k\in \Language{P^{*}}$, $s$ reaches $V'_{\goal}$, or $s$ is a prefix of some execution reaching $V'_{\goal}$ in $W'$. We show that there is a plan that can produce $s$. The execution $s$ does not include enough information to describe a plan because: (1) it may not reach $V'_{\goal}$ itself, and (2) it gives an action only after the observation that was revealed, not after every possible observation. To address this shortfall, we capture some additional information during the construction of $P^{*}$, which we save in $\pi$. This provides an action that makes some progress, for states that can result from other observations. Now, using $s$ as a skeleton, construct a plan where, once a transition outside of $s$ occurs, either owing to an unaccounted-for observation or having reached the end of $s$, the plan reverts to using the actions that $\pi$ prescribes. (See \[fig:s-pi\] for a visual example.) This is always possible because the states arrived at in $W'$ under $s$ are green, which implies that the corresponding states in $W$ are also assured to reach a goal state. The resulting plan can produce $s$, so some plan produces $s$, hence $s \in \Language{P^{\infty}}$.
Algorithm \[alg:pstar\] (construction of $P^{*}$):
1. Initialize queues $\rm red$, $\rm green$, $\rm gray$ as empty.
2. $W'\gets \sde{W}$, and initialize $V'_{\goal}$ as the vertices associated with $V_{\goal}$.
3. Initialize plan $\pi$ as empty.
4. For each vertex $v$ of $W'$: if $v\in V'_{\goal}$, $\rm green$.append($v$); else if $v$ is a non-goal leaf, $\rm red$.append($v$); else $\rm gray$.append($v$).
5. $Q$.extend(InNeighbor(${\rm red} \cup {\rm green}$)$\backslash ({\rm red} \cup {\rm green})$).
6. While $Q$ is nonempty: $v\gets Q$.pop; apply the coloring rules to decide whether $v$ joins $\rm red$ or $\rm green$, recording $\pi[v]=a$ when $v$ turns green via action $a$; if $v$ changed color, $Q$.extend(InNeighbor($v$)$\backslash({\rm red} \cup {\rm green})$).
7. $P^{*}\gets$ subgraph($W'$, ${\rm green}$).
8. Return $P^{*}$ (and also $\pi$, if desired).
Thus, one may use $D = P^*$, for Case \[item:someplan\].
Experimental results
====================
We implemented the algorithms in Python and executed them on an OSX laptop with a 2.4 GHz Intel Core i5 processor. To experiment, we constructed a p-graph with $12$ states representing the world of the wheelchair scenario, and a plan with $8$ states. All experiments finished within $1$ second. The information disclosure policy maps all actions to the same image, but observations to different images. As we anticipated, the stipulations are violated when the exact plan is divulged. But we can satisfy the stipulations by disclosing less information, such as $D=W$.
Summary and future work
=======================
We examine the planning problem and the information divulged within the framework of procrustean graphs. In particular, the divulged plan can be treated uniformly in this way, despite representing four distinct cases. The model was evaluated, showing that divulged plan information can prove to be a critical element in protecting the privacy of an individual. In the future, we aim to automate the search for plans: given $P$ to be executed, find a $D$ to be divulged, where $\Language{D}\supsetneq \Language{P}$, such that the privacy stipulations are always satisfied.
Acknowledgements {#acknowledgements .unnumbered}
================
This work was supported by the NSF through awards [IIS-1453652](http://nsf.gov/awardsearch/showAward?AWD_ID=1453652), [IIS-1527436](http://nsf.gov/awardsearch/showAward?AWD_ID=1527436), and [IIS-1526862](http://nsf.gov/awardsearch/showAward?AWD_ID=1526862). We thank the anonymous reviewers for their time and valuable comments.
What I write after Joe and Henry go to bed
Two days before it was scheduled to be shut down, I took Henry to the St. Pete Pier so we could bid farewell to our favorite ailing tourist attraction.
Like most Bay area residents, I’ve known for years that this old landmark would soon be demolished. I also knew that once I had my son I would regret having not made memories with him on the old pier before a slick new pier one day opens in its place.
The fate of the Pier has become a hotly contested subject. I refuse to discuss the pros and cons of its replacement design, The Lens, out of sheer exhaustion. I’m tired of hearing about it. When it comes to CHANGE I’m as much a fan of progress as I am a curmudgeon, so I’ll refrain from offering what would likely be an uneducated opinion.
However, this fact remains true: the Pier’s infrastructure is falling apart, its concrete pilings, if left alone, would crumble into the bay. Studies revealed 10 years ago that the aging destination with its smattering of kitschy gift shops and empty restaurants wouldn’t survive another 20 years of saltwater erosion, never mind an impending economic blow.
When this news became public fodder in 2010, I added the Pier to my biking route. When Henry arrived in 2011, I added it to my running route. Knowing it would close before he’d be old enough to remember it, I decided to take him there often – always by foot or by bike.
Save for a handful of brooding old men drinking coffee and reading the paper, the food court inside the Pier’s dated building was usually vacant in the afternoon. Often it looked like Henry and I were the only people to order an ice cream cone for hours. In order to get the attention of the proprietor of the ice cream stand, I’d have to rap on the freezer doors and shout, “Yoo hoo! Anyone here?”
I once caught the guy asleep in a chair.
I wondered which was crumbling faster: the Pier’s infrastructure or its business.
Before I had Henry I was impatient with the world, critical of myself and sometimes of others.
I thought stay-at-home moms had it easy. Worse yet, I thought they were devoid of interests beyond the confines of motherhood. I pictured them schlepping kids from Gymboree class to play dates, dressed in yoga pants and a pained smile. I pictured them chained to the kitchen, the SUV, the laundry basket and the obligatory spin class. I pictured them dutifully scheduling time for mommy pep rallies that celebrate the pleasantries of breastfeeding, cloth diapering, baby wearing and holistic nutrition. (Dear Earth Mamas: I see nothing wrong with these things. As topics of discussion, however, I find them boring.)
I thought I’d lose my identity as a stay-at-home-mom. I thought I’d compromise my self-worth and freedom. I thought I’d be resentful of my husband and pissed at myself for having failed at being a working mother: the ultimate wonder woman. I thought I’d be considered a disgrace to the radical feminists who came before me and a quitter to the overachieving, have-it-all multitaskers of my generation.
Leaving my job at the newspaper would mean I’d dropped a significant ball in the heroic juggling act that is regularly executed by the modern working mother. I’d be forced to rethink everything I thought I’d do or wouldn’t do as a parent, as if you really know these things before you bring a tiny, demanding, Bambi-eyed being into this world.
I was wrong about working mothers AND stay-at-home mothers. (As an aside, I was right about yoga pants.)
Two of my closest girlfriends are pregnant right now, both of them due around the same time: late May/early June.
You already know one of them – my best friend Ro. And guess what? Her baby girl (Mia) is due on Henry’s BIRTHDAY: June 5. How’s that for timing?
It’s killing me to not be in New York right now. The last time I saw Ro she was 48 hours pregnant (I’m exaggerating) and supervising my kid at a park while my father and I went about the serious task of climbing the park’s playground equipment. Even then it was obvious she exhibited better parenting skills than I did.
Her baby shower is the day before St. Anthony’s Triathlon, thus I am unable to attend. ANOTHER MAJOR BUMMER. Consequently, it is possible that my best friend will fully gestate and I will never see her baby bump in person. UNFATHOMABLE. Fifteen years ago, when I filled three pages in her high school yearbook, I never imagined we’d BARGAIN SHOP without each other much less give birth to babies on opposite ends of the Eastern seaboard. Kvetching over the phone about the marvelousness and shiteousness of pregnancy is not the same as seeing it happen before your eyes. I’ll never get to feel Baby Mia kick Ro in the ribs – at least not in utero anyway.
Ah. But such is life. I signed up for this when I left Buffalo nine years ago. (NINE YEARS AGO?! WHAT?) After a decade away from home your absence no longer goes missed. It simply becomes a matter of fact. You miss Christmases. You miss birthdays. You miss pregnancies. You miss babies being born.
When Henry was an infant he went through a ghost phase. And by ghost phase I mean he saw ghosts (i.e., waved at Nothing, smiled at Nothing and acknowledged the presence of Nothing in a way that was both unsettling and mystical to his reasonable parents).
This phase lasted from about nine to 12 months of age. It began one morning when I plodded bleary-eyed into Henry’s room and spotted him staring into space, smiling and blah-blah-blahing at a very specific Nothing in the corner of his room.
“Good morning Henry,” I said.
No reaction. He was too preoccupied with the Thing I Could Not See to pay me any mind.
For three whole minutes my perfectly rowdy baby failed to whine, coo or so much as nod in my direction. Although I was invisible, the Thing I Could Not See remained perfectly in focus.
I stared at the Nothingness he was staring at.
What on earth was he looking at? Or better yet, WHO was he looking at?
“Henry? Yoo hoo? Good morning,” I croaked.
It took some effort to divert his attention. When he finally did turn to face me he gave a little goodbye wave to the apparition in his room.
“Sweetheart, did you see something over there?”
He smiled smugly as if to say YOU DUMB ADULT. YOUR EYES ARE TOO OLD TO SEE WHAT I SEE. Returning to his usual helpless state, he threw his arms in the air and grunted – the universal baby sign for GET ME OUT OF MY CRIB DAMMIT.
We dressed in warm clothes. We went on a picnic. We picked up our Hot Mama’s of St. Pete Co-op basket. We played kickball. We made a sweet organic salad using fixin’s from our basket. We tried to set up a trampoline, but ended up bouncing around the yard instead. We got excited when Joe came home. We played with a strange, creepy baby doll from the 1960s. We fell asleep happy.
I’m at the grocery store, standing in the produce department. An old Italian woman in a babooshka approaches my cart. She presses her face so close to Henry’s face that for a second his curious mug is eclipsed by her curious mug.
“He is a bootiful,” she says.
“Thank you,” I say.
“His face, it is a bootiful!”
“Thank you,” I say again.
“He is the only one?”
“Yes,” I reply. “He is my only one.”
“He is a so bootiful you could a make a thousand of him.”
I laugh, picturing a thousand Henrys.
“One day,” I say. “I might make another one of him. A thousand seems excessive.”
She kisses him on the top of his head, oblivious to my sarcasm, and shuffles away to the cheese section. “Ciao ciao,” she says, her voice carrying over the clang of carts and drone of adult contemporary music.
Fifteen percent of the time I suck at being a mom. I do things other moms would find deplorable.
I lie to the pediatrician about how often I give my child his vitamin D supplement. (That would be never. We spend our days outside synthesizing Florida’s natural abundance of the vitamin.)
Also: I tend to wake up in a surly mood, not because I hate mornings, but because I hate 6 a.m. mornings. Most people think I’m bubbly. At 6 a.m. I’m as flat as an old can of root beer. I trudge into Henry’s room like a mom zombie. I close the door behind me and crawl under a blanket on his couch, during which time Henry tears through his toys, upends his collection of Legos, rips his clothes out of his bottom dresser drawer and squawks like an angry bird. This is not an ideal situation. If I’m lucky I can get away with closing my eyes for 10 minutes. If I’m unlucky, as I was Wednesday morning, I’m roused from my 6 a.m. coma via plastic truck to the face. For those of you who noticed, this was how I acquired the small gash on the bridge of my nose.
I’m a natural night owl. This doesn’t mix well with motherhood. Still, I’ve found ways to persevere.
Now that we’ve removed the front rail from Henry’s crib, I can easily crawl inside to catch a few minutes of shut-eye before his squawking reaches headache decibels. Last week I fell asleep in the crib while Henry gutted his bookshelf. When Joe got up for work he glanced at the baby monitor and saw footage of his wife curled up like a big galoot under a monkey blanket.
I’m not an ace mom. The first time I caught my kid eating dog food I made him rinse his mouth out with water. The second, third and fourth times I let him decide whether or not Kibble was palatable.
… along the water at Fort De Soto Park. It was beautiful and exhausting. I ate a lot of chocolate donuts. Joe built a fire using scrap wood from our unruly Brazilian Pepper tree. Hank ate a lot of dirt; chased dogs, squirrels, trucks and little girls on the playground. We woke up each morning in time to watch the sun rise over the Gulf of Mexico. Spending two nights in a tent with an 18-month-old and a snoring, farting pug will certainly make you appreciate your uncomfortable queen-sized bed and busted box spring.
It’s the day before Thanksgiving and I’ve got about 15 minutes until Henry wakes up, so let’s see what I can do with it.
Really, there’s too much to say. There’s always too much to say, so I’ll do what I always do and thank the higher powers and the lower powers and the super powers and the not-so-super powers for everyone and everything that makes life so beautiful, so raw and so fun.
Since this window is brief, I’ll focus on one thing, a recent development.
My son has started to give me kisses. Nothing lifts me like this does. Nothing. When he sees me from across a room, he’ll give me this look. It’s a cross between What Can I Break and What Can I Climb. If I’m perceptive enough to catch him in the middle of these two thoughts, I’ll throw my arms open and he’ll spring into my embrace, landing at my chest like a wild animal returning to its mother after a long hunt. Sometimes he turns his face to mine and plants a slobbery kiss on my chin, or my cheeks, or my forehead, or my glasses. Sometimes he’ll just stand there waiting for me to kiss him. This rare exhibit of patience astounds me.
I kissed a lot of boys in my day, but nothing prepared me for the joy of being kissed by my 18-month-old son. Joy is an understatement. It’s surreal actually. When you take the time to live in it, the heaviness and the lightness of the moment can spin you around. It’s essentially a flash, a spark in your day, and the more he does it the more you take it for granted.
It’s one of those feelings that as a writer I’ll never accurately describe. It puts into perspective the things that matter and the things that don’t. It wipes away the difficulties of motherhood. It conjures up in you the hopefulness of youth, the wisdom of adulthood, the profound sense of love that fills a body with warmth and gratitude. So much gratitude.
Oddities
Reading material
Me.
Joe.
Henry.
Chip.
Buzzy.
Why Lance?
This blog is named after my old friend Sarah's manifestation of a dreamy Wyoming cowboy named Lance, because the word blog sounds like something that comes out of a person's nose.
About me
I'm a journalist who spends my Mondays through Fridays writing other people's stories, a chronic procrastinator who needs structure. I once quit my job to write a book and, like most writers, I made up excuses why I couldn't keep at it.
My husband Joe likes to sleep in late on the weekends, but since we have a kid now that happens less than he'd like.
Before Henry and Chip, I used to spend my mornings browsing celebrity tabloid websites while our dog snored under the covers. Now I hide my computer in spots my feral children can't reach because everything I own is now broken, stained or peed on.
I created Lance in an attempt to better spend my free time. I thought it might jump start a second attempt at writing a novel.
It hasn't. And my free time is gone.
But I'm still here writing.
I'm 35 and I've yet to get caught up in something else, which is kind of a big deal for a chronic procrastinator. |
BREAKING: US releases Category One UAV technology to INDIA
The US has confirmed that critical Category One UAV technology from US-based General Atomics has been released, acceding to India’s strong request. The Indian Air Force has also requested 100 units of the Predator C Avenger aircraft, worth $8 bn.
Highly placed sources told FinancialExpress, “The White House under President Donald Trump spearheaded the interagency process to make a very significant policy change in favour of India by granting this technology, as requested at senior levels of the Indian government.”
As reported earlier, the Indian Navy had sent a letter of request for 22 Sea Guardians in June 2016, and under the Obama administration no tangible action was taken. However, the biggest tangible takeaway from the recent Trump-Modi deliberations in Washington DC was the operationalisation of the major defence partner relationship.
Lall had commented, “We are extremely pleased President Trump and Prime Minister Modi have had excellent deliberations and the path forward for a game changer in US India defence relations has been charted. Given the Sea Guardian’s capabilities such a US response to the Indian Navy request demonstrates a major change in US policy because this type of aircraft capability is only exported to a very select few of America’s closest defence partners. This represents tangible implementation of US Congress’ designation of India as a major defence partner.”
According to sources, India had been requesting Predator technology for several years, and it was only under the combination of Trump and Modi that the decision moved to this point. India was able to join the MTCR in significant part thanks to United States backing of its entry. Observers term this another major foreign policy success for Modi.
Earlier this year, the Indian Air Force (IAF) had also officially requested the US government for General Atomics Predator C Avenger aircraft. This request is being actively considered by the White House as a second step after operationalising the 22 Guardian aircraft for the Indian Navy.
As military aviation transforms globally to autonomous systems, US and India have a great opportunity to collaborate at the highest levels of technology and innovation. Overall Indian requirement for UAVs is approximately 650 units.
|
Fibonacci pinwheel blanket for August
What yarn and needles did you use and where did you get them? Why did you pick this yarn?
I used Knit Picks Wool of the Andes Bulky in Navy, Pewter and Wine. August was my friend's first baby, so I knew he'd get other baby blankets. So, I picked wool to make the blanket a little more sturdy to be used as a floor cover for tummy time.
Tell us more about your design. Was it inspired by another pattern? Why did you pick those colors?
I was inspired by the fact that I like to try making different things, using different patterns and yarns. I also figured that it would be the only blanket like this that August got. The colors went well with the sailor design scheme in his room.
I found the pattern on Ravelry (similar one can be found here: http://knitsoquaint.blogspot.com/2009/04/reverse-pinwheel-blanket.html).
Did you run into any problems when knitting your baby blanket? How long did it take you to knit?
I had a lot of difficulty with starting this project, so I ripped it out and restarted numerous times. But once I got the center done and moved the project from double-pointed needles to circulars, it became smooth sailing.
Anything else you'd like to share about your baby blanket or knitting in general? |
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>cn.java666</groupId>
<artifactId>SZT-bigdata</artifactId>
<version>0.1</version>
</parent>
<artifactId>SZT-spark-hive</artifactId>
<description>
Building Spark Applications | 6.2.x | Cloudera Documentation
https://docs.cloudera.com/documentation/enterprise/6/6.2/topics/spark_building.html
</description>
<dependencies>
<!-- https://mvnrepository.com/artifact/org.apache.spark/spark-hive -->
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-hive_2.11</artifactId>
<version>${spark.version}</version>
<!--<scope>provided</scope>-->
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.spark/spark-sql -->
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-sql_2.11</artifactId>
<version>${spark.version}</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.spark/spark-yarn -->
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-yarn_2.11</artifactId>
<version>${spark.version}</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.spark/spark-catalyst -->
<!--<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-catalyst_2.11</artifactId>
<version>${spark.version}</version>
</dependency>-->
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
<version>5.1.27</version>
</dependency>
<!-- https://mvnrepository.com/artifact/com.hadoop.gplcompression/hadoop-lzo -->
<!--<dependency>
<groupId>com.hadoop.gplcompression</groupId>
<artifactId>hadoop-lzo</artifactId>
<!– cdh6.2.1 corresponds to version 0.4.15; required for yarn mode. Local-mode compatibility cannot be guaranteed, so disabling compression is recommended –>
<version>0.4.20</version>
</dependency>-->
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>${junit.version}</version>
<!--<scope>test</scope>-->
</dependency>
</dependencies>
<build>
<plugins>
<!-- Packaging plugin; without it, Scala classes will not be compiled and packaged -->
<plugin>
<groupId>net.alchim31.maven</groupId>
<artifactId>scala-maven-plugin</artifactId>
<version>3.4.6</version>
<executions>
<execution>
<goals>
<goal>compile</goal>
<goal>testCompile</goal>
</goals>
</execution>
</executions>
</plugin>
<!--<plugin>
<artifactId>maven-assembly-plugin</artifactId>
<configuration>
<descriptorRefs>
<descriptorRef>jar-with-dependencies</descriptorRef>
</descriptorRefs>
</configuration>
<executions>
<execution>
<id>make-assembly</id>
<phase>package</phase>
<goals>
<goal>single</goal>
</goals>
</execution>
</executions>
</plugin>-->
</plugins>
</build>
</project>
|
Ultraluminous Distant Ellipticals Revealed in ISO + SPITZER Surveys
The galaxy populations discovered in the deepest UV-optical and infrared surveys appear so different that the basic physics of galaxy evolution is still debated. In Rocca-Volmerange et al., 2007 (hereafter RVLS2007), we proposed an interpretation of the deepest mid-IR faint galaxy counts in accordance with the already-published UV-optical NIR count analysis (Fioc & Rocca-Volmerange, 1999).
The main constraint is to reproduce the galaxy number excess observed at 12 μm in the ISO-ESO-Sculptor Survey (Seymour et al., 2007), as at 15 μm and 24 μm in other ISO and SPITZER surveys. The originality of the analysis is to follow continuously the evolution of stellar and dust emissions from UV to infrared as a function of galaxy type. We use the new version of our evolutionary code PÉGASE.3, extended to dust emission. Evolving UV-to-IR SEDs allow us to compute robust k- and e- (evolution) corrections for all redshifts and types, and hence to predict high-z luminosities.
We find that the galaxy-type fractions derived from the optical counts are all detected in the mid-IR counts, with the exception of a minor fraction of ellipticals (< 10% of all galaxies) which appear ultra-bright and dusty. The differential brightness excess (-2.5 mag at 12 μm, -5 at 24 μm) confirms the presence of dust, while the bump is explained by the strongly redshifted stellar emission of ellipticals (k+e corrections). The model is valid at 12 μm, 15 μm and 24 μm. It does not need any redshift-dependent starbursts, and no number density evolution is included in our models. These ULIRGs are massive evolved galaxies formed at early epochs and are likely hosts of AGNs. They will be essential targets for the future telescopes SPICA, HERSCHEL and ALMA. |
How to reconcile risk sharing and market discipline in the euro area
Agnès Bénassy-Quéré, Markus K Brunnermeier, Henrik Enderlein, Emmanuel Farhi, Marcel Fratzscher, Clemens Fuest, Pierre-Olivier Gourinchas, Philippe Martin, Jean Pisani-Ferry, Hélène Rey, Isabel Schnabel, Nicolas Véron, Beatrice Weder di Mauro, Jeromin Zettelmeyer
The euro area continues to suffer from critical weaknesses that are the result of a poorly designed fiscal and financial architecture, but its members are divided on how to address the problems. This column proposes six reforms which, if delivered as a package, would improve the euro area’s financial stability, political cohesion, and potential for delivering prosperity to its citizens, all while addressing the priorities and concerns of participating countries.
After nearly a decade of stagnation, the euro area is finally experiencing a robust recovery. While this comes as a relief – particularly in countries with high debt and unemployment levels – it is also breeding complacency about the underlying state of the euro area. Maintaining the status quo or settling for marginal changes would be a serious mistake, however, because the currency union continues to suffer from critical weaknesses, including financial fragility, suboptimal conditions for long-term growth, and deep economic and political divisions.
While these problems have many causes, a poorly designed fiscal and financial architecture is an important contributor to all of them:
The ‘doom loop’ between banks and sovereigns continues to pose a major threat to individual member states and the euro area as a whole. An incomplete banking union and fragmented capital markets prevent the euro area from reaping the full benefits of monetary integration and from achieving better risk sharing through market mechanisms.
Fiscal rules are non-transparent, pro-cyclical, and divisive, and have not been very effective in reducing public debts. The flaws in the euro area’s fiscal architecture have overburdened the ECB and increasingly given rise to political tensions.
The euro area’s inability to deal with insolvent countries other than through crisis loans conditioned on harsh fiscal adjustment has fuelled nationalist and populist movements in both debtor and creditor countries. The resulting loss of trust may eventually threaten not just the euro, but the entire European project.
The deadlock over euro area reform
The members of the euro area are deeply divided on how to address these problems. Some argue for more flexible rules and better stabilisation and risk-sharing instruments at the euro area level, such as common budgetary mechanisms (or even fiscal union) to support countries in trouble. Others would like to see tougher rules and stronger incentives to induce prudent policies at the national level, while rejecting any additional risk sharing. One side would like to rule out sovereign-debt restructuring as a tool for overcoming deep debt crises, while the other argues that market discipline is indispensable for fiscal responsibility, and ultimately for financial stability. The seeming irreconcilability of these positions has produced a deadlock over euro area reform.
We believe that the choice between more risk sharing and better incentives is a false alternative, for three reasons. First, a robust financial architecture requires instruments for both crisis prevention (good incentives) and crisis mitigation (since risks remain even with the best incentives). Second, risk-sharing mechanisms can be designed in a way that mitigates or even removes the risk of moral hazard. Third, well-designed risk-sharing and stabilisation instruments are in fact necessary for effective discipline. In particular, the no bailout rule will lack credibility if its implementation leads to chaos, contagion, and the threat of euro area break-up – as euro area members experienced in 2010-12 and again during the 2015 Greek aftershock. Well-designed risk-sharing arrangements and improved incentives, in the form of both better rules and more market discipline, should hence be viewed as complements not substitutes.
Six areas for reform
Achieving this complementarity, however, is not straightforward in practice. It calls for stabilisation and insurance mechanisms that are both effective and do not give rise to permanent transfers. It also requires a reformed institutional framework. In a new CEPR Policy Insight (Bénassy-Quéré et al. 2018), we outline six main areas of reform to the European financial, fiscal, and institutional architecture that would meet these aims.
First, breaking the vicious circle between banks and sovereigns through the coordinated introduction of sovereign concentration charges for banks and a common deposit insurance. The former would require banks to post more capital if debt owed by a single sovereign creditor – typically the home country – exceeds a certain proportion of their capital, incentivising the diversification of banks’ portfolios of government securities. The latter would protect all insured euro area depositors equally, irrespective of the country and its situation when the insurance is triggered. Incentives for prudent policies at the national level would be maintained by pricing country-specific risk in the calculation of insurance premiums, and through a reinsurance approach – common funds could be tapped only after ‘national compartments’ have been exhausted.
At the same time, mechanisms to bail in creditors of failing banks need to be strengthened, supervisory pressure to reduce existing non-performing loans needs to increase (including on smaller banks), and bank regulatory standards should be tightened and further harmonised. To give capital markets union a push, the European Securities and Markets Authority (ESMA) should receive wider authority over an increasing range of market segments, and its governance should be reformed accordingly. Together, these measures would decisively reduce the correlation between bank and sovereign risk and pave the way for a cross-border integration of banking and capital markets.
Second, replacing the current system of fiscal rules focused on the ‘structural deficit’ by a simple expenditure rule guided by a long-term debt reduction target. The present rules both lack flexibility in bad times and teeth in good times. They are also complex and hard to enforce, exposing the European Commission to criticism from both sides. They should be replaced by the principle that government expenditure must not grow faster than long-term nominal output, and should grow at a slower pace in countries that need to reduce their debt-to-GDP ratios. A rule of this type is both less error-prone than the present rules and more effective in stabilising economic cycles, since cyclical changes in revenue do not need to be offset by changes in expenditure.
Monitoring compliance with the fiscal rule should be devolved to independent national fiscal watchdogs, supervised by an independent euro area-level institution, as elaborated below. Governments that violate the rule would be required to finance excess spending using junior (‘accountability’) bonds whose maturity would be automatically extended in the event of an ESM programme (the status of the existing debt stock would remain unaffected). The real-time market pressure associated with the need to issue such bonds would be far more credible than the present threats of fines, which have never been enforced. And the cost at which these junior sovereign bonds are issued will depend on the credibility of government policies to tackle fiscal problems in the future.
Third, creating the economic, legal and institutional underpinnings for orderly sovereign-debt restructuring of countries whose solvency cannot be restored through conditional crisis lending. First and foremost, this requires reducing the economic and financial disruptions from debt restructuring – by reducing the exposure of banks to individual sovereigns, as described above, and by creating better stabilisation tools and a euro area safe asset, as described below. In addition, orderly and credible debt restructuring requires legal mechanisms that protect sovereigns from creditors that attempt to ‘hold out’ for full repayment, and ESM policies and procedures that provide an effective commitment not to bail out countries with unsustainable debts.
When introducing such policies, it is essential that they do not give rise to instability in debt markets. For this reason, we do not advocate a policy that would require automatic haircuts or maturity extensions of all maturing debt in the event of an ESM programme. Furthermore, tougher ESM lending policies and sovereign concentration charges for banks should be:
phased in gradually;
announced at a time when the debts of all euro area countries that depend on market access are widely expected to be sustainable, as is currently the case if fiscal policies stay on track; and
combined with other reforms that reduce sovereign risk, such as the risk-sharing mechanisms proposed in our blueprint.
Fourth, creating a euro area fund, financed by national contributions, that helps participating member countries absorb large economic disruptions. Since small fluctuations can be offset through national fiscal policies, pay-outs would be triggered only if employment falls below (or unemployment rises above) a pre-set level. To ensure that the system does not lead to permanent transfers, national contributions would be higher for countries that are more likely to draw on the fund, and revised based on ongoing experience. This system would maintain good incentives through three mechanisms: ‘first losses’ would continue to be borne at national level, participation in the scheme would depend on compliance with fiscal rules and the European semester, and higher drawings would lead to higher national contributions.
Fifth, an initiative to create a synthetic euro area safe asset that would offer investors an alternative to national sovereign bonds. ‘Safety’ could be created through a combination of diversification and seniority; for example, financial intermediaries would purchase a standardised diversified portfolio of sovereign bonds and use this as collateral for a security issued in several tranches. Introducing such assets in parallel with a regulation on limiting sovereign concentration risk would further help avoid disruptive shifts in the demand for euro area sovereign bonds, and hence contribute to financial stability. Risks associated with the introduction of such assets must be mitigated both through careful design and by completing a test phase before the generation of such assets is ‘scaled up’.
Sixth, reforming the euro area institutional architecture. We propose two main reforms. The first is an improvement of the institutional surveillance apparatus. The role of the watchdog (‘prosecutor’) should be separated from that of the political decision-maker (‘judge’) by creating an independent fiscal watchdog within the European Commission (for example, a special Commissioner) or, alternatively, by moving the watchdog role outside the Commission (though this would require an overhaul of the treaties). At the same time, the Eurogroup presidency role (judge) could be assigned to the Commission, following the template of the High Representative of the Union for Foreign Affairs.
In addition, the policy responsibility for conditional crisis lending should be fully assigned to a reformed ESM, with an appropriate accountability structure. The latter should include a layer of political accountability – for example, by requiring the ESM Managing Director to explain and justify the design of ESM programmes to a committee of the European Parliament. Financial oversight should remain in the hands of ESM shareholders.
These proposals should be viewed as a package that largely requires joint implementation. Cutting through the ‘doom loop’ connecting banks and sovereigns in both directions requires the reduction of concentrated sovereign exposures of banks together with a European deposit insurance system. The reform of fiscal rules requires stronger and more independent fiscal watchdogs at both the national and European level. Making the no bailout rule credible requires not only a better legal framework for debt restructuring as a last resort, but also better fiscal and private risk-sharing arrangements, and an institutional strengthening of the ESM.
Concluding remarks
Our proposals do not venture into territory that requires new political judgements, such as which public goods should be delivered at the euro area level, and how a euro area budget that would provide such goods should be financed and governed. Their adoption would nonetheless be a game-changer, improving the euro area’s financial stability, political cohesion, and potential for delivering prosperity to its citizens, all while addressing the priorities and concerns of participating countries. Our leaders should not settle for less.
Authors’ note: All authors contributed in a personal capacity, not on behalf of their respective institutions, and irrespective of any policy roles they may hold or may have held in the past.
References
Bénassy-Quéré, A, M Brunnermeier, H Enderlein, E Farhi, M Fratzscher, C Fuest, P-O Gourinchas, P Martin, J Pisani-Ferry, H Rey, I Schnabel, N Véron, B Weder di Mauro and J Zettelmeyer (2018), “Reconciling risk sharing with market discipline: A constructive approach to euro area reform”, CEPR Policy Insight No. 91. |
A student seeks help with two separate questions: proving that one polynomial divides
another; and determining the integer values of a function given a product of its
variables. Doctor Vogler invokes modular arithmetic to crack the proof, and attacks
the function as a quadratic polynomial.
An n-dragon is a set of n consecutive positive integers. The first
two-thirds of them is called the tail, the remaining one-third the
head, and the sum of the numbers in the tail is equal to the sum of
the numbers in the head. Find the sum of the tail of a 99,999-dragon.
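The arithmetic behind this puzzle is short enough to check by machine. A sketch (the function and variable names are mine, not from the archive):

```python
# An n-dragon is n consecutive integers; the first 2n/3 form the tail,
# the last n/3 the head, and the two sums must be equal.
def dragon_tail_sum(n):
    t, h = 2 * n // 3, n // 3          # tail length, head length
    # With starting integer a: sum(tail) = t*a + t*(t-1)/2 and
    # sum(head) = h*(a+t) + h*(h-1)/2.  Equating and solving for a:
    a = (h * t + h * (h - 1) // 2 - t * (t - 1) // 2) // (t - h)
    tail, head = range(a, a + t), range(a + t, a + n)
    assert sum(tail) == sum(head)      # verify the dragon property
    return sum(tail)

print(dragon_tail_sum(99999))          # → 3333266667
```

For the 99,999-dragon this gives a starting integer of 16,667 and a tail sum of 3,333,266,667.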
An adult seeks to encode a table of values into one number, with full recoverability.
Taking a cue from random number generators, Doctor Douglas suggests a decimal
representation, interleaving, and parsing protocol.
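The idea can be sketched with one simple variant — fixed-width concatenation rather than true digit interleaving. The width and function names below are my own assumptions, not Doctor Douglas's exact protocol:

```python
# Sketch: pack a row of non-negative integers into one integer by
# zero-padding each value to a fixed width and concatenating the digits.
WIDTH = 6  # assumed fixed width; every value must be < 10**WIDTH

def encode(values):
    return int("".join(f"{v:0{WIDTH}d}" for v in values))

def decode(n, count):
    s = str(n).zfill(count * WIDTH)    # restore leading zeros
    return [int(s[i * WIDTH:(i + 1) * WIDTH]) for i in range(count)]

row = [42, 7, 123456]
assert decode(encode(row), len(row)) == row   # full recoverability
```

As long as the width and row length are agreed on in advance, the parsing step recovers the table exactly.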
In class we are shown how to square both sides of an equation or take
the square root of both sides, but is there a rule like the addition
property of equality that formally says those are valid steps? |
It is official: Christa Haggai Ramey is a member of the American Board of Trial Advocates. "The American Board of Trial Advocates is a national association of experienced trial lawyers and judges dedicated to the preservation and promotion of the civil jury trial right provided by the Seventh Amendment to the U.S. Constitution. First and foremost, ABOTA works to uphold the jury system by educating the American public about the history and value of the right to trial by jury." Truly an honor!
Super Lawyers magazine features the list and profiles of selected attorneys and is distributed to attorneys in the state or region and the ABA-approved law school libraries. Super Lawyers is also published as a special section in leading city and regional magazines across the country. Consumers can…
what would you do if you saw someone in peril? would you put yourself in harm's way? #WeAreRotary #HonoringHeros #RealLifeHeros #Rotary #WestchesterRotary I am a proud member of Westchester Rotary and Rotary International
Truck Accident Attorneys Los Angeles
Among the most deadly road accidents in Los Angeles are those involving a truck or 18-wheeler. If you or a loved one is involved in this kind of auto accident, it is wise not to delay pursuing the compensation and justice you may be entitled to, which is why a personal injury attorney should be contacted immediately. For residents of the state of California, Haggai Law Firm is the name to trust, especially in finding a personal injury lawyer.
Truck Accident Analysis
Anyone who is affected by an auto accident, particularly a truck accident in Los Angeles, needs the help of a lawyer to get compensation for damages and medical expenses. This type of auto accident is quite disastrous because of the bulk, momentum, and size of the vehicle. Big rigs are typically the ones that cause major road disasters, which is why it is common for the motorway to be closed for several hours because of their immensity and weight.
In addition, extensive injuries can occur to the driver and the other persons involved in the truck accident. An auto accident that results in falling materials may inadvertently lead to more accidents, which can prove difficult for a lawyer. This is especially true if the truck is carrying explosive components such as gunpowder, oil, or fireworks. It is at this point that a personal injury attorney will look out for you in this time of crisis in California.
Why The Haggai Law Firm
Haggai Law Firm provides only expert personal injury attorneys for its clients in Los Angeles. Our law firm has the experience necessary to pursue successful personal injury cases in the state of California, specifically in Los Angeles. Of all the personal injury law firms in California, Haggai Law Firm is the only one that requires every lawyer to use an innovative, vigorous, and proactive approach when handling auto accident cases.
The lawyer you hire from the firm will use up-to-date technology while providing exceptional service in California, ensuring a healthy attorney-client relationship. The attorney is guaranteed to be caring and conscientious, assisting clients every step of the way. You will also be glad to know that every lawyer, and the firm itself, has built a credible reputation by working well with the insurance companies in Los Angeles and other major cities in California. In addition, your attorney has worked with the courts and medical providers.
You can count on the attorney provided by Haggai Law Firm to handle a variety of cases concerning any of the following personal injury cases in California:
Truck accident
Bicycle accident
Brain injury
Bus accident
Fractures
Burn injury
Motorcycle accident
Slip and fall
Pedestrian misfortune
Spinal cord damage
Wrongful death
Whatever personal injury case you have in Los Angeles, you are given the assurance that your attorney from this law firm will deliver results.
What You Will Obtain from a Personal Injury Attorney
In a sense, your attorney or lawyer will help you obtain substantial amounts on an emergency-payment basis. Your lawyer knows that such claims can run into the millions, especially in a truck accident. This type of auto accident generates multiple damage claims, as the attorney counts the medical bills of the individuals involved in the incident along with non-economic damages, road damages, and lost wages.
Some truck accidents in Los Angeles are not caused by the truck driver because there are many other factors that cause accidents. If this is your case, Haggai Law Firm will surely provide you with a skilled lawyer that will take care of your personal injury law defense in Los Angeles.
Free Case Review
First Name:
Last Name:
Address:
City:
Phone:
Email:
Disclaimer:*
I have read the disclaimer
The use of the Internet or this form for communication with the firm or any individual member of the firm does not establish an attorney-client relationship. Confidential or time-sensitive information should not be sent through this form. |
DAZ Studio 4.5.1.6 is now available
DAZ 3D plugins that were available for version 4.5.0.114 will NOT need to be updated (re-downloaded), since they are compatible with this new version.
This release contains numerous bug fixes and other enhancements - including support for the DSON Importer for Poser. See the change log for more details. The Genesis Essentials Starter Bundle (default Genesis content) has also been updated to support this release but is not required to download for users of version 4.5.0.114.
Comments
Which version of the Genesis Starter Essentials is NOT required for users of DAZStudio v 4.0.5.114? I've downloaded 14812_GenesisStarterEssentials_1.6_trx.zip, and also 14812_GenesisStarterEssentialsPoserCF_1.6_dpc.zip, as well as
DSON_Importer_for_Poser_1.0.0.9_Mac64.zip. Yep, I'm a Mac user. I'm guessing it's the first of these Genesis Starter Essentials --
14812_GenesisStarterEssentials_1.6_trx.zip --
that is already installed if one is using DAZStudio v 4.0.5.114. Is this correct?
According to the post below the MetaData had been updated in the GenesisStarterEssentials 1.6 and the DAZ Studio Pro 4.5.0.137 beta came with a lesser version number. So there is at least some update (even if potentially marginal, depending on your use of MetaData):
"The Genesis Essentials Starter Bundle (default Genesis content) has also been updated to support this release but is not required to download for users of version 4.5.0.114."
You can get away with not installing the Genesis Essentials Starter Bundle and it would still work with your current essentials. I'd wager most updates of the Genesis Essentials Starter Bundle were done due to the Poser support.
Whether you want to download and install the new essentials is up to you, especially since the essentials bundle is a rather large download (450MB-ish).
Just like the last beta, when the installer runs the uninstaller for the previous version, the uninstaller errors and crashes, followed by the installer crashing.
Unlike the last beta, there's no option to install without uninstalling so I am now completely stuck, unable to install.
I wasn't the only person who posted about the problem with the uninstall on the last beta, so one might have thought DAZ would have fixed the issue. Instead, they've made it worse. Who was the smart person who decided to take the option not to uninstall out of the uninstaller? They can come over here and fix my install.
Windows 7. The first crashed uninstall did manage somehow to wipe out the .dat file for the uninstaller, so on any subsequent attempts to install the uninstall failed again, this time erroring on the lack of the .dat file. And every time the uninstall fails for any reason, the installer itself crashes the moment control is returned to it by my acknowledging the failure of the uninstall.
This whole DAZ install/uninstall method is so unrobust I could cry. It is just so pathetic.
There should be an option to install without uninstalling, to deal with cases when the uninstall fails.
And the installer should not crash immediately following the failure of an uninstall.
This really isn't rocket science. Just basic, for heaven's sake. Build in some flipping robustness. Don't just assume every step will succeed so when one step fails the whole thing goes tits up.
This happened, and I reported it, virtually every DS 4 beta before the first DS4 public release. And since the public release and my demotion to the ranks of the plebs, it's kept on happening every public version and public beta of DS 4 and DS4.5 too.
Hello? Anyone listening? Well, I was never listened to as a tester, all my open bug reports were closed awhile ago even though I kept adding notes the bugs were still there (and no, I am NOT going to type them all out again in a new bug tracker - typing hurts, and you could have damn well migrated the open bugs, or checked them yourselves instead of just closing them), so I don't suppose I'll be listened to as a mere customer.
Yes, I've got it to install by going into my Program Files and manually deleting the broken uninstaller .exe. A user SHOULD NOT NEED TO DO THAT.
Then the installer ran to completion - but threw up an error message about a problem installing the CMS.
AGAIN.
JUST LIKE LAST BETA INSTALL AND THE ONE BEFORE.
Good thing I don't use the CMS - though it would be nice to have it just in case I ever want it.
It would be REALLY NICE if DAZ actually spent some time fixing things, instead of just releasing new things (some broken) while leaving all the broken stuff unfixed. It would be ESPECIALLY NICE if, for the first time ever in all the beta and public versions of DS 4 and DS 4.5 JUST ONE would actually install without crashes of installers and uninstallers and error messages about parts of the install process and me needing to go rooting through the Program Files folders to find and delete things just to get the bloody thing to install.
One other thing I forgot to mention: I never, ever install DAZ Studio or content into Program Files. It was a habit I got into with DS3, and since then my issues have been few. Yes, I have had a few install/uninstall problems like this a while back, but since I follow those steps I haven't had any issues at all when installing an update. And I don't mean to demean your issues, David, and I am sorry I can't help. The only thing I can suggest is that we keep reporting bugs via the bug-tracking site.
I was on the 4.5.0.137 beta version before and did not update any plugins or shaders for this new version. So far so good, and no plugin complaints, so e.g. dynamic cloth control, infinito, and GenX still work.
Crazed. I got the same problem. It appears to be an issue with the Remove-DAZStudio4_Win64.exe that was installed with the 4.5.0 release. Windows reports "...is not a valid Win32 application" when trying to run it directly.
After attempting to run the 4.5.1.6 installer, DAZ studio no longer even appears in the Control Panel list of installed programs (registry now screwed up?). Not possible to uninstall using any standard method that I can see. This is so pathetic and also so typical of DAZ. Before trying anything else, would be useful to find out what others have successfully done to get this to work.
I've installed 4.5.1.6 on numerous machines: XP/32, XP/64, Vista/32, Vista/64, Win7/32,Win7/64,MacOS and Linux/Wine. No problems. Start the investigation on a Windows problem on your machine. It is likely that you have something interfering with the process.
Just verifying that you are up to date. The problem is in ATI's code. Nothing DAZ or anyone else can do right now. Also, some in other groups have stated that the problem can be traced to faulty hardware. I don't personally agree with that reasoning in this case. I think the problem is a problem in the ATI drivers.
Everything from JavaGL to Cinema4D is choking on the ATI dll. That is not a good thing.
Some additional information. Just information and I don't recommend anyone else do this as I may regret my haste. In any case, I renamed the Remove-DAZStudio4_Win64.exe in Program Files\DAZ 3D\DAZStudio4\Uninstallers to Remove-DAZStudio4_Win64.exe.bak and re-ran the 4.5.1.6 installer. It installed. However, since an uninstaller did not run for the previous version, I do not know what impact this may have. I will probably run the uninstall and then re-install before proceeding.
Previous post indicates that this problem was reported during beta testing. If it is the same problem, there is simply no valid excuse for not fixing it prior to the general release of 4.5. Period.
After loading up DS 4.5 and rendering an old scene, I think I found a bug in the OpenGL GLSL render setting (stage 3 on the render slider). The program crashes and needs to be restarted. I'm running:
Windows Vista™ Home Premium
Version 6.0.6002 Service Pack 2
AMD Athlon(tm) 64 X2 Dual Core Processor 5200+, 2600 Mhz
Physical Memory (RAM) 8.00 GB
NVIDIA GeForce GT 430
Display Driver 3.06.23.
And I get this error and crash:
WARNING: interface\dialogs\dzbasicdialog.cpp(164): No parent specified for DzBasicDialog!
Rendering in OpenGL...
Compile failed on OpenGL Default Material Shaders, rendering without GLSL.
I tried it with DS 4.0.3.47 and did not have any problems. As far as I can tell I'm up to date on display drivers, so it may be a bug. I haven't read through all of this thread, so bear with me, but I'm wondering if there are others with the same problem or if they are working on it.
Thanks in advance, my friends.
Addendum:
I also get this in the crash Message:
DAZStudio.exe caused ACCESS_VIOLATION in module "C:\Windows\system32\nvoglv32.dll" at 0023:63B27DF4
The nvoglv32.dll is an Nvidia module, so something needs to be addressed there. I also loaded and tried Firery Scene 2 and got the same crash. I reset the 3D settings for DS 4.5 in the Nvidia control panel and it still crashes. After reading threads I saw that some shaders may be a problem, which is why I loaded the new ready-to-render scene and am still having problems.
Has Anybody else had this problem?
I'll ask this again: has anybody else had this problem, or can anybody try this for me?
Once again, thanks in advance.
It seems to still be conflicting with the Nvidia OpenGL driver. If you can, please report this to the bug tracker.
Kendall
I have installed hundreds of applications on various machines over the years. Very rarely had any issue with an installer. However, in every case, it came down to an issue with the installer and not with the system.
I have installed all of the DS4 updates over the past year without issue until now. At this point, I am quite confident This is a DAZ problem and not a Windows problem.
In over 30 years of developing software, one of the things that I know from experience is that just because it works on system A and not on system B, does not automatically mean that root cause lies with system B. It means that the software process better be robust enough to determine exactly what the root cause is when reported. Too often, wishful thinking combined with "...well, it works on my machine so the problem must be with yours..." will result in defects getting overlooked.
I don't see the point of discussing compatibility with so many online games and 3D tools when I was talking about one online game and that forum.
There seems to be no difference when we talk about DAZ Studio and video cards.
I just want to know whether I can use the newest DAZ Studio with my PC's ATI graphics card and driver.
If I can't, I want to know which driver is best, or whether I must change the graphics card for the new DAZ Studio.
Actually, I could use OpenGL rendering (though I seldom use it, so it is not so important to me) with my ATI graphics card in 4.0.3. I had not changed my graphics card; I only updated DAZ Studio and the driver.
And DAZ could announce compatibility with graphics cards, if DAZ knew what true user service is.
Is there any description of compatibility with ATI graphics cards?
Now I have become angry about this problem, so I have decided to throw all my anger at AMD support :vampire:
A Stream Restoration Tour of Donaldson Run's Tributary A in Arlington County
Please join us for a tour of Donaldson Run's Tributary A given by Jen McDonnell from the Arlington County Department of Environmental Services. The tour will highlight the stream restoration techniques used and discuss the challenges of stream restoration planning and construction. The tour will also include a look at Tributary B, which will provide a before/after comparison.
REGISTRATION is only $5 for all attendees. Please register by Friday, May 22nd by contacting Mathini. Payment may be made online or at the door. (Make checks payable to AWRA-NCR Section.) |
Q:
With OpenVPN how can I only let LAN go through the VPN?
So I have the following setup:
Now from home I like to make a connection through OpenVPN to access my LAN from work. So I edit the config of the OpenVPN client on my home computer to:
remote 180.135.0.10 1194
Now I can connect to it but it won't allow me to access the LAN just out of the box. So I add a new line to the clients config:
redirect-gateway def1
This will make sure all traffic will go through the VPN. This works. However now I don't have internet. So I add the following lines:
dhcp-option DNS 8.8.8.8
dhcp-option DNS 8.8.4.4
Now I can access the LAN through my VPN and when I check WhatIsMyIp it is clear that the internet traffic is going through the VPN also as I now have the work WAN IP. This is not preferred. In my ideal situation the only traffic that should go through the VPN is the LAN of work all other traffic such as internet and my home LAN should just route normal.
Does anyone have an answer how to accomplish such a thing?
A:
If I understand your configuration and network topology, then you should delete the redirect-gateway directive and instead add:
route 192.168.188.0 255.255.255.0
If you want to reach work machines by name, you should configure your work DNS server (if any):
dhcp-option DNS <your work dns server LAN IP>
Delete the two Google DNS entries.
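Putting the pieces together, a minimal split-tunnel client config might look like this (a sketch: the 192.168.188.0/24 subnet comes from the route line above, and the DNS line stays a placeholder as in the answer):

```
remote 180.135.0.10 1194
# Send only the work LAN through the tunnel; without redirect-gateway,
# internet and home-LAN traffic keep using the normal default route.
route 192.168.188.0 255.255.255.0
# Optional, only if work runs a DNS server for internal names:
# dhcp-option DNS <your work dns server LAN IP>
```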
Regards
Paolo Basenghi
|
77 F.Supp.2d 1215 (1999)
Jose E. GRAVES, Plaintiff,
v.
BURLINGTON NORTHERN AND SANTA FE RAILWAY COMPANY f/k/a The Burlington Northern Railroad Company, Defendants.
No. CIV-99-147-S.
United States District Court, E.D. Oklahoma.
September 8, 1999.
*1216 Ed Gage, Muskogee, OK, Drew C. Baebler and Daniel J. Cohen, St. Louis, MO, for plaintiff.
A. Camp Bonds, Jr., Muskogee, OK, for defendants.
ORDER
SEAY, District Judge.
Before the court for its consideration is a motion for partial summary judgment filed by the plaintiff, Jose Graves. In this motion for partial summary judgment, the plaintiff has requested this court enter an order granting judgment in his favor as it relates to defendant's defenses of preemption, estoppel, collateral estoppel and res judicata. Defendant filed an objection to this motion arguing these defenses are proper because the issues in this action have been previously litigated in a disciplinary proceeding brought pursuant to the collective bargaining agreement and the Railway Labor Act (hereinafter RLA).[1]
*1217 BACKGROUND
The court finds the facts as follows. Plaintiff was employed by the defendant as a car inspector in Tulsa, Oklahoma. Plaintiff alleges that in June 1997, while attempting to release a hand brake, he fell off the top of a rail car sustaining injuries. Following this accident, the defendant, pursuant to the collective bargaining agreement and the RLA, instituted an investigation into the truthfulness of plaintiff's allegations. A hearing was held on the matter.
Plaintiff was represented at the hearing by a Union Representative. Plaintiff, as well as his Union Representative, were provided an opportunity to cross-examine witnesses, call witnesses and present testimony and evidence. After the hearing, plaintiff was found guilty of filing a false report of injury and was terminated. Plaintiff appealed the defendant's investigation, findings and conclusions resulting in his termination to the Public Law Board. On December 22, 1998, the Public Law Board affirmed the decision. A timely appeal of the Public Law Board's decision was not filed. Plaintiff filed his lawsuit in this court on March 30, 1999.
ARGUMENTS AND AUTHORITIES
Defendant believes based on the pleadings filed in this case, plaintiff is attempting to litigate a wrongful discharge claim in this Federal Employer's Liability Action (hereinafter "FELA"). Defendant argued the Public Law Board's decision is res judicata on the issue of plaintiff's termination and resulting lost wages and benefits. Plaintiff responded by arguing he is suing for his injuries, not for wrongful discharge.
In Andrews v. Louisville & Nashville Railroad Company, 406 U.S. 320, 324, 92 S.Ct. 1562, 1565, 32 L.Ed.2d 95 (1972) the United States Supreme Court held that since the source of Andrews' right not to be discharged from his job was found in the collective bargaining agreement, petitioner must make his claim for wrongful discharge through the procedures set forth in the RLA. Thus, this FELA action is not the proper forum for plaintiff to litigate his wrongful discharge claim.
However, this court does not find, based on the pleadings, that plaintiff is attempting to re-litigate his termination. In his Complaint, the plaintiff does not mention or plead a cause of action for wrongful termination. Further, in his reply brief the plaintiff states "Plaintiff has brought suit under Sec. 51 of the FELA for personal injuries, not for wrongful discharge." (Plaintiff's Reply filed September 3, 1999 at 8). Plaintiff also states "Plaintiff is claiming that he will lose earnings in the future because Defendant negligently and permanently injured him." (Plaintiff's Reply filed September 3, 1999 at 7). These arguments and allegations indicate a lawsuit based only on personal injuries, not wrongful termination.
This court also finds the findings and conclusions made as a result of the disciplinary proceedings conducted pursuant to the collective bargaining agreement and the RLA are not entitled to preemption, estoppel, collateral estoppel or res judicata.
The United States Supreme Court has on numerous occasions declined to hold that individual employees, because of the availability of arbitration, are barred or pre-empted from bringing claims under federal statutes. Atchison, Topeka & Santa Fe Railway Company v. Buell, 480 U.S. 557, 564, 107 S.Ct. 1410, 94 L.Ed.2d 563 (1987). In Buell, the court stated:
The fact that an injury otherwise compensable under the FELA was caused by conduct that may have been subject to arbitration under the RLA does not deprive an employee of his opportunity *1218 to bring an FELA action for damages.... Id. at 564, 107 S.Ct. 1410.
The court reasoned:
Although the analysis of the question under each statute is quite distinct, the theory running through these cases is that notwithstanding the strong policies encouraging arbitration, "different considerations apply where the employee's claim is based on rights arising out of a statute designed to provide minimum substantive guarantees to individual workers". (Citing Barrentine v. Arkansas-Best Freight System, Inc., 450 U.S. 728, 737, 101 S.Ct. 1437, 67 L.Ed.2d 641 (1981)) Id. at 565, 107 S.Ct. 1410.
The FELA not only provides railroad workers with substantive protection against negligent conduct that is independent of the employer's obligation under its collective-bargaining agreement, but also affords injured workers a remedy suited to their needs ... It is inconceivable that Congress intended that a worker who suffered a disabling injury would be denied recovery under the FELA simply because he might also be able to process a narrow labor grievance under the RLA to a successful conclusion. As then District Judge J. Skelly Wright concluded, "the Railway Labor Act ... has no application to a claim for damages to the employee resulting from the negligence of an employer railroad." Barnes v. Public Belt R.R. Commission for City of New Orleans, 101 F.Supp. 200, 203 (E.D.La.1951). Id.
45 U.S.C. § 51 allows a railroad employee to sue their employer for injuries caused by the negligence of the carrier. In the case at bar, plaintiff's claim for damages as a result of personal injuries is based on rights arising out of this section. Thus, the RLA does not pre-empt plaintiff's cause of action under FELA.
The court must now determine whether the findings and conclusions reached and then affirmed by the Public Law Board as to plaintiff's claim of personal injury are entitled to estoppel, collateral estoppel or res judicata. The RLA provides a comprehensive framework for the resolution of both major and minor labor disputes in the railroad industry. The court finds this is a minor dispute, as defined by 45 U.S.C. § 153(I), which states that "minor disputes" are those growing out of grievances or out of the interpretation or application of agreements concerning rates of pay, rules, or working conditions. The Public Law Board's resolution of minor disputes is deemed compulsory arbitration for the purpose of the RLA. Brotherhood of Railroad Trainmen v. Chicago River & Indiana Railroad Company, 353 U.S. 30, 39, 77 S.Ct. 635, 640, 1 L.Ed.2d 622 (1957). It becomes the task of the courts to determine whether preclusive effect should be given a finding made in arbitration. Dean Witter Reynolds v. Byrd, 470 U.S. 213, 223, 105 S.Ct. 1238, 1243, 84 L.Ed.2d 158 (1985). The court must review the procedures to determine whether federal interests warranting protection were sufficiently safeguarded in the disciplinary hearing and subsequent review conducted by the Public Law Board. Dean Witter at 223, 105 S.Ct. 1238.
First, it should be noted, the hearing was conducted by Mr. Mike Black, a terminal manager for Burlington Northern, the defendant, instead of a judge. (Defendant's Exhibit "A", Transcript of hearing held on July 1, 1997, at 1). Second, the plaintiff was represented by L.K. Hudson, Local Chairman for the Brotherhood of Railway Carman which was affiliated with the Transportation Communication Union. (Defendant's Exhibit "A", Transcript of hearing held on July 1, 1997, at 2). The notice of investigation seems to indicate plaintiff was only entitled to be represented by a Union official, not an attorney, at this proceeding. (Defendant's Exhibit "A", Transcript of hearing held on July 1, 1997 at 3-4). Third, the decision to terminate plaintiff and that the plaintiff's report of injury was falsified was made by Mr. Black and not an impartial fact finder such as a judge or jury. (Defendant's *1219 Exhibit "C", Letter of Decision from Mr. Mike Black dated July 18, 1997). Fourth, it did not appear the rules of evidence were utilized at the hearing. (Defendant's Exhibit "A", Transcript of hearing held on July 1, 1997). Finally, the Public Law Board's affirmance of the board's findings was based only upon materials exchanged between the parties and from the transcript of the investigation. (Defendant's Exhibit "B", Order dated December 22, 1998).
While the plaintiff was allowed to call and cross-examine witnesses and submit evidence for consideration, this court finds the nature of the proceedings as well as the procedures used in the fact finding process were insufficient to protect the plaintiff's statutory and constitutional rights. The plaintiff did not have the benefit of an attorney to represent him. Further, the hearing was conducted and the decision was made by an employee of the defendant. Finally, the entity that reviewed this decision was limited to the materials exchanged between the parties and the evidence submitted at the hearing. Defendant had the burden of proving res judicata, estoppel or collateral estoppel barred this FELA action. Kulavic v. Chicago & Illinois Midland Railway Company, 1 F.3d 507 (7th Cir.1993). Defendant has failed to meet this burden. This court finds plaintiff's claim for personal injuries due to the negligence of the defendant is not barred by the previous disciplinary hearing conducted pursuant to the Railway Labor Act and the collective bargaining agreement. See also, McDonald v. West Branch, 466 U.S. 284, 104 S.Ct. 1799, 80 L.Ed.2d 302 (1984) (holding arbitration does not preclude a subsequent 42 U.S.C. § 1983 action); Barrentine v. Arkansas-Best Freight System, Inc., 450 U.S. 728, 101 S.Ct. 1437, 67 L.Ed.2d 641 (1981) (holding arbitration has no preclusive effect on a claim under the Fair Labor Standards Act) and Alexander v. Gardner-Denver Company, 415 U.S. 36, 94 S.Ct. 1011, 39 L.Ed.2d 147 (1974) (holding arbitration has no preclusive effect on a Title VII claim).
Defendant also argued plaintiff's claim for lost wages and benefits since the date of his termination should be denied. The plaintiff argues he is entitled to lost wages after his termination because they are related to his litigated injury not his termination. Plaintiff cannot recover damages in this FELA claim related to his wrongful termination. Andrews v. Louisville & Nashville Railroad Company, 406 U.S. 320, 324, 92 S.Ct. 1562, 1565, 32 L.Ed.2d 95 (1972). However, the court finds plaintiff is entitled to present evidence as to his future earning capacity as it relates to his injury. Plaintiff alleges that because of his injuries he has been unable to resume his employment duties as a carman and that at the trial of this case he will present evidence as to the permanency of his injuries. (Plaintiff's Reply filed September 3, 1999 at 7, 11). Accordingly, plaintiff will be allowed to present evidence of earnings he will lose in the future because of defendant's negligence in causing the litigated injury. Kulavic at 520-522.
CONCLUSION
The court grants plaintiff's motion for partial summary judgment as it relates to the defendant's defense of preemption, estoppel, collateral estoppel and res judicata. Further, the plaintiff will be allowed to present evidence as to his future earning capacity as it relates to his litigated injury.
IT IS SO ORDERED.
NOTES
[1] In this response, the defendant requested it be granted partial summary judgment. The court did not treat the response as a motion for partial summary judgment for two reasons. First, it was untimely. On April 29, 1999, this court entered a scheduling order setting the dispositive motion deadline as August 13, 1999. The defendant's response was filed August 24, 1999. Further, there was no request by the defendant which would have extended the time to file such a motion. Second, Local Rule for the Eastern District 7.1(B) requires that each motion or objection shall be filed as a separate pleading. In the case at bar, the response and motion for partial summary judgment were filed as one pleading. Thus, the motion for partial summary judgment also violated the local rules.
|
(ANSA) - PAPAL PLANE, MAY 13 - "I read in the newspaper I read in the morning that this problem exists, but I still don't know the details well and cannot express an opinion. I know the problem exists, and I want the investigations to go forward; I hope that everything there is about the smugglers comes out." The Pope said this on the flight from Fatima to Rome, when asked about the recent accusations against some NGOs of being accomplices of, and serving the interests of, smugglers and human traffickers.
|
Windows 10 gets its own AI platform
Windows ML
At its Developer Day yesterday, Microsoft announced that its new "Windows ML" architecture will let devs add pre-trained machine learning models to their apps. It's similar to rival on-device AI platforms being developed by companies such as Google and Qualcomm. Microsoft aims to encourage adoption of AI by making it easier for developers to add advanced models to their apps.
Models available on the platform will include AI targeted at common tasks such as computer vision, speech recognition and machine reading. They’ll be optimised for efficiency and designed to run across a broad spectrum of modern Windows hardware. One of the biggest challenges facing AI on Windows 10 is the vast array of hardware configurations that apps need to support, spanning everything from cutting-edge workstations to decade-old desktops.
Microsoft has partnered with silicon makers including AMD, Intel, NVIDIA and Qualcomm to develop Windows ML. It’s optimising the platform for efficiency, allowing developers to make the most of the hardware that’s available. As part of its work, Microsoft will add support for new device driver categories to support purpose-built AI coprocessors in future PCs.
AI models consumed from Windows ML will run across the entire Windows device family, including PCs, laptops, servers and IoT devices on the edge. Developers will also be able to target emerging forms of hardware, such as Windows Mixed Reality and Windows Holographic products. Models are provided in the ONNX format, a standard developed by an industry consortium that includes Microsoft, Facebook and Amazon Web Services.
Intelligent apps
According to Microsoft, the addition of a built-in AI platform will let developers build apps with more “intelligence.” The company pointed to new features in its own first-party apps, such as Cortana, Office 365 and Photos, that demonstrate how AI can aid computer users. Windows ML is an attempt to make AI more accessible, so additional developers and apps can begin to utilise it.
“At Microsoft, we’re making huge investments in AI and Machine Learning across the company,” said Microsoft. “With the next major update to Windows 10, we begin to deliver the advances that have been built into our apps and services as part of the Windows 10 platform. Every developer that builds apps on Windows 10 will be able to use AI to deliver more powerful and engaging features.”
Developers will get their first look at Windows ML with the launch of Visual Studio Preview 15.7, which will support automatic generation of AI model interfaces after adding an ONNX file to Windows Store app projects. Microsoft is also planning integrations with the MLGen tool for older Visual Studio versions, as well as its Visual Studio Tools for AI suite. |
> Learning is not attained by chance, it must be sought for with ardor and diligence.
>
> -Abigail Adams
Well the heading says it all.
The society, under the leadership of our president Dr. Ashish Jain, has set the academic calendar of the society rolling. The program "Sameeksha" -- a revision program for the exam going postgraduates -- was conducted at Udaipur from February 28 to March 2, 2014, where the president himself was a faculty member and I was present as the host. The program was appreciated by all the postgraduates who attended the 3 days event and participated very well by interacting with the faculty members of the program. The feedback of the participants has shown that they would like to attend such programs where they are able to interact with the faculty more easily and freely.
The 13th National Postgraduate Convention was held from March 7 to March 9, 2014, at Mangalore, where the postgraduates had an opportunity to present their papers and listen to the views of the faculty in their lectures.
The most appreciated program by the members of our society -- training of teachers -- is going to be held again this year at two different venues: in Bangalore from June 20 to June 22, 2014, and in Lucknow on September 19 and September 20, 2014. I request the life members to take advantage of this program and hone their teaching skills. It is a not-to-be-missed opportunity with the best in the business. The venue details shall be available to the members on the website in a few days' time.
The postgraduate orientation program has been scheduled to be held in four different regions of the country for the benefit of the newly joining postgraduate students, to orient them on the nitty-gritty and nuances of postgraduate education and training in periodontics. The programs will be held in Raichur, Pune, Bilaspur, and Chandigarh.
Another initiative is conducting professional enrichment programs (PEP) at various centers in the country to create awareness among the general dental practitioners regarding the current concepts and recent advances in periodontics. We had successfully conducted PEPs in various parts of the country last year, and they were well appreciated by all. This year again the program will be conducted in collaboration with the Indian Dental Association in different parts of the country.
Another first this year will be The Integrated Science Program -- a colloquium focusing on periodontal medicine, scheduled to be held in the National Capital Region, Kolkata, and Bengaluru. The program will be held in the month of September, and we have faculty from the USA coming to share their knowledge.
The essay competition -- an activity that has been conducted by the society for many years is being conducted this year as well. The topics have been finalized and all the members will receive the notice soon. I request one and all to participate enthusiastically in the event.
> The whole purpose of education is to turn mirrors into windows.
>
> -Sydney J. Harris
|
Q:
4 channel software PWM using Atmega16 for controlling 4 ESCs for brushless DC motors
I am trying to implement software PWM for controlling 4 ESCs using an ATmega16 microcontroller.
To achieve that, I generate the pulses for each ESC one after another in every period of the signal.
Here's the code -
#define TOTAL_ESC 4
int ESC_pulse[TOTAL_ESC];
int ESC_pins[TOTAL_ESC];
int currentESC;
int main(void)
{
initESCs();
initUSART();
sei();
while (1)
{
}
}
void initESCs()
{
ESC_pulse[0] = ESC_pulse[1] = ESC_pulse[2] = ESC_pulse[3] = 2000;
ESC_pins[0] = PIND4;
ESC_pins[1] = PIND5;
ESC_pins[2] = PIND6;
ESC_pins[3] = PIND7;
currentESC = TOTAL_ESC - 1;
ICR1 = 39999; //50Hz signal @ 16MHz clock
OCR1A = 1000;
DDRD |= 1<<ESC_pins[0] | 1<<ESC_pins[1] | 1<<ESC_pins[2] | 1<<ESC_pins[3];
PORTD &= ~(1<<ESC_pins[0] | 1<<ESC_pins[1] | 1<<ESC_pins[2] | 1<<ESC_pins[3]);
TIMSK |= 1<<OCIE1A;
TCCR1A |= 1<<WGM11;
TCCR1B |= 1<<WGM13 | 1<<WGM12 | 1<<CS11;
}
ISR(TIMER1_COMPA_vect)
{
PORTD &= ~(1<<ESC_pins[currentESC]); //End prev ESC pulse
if(currentESC == TOTAL_ESC - 1 && OCR1A != 1000)
{
OCR1A = 1000;
return;
}
currentESC = (currentESC + 1) % TOTAL_ESC;
PORTD |= 1<<ESC_pins[currentESC]; //Start next ESC pulse
OCR1A += ESC_pulse[currentESC];
}
So, I am trying to generate 50Hz signals with 1ms - 2ms pulses. My CPU Clock is 16MHz and the timer clock is prescaled to 2MHz.
I have an array ESC_pulse for storing the width of the pulse for each ESC. The values in it will range from 2000 to 4000 for the 1 ms to 2 ms pulses that the ESCs require. The logic I am applying is that every time the timer compare interrupt occurs, I clear the last ESC's output pin, set the next one's, and update OCR1A to the current OCR1A value plus the current ESC's pulse width as stored in ESC_pulse.
With the hardware-generated PWM from the timer, my ESC is able to run the brushless DC motor. However, the software-generated technique doesn't work. I don't have any equipment to see what the generated PWM signals actually look like. All that happens is that the ESC emits beeps signifying no signal.
I am not sure what I am doing wrong here.
A:
If I understand your code correctly, you have Timer1 in Fast PWM mode using OCR1A to measure PWM duty time and ICR1 for the period. When OCR1A matches the current timer value it triggers an interrupt. You then reload it with a longer time, the idea being that it will match the timer 1~2ms later for the next servo pulse.
The problem with this technique is that in PWM modes the output compare register is double-buffered and synchronized to the PWM period, so writing to it during the current PWM cycle will only take effect in the next cycle. This is described in the datasheet on page 98:-
The OCR1x Register is double buffered when using any of the twelve Pulse Width Modulation (PWM) modes... The double buffering synchronizes the update of the OCR1x Compare Register to either TOP or BOTTOM of the counting sequence. The synchronization prevents the occurrence of odd-length, non-symmetrical PWM pulses
So instead of getting an interrupt 1~2ms after you write to OCR1A, you get it 20ms + 1~2ms later.
I'm not sure if it's possible to do it your way using a non-PWM timer mode, but it might be easier to just use a basic timer to time each pulse separately, then add up all the times and subtract from 20ms to get the final pause time.
Many modern ESCs can handle frequencies of 250Hz or higher, so you might even get away with just pushing the pulses out as fast as possible one after the other.
A:
The cause of the problem: OCR1A double buffering
Your program fails because OCR1A is double buffered in all PWM timer modes and you are using such a mode (Fast PWM, TOP = ICR1). When you write a new value to OCR1A, you don't actually change the value used by the timer hardware. Instead, the value stored in OCR1A gets copied to the separate "shadow register" actually used by the timer only once the counter value reaches TOP and restarts from zero again. This is very useful for generating glitch-free hardware PWM, but prevents what you are trying to do (multiple OCR1A updates per timer cycle).
Since this OCR1A update happens only once per timer cycle (at 50 Hz) and your interrupt code is supposed to generate 4 1000 μs - 2000 μs delays + one long delay, you end up with a PWM period of 100 ms (5 timer cycles) and a high time of ~20 ms.
The fix is to configure the timer to a non-PWM mode. The mode best suited for your program is clear timer on compare match (CTC, TOP = ICR1) which works nearly identically but doesn't double buffer OCR1A and OCR1B. The WGM bits found in TCCR1A and TCCR1B should be set to 1100 to achieve this (see the datasheet for details).
Other minor issues:
If you access a variable in an interrupt service routine, you need to declare the variable volatile. Leaving the keyword out (among other things) allows the compiler to do optimizations which assume that only the main program flow can modify state.
PIND4, PIND5, PIND6 and PIND7 as used by your code are not found in the register definitions provided by AVR-GCC. The more generic macros PINx, PORTx and DDRx are declared in "portpins.h".
The sequencing logic could be written in clearer fashion. Your current code has 4 explicit states in the sequence (one for each output pin), of which the last is used for two compare matches: The first for setting the channel 4 output low, and the second for waiting until the sequence should restart. This is weirdly done by using a specific OCR1A value (1000) as a flag. This had me scratching my head for a while.
This might be opinion based, but I'd use uint8_t, int16_t and friends rather than e.g. unsigned char or short in embedded software. This way you and others know exactly how big your variables are, and they are less verbose as well.
Your code sample should include <avr/io.h> and <avr/interrupt.h>, and it should provide a function declaration for void initESCs(). initUSART() is superfluous.
|
Of the 5,2 million given by crowdfunders and other private investors last year, 58.000 was left in May 2018. That has probably been burned by now, too. The company owed 1,2M to banks and an additional 952K to suppliers. In the books are mainly immaterial rights and contracts – such as the (non-commercial) Space Act Agreement with NASA. No substance.
Now that's clearly a serious situation, which explains the abrupt stopping of the "Astronaut Training program" in August. The app's downloads had come to a standstill by October. It's not far-fetched to expect the app to disappear from the Google Play Store and Apple Store within 12 months, as happened to Cohu Experience's first app, CarbonToSoil.
In time for Slush 2018, Space Nation seems to come full circle where it started two years ago.
____________________________________________________
UPDATE 19.11.2018:
After diving to €0,80 [ask], Space Nation shares were suspended “until further notice” from Privanet’s stock bazaar. The trade register – neither the company nor Privanet – informs about the probable reason.
Space Nation has issued new shares, possibly to pay expenses, at least 15 times since December 2017. These were now registered on Nov 15. Further diluting previous investors’ shares by 205.000, it brings the overall count to 1.708.793. Thus the theoretical valuation would now be well below €1,4M, but as no deals were registered in the last 2 months, it’s surely closer to zero than a million. Last year, Space Nation had predicted it to be one billion by now.
Space Nation Oy (Ltd), formerly Cohu Experience, has now announced that it will file for bankruptcy. It managed to burn multi-million investments in less than 2 years. |
/* -*- mode: c++; c-basic-offset: 4; indent-tabs-mode: nil; -*-
* (c) 2017 Henner Zeller <[email protected]>
*
* This file is part of LDGraphy http://github.com/hzeller/ldgraphy
*
* LDGraphy is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* LDGraphy is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with LDGraphy. If not, see <http://www.gnu.org/licenses/>.
*/
#include <assert.h>
#include <math.h>
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <strings.h>
#include <time.h>
#include <unistd.h>
#include <memory>
#include <vector>
#include "containers.h"
#include "image-processing.h"
#include "laser-scribe-constants.h"
#include "ldgraphy-scanner.h"
#include "scanline-sender.h"
#include "sled-control.h"
constexpr float kThinningChartResolution = 0.005; // mm per pixel
constexpr float kInitialSledOffsetMM = 3; // sled skip initial markings.
// Interrupt handling. Provide a is_interrupted() function that reports
// if Ctrl-C has been pressed. Requires ArmInterruptHandler() called before use.
bool s_handler_installed = false;
volatile bool s_interrupt_received = false;
static void InterruptHandler(int) {
s_interrupt_received = true;
}
static void ArmInterruptHandler() {
if (!s_handler_installed) {
signal(SIGTERM, InterruptHandler);
signal(SIGINT, InterruptHandler);
s_handler_installed = true;
}
s_interrupt_received = false;
}
static bool is_interrupted() { return s_interrupt_received; }
static void DisarmInterruptHandler() {
signal(SIGTERM, SIG_DFL);
signal(SIGINT, SIG_DFL);
}
static int usage(const char *progname, const char *errmsg = NULL) {
if (errmsg) {
fprintf(stderr, "\n%s\n\n", errmsg);
}
fprintf(stderr, "Usage:\n%s [options] <png-image-file>\n", progname);
fprintf(stderr, "Options:\n"
"\t-d <val> : Override DPI of input image. Default -1\n"
"\t-i : Inverse image: black becomes laser on\n"
"\t-x<val> : Exposure factor. Default 1.\n"
"\t-o<val> : Offset in sled direction in mm\n"
"\t-R : Quarter image turn left; "
"can be given multiple times.\n"
"\t-h : This help\n"
"Mostly for testing or calibration:\n"
"\t-S : Skip sled loading; assume board already loaded.\n"
"\t-E : Skip eject at end.\n"
"\t-F : Run a focus round until Ctrl-C\n"
"\t-M : Testing: Inhibit sled move.\n"
"\t-n : Dryrun. Do not do any scanning; laser off.\n"
"\t-j<exp> : Mirror jitter test with given exposure repeat\n"
"\t-D<line-width:start,step> : Laser Dot Diameter test chart.\n"
"\t\tCreates a test-strip 10cm x 2cm with 10 samples with 'line-width' trace/clearance.\n"
"\t\tApply thinning to line beginning with 'start', increase for each of the 10 samples by 'step'. e.g. -D0.15:0.04,0.01\n");
return errmsg ? 1 : 0;
}
// Given an image filename, create a LDGraphyScanner that can be used to expose
// that image.
bool LoadImage(LDGraphyScanner *scanner,
const char *filename, float override_dpi,
bool invert, int quarter_turns) {
if (!filename) return false;
double input_dpi = -1;
std::unique_ptr<BitmapImage> img(LoadPNGImage(filename, invert, &input_dpi));
if (img == nullptr) return false;
if (override_dpi > 0 || input_dpi < 100 || input_dpi > 20000)
input_dpi = override_dpi;
if (input_dpi < 100 || input_dpi > 20000) {
fprintf(stderr, "Couldn't extract usable DPI from image. "
"Please provide -d <dpi>\n");
return false;
}
while (quarter_turns--)
img.reset(CreateRotatedImage(*img));
return scanner->SetImage(img.release(), 25.4 / input_dpi);
}
// Output a line with dots in regular distance for testing the set-up.
void RunFocusLine(LDGraphyScanner *scanner) {
// Essentially, we want a one-line image of known resolution with regular
// pixels set.
constexpr int bed_width = 100; // 100 mm bed
constexpr int res = 10; // 1/10 mm resolution
constexpr int mark_interval = 5; // every 5 mm
BitmapImage *img = new BitmapImage(1, bed_width * res);
for (int mm = 0; mm < bed_width; mm += mark_interval) {
img->Set(0, mm * res, true);
}
scanner->SetImage(img, 0.01);
ArmInterruptHandler();
while (!is_interrupted()) {
if (!scanner->ScanExpose(false,
[](int, int) { return !is_interrupted(); })) {
break;
}
}
fprintf(stderr, "Focus run done.\n");
}
void UIMessage(const char *msg) {
fprintf(stdout, "**********> %s\n", msg);
}
int main(int argc, char *argv[]) {
double commandline_dpi = -1;
bool dryrun = false;
bool invert = false;
bool do_focus = false;
bool do_move = true;
bool do_sled_loading_ui = true;
bool do_sled_eject = true;
std::unique_ptr<BitmapImage> dot_size_chart;
int quarter_turns = 0;
int mirror_adjust_exposure = 0;
float offset_x = 0;
float exposure_factor = 1.0f;
int opt;
while ((opt = getopt(argc, argv, "MFhnid:x:j:o:SERD:")) != -1) {
switch (opt) {
case 'h': return usage(argv[0]);
case 'd':
commandline_dpi = atof(optarg);
break;
case 'n':
dryrun = true;
break;
case 'i':
invert = true;
break;
case 'F':
do_focus = true;
break;
case 'M':
do_move = false;
break;
case 'x':
exposure_factor = atof(optarg);
break;
case 'j':
mirror_adjust_exposure = atoi(optarg);
break;
case 'o':
offset_x = atof(optarg); // TODO: also y. as x,y coordinate.
break;
case 'S':
do_sled_loading_ui = false;
break;
case 'E':
do_sled_eject = false;
break;
case 'R':
quarter_turns++;
break;
case 'D': {
float line_w, start, step;
if (sscanf(optarg, "%f:%f,%f", &line_w, &start, &step) == 3) {
dot_size_chart.reset(
CreateThinningTestChart(kThinningChartResolution,
line_w, 10, start, step));
} else {
return usage(argv[0], "Invalid Laser dot diameter chart params");
}
break;
}
}
}
const char *filename = nullptr;
if (argc > optind+1)
return usage(argv[0], "Exactly one image file expected");
if (argc == optind + 1) {
filename = argv[optind];
}
if (exposure_factor < 1.0f) {
return usage(argv[0], "Exposure factor needs to be at least 1.");
}
if (filename && dot_size_chart) {
return usage(argv[0], "You can either expose an image or create a "
"dot size chart, but not both.");
}
if (!filename && !do_focus && !mirror_adjust_exposure && !dot_size_chart)
return usage(argv[0]); // Nothing to do.
fprintf(stdout, "LDGraphy Copyright (C) 2017 Henner Zeller | http://ldgraphy.org/\n"
"This program comes with ABSOLUTELY NO WARRANTY.\n"
"This is free software and hardware, and you are welcome to "
"redistribute\nand modify it if the conditions of the GPL version 3 "
"are met.\n"
"See https://www.gnu.org/licenses/gpl.txt for details.\n\n");
bool do_image = false;
LDGraphyScanner *ldgraphy = new LDGraphyScanner(exposure_factor);
if (dot_size_chart) {
do_image = true;
ldgraphy->SetLaserDotSize(0, 0); // Chart already thinned image.
ldgraphy->SetImage(dot_size_chart.release(), kThinningChartResolution);
} else {
do_image = LoadImage(ldgraphy, filename,
commandline_dpi, invert, quarter_turns % 4);
if (filename && !do_image) return 1; // Got file, but failed loading.
}
if (do_image) {
const int eta = ldgraphy->estimated_time_seconds();
fprintf(stderr, "Estimated exposure time: %d:%02d min "
"(%.1fmm/min, "
// We don't actually know the optical power output of the
// laser diode, so let's just give it as comparative figure.
//"%.0fmJ/cm²)\n",
"normalized %.0f energy units/area)\n",
eta / 60, eta % 60, ldgraphy->exposure_speed_mm_per_sec() * 60,
ldgraphy->exposure_joule_per_cm2() * 1000);
}
SledControl sled(4000, do_move && !dryrun);
// Super-crude UI
if (do_sled_loading_ui) {
UIMessage("Hold on .. sled to take your board is on the way...");
sled.Move(180); // Move all the way out for person to place device.
UIMessage("Here we are. Please place board in (0,0) corner. Press <RETURN>.");
while (fgetc(stdin) != '\n')
;
UIMessage("Thanks. Getting ready to scan.");
}
sled.Move(-180); // Back to base.
float forward_move = kInitialSledOffsetMM; // Forward until we reach begin.
if (mirror_adjust_exposure) forward_move += 5;
forward_move += offset_x;
sled.Move(forward_move);
ArmInterruptHandler(); // While PRU running, we want controlled exit.
ScanLineSender *line_sender = dryrun
? new DummyScanLineSender()
: PRUScanLineSender::Create();
if (!line_sender) {
fprintf(stderr, "Cannot initialize hardware.\n");
return 1;
}
ldgraphy->SetScanLineSender(line_sender);
if (mirror_adjust_exposure) {
ldgraphy->ExposeJitterTest(6, mirror_adjust_exposure);
}
if (do_focus) {
fprintf(stderr, "== FOCUS run. Exit with Ctrl-C. ==\n");
RunFocusLine(ldgraphy);
}
if (do_image) {
fprintf(stderr, "== Exposure. Emergency stop with Ctrl-C. ==\n");
int prev_percent = -1, prev_remain_time = -1;
const float total_sec = ldgraphy->estimated_time_seconds();
ldgraphy->ScanExpose(
do_move,
[&prev_percent, &prev_remain_time, total_sec](int done, int total) {
// Simple commandline progress indicator.
const int percent = roundf(100.0 * done / total);
const int remain_time = roundf(total_sec -
(total_sec * done / total));
// Only update if any number would change.
if (percent != prev_percent || remain_time != prev_remain_time) {
fprintf(stderr, "\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b"
"%3d%%; %d:%02d left ",
percent, remain_time / 60, remain_time % 60);
fflush(stderr);
prev_percent = percent;
prev_remain_time = remain_time;
}
return !is_interrupted();
});
if (is_interrupted())
fprintf(stderr, "Interrupted. Exposure might be incomplete.\n");
}
delete ldgraphy; // First make PRU stop using our pins.
DisarmInterruptHandler(); // Everything that comes now: fine to interrupt
if (do_sled_eject) {
UIMessage("Done Scanning - sending the sled with the board towards you.");
sled.Move(180); // Move out for user to grab.
UIMessage("Here we are. Please take the board and press <RETURN>");
// TODO: here, when the user takes too long, just pull in board again
// to have it more protected against light.
while (fgetc(stdin) != '\n')
;
UIMessage("Thanks. Going back.");
sled.Move(-85);
}
return 0;
}
|
Fantasy Notes: Jays Blown Away
Okay, I get it. Toronto sweeps the defending Champion BoSox and then gets swept by an Oakland team expected to be among the AL’s weakest clubs this season. Now Toronto travels to Texas, who is off to a good start, and Baltimore, which is somehow the AL East leader. The Jays could limp home in a bit of a hole early.
It’s time to take another look at Travis Buck in AL-only leagues. He started the season in an 0-for-22 funk, but apparently all he needed to get his shit together was a dose of Toronto pitching. Buck went 7-for-16 in the three-game series, smacking six doubles, driving in four and scoring three. This kid won’t fly under the radar for long. In preseason, I predicted that Buck would sneak up on people.
Chris Denorfia is batting .400 in limited action for the A’s, but he can’t carve into Ryan Sweeney’s PT with Sweeney also hitting a robust .364. Wasn’t the A’s outfield supposed to suck? Not so far.
Are you getting as sick as I am of waiting for Brandon League to develop into the top set-up man we all expected him to be? Five walks in 2 2/3 innings so far? I’d say he’ll wind up in Syracuse before he winds up in an important role in the Toronto pen. Shawn Camp, meanwhile, is looking like he’s ready to get the call if the Jays need relief help. He’s tossed six shutout innings through four appearances, with one save while giving up two hits and no walks and fanning nine.
Back with the A’s, Bobby Crosby is finally healthy and hitting better than he ever has so far. But we’re still waiting for that power to return, and until it does, his value will be limited. Watch him closely. That three-hit (including two doubles), three-RBI game on Wednesday could be a sign of things to come.
Is Oakland going to stick with Jack Hannahan at third while it waits for Eric Chavez to return? After a nice season opener, Hannahan is mired in a 4-for-29 skid. Donnie Murphy, back from last weekend’s finger injury, is starting Friday. It wouldn’t take much for him to overcome the underwhelming Hannahan.
After hitting in eight straight games to start the season, Kurt Suzuki is hitless in his last two, but he’s gained some decent waiver wire traction early with his performance. Don’t look for much power, but if your league counts OBP, Suzuki is definitely worthy of attention.
Emil Brown, batting .250, is about as useful as we expected. That is to say, not very much. Time to start the Carlos Gonzalez watch? I give this situation another two weeks. Assuming Brown continues his middling ways, if Gonzalez isn’t up by May 1, I’ll be very surprised.
It wasn’t pretty, but Keith Foulke proved he can still save a game when he closed it down for the finale against the Jays. He was getting the chance with Huston Street having pitched in the previous two games, but this bears watching. Once Oakland fades as expected, Street could be dealt, opening up the A’s closing duties. Foulke has looked hard to hit in the early going and he leads the AL in holds. In AL-only leagues, I’d say he’s worth picking up right away.
We’re loving what we’re seeing from Vernon Wells in the early going. Touted as a great buy-low candidate all offseason by RotoRob baseball writer Tim McLeod, Wells has been incredibly productive, as well as more patient this year. He’s driven in runs in five straight games, producing eight RBI in all over this stretch. Could we be witnessing the early stages of a career year from Toronto’s mega-rich flyhawk?
Let’s get back to that Oakland bullpen situation for a moment. With Rich Harden going down for the 57th time in his career, Joey Devine got the call from Triple-A, and he wound up earning the win in his season debut, tossing a pair of shutout frames. More importantly, he didn’t walk anyone. That’s the key for this former Brave phenom. If Devine can maintain good control, he’s a closer in waiting. Watch his role evolve over the next couple of weeks, and start to put him on your watch list if you need bullpen help, especially in AL-only leagues. |
#pragma once
#include <AP_HAL/AP_HAL.h>
#if CONFIG_HAL_BOARD == HAL_BOARD_SITL
#include "AP_HAL_SITL.h"
#include "AP_HAL_SITL_Namespace.h"
#include "SITL_State.h"
class HAL_SITL : public AP_HAL::HAL {
public:
HAL_SITL();
void run(int argc, char * const argv[], Callbacks* callbacks) const override;
private:
HALSITL::SITL_State *_sitl_state;
};
#endif // CONFIG_HAL_BOARD == HAL_BOARD_SITL
|
Q:
get data from JSONObject according to a key
I have this JSONObject.
{
"headerContent": [
{
"name": "User Id",
"key": "userId",
"type": "text",
"default": "Enter User Id"
},
{
"name": "User Name",
"key": "userName",
"type": "text",
"default": "Enter User Name"
},
{
"name": "Password",
"key": "password",
"type": "password",
"default": "Enter Password"
},
{
"name": "Mobile Number",
"key": "mobileNumber",
"type": "text",
"default": "Enter Mobile Number"
},
{
"name": "User Category",
"key": "userCategory",
"type": "select",
"default": "Select Category",
"options" : ["Admin", "Client", "Regulator"]
},
{
"name": "E-Mail",
"key": "email",
"type": "text",
"default": "Enter Email"
},
{
"name": "User Access",
"key": "userAccess",
"type": "select",
"default": "Select User Access",
"options" : ["All", "Site"]
}
],
"bodyContent": [
{
"userId": "user_1",
"userName": "DemoUser",
"mobileNumber": "99999999",
"userCategory" : "Admin",
"email" : "[email protected]",
"userAccess" : "All"
}
]
}
The headerContent describes the attributes of bodyContent.
By default, all the data in bodyContent (like user_1) is displayed in details. In the HTML page I have a select box containing 3 values, Admin, Client and Regulator, which are 3 different user categories. When I select one of them, I want to display only the data with that userCategory (e.g. "Admin").
My HTML page contains a select box to select the category.
<script src="JSCode.js"></script>
<table align='center' >
<tr>
<td> <select id='listSelect' onChange="updateList()">
<option value='' selected >All</option>
<option value='admin'>Admin</option>
<option value='client'>Client</option>
<option value='regulator'>Regulator</option>
</select></td>
</tr>
<tr>
<td> <p id='details'> </P> </td>
</tr>
</table>
When the selected option changes, the content in details should change accordingly.
A:
JSFiddle here
HTML
<table align='center' >
<tr>
<td> <select id='listSelect' onChange="updateList()">
<option value='' selected >All</option>
<option value='Admin'>Admin</option>
<option value='Client'>Client</option>
<option value='Regulator'>Regulator</option>
</select></td>
</tr>
</table>
<div id="result"></div>
JS
var jsonObj = {
"headerContent": [
{
"...": "..."
}
],
"bodyContent": [
{
"userId": "user_1",
"userName": "DemoUser",
"mobileNumber": "99999999",
"userCategory" : "Admin",
"email" : "[email protected]",
"userAccess" : "All"
},
{
"userId": "user_2",
"userName": "DemoUser",
"mobileNumber": "99999999",
"userCategory" : "Client",
"email" : "[email protected]",
"userAccess" : "All"
}
]
};
function searchJSON (json, content, where, is) {
content = json[content];
var result = [];
for (var key in content) {
if (content[key][where] == is || is == '') {
result.push(content[key]);
}
}
return result;
}
function printObj (obj, container) {
var html = '<table>';
for (var i in obj) {
for (var j in obj[i]) {
html += '<tr><td>' + j + '</td><td>' + obj[i][j] + '</td></tr>';
}
html += '<tr><td class="black"></td><td class="black"></td></tr>';
}
document.getElementById(container).innerHTML = html;
}
function updateList () {
var e = document.getElementById("listSelect");
var value = e.options[e.selectedIndex].value;
printObj(searchJSON(jsonObj, 'bodyContent', 'userCategory', value), 'result');
}
updateList();
On change it executes updateList(). This function gets the value of the element. Then it executes searchJSON(). This function needs the data (jsonObj), the content in your data (in your case bodyContent), the key you are looking for (in your case userCategory) and the value you are looking for. The function loops through the data object and searches for the key. If the value is the same as your select, it adds the object to an array. When the loop is complete it returns the result.
Last function is a simple print function to place the data inside your html. To make sure it gets printed first time, just run the updateList() once.
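As a side note, the same lookup can be sketched with Array.prototype.filter. The filterByKey helper below is a hypothetical alternative to searchJSON, not part of the answer's code; an empty value again means "All":

```javascript
// Keep a row when the selected value is '' ("All") or the key matches.
function filterByKey(rows, key, value) {
  return rows.filter(function (row) {
    return value === '' || row[key] === value;
  });
}

// Example with two bodyContent-style rows:
var rows = [
  { userId: 'user_1', userCategory: 'Admin' },
  { userId: 'user_2', userCategory: 'Client' }
];
var admins = filterByKey(rows, 'userCategory', 'Admin');
// admins contains only user_1; filterByKey(rows, 'userCategory', '')
// returns both rows.
```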
LOADING .JSON (see the linked SO answer)
var xmlhttp;
if (window.XMLHttpRequest) {
// code for IE7+, Firefox, Chrome, Opera, Safari
xmlhttp = new XMLHttpRequest();
} else {
// code for IE6, IE5
xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
}
xmlhttp.onreadystatechange = function() {
if (xmlhttp.readyState == XMLHttpRequest.DONE ) {
if(xmlhttp.status == 200){
jsonObj = JSON.parse(xmlhttp.responseText);
}
else if(xmlhttp.status == 400) {
alert('There was an error 400')
}
else {
alert('something else other than 200 was returned')
}
}
}
xmlhttp.open("GET", "jsonInfo.json", true);
xmlhttp.send();
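One detail worth stressing about this loading code: responseText arrives as a plain string, so it has to go through JSON.parse before a function like searchJSON can index into it. A minimal illustration:

```javascript
// responseText as delivered by XMLHttpRequest is just a string.
var responseText =
  '{"bodyContent":[{"userId":"user_1","userCategory":"Admin"}]}';

// Parse it into an object before walking it.
var data = JSON.parse(responseText);
var category = data.bodyContent[0].userCategory; // 'Admin'
```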
|
~ Paleo Friendly and Loving It!
Category Archives: reviews
Do you know the three words that strike fear into my heart? That’s right: “Mom, I’m hungry”. You would think it would be something like “Mom, I’m hurt” or “Where’s my homework?” But no, I have hollow-legged boys who are always hungry. Does it matter that they just ate, exactly 12 minutes ago? Does it matter that there is literally no food left in the fridge for their school lunches tomorrow? Is there a reason I ask such pointless questions?? Sigh.
So, because I am constantly feeding the monsters, I need some cheats. I need a few packaged, easy to prepare, low brain power items for those mornings I simply cannot cope. You know, Mondays. And possibly Tuesdays, depending on how bad Monday was.
My new friend in GF world has become Gluten Free Bisquick. I bought it on sale, thank-you Sobey’s, and thought it might save me from a meltdown someday. Yes my friends, MY meltdown. The monsters will eat whatever you put in front of them, the problem is getting it to them fast enough!! Today was that day, I needed carbs, sweet and fast.
GF Bisquick is made with rice flour, sugar, potato starch, xanthan gum but MAY contain soy. So, if you are soy free and sensitive, maybe skip this one. Also, the package suggests keeping it in the fridge once opened. I made the waffle recipe right on the back of the box and it was glorious. Perfection. In fact, I think I might be in love.
And waited for them to be done. The first four, I cooked all the way through, so I could enjoy them immediately. The next four I under baked just a tad so they finish cooking in the toaster later. These babies freeze beautifully!
A little coconut oil on top and some maple syrup and I was in GF Waffle heaven. Love, love these waffles. My only complaint? The package is pretty small, so probably 2 batches and it’s done. But, it’s a small price to pay for convenience and bliss. Happy Monday!
I have dived head first into the Wahls Protocol. I finished the audio book and am armed with all of Dr. Wahls’ enthusiasm and knowledge. I made extensive notes and will probably have to listen to it again, just to pick up even more of the hidden gems. I’ve been eating my 9 cups of vegetables and fruits for most of a full week now, gluten free and almost dairy free. (Turns out there is casein in my coffeemate, sigh)
Here are my initial impressions:
1. Watch the TEDx Talk YouTube video here, even if you haven’t read the book. It’s 18 minutes and really fired me up to commit to the protocol.
2. It is a “Protocol” not a “Diet”. Remind yourself this at every opportunity. We cheat on diets. We follow medical protocols. This is as important as taking your medication.
3. Do NOT stop taking your medication. Oops, learned that one the hard way. In my defense, the drugs don’t seem to be “doing anything” so I quit taking the Plaquenil. Two days later, there is Dr. Wahls on my audiobook reminding us to continue all medication until your doctor says otherwise. Ask me how awful my skin was. How it burned and turned blotchy, red and hot. Yes, it was brutal. I am back on my drugs, it seems they actually ARE “doing something”.
4. Because it is a protocol, there is more to it than just food. Stress management is extremely important too. Detoxification is also vital.
Initial Results:
Spoiled probably, because I quit taking my medication. Having said that, I did go through a serious sugar craving day. I also had a “false start”. Meaning, I went GFCF during the week but went off the protocol for the weekend. I did not suffer too much from this and it was kind of a last au revoir to gluten and dairy. (So glad we did the Pizza Hut pizza, not going to lie, that is a weakness for me) The sugar day was after I finally gave up the gluten and dairy. I have not experienced serious cravings or die-off which I did expect but maybe it is because I am so committed that I can ignore them.
My skin is not better. My energy is not better. My mouth sores have completely cleared up. (Which was the last straw for me to finally give in to the protocol. I couldn’t brush my teeth with mint toothpaste) I am not sleeping better.
I have lost 3 pounds (!) and I have discovered that I love beets and asparagus. I am not so excited about blackberries, but they are ok.
This week:
Focus on following the 9 cups of vegetables and fruits. No tomatoes or nightshades. No peanuts or other nuts. (I have a pre-existing suspicion of intolerance to these) Increase water consumption to at least 6 cups per day. Meat, 4-6 oz daily. Personal massage with coconut oil daily, as described in the book. (It is so relaxing) Sleep at least 7 hours daily, more if possible. I’m also taking detailed notes, which I won’t post, unless I start to see some patterns, positive or negative appear.
Upcoming steps:
Order Dulse flakes, more organic coconut oil, Avalon Organics shampoo (no sodium lauryl sulfates) and Nordic Naturals Omega 3 from iherb.com. I have a terribly itchy scalp and my pharmacist suggested an SLS-free shampoo. Why not try it? The dulse flakes and omegas are suggested as part of the protocol.
(Just a little plug, if you’ve never ordered from iherb.com, use coupon code MEN348 for a small discount. It is my go-to source for natural, organic and hard to find supplements, foods and personal care items. We live so far from Natural Health stores that it is a convenient, low cost way to access these items. Orders under 4 pounds ship Canada Post (I’ve never yet paid customs/duties), for $4.00)
I personally have sadly, reverted to the Standard American Diet (SAD), (and while yes, this IS a Canadian blog, I think SCD is already taken as an acronym). The SAD in my case involves a decent amount of fresh food, meats, a slowly increasing amount of processed junk and plenty of coffee. I’d tell you I eat pretty healthfully, but I did have a Timmies plus 2 pop tarts on Friday, so who am I kidding. To be fair, I felt awful all weekend. Itchy, tired, sore. It is clearly time to take back some control.
My first book was Meals That Heal Inflammation by Julie Daniluk, RHN. I bought it at Chapters. You can too! The premise of the book is that you spend 8 weeks on an elimination diet and then add back, one at a time, potential allergens. GF/CF/SF/Corn free/Additive Free/Beef & Pork free among other things. Full of interesting recipes, none of which I have tried.
My second book was “The Wahls Protocol” by Dr. Terry Wahls. Dr. Wahls has Multiple Sclerosis (an auto-immune disease) and experimented on herself to find a way of reversing much of the debilitating effects of MS. She advocates a Gluten Free, Dairy Free, Egg Free (because she is allergic to eggs, and the protocol is actually in clinical trials, it has to be exactly what she did) diet in step one. In step 2, she moves to a Paleo diet and in step three, what she calls a Paleo Plus diet. (I’m still reading, so forgive the lack of detail)
The Wahls Protocol initial step is to clear out all the gluten and casein from your diet and incorporate 9 cups of vegetables and fruits into your diet daily. NINE CUPS. People, that is a lot of rabbit food. It needs to be 3 cups leafy greens, 3 cups brightly coloured, 3 cups sulfur-rich. GF/CF fare is ok in this step. She even talks about the opioid and leaky gut syndrome. Seems like all that “crazy stuff” I did with my son 5 years ago isn’t so crazy after all?
My third book (and documentary) was Forks Over Knives. Interesting. They advocate a plant based diet. No meat. No dairy. No fish. If it had a face or a mother, don’t eat it. But, gluten is allowed. The authors have good research that plant based diets prevent heart problems, diabetes and cancer.
So, what to do. What to do? I haven’t been this overwhelmed since I first started GFCF so long ago.
I needed a plan. Could I go GF/CF again? All the books are in agreement about casein. It is apparently evil. To be avoided. To be banished from my kitchen. (But oh how do I love thee mozzarella cheese) Could I really eat NINE cups of vegetables and fruit a day?
Do I really want to ever feel as bad as I do now ever again? Do I want to get worse?
The plan: follow the 8 week elimination diet as advocated by Meals That Heal Inflammation, supported by the Wahls Protocol. Meaning, avoid all the allergy suspect foods while still going GF/CF and eating the 9 !! cups of vegetables & fruits.
Today’s Menu:
Breakfast: Cornstarch pancakes, 1 cup of cherries, mangos, strawberries and pineapple (it was either corn or gluten, take your pick)
Lunch: Carrot soup, salad with Kale, Cabbage, broccoli and other greens, plus some cranberries and a GF poppyseed dressing.
Supper: Apricot Chicken with rice and vegetables, more salad
Snack: fruit and organic strawberry tea
So far, at lunch, I feel satisfied. I’m still concerned my nose is going to start twitching before the week is out…
I just finished making and eating the most amazing Gluten Free Cinnamon buns. I’d love to take credit for the recipe, but that would be stealing and very wrong. So, instead, I’d like to introduce you to another Canadian gluten free blogger who should be crowned for sainthood, just for discovering and sharing this most amazing cinnamon bun.
People, these are incredible fresh. They are awesome reheated from frozen. They are not hard to make and they taste as good or better than their wheaty counterpart. How do I know this? Well, I cheated. I ate a “real” cinnamon bun after making several dozen for my whiny family. And, my thumb got rashy and my tummy hurt. So, there you go, learn from my stupidity, these GF cinnamon buns are the real thing.
Please, allow me to introduce you to The Baking Beauties and her Cinnamon Bun recipe. You could thank me, but first, swallow the last bite and wash your hands. Keyboards and cinnamon sugar just don’t mix…
These are so amazing – no artificial flavours or colours, gluten free, corn free, organic, Kosher Parve, made with fruit extracts and they taste awesome.
I ordered these for my boys because I wanted something gluten free but also without all the crap that normal lollipops (candy in general) seem to have. Since then, I’ve re-ordered for my office. I feel really good about sharing these with other people’s children (with permission, of course) because they are a high quality treat. We have lots of kids through here & I’ve never had a complaint about the flavour yet. With the “old” kind, we’d find the kids would take about 2 licks and hand it back.
I get mine from www.iherb.com in a bag of 50 for $6.00. They have packages of assorted flavours, which I recommend until you know which one is your favorite. Me, I’m partial to Mango Tango and Pomegranate Pucker (don’t you just love the names) but truly, any flavour will do.
For the more “grown-up” kid in your office, Yummy Earth also has individually wrapped candy drops. So, if you’re a little too old (professional) to have a sucker stick in your mouth, these drops are just as good. They even look nice in a candy dish on your desk.
As always, first-time customers can save $5.00 on their first order by using coupon code MEN348 at checkout. (Which would make the first bag $1.00 – just in case I haven’t convinced you to try it yet!)
As with all Kinnikinnick products, this mix is a high quality, great tasting product. It is truly my favorite GFCF mix.
The mix has directions to allow you to make as few buns as you want – it is 3 parts mix to 2 parts liquid (water, milk or milk alternative). 3 cups to 2 cups makes 6-8 buns in muffin tins. For us, that’s enough for a week without anything going moldy or stale.
The buns are light, tasty and have a good mouth feel. They take only 3 minutes to mix and 20 minutes to bake, so there’s no excuse for having no bread alternative for your GF kiddo.
Nathan just loves these plain, with peanut butter or as a sandwich. He ate 6 the first time I made them! Poor me, I was SURE I made 8, but when I counted them into the bread bag, there was only 6 left….Hmmmm…
Later on, I noticed him into the bread bag. I asked him what he thought of the buns and his response:
Don’t live in Manitoba? Well….all the top three stores will ship it to you. (Or you could move here. It’s true what they say about us. Our winters are brutal but the opportunities, cost of living and friendly people completely make up for it!)
I personally love LaraBars. They are filling. They taste wonderful. They are 100% gluten free. And, just in case you are the only person who hasn’t heard of them, here’s a link to the LaraBars site.
Move over Tim. There’s a gluten free donut that is better than yours. I’m not sure why I am telling you all this because the more people who know this secret, the less there is for me…….
But, in the interest of sharing my “finds”, here goes.
We love our donuts and have even tried making a GF one at home. Total bust. Worse than awful. So I reluctantly bought a package of Kinnikinnick Vanilla Dipped Donuts. They were “packaged”. In donut world, that means gross. But I was desperate for a fix and Kinnikinnick hadn’t let me down yet.
The instructions say to heat the donut in the microwave for 10-15 seconds (because they are frozen & because that makes them yummy soft). So, I did. The glaze melted just a little, the donut was warm, think fresh from the oven temperature.
I hesitated, braced for a bad taste and took a bite.
I won’t lie. The heavens opened up & angels sang Hallelujah. It really was that good. Warm, soft, flavourful, no hint of gluten-free grit. In fact, I’d challenge you to know it was a gluten free item.
So, I bought the Maple Dipped ones, the cinnamon ones and the chocolate ones. Each bite better than the last. They come 6 to a package, last forever in the chest freezer and thaw well for the lunchbox.
Overall, the kids like them in this order: Chocolate glazed, Cinnamon, Vanilla glazed.
I prefer the Maple glazed. (I am Canadian after all!) And, I’d guess the kids would too, if only I’d let them have one.
Do try them immediately. But don’t tell anyone how good they are. If word gets out about this stuff…….well, don’t say I didn’t warn you!
Who doesn’t love spaghetti? My guys ask for it at least once a week. We eat a lot of corn & quinoa macaroni noodles and brown rice noodles, but they really wanted the “long skinny noodles”.
I heard great things about Tinkyada so as soon as I found it, I had to try it. Well, everything you’ve heard is true. It is good. No, it is amazing! No, it isn’t wheat pasta, there is a different texture & taste, but different doesn’t have to mean Bad.
The pasta does take longer to cook – about 15 minutes, so that is the only downside. It will tolerate a fair amount of “over cooking” but does tend to clump together. This is easily solved by tossing a little olive oil on the pasta; it breaks apart easily.
You can also cook the pasta by tossing it in boiling water for 2 minutes, turn off the heat & let stand for 20. This works really well too, especially if you are afraid of boiling your pot dry!
We generally pair our pasta with meat sauce, heavy on the ground beef. It’s a great way to get extra beef in the children. They often eat 2 plates each!
I recommend this pasta 100%. We paid $3.99 for 454 g, so not toooo much more than a wheat pasta, which is pretty nice too. We eat almost a full package for our family of 4, just in case you are wondering.
I keep hearing, whispers, about this thing called a “Lara Bar”. It is supposed to be one of the all-time greatest treats in the GFCF diet. So, when I finally found some, I had to give it a try. (should have bought more than 1 flavour, I think, but that’s another story)
That’s it. No artificials. States right on the label: No added sugar, Gluten free, dairy free, soy free, vegan, Kosher. So, clearly, unless you have a nut allergy/intolerance, this should be a “safe” product.
It is very heavy and a substantial snack. At 210 calories & 13 grams of fat (only 2g are saturated, no trans fats), it might almost be a meal-replacement bar. You definitely aren’t going to eat two of them in a sitting anyways!
The flavour – well, it is peanutty. I wouldn’t go so far as to say it replaces my PB Cookies though. It has small chips of peanuts through the bar which provides some texture. It is a chewy bar & the peanut flecks add a little more crunch. I really like it but I don’t think it is one my kids will like. The peanut bits would annoy them and it isn’t “sweet” so wouldn’t replace a cookie as nicely.
For myself, this Lara Bar is a good snack. I paid around $2.00 for it, so it is pricey compared to say a Chocolate Bar (which wouldn’t be GFCF, ok…) but as an occasional treat, is not too much. I think I would stock up on these for loooooong car trips as an alternative to the junk food at the convenience store but not for kids lunch boxes. My vote is: buy it for yourself, don’t waste it on the kids! |
Q:
Ruby on Rails: Jquery/ajax dynamic form fields
I'm in the process of implementing dynamic form fields. So far, I have some jQuery that does the trick; however, I have to manually refresh the page to see the change.
How can I make this dynamic? I'm under the impression I'll need to include ajax somewhere, but I'm having issues finding good resources.
Here is the code I'm using;
jQuery ->
$(
if ($('#subject_enrolled').val() == '0')
$('#subject_reason_not_enrolled').show()
else
$('#subject_reason_not_enrolled').hide();
);
Any help/resources would be greatly appreciated.
A:
I'm assuming #subject_enrolled is a checkbox. You just need to listen for the change events on that checkbox and then update the display of your #subject_reason_not_enrolled. This is covered quite a bit in other posts. Here's one: How to listen to when a checkbox is checked in Jquery
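As a sketch (assuming #subject_enrolled is the select/checkbox from the question, and that a value of '0' means "not enrolled"), the visibility rule can be separated from the jQuery wiring:

```javascript
// Pure decision logic: show the reason field only when not enrolled ('0').
function shouldShowReason(enrolledValue) {
  return enrolledValue === '0';
}

// jQuery wiring (browser only, hence the guard): re-evaluate on every
// change event, so no page refresh is needed.
if (typeof $ !== 'undefined') {
  $(function () {
    $('#subject_enrolled').on('change', function () {
      $('#subject_reason_not_enrolled')
        .toggle(shouldShowReason($(this).val()));
    }).trigger('change'); // apply the rule once on page load too
  });
}
```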
|
Over 2 million people have fled Syria since the beginning of the conflict in 2011, making this one of the largest refugee exoduses in recent history with no end yet in sight. The refugee population in the region could reach over 4 million by the end of 2014. The inter-agency Syria Regional Response Plan is appealing for US$ 4.2 billion to cover the needs of 4.1 million refugees fleeing Syria and 2.7 million people in host communities in the region from 1 January to 31 December 2014. |