An instance referring to another instance via a reference attribute.

Attributes

attribute
shared formal ValueDeclaration attribute
The attribute making the reference.

Attributes inherited from: Object

Methods

referred
shared formal Anything referred(Object instance)
The referred instance reachable from the given instance. Note: if this member refers to a late declaration and the attribute of the given instance has not been initialized, this method will return uninitializedLateValue.

Methods inherited from: Object
Methods inherited from: ReachableReference
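A minimal usage sketch (my addition, not from the API page). It assumes this page documents ceylon.language.serialization::Member, whose description matches the text above; the names memberRef and instance are hypothetical, supplied by surrounding serialization code:

import ceylon.language.meta.declaration { ValueDeclaration }
import ceylon.language.serialization { Member }

// Sketch only: report which attribute a reference goes through and what
// it currently refers to on the given instance.
void describe(Member memberRef, Object instance) {
    ValueDeclaration decl = memberRef.attribute;  // the attribute making the reference
    print(decl.name);
    print(memberRef.referred(instance));          // may be uninitializedLateValue for
}                                                 // an uninitialized `late` attribute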
Given these two vectors [1,2,3] and [4,5,6], how would you find their unit vectors?

1 Answer

You need to remember the formula that evaluates the unit vector of a vector:

`bar u = (bar v)/(|v|)`

Reasoning by analogy, you may find the unit vectors `bar (u_1)` and `bar (u_2)` such that:

`bar (u_1) = (bar v_1)/(|v_1|)`
`bar (u_1) = (<1,2,3>)/(sqrt(1^2+2^2+3^2))`
`bar (u_1) = (<1,2,3>)/(sqrt(1+4+9))`
`bar (u_1) = (<1,2,3>)/(sqrt14) => bar (u_1) = <1/sqrt14,2/sqrt14,3/sqrt14>`

`bar (u_2) = (bar v_2)/(|v_2|)`
`bar (u_2) = (<4,5,6>)/(sqrt(4^2+5^2+6^2))`
`bar (u_2) = (<4,5,6>)/(sqrt(16+25+36))`
`bar (u_2) = (<4,5,6>)/(sqrt77) => bar (u_2) = <4/sqrt77,5/sqrt77,6/sqrt77>`

Hence, evaluating the unit vectors of the given vectors `v_1` and `v_2` yields `bar (u_1) = <1/sqrt14,2/sqrt14,3/sqrt14>` and `bar (u_2) = <4/sqrt77,5/sqrt77,6/sqrt77>`.
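As a quick sanity check (added here, not part of the original answer), a unit vector must have magnitude 1:

`|bar (u_1)| = sqrt(1/14 + 4/14 + 9/14) = sqrt(14/14) = 1`
`|bar (u_2)| = sqrt(16/77 + 25/77 + 36/77) = sqrt(77/77) = 1`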
WebX Getting-Started Guide

[Note] This article is organized around the WebX web framework and tries to tie together the whole software stack, or ecosystem, used in development with it. It does not explain underlying principles; it only shows how to use the WebX-related technologies in various scenarios. The practice and principle guides touched on here are not expanded; they will be covered in detail in later posts.

A brief introduction to WebX

A detailed introduction is on the WebX official site. First, the official description:

Webx is a framework that can be used to do the following:

Create a full-featured web application
    Webx provides all the functionality needed to build a web application.
Create a new web framework
    Webx lets you customize, and even rewrite, most of the Webx framework logic, to implement brand-new functionality or to integrate with other application frameworks.
Create a non-web application
    Webx's functionality is not limited to web applications; it helps with applications of all kinds.
    The SpringExt sub-framework provided by Webx extends the Spring framework, simplifying Spring configuration and strengthening the extensibility of Spring components.

So WebX, built on Spring components, is a foundational platform for developing both web and non-web applications. In essence, Webx can serve as a Spring container: anything Spring allows, Webx can do. Webx's distinctive features are still on the web side, where it is not merely a Spring container but a complete, highly extensible MVC framework.

WebX = Spring + components + Velocity

From the official description, WebX positions itself as more than a web framework, emphasizing flexibility and extensibility; you will get a feel for this later. There are many popular MVC frameworks, with SSH and Spring MVC among the mainstream ones, and the official site also discusses WebX's strengths and weaknesses, so I won't pronounce which is better. I chose to learn WebX for two reasons:

• Webx builds a complete ecosystem and provides support of every kind
• It is mature, reliable and extensible

I won't repeat the details of WebX here. A site built with WebX usually combines it with the Velocity template engine, the iBatis ORM and other pieces, briefly introduced below:

Spring
The Spring framework was created in response to the complexity of enterprise software development. Spring uses plain JavaBeans to do what previously only EJBs could. Spring's usefulness is not limited to server-side development, though; in terms of simplicity, testability and loose coupling, the great majority of Java applications can benefit from it. -[Baidu Baike]-
• Goal: reduce the complexity of enterprise application development
• Function: replace EJBs with plain JavaBeans while providing more enterprise application features

Velocity
Velocity is a Java-based template engine. It lets anyone reference objects defined in Java code using a simple template language. Applied to web development, Velocity lets interface designers and Java developers work in parallel on a site that follows the MVC architecture: page designers focus only on presentation while Java developers handle the business logic. Velocity separates Java code from the web pages, which eases long-term site maintenance and offers an alternative to JSP and PHP. -[Baidu Baike]-

iBatis
MyBatis is an excellent persistence framework supporting ordinary SQL queries, stored procedures and advanced mappings. MyBatis eliminates nearly all JDBC code, manual parameter setting and result-set retrieval. It uses simple XML or annotations for configuration and mapping, binding interfaces and Java POJOs (Plain Old Java Objects) to database records.

WebX getting started

Creating a WebX application

For details see: http://openwebx.org/docs/firstapp.html

A WebX project is built with Maven, so creating a WebX application works like creating an ordinary Maven project. The IDE used in this article is IntelliJ IDEA.

1. Create the Webx project

Command line:

mvn archetype:generate -DgroupId=com.alibaba.webx -DartifactId=tutorial1 -Dversion=1.0-SNAPSHOT -Dpackage=com.alibaba.webx.tutorial1 -DarchetypeArtifactId=archetype-webx-quickstart -DarchetypeGroupId=com.alibaba.citrus.sample -DarchetypeVersion=1.8 -DinteractiveMode=false

In the IDE:
1. Create a Maven project and, once on the project screen, choose "add archetype"
2. Fill in the corresponding values:
   archetypeArtifactId=archetype-webx-quickstart
   archetypeGroupId=com.alibaba.citrus.sample
   archetypeVersion=1.8
3. Create the new application from the added archetype
4. The remaining steps are the same as for an ordinary application
2. Run the application

mvn jetty:run

Visit the sample site at localhost:8081. For deployment you can use Nginx plus Tomcat; that will be covered in a later post on application servers. If you use the IDEA IDE, you can run it straight from the IDE; for the specifics, see Maven: The Definitive Guide and the IDEA manual. Next, let's look at the sample program WebX generates, to glimpse WebX's design ideas before developing our own application.

The sample explained

After you create the application, a sample website is provided by default. First look at the directory structure: the main directory holds two subdirectories, java and webapp. The java directory holds the back-end logic; webapp is the root of the site. They correspond, respectively, to the module code and the templates. Let's see how the front end and back end interact, starting with a look at module and templates.

Module

Role: as an MVC framework, Webx3 gives the controller role to the Module component. A Module receives client input, executes business logic, produces the client's output, and handles data validation, page flow control and so on.

Modules come in three main kinds: Action, Screen and Control.

• Screen prepares the data Model for page display or output
• Action receives form submissions and controls data writes
• Control is a nestable Screen processor, used to compose Screens

Screen
• Responsibility: respond to read-only operations, e.g. showing query or view results, and build the necessary data Model for them.
• How it runs: in Webx a URL generally maps to a Screen class for logic processing, one Screen per URL. Typical scenarios: viewing Product information by productId, or updating an Order's status by orderId. Example: http://localhost/product/view_product.htm?productId=100035

public class ViewProduct {
    public void execute(@Param(name = "productId") String productId, Context context) {
        ProductDO productDO = productAO.find(productId);
        context.put("product", productDO);
    }
}

Action

public class ProductAction {
    /**
     * Create a product
     */
    public void doCreate(@FormGroup(name = "productForm") ProductVO productVO) {
        productAO.create(productVO);
    }

    /**
     * Update a product
     */
    public void doUpdate(@FormGroup(name = "productForm") ProductVO productVO) {
        productAO.update(productVO);
    }
}

Control
• Responsibility: the same as Screen, except that a Control is a reusable Screen. To reuse a control inside a vm template, write $control.setTemplate("/product/viewProduct.vm"); note that by default loading starts from the control directory.
• Typical scenario: embedding a basic information panel.
• Development: like a Screen, except that it lives under the control directory.

public class ViewProduct {
    public void execute(@Param(name = "productId") String productId, Context context) {
        ProductDO productDO = productAO.find(productId);
        context.put("product", productDO);
    }
}

That covers the Module component; plenty of details are probably still unclear, but for now just remember these pieces and where each applies. The finer points are covered in the practice guide.

Templates

Role: templates are the View of MVC. They render the interface, with the Velocity template engine combining the dynamic data and the static pages. This section does not teach Velocity itself, only the structure of templates.

Templates also come in three kinds: layout, screen and control files.

• Screen: the static file for a page; its dynamic data is supplied by the screen object under module and rendered by the Velocity template engine
• Control: a reusable screen fragment, used as $control.setTemplate("home:product/viewProduct.vm")
• Layout: the page layout

How Module and Templates interact

Knowing what Module and Templates do and where they apply, let's trace how a concrete page loads, for example the sample program's home page:

http://192.168.1.102:8081/?home

After the request arrives, the WebX framework runs the corresponding Pipeline. The concrete flow:

1. Determine the render target; here the page is index.vm.
2. Look for the /index.vm template under webapp's /templates/screen directory.
3. Look for the screen class (module code) in order:
   Index (if not found, try the next)
   Default (if not found, try the next; with nested paths, the parent level's class is also tried)
   TemplateScreen (the system default screen)
   The sample provides no matching class, so the system renders with the default TemplateScreen class.
4. Execute the screen class and render the screen template.
   1) If a layout exists, render the layout, execute the screen class, and render the screen template
   2) Find the layout template by target: layout lookup also follows the file path, first layout/index.vm, then layout/default.vm. With nested directories the parent directory's default.vm is tried; failing that, the default.vm provided under common is used.
   3) Render the layout template
   4) Render any controls referenced by the layout template (if present)

Try changing some of these files to get a feel for how the home directory loads. Once you know WebX's page rendering flow, you can add pages of your own; a minimal fallback screen class is sketched below.
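For instance, a catch-all screen class following the lookup order above might look like this (my sketch; the name and placement follow the convention just described, not code from the sample):

// Default.java in the screen module package: picks up any target in this
// directory that has no dedicated screen class.
public class Default {
    public void execute(Context context) {
        // no dynamic data to prepare; the matching .vm template renders as-is
    }
}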
A first attempt

When designing your own site, follow WebX's own design philosophy: page-driven, and convention over configuration. To add a page, first design its layout, then add the page content, with dynamic data passed in through a module. Here is a simple example; the code is uploaded at http://download.csdn.net/detail/fiboliu/9302219. Run it with mvn jetty:run and visit http://localhost:8081/blog/index

Page layout

Common layouts are top/middle/bottom, left/center/right and so on. In the layout used here, the Header and Sidebar are in fact WebX Controls, and the Screen area corresponds to a Screen file. Create the layout file under layout; see /layout/blog/default.vm in the sample code:

<!DOCTYPE html>
<html lang="zh-CN">
<head>
    <!-- Meta, title, CSS, favicons, etc. -->
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <meta name="description" content="WebX Sample">
    <meta name="keywords" content="HTML, CSS, JS, JavaScript, framework, bootstrap, front-end, frontend, web development">
    <meta name="author" content="刘杰 <[email protected]>">
    <title>Components &middot; Bootstrap v3 docs</title>
    <!-- Bootstrap core CSS -->
    <link rel="stylesheet" href="//cdn.bootcss.com/bootstrap/3.3.5/css/bootstrap.min.css">
    <!-- Optional Bootstrap theme (usually not needed) -->
    <link rel="stylesheet" href="//cdn.bootcss.com/bootstrap/3.3.5/css/bootstrap-theme.min.css">
    <script>
        var _hmt = _hmt || [];
    </script>
</head>
<body>
<header class="navbar navbar-static-top bs-docs-nav" id="top" role="banner">
    <div class="container">
        <!-- header -->
        $control.setTemplate("/header.vm")
    </div>
</header>
<div class="row">
    <div class="container">
        <div class="col-md-2">
            <!-- left side bar -->
            $control.setTemplate("/leftSideBar.vm")
        </div>
        <div class="col-md-10">
            $screen_placeholder
        </div>
    </div>
</div>
<!-- jQuery; must be included before bootstrap.min.js -->
<script src="//cdn.bootcss.com/jquery/1.11.3/jquery.min.js"></script>
<!-- Latest Bootstrap core JavaScript -->
<script src="//cdn.bootcss.com/bootstrap/3.3.5/js/bootstrap.min.js"></script>
</body>
</html>

Screen and Control design

With the layout page in place, design each page: the header.vm and sidebar.vm control pages, and the screen pages. For the concrete pages you can use a front-end framework such as Bootstrap; see http://v3.bootcss.com/

• header.vm

<nav class="navbar navbar-inverse">
    <div class="container-fluid">
        <!-- Brand and toggle get grouped for better mobile display -->
        <div class="navbar-header">
            <button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1" aria-expanded="false">
                <span class="sr-only">Toggle navigation</span>
                <span class="icon-bar"></span>
                <span class="icon-bar"></span>
                <span class="icon-bar"></span>
            </button>
            <a class="navbar-brand" href="#">Brand</a>
        </div>
        <!-- Collect the nav links, forms, and other content for toggling -->
        <div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
            <ul class="nav navbar-nav">
                <li class="active"><a href="#">Link <span class="sr-only">(current)</span></a></li>
                <li><a href="#">Link</a></li>
            </ul>
        </div><!-- /.navbar-collapse -->
    </div><!-- /.container-fluid -->
</nav>

• leftSideBar.vm

<div class="section">
    <div class="row">
        <a href="#">Category 1</a>
        <br>
        <a href="#">Category 2</a>
    </div>
</div>

• Designing the screen page

Here is a simple home page that shows only a line of greeting, mixing static and dynamic data. Create index.vm in the corresponding blog directory:

<div class="section">
    Hello, $name!!
    <br>
    This is my Blog!!
</div>

The dynamic data is rendered by the Velocity template engine; the $name variable here must be supplied by the corresponding Screen Module. Create /blog/Index.java under the Module Screen directory:

public class Index {
    public void execute(Context context) {
        context.put("name", "fiboliu");
    }
}
BRender Tutorial Guide: 5 Adding Colour

A `material' may be explicitly assigned to a model actor or to each face on a model. A material describes the appearance of a surface - its colour and texture, whether it's shiny or dull, smooth or rough, etc. For each face on a model, BRender looks for an associated material. If none has been specified (or the associated material is not found in the registry), the model actor's material is assumed. If a material has not been assigned to the model actor, it inherits its parent's material. If the parent actor, or a previous ancestor, has not been assigned a material, a flat-shaded grey material is used by default. The default material has been used with all the models you have displayed so far.

Let's design a material and apply it to the revolving grey cube of BRTUTOR1.C. The information describing a material is stored in a br_material data structure. Refer to your technical reference manual for details of br_material. Care should be taken when initialising data structures statically, as only public members of BRender data structures are documented in the technical reference manual.

A custom function, BrFmtScriptMaterialLoad, is provided for loading material descriptions from a script file. A material script file is a text file as in the following example:

sample material script file

# Comment
#
# Fields may be specified in any order, or omitted
# Where fields are omitted, sensible defaults will be supplied
# Extra white spaces are ignored
material = [
    identifier = "block";
    flags = [light, prelit, smooth, environment, environment_local, perspective, decal, always_visible, two-sided, force_z_0];
    colour = [0,0,255];
    ambient = 0.05;
    diffuse = 0.55;
    specular = 0.4;
    power = 20;
    map_transform = [[1,0], [0,1], [0,0]];
    index_base = 0;
    index_range = 0;
    colour_map = "brick";
    index_shade = "shade.tab";
];

The fields in the script file relate directly to br_material fields. Refer to your technical reference manual for further details. Note that all material flags would never be set at the same time, as they are in the above example. They are shown here to illustrate how material flags are specified in script files.

The script file used to determine the appearance of the revolving cube in BRTUTOR5.C is called cube.mat and is included on your Tutorial Programs disk.

cube.mat

# This material script file describes the appearance
# of the material "BLUE MATERIAL"
material = [
    identifier = "BLUE MATERIAL";
    colour = [0,0,255];
    ambient = 0.05;
    diffuse = 0.55;
    specular = 0.4;
    power = 20;
    flags = [light,smooth];
];

A script file may contain a number of material descriptions. The identifier field allows you to specify a name by which each loaded material is subsequently known. When you want to assign a particular material to a model, you simply instruct BRender to find it by name before completing the assignation; a sketch of this appears below.

The material colour is pure blue (both red and green components are 0). There are a number of material properties, besides colour, that determine how a surface will appear under given lighting conditions - whether it will appear rough or smooth, shiny or dull etc. The ambient, diffuse and specular fields are used to specify, respectively, the ka, kd and ks members of the br_material data structure. Each ranges between 0 and 1, and the three should sum to 1. An additional field, power, determines how sharp highlights will appear.
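In code, loading the script and assigning the loaded material by name might look like the following sketch (not taken from BRTUTOR5.C; the exact signatures are assumptions based on BRender's naming conventions, and cube_actor is a hypothetical model actor):

/* Load the material description, register it, then assign it by name. */
br_material *mat;

mat = BrFmtScriptMaterialLoad("cube.mat");              /* parse the script file     */
BrMaterialAdd(mat);                                     /* add it to the registry    */
cube_actor->material = BrMaterialFind("BLUE MATERIAL"); /* assign to the model actor */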
A more detailed discussion of material properties can be found in your technical reference manual (see br_material).

The most commonly specified flags are light and smooth. Light specifies that lighting effects should be taken into account when rendering. Smooth specifies Gouraud shading. The polygons that make up the surface of a model can be drawn with a single colour (flat shading) or with many colours (smooth or Gouraud shading). With flat shading, the colour of a single vertex is calculated and duplicated across the entire polygon. With smooth shading, the colour at each vertex is computed and colour values for the interior of the polygon are interpolated linearly between the vertex colours.

In our present example, if we hadn't specified smooth shading, each face on the cube would have been drawn using a single colour. With smooth shading a more realistic effect is achieved through interpolation. For a demonstration of flat shading, simply remove `smooth' from the flags field in the text file cube.mat and run the program again.

This is the major advantage of using script files to define material properties; any of these properties can be changed, and the result viewed, without having to re-compile the program. Simply edit the script file as necessary. This makes it easy to experiment with different colour values, specular/diffuse/ambient properties and shading techniques.
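For instance, the flat-shaded variant of cube.mat differs only in its flags line:

# cube.mat, flat-shaded variant: `smooth' removed from the flags
material = [
    identifier = "BLUE MATERIAL";
    colour = [0,0,255];
    ambient = 0.05;
    diffuse = 0.55;
    specular = 0.4;
    power = 20;
    flags = [light];
];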
Implementing a Custom Serializer with Jackson

2016-11-03

Jackson implements serialization effectively, turning a Java object into a string.

@JsonProperty

This annotation decorates a Java field. Whether or not the field has getter/setter methods, the field is serialized and deserialized under the specified value, which by default is the same as the field name. It has the following possible attributes:

1. value - the name to use
2. index - the position of the property in serialization, an integer
3. defaultValue - the default value, used when serializing to a string if the value is null
4. access - the access level: read-write, read-only or write-only

@Data
class AmountReq {
    @JsonProperty("your_name")
    private String yourName;
}

@JsonIgnore

Used to ignore a Java field:

@Data
class AmountReq {
    @JsonIgnore
    private String myName;
}

@JsonFormat

This annotation decorates a field or setter method, and it only applies to deserializing Date/Time types.

@Data
class AmountReq {
    @JsonFormat(shape = JsonFormat.Shape.STRING, pattern = "yyyy-MM-dd HH:mm:ss")
    private Date herName;
}

@JsonSetter and @JsonGetter

These can be used in place of JsonProperty. JsonSetter decorates a setter method and maps the serialized field name to the Java field name during deserialization. JsonGetter decorates a getter method and maps the Java field name to the serialized field name during serialization.

@JsonSerialize and @JsonDeserialize

@JsonSerialize decorates a property and specifies the class used when serializing it; @JsonDeserialize does exactly the opposite.

Take the conversion between RMB yuan and fen as an example. The Java type is BigDecimal, in yuan, while the serialized string uses fen as its unit; the two differ exactly by a factor of 100.

First, define a converter annotation to control the rate; other parameters may be needed later.

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
public @interface BigDecimal2IntConverter {
    /**
     * The rate used for the numeric conversion
     */
    int rate() default 100;
}

Next, define the serializer. It can pick up the value from the custom annotation, which makes it more flexible to use.

public class BigDecimal2IntSerializer extends StdSerializer<BigDecimal> implements ContextualSerializer {
    private int value = 100;

    public BigDecimal2IntSerializer() {
        super(BigDecimal.class);
    }

    public BigDecimal2IntSerializer(int key) {
        super(BigDecimal.class);
        this.value = key;
    }

    @Override
    public void serialize(BigDecimal value, JsonGenerator gen, SerializerProvider serializers) throws IOException {
        if (value == null) {
            gen.writeNull();
        } else {
            gen.writeNumber(value.multiply(new BigDecimal(this.value)).intValue());
        }
    }

    @Override
    public JsonSerializer<?> createContextual(SerializerProvider prov, BeanProperty property) throws JsonMappingException {
        int key = 100;
        BigDecimal2IntConverter ann = null;
        if (property != null) {
            ann = property.getAnnotation(BigDecimal2IntConverter.class);
        }
        if (ann != null) {
            key = ann.rate();
        }
        return new BigDecimal2IntSerializer(key);
    }
}

Then handle the reverse direction with a deserializer.

public class BigDecimal2IntDeserializer extends StdDeserializer<BigDecimal> implements ContextualDeserializer {
    private int value = 100;

    public BigDecimal2IntDeserializer() {
        super(BigDecimal.class);
    }

    public BigDecimal2IntDeserializer(int key) {
        super(BigDecimal.class);
        this.value = key;
    }

    @Override
    public BigDecimal deserialize(JsonParser p, DeserializationContext ctxt) throws IOException, JsonProcessingException {
        int v = p.getIntValue();
        return new BigDecimal(v).divide(new BigDecimal(value));
    }

    @Override
    public JsonDeserializer<?> createContextual(DeserializationContext ctxt, BeanProperty property) throws JsonMappingException {
        int key = 100;
        BigDecimal2IntConverter ann = null;
        if (property != null) {
            ann = property.getAnnotation(BigDecimal2IntConverter.class);
        }
        if (ann != null) {
            key = ann.rate();
        }
        return new BigDecimal2IntDeserializer(key);
    }
}

Finally, usage, which is quite simple.

@Data
public class AmountReq {
    @BigDecimal2IntConverter
    @JsonSerialize(using = BigDecimal2IntSerializer.class)
    @JsonDeserialize(using = BigDecimal2IntDeserializer.class)
    private BigDecimal amount;
}

public class JacksonTest {
    @Test
    public void test() throws Exception {
        ObjectMapper objectMapper = new ObjectMapper();
        AmountReq req = new AmountReq();
        req.setAmount(new BigDecimal(20.3));
        String s = objectMapper.writeValueAsString(req);
        System.out.println(s);
        AmountReq req2 = objectMapper.readValue("{\"amount\": 29000}", AmountReq.class);
        System.out.println(req2);
    }
}
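For reference, the behaviour to expect from the test above (my sketch, assuming the default rate of 100):

// Serializing amount = 20.3 (yuan) prints: {"amount":2030}
// Deserializing {"amount": 29000} yields amount = 290 (yuan)
// Note: new BigDecimal("20.3") is safer than new BigDecimal(20.3),
// which carries binary floating-point noise into the BigDecimal.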
On a previous question I was told to use Banshee as a replacement for iTunes, which leads me to this question: can I upload movies to my iPhone using Banshee instead of iTunes?

Answer 1:
Banshee is more than capable of syncing content from your library to your iPhone; however, it depends on the model of iPhone and the software version installed on it. The current version of gtkpod does not support the iPhone 4 (and, essentially, iOS 5); see the SourceForge release notes here.

Answer 2:
Banshee does work. However, it crashes sometimes. The best way to work around the crashes is to connect the iPhone and let it mount, then go into the iPhone's folders to iTunes_Control/iTunes/. Rename the file Playcounts.plist to Playcounts.plist.txt, then start Banshee, and everything works. This is a known bug.
layout/style/StyleRule.cpp
/* -*- Mode: C++; tab-width: 2; indent-tabs-mode: nil; c-basic-offset: 2 -*- */ /* This Source Code Form is subject to the terms of the Mozilla Public * License, v. 2.0. If a copy of the MPL was not distributed with this * file, You can obtain one at http://mozilla.org/MPL/2.0/. */ /* * representation of CSS style rules (selectors+declaration), CSS * selectors, and DOM objects for style rules, selectors, and * declarations */ #include "mozilla/css/StyleRule.h" #include "mozilla/css/GroupRule.h" #include "mozilla/css/Declaration.h" #include "nsCSSStyleSheet.h" #include "nsIDocument.h" #include "nsIAtom.h" #include "nsString.h" #include "nsStyleUtil.h" #include "nsICSSStyleRuleDOMWrapper.h" #include "nsDOMCSSDeclaration.h" #include "nsINameSpaceManager.h" #include "nsXMLNameSpaceMap.h" #include "nsCSSPseudoElements.h" #include "nsCSSPseudoClasses.h" #include "nsCSSAnonBoxes.h" #include "nsTArray.h" #include "nsDOMClassInfoID.h" #include "nsContentUtils.h" #include "nsError.h" #include "mozAutoDocUpdate.h" class nsIDOMCSSStyleDeclaration; class nsIDOMCSSStyleSheet; namespace css = mozilla::css; #define NS_IF_CLONE(member_) \ PR_BEGIN_MACRO \ if (member_) { \ result->member_ = member_->Clone(); \ if (!result->member_) { \ delete result; \ return nullptr; \ } \ } \ PR_END_MACRO #define NS_IF_DELETE(ptr) \ PR_BEGIN_MACRO \ delete ptr; \ ptr = nullptr; \ PR_END_MACRO /* ************************************************************************** */ nsAtomList::nsAtomList(nsIAtom* aAtom) : mAtom(aAtom), mNext(nullptr) { MOZ_COUNT_CTOR(nsAtomList); } nsAtomList::nsAtomList(const nsString& aAtomValue) : mAtom(nullptr), mNext(nullptr) { MOZ_COUNT_CTOR(nsAtomList); mAtom = do_GetAtom(aAtomValue); } nsAtomList* nsAtomList::Clone(bool aDeep) const { nsAtomList *result = new nsAtomList(mAtom); if (!result) return nullptr; if (aDeep) NS_CSS_CLONE_LIST_MEMBER(nsAtomList, this, mNext, result, (false)); return result; } size_t nsAtomList::SizeOfIncludingThis(nsMallocSizeOfFun aMallocSizeOf) const { size_t n = 0; const nsAtomList* a = this; while (a) { n += aMallocSizeOf(a); // The following members aren't measured: // - a->mAtom, because it may be shared a = a->mNext; } return n; } nsAtomList::~nsAtomList(void) { MOZ_COUNT_DTOR(nsAtomList); NS_CSS_DELETE_LIST_MEMBER(nsAtomList, this, mNext); } nsPseudoClassList::nsPseudoClassList(nsCSSPseudoClasses::Type aType) : mType(aType), mNext(nullptr) { NS_ASSERTION(!nsCSSPseudoClasses::HasStringArg(aType) && !nsCSSPseudoClasses::HasNthPairArg(aType), "unexpected pseudo-class"); MOZ_COUNT_CTOR(nsPseudoClassList); u.mMemory = nullptr; } nsPseudoClassList::nsPseudoClassList(nsCSSPseudoClasses::Type aType, const PRUnichar* aString) : mType(aType), mNext(nullptr) { NS_ASSERTION(nsCSSPseudoClasses::HasStringArg(aType), "unexpected pseudo-class"); NS_ASSERTION(aString, "string expected"); MOZ_COUNT_CTOR(nsPseudoClassList); u.mString = NS_strdup(aString); } nsPseudoClassList::nsPseudoClassList(nsCSSPseudoClasses::Type aType, const int32_t* aIntPair) : mType(aType), mNext(nullptr) { NS_ASSERTION(nsCSSPseudoClasses::HasNthPairArg(aType), "unexpected pseudo-class"); NS_ASSERTION(aIntPair, "integer
pair expected"); MOZ_COUNT_CTOR(nsPseudoClassList); u.mNumbers = static_cast<int32_t*>(nsMemory::Clone(aIntPair, sizeof(int32_t) * 2)); } // adopts aSelectorList nsPseudoClassList::nsPseudoClassList(nsCSSPseudoClasses::Type aType, nsCSSSelectorList* aSelectorList) : mType(aType), mNext(nullptr) { NS_ASSERTION(nsCSSPseudoClasses::HasSelectorListArg(aType), "unexpected pseudo-class"); NS_ASSERTION(aSelectorList, "selector list expected"); MOZ_COUNT_CTOR(nsPseudoClassList); u.mSelectors = aSelectorList; } nsPseudoClassList* nsPseudoClassList::Clone(bool aDeep) const { nsPseudoClassList *result; if (!u.mMemory) { result = new nsPseudoClassList(mType); } else if (nsCSSPseudoClasses::HasStringArg(mType)) { result = new nsPseudoClassList(mType, u.mString); } else if (nsCSSPseudoClasses::HasNthPairArg(mType)) { result = new nsPseudoClassList(mType, u.mNumbers); } else { NS_ASSERTION(nsCSSPseudoClasses::HasSelectorListArg(mType), "unexpected pseudo-class"); // This constructor adopts its selector list argument. result = new nsPseudoClassList(mType, u.mSelectors->Clone()); } if (aDeep) NS_CSS_CLONE_LIST_MEMBER(nsPseudoClassList, this, mNext, result, (false)); return result; } size_t nsPseudoClassList::SizeOfIncludingThis(nsMallocSizeOfFun aMallocSizeOf) const { size_t n = 0; const nsPseudoClassList* p = this; while (p) { n += aMallocSizeOf(p); if (!p->u.mMemory) { // do nothing } else if (nsCSSPseudoClasses::HasStringArg(p->mType)) { n += aMallocSizeOf(p->u.mString); } else if (nsCSSPseudoClasses::HasNthPairArg(p->mType)) { n += aMallocSizeOf(p->u.mNumbers); } else { NS_ASSERTION(nsCSSPseudoClasses::HasSelectorListArg(p->mType), "unexpected pseudo-class"); n += p->u.mSelectors->SizeOfIncludingThis(aMallocSizeOf); } p = p->mNext; } return n; } nsPseudoClassList::~nsPseudoClassList(void) { MOZ_COUNT_DTOR(nsPseudoClassList); if (nsCSSPseudoClasses::HasSelectorListArg(mType)) { delete u.mSelectors; } else if (u.mMemory) { NS_Free(u.mMemory); } NS_CSS_DELETE_LIST_MEMBER(nsPseudoClassList, this, mNext); } nsAttrSelector::nsAttrSelector(int32_t aNameSpace, const nsString& aAttr) : mValue(), mNext(nullptr), mLowercaseAttr(nullptr), mCasedAttr(nullptr), mNameSpace(aNameSpace), mFunction(NS_ATTR_FUNC_SET), mCaseSensitive(1) { MOZ_COUNT_CTOR(nsAttrSelector); nsAutoString lowercase; nsContentUtils::ASCIIToLower(aAttr, lowercase); mCasedAttr = do_GetAtom(aAttr); mLowercaseAttr = do_GetAtom(lowercase); } nsAttrSelector::nsAttrSelector(int32_t aNameSpace, const nsString& aAttr, uint8_t aFunction, const nsString& aValue, bool aCaseSensitive) : mValue(aValue), mNext(nullptr), mLowercaseAttr(nullptr), mCasedAttr(nullptr), mNameSpace(aNameSpace), mFunction(aFunction), mCaseSensitive(aCaseSensitive) { MOZ_COUNT_CTOR(nsAttrSelector); nsAutoString lowercase; nsContentUtils::ASCIIToLower(aAttr, lowercase); mCasedAttr = do_GetAtom(aAttr); mLowercaseAttr = do_GetAtom(lowercase); } nsAttrSelector::nsAttrSelector(int32_t aNameSpace, nsIAtom* aLowercaseAttr, nsIAtom* aCasedAttr, uint8_t aFunction, const nsString& aValue, bool aCaseSensitive) : mValue(aValue), mNext(nullptr), mLowercaseAttr(aLowercaseAttr), mCasedAttr(aCasedAttr), mNameSpace(aNameSpace), mFunction(aFunction), mCaseSensitive(aCaseSensitive) { MOZ_COUNT_CTOR(nsAttrSelector); } nsAttrSelector* nsAttrSelector::Clone(bool aDeep) const { nsAttrSelector *result = new nsAttrSelector(mNameSpace, mLowercaseAttr, mCasedAttr, mFunction, mValue, mCaseSensitive); if (aDeep) NS_CSS_CLONE_LIST_MEMBER(nsAttrSelector, this, mNext, result, (false)); return result; } 
nsAttrSelector::~nsAttrSelector(void) { MOZ_COUNT_DTOR(nsAttrSelector); NS_CSS_DELETE_LIST_MEMBER(nsAttrSelector, this, mNext); } // -- nsCSSSelector ------------------------------- nsCSSSelector::nsCSSSelector(void) : mLowercaseTag(nullptr), mCasedTag(nullptr), mIDList(nullptr), mClassList(nullptr), mPseudoClassList(nullptr), mAttrList(nullptr), mNegations(nullptr), mNext(nullptr), mNameSpace(kNameSpaceID_Unknown), mOperator(0), mPseudoType(nsCSSPseudoElements::ePseudo_NotPseudoElement) { MOZ_COUNT_CTOR(nsCSSSelector); MOZ_STATIC_ASSERT(nsCSSPseudoElements::ePseudo_MAX < INT16_MAX, "nsCSSPseudoElements::Type values overflow mPseudoType"); } nsCSSSelector* nsCSSSelector::Clone(bool aDeepNext, bool aDeepNegations) const { nsCSSSelector *result = new nsCSSSelector(); if (!result) return nullptr; result->mNameSpace = mNameSpace; result->mLowercaseTag = mLowercaseTag; result->mCasedTag = mCasedTag; result->mOperator = mOperator; result->mPseudoType = mPseudoType; NS_IF_CLONE(mIDList); NS_IF_CLONE(mClassList); NS_IF_CLONE(mPseudoClassList); NS_IF_CLONE(mAttrList); // No need to worry about multiple levels of recursion since an // mNegations can't have an mNext. NS_ASSERTION(!mNegations || !mNegations->mNext, "mNegations can't have non-null mNext"); if (aDeepNegations) { NS_CSS_CLONE_LIST_MEMBER(nsCSSSelector, this, mNegations, result, (true, false)); } if (aDeepNext) { NS_CSS_CLONE_LIST_MEMBER(nsCSSSelector, this, mNext, result, (false, true)); } return result; } nsCSSSelector::~nsCSSSelector(void) { MOZ_COUNT_DTOR(nsCSSSelector); Reset(); // No need to worry about multiple levels of recursion since an // mNegations can't have an mNext. NS_CSS_DELETE_LIST_MEMBER(nsCSSSelector, this, mNext); } void nsCSSSelector::Reset(void) { mNameSpace = kNameSpaceID_Unknown; mLowercaseTag = nullptr; mCasedTag = nullptr; NS_IF_DELETE(mIDList); NS_IF_DELETE(mClassList); NS_IF_DELETE(mPseudoClassList); NS_IF_DELETE(mAttrList); // No need to worry about multiple levels of recursion since an // mNegations can't have an mNext. 
NS_ASSERTION(!mNegations || !mNegations->mNext, "mNegations can't have non-null mNext"); NS_CSS_DELETE_LIST_MEMBER(nsCSSSelector, this, mNegations); mOperator = PRUnichar(0); } void nsCSSSelector::SetNameSpace(int32_t aNameSpace) { mNameSpace = aNameSpace; } void nsCSSSelector::SetTag(const nsString& aTag) { if (aTag.IsEmpty()) { mLowercaseTag = mCasedTag = nullptr; return; } mCasedTag = do_GetAtom(aTag); nsAutoString lowercase; nsContentUtils::ASCIIToLower(aTag, lowercase); mLowercaseTag = do_GetAtom(lowercase); } void nsCSSSelector::AddID(const nsString& aID) { if (!aID.IsEmpty()) { nsAtomList** list = &mIDList; while (nullptr != *list) { list = &((*list)->mNext); } *list = new nsAtomList(aID); } } void nsCSSSelector::AddClass(const nsString& aClass) { if (!aClass.IsEmpty()) { nsAtomList** list = &mClassList; while (nullptr != *list) { list = &((*list)->mNext); } *list = new nsAtomList(aClass); } } void nsCSSSelector::AddPseudoClass(nsCSSPseudoClasses::Type aType) { AddPseudoClassInternal(new nsPseudoClassList(aType)); } void nsCSSSelector::AddPseudoClass(nsCSSPseudoClasses::Type aType, const PRUnichar* aString) { AddPseudoClassInternal(new nsPseudoClassList(aType, aString)); } void nsCSSSelector::AddPseudoClass(nsCSSPseudoClasses::Type aType, const int32_t* aIntPair) { AddPseudoClassInternal(new nsPseudoClassList(aType, aIntPair)); } void nsCSSSelector::AddPseudoClass(nsCSSPseudoClasses::Type aType, nsCSSSelectorList* aSelectorList) { // Take ownership of nsCSSSelectorList instead of copying. AddPseudoClassInternal(new nsPseudoClassList(aType, aSelectorList)); } void nsCSSSelector::AddPseudoClassInternal(nsPseudoClassList *aPseudoClass) { nsPseudoClassList** list = &mPseudoClassList; while (nullptr != *list) { list = &((*list)->mNext); } *list = aPseudoClass; } void nsCSSSelector::AddAttribute(int32_t aNameSpace, const nsString& aAttr) { if (!aAttr.IsEmpty()) { nsAttrSelector** list = &mAttrList; while (nullptr != *list) { list = &((*list)->mNext); } *list = new nsAttrSelector(aNameSpace, aAttr); } } void nsCSSSelector::AddAttribute(int32_t aNameSpace, const nsString& aAttr, uint8_t aFunc, const nsString& aValue, bool aCaseSensitive) { if (!aAttr.IsEmpty()) { nsAttrSelector** list = &mAttrList; while (nullptr != *list) { list = &((*list)->mNext); } *list = new nsAttrSelector(aNameSpace, aAttr, aFunc, aValue, aCaseSensitive); } } void nsCSSSelector::SetOperator(PRUnichar aOperator) { mOperator = aOperator; } int32_t nsCSSSelector::CalcWeightWithoutNegations() const { int32_t weight = 0; if (nullptr != mLowercaseTag) { weight += 0x000001; } nsAtomList* list = mIDList; while (nullptr != list) { weight += 0x010000; list = list->mNext; } list = mClassList; while (nullptr != list) { weight += 0x000100; list = list->mNext; } // FIXME (bug 561154): This is incorrect for :-moz-any(), which isn't // really a pseudo-class. In order to handle :-moz-any() correctly, // we need to compute specificity after we match, based on which // option we matched with (and thus also need to try the // highest-specificity options first). nsPseudoClassList *plist = mPseudoClassList; while (nullptr != plist) { weight += 0x000100; plist = plist->mNext; } nsAttrSelector* attr = mAttrList; while (nullptr != attr) { weight += 0x000100; attr = attr->mNext; } return weight; } int32_t nsCSSSelector::CalcWeight() const { // Loop over this selector and all its negations. 
int32_t weight = 0; for (const nsCSSSelector *n = this; n; n = n->mNegations) { weight += n->CalcWeightWithoutNegations(); } return weight; } // // Builds the textual representation of a selector. Called by DOM 2 CSS // StyleRule:selectorText // void nsCSSSelector::ToString(nsAString& aString, nsCSSStyleSheet* aSheet, bool aAppend) const { if (!aAppend) aString.Truncate(); // selectors are linked from right-to-left, so the next selector in // the linked list actually precedes this one in the resulting string nsAutoTArray<const nsCSSSelector*, 8> stack; for (const nsCSSSelector *s = this; s; s = s->mNext) { stack.AppendElement(s); } while (!stack.IsEmpty()) { uint32_t index = stack.Length() - 1; const nsCSSSelector *s = stack.ElementAt(index); stack.RemoveElementAt(index); s->AppendToStringWithoutCombinators(aString, aSheet); // Append the combinator, if needed. if (!stack.IsEmpty()) { const nsCSSSelector *next = stack.ElementAt(index - 1); PRUnichar oper = s->mOperator; if (next->IsPseudoElement()) { NS_ASSERTION(oper == PRUnichar('>'), "improperly chained pseudo element"); } else { NS_ASSERTION(oper != PRUnichar(0), "compound selector without combinator"); aString.Append(PRUnichar(' ')); if (oper != PRUnichar(' ')) { aString.Append(oper); aString.Append(PRUnichar(' ')); } } } } } void nsCSSSelector::AppendToStringWithoutCombinators (nsAString& aString, nsCSSStyleSheet* aSheet) const { AppendToStringWithoutCombinatorsOrNegations(aString, aSheet, false); for (const nsCSSSelector* negation = mNegations; negation; negation = negation->mNegations) { aString.AppendLiteral(":not("); negation->AppendToStringWithoutCombinatorsOrNegations(aString, aSheet, true); aString.Append(PRUnichar(')')); } } void nsCSSSelector::AppendToStringWithoutCombinatorsOrNegations (nsAString& aString, nsCSSStyleSheet* aSheet, bool aIsNegated) const { nsAutoString temp; bool isPseudoElement = IsPseudoElement(); // For non-pseudo-element selectors or for lone pseudo-elements, deal with // namespace prefixes. bool wroteNamespace = false; if (!isPseudoElement || !mNext) { // append the namespace prefix if needed nsXMLNameSpaceMap *sheetNS = aSheet ? aSheet->GetNameSpaceMap() : nullptr; // sheetNS is non-null if and only if we had an @namespace rule. If it's // null, that means that the only namespaces we could have are the // wildcard namespace (which can be implicit in this case) and the "none" // namespace, which then needs to be explicitly specified. if (!sheetNS) { NS_ASSERTION(mNameSpace == kNameSpaceID_Unknown || mNameSpace == kNameSpaceID_None, "How did we get this namespace?"); if (mNameSpace == kNameSpaceID_None) { aString.Append(PRUnichar('|')); wroteNamespace = true; } } else if (sheetNS->FindNameSpaceID(nullptr) == mNameSpace) { // We have the default namespace (possibly including the wildcard // namespace). Do nothing. 
NS_ASSERTION(mNameSpace == kNameSpaceID_Unknown || CanBeNamespaced(aIsNegated), "How did we end up with this namespace?"); } else if (mNameSpace == kNameSpaceID_None) { NS_ASSERTION(CanBeNamespaced(aIsNegated), "How did we end up with this namespace?"); aString.Append(PRUnichar('|')); wroteNamespace = true; } else if (mNameSpace != kNameSpaceID_Unknown) { NS_ASSERTION(CanBeNamespaced(aIsNegated), "How did we end up with this namespace?"); nsIAtom *prefixAtom = sheetNS->FindPrefix(mNameSpace); NS_ASSERTION(prefixAtom, "how'd we get a non-default namespace " "without a prefix?"); nsStyleUtil::AppendEscapedCSSIdent(nsDependentAtomString(prefixAtom), aString); aString.Append(PRUnichar('|')); wroteNamespace = true; } else { // A selector for an element in any namespace, while the default // namespace is something else. :not() is special in that the default // namespace is not implied for non-type selectors, so if this is a // negated non-type selector we don't need to output an explicit wildcard // namespace here, since those default to a wildcard namespace. if (CanBeNamespaced(aIsNegated)) { aString.AppendLiteral("*|"); wroteNamespace = true; } } } if (!mLowercaseTag) { // Universal selector: avoid writing the universal selector when we // can avoid it, especially since we're required to avoid it for the // inside of :not() if (wroteNamespace || (!mIDList && !mClassList && !mPseudoClassList && !mAttrList && (aIsNegated || !mNegations))) { aString.Append(PRUnichar('*')); } } else { // Append the tag name nsAutoString tag; (isPseudoElement ? mLowercaseTag : mCasedTag)->ToString(tag); if (isPseudoElement) { if (!mNext) { // Lone pseudo-element selector -- toss in a wildcard type selector // XXXldb Why? aString.Append(PRUnichar('*')); } if (!nsCSSPseudoElements::IsCSS2PseudoElement(mLowercaseTag)) { aString.Append(PRUnichar(':')); } // This should not be escaped since (a) the pseudo-element string // has a ":" that can't be escaped and (b) all pseudo-elements at // this point are known, and therefore we know they don't need // escaping. 
aString.Append(tag); } else { nsStyleUtil::AppendEscapedCSSIdent(tag, aString); } } // Append the id, if there is one if (mIDList) { nsAtomList* list = mIDList; while (list != nullptr) { list->mAtom->ToString(temp); aString.Append(PRUnichar('#')); nsStyleUtil::AppendEscapedCSSIdent(temp, aString); list = list->mNext; } } // Append each class in the linked list if (mClassList) { if (isPseudoElement) { #ifdef MOZ_XUL NS_ABORT_IF_FALSE(nsCSSAnonBoxes::IsTreePseudoElement(mLowercaseTag), "must be tree pseudo-element"); aString.Append(PRUnichar('(')); for (nsAtomList* list = mClassList; list; list = list->mNext) { nsStyleUtil::AppendEscapedCSSIdent(nsDependentAtomString(list->mAtom), aString); aString.Append(PRUnichar(',')); } // replace the final comma with a close-paren aString.Replace(aString.Length() - 1, 1, PRUnichar(')')); #else NS_ERROR("Can't happen"); #endif } else { nsAtomList* list = mClassList; while (list != nullptr) { list->mAtom->ToString(temp); aString.Append(PRUnichar('.')); nsStyleUtil::AppendEscapedCSSIdent(temp, aString); list = list->mNext; } } } // Append each attribute selector in the linked list if (mAttrList) { nsAttrSelector* list = mAttrList; while (list != nullptr) { aString.Append(PRUnichar('[')); // Append the namespace prefix if (list->mNameSpace == kNameSpaceID_Unknown) { aString.Append(PRUnichar('*')); aString.Append(PRUnichar('|')); } else if (list->mNameSpace != kNameSpaceID_None) { if (aSheet) { nsXMLNameSpaceMap *sheetNS = aSheet->GetNameSpaceMap(); nsIAtom *prefixAtom = sheetNS->FindPrefix(list->mNameSpace); // Default namespaces don't apply to attribute selectors, so // we must have a useful prefix. NS_ASSERTION(prefixAtom, "How did we end up with a namespace if the prefix " "is unknown?"); nsAutoString prefix; prefixAtom->ToString(prefix); nsStyleUtil::AppendEscapedCSSIdent(prefix, aString); aString.Append(PRUnichar('|')); } } // Append the attribute name list->mCasedAttr->ToString(temp); nsStyleUtil::AppendEscapedCSSIdent(temp, aString); if (list->mFunction != NS_ATTR_FUNC_SET) { // Append the function if (list->mFunction == NS_ATTR_FUNC_INCLUDES) aString.Append(PRUnichar('~')); else if (list->mFunction == NS_ATTR_FUNC_DASHMATCH) aString.Append(PRUnichar('|')); else if (list->mFunction == NS_ATTR_FUNC_BEGINSMATCH) aString.Append(PRUnichar('^')); else if (list->mFunction == NS_ATTR_FUNC_ENDSMATCH) aString.Append(PRUnichar('$')); else if (list->mFunction == NS_ATTR_FUNC_CONTAINSMATCH) aString.Append(PRUnichar('*')); aString.Append(PRUnichar('=')); // Append the value nsStyleUtil::AppendEscapedCSSString(list->mValue, aString); } aString.Append(PRUnichar(']')); list = list->mNext; } } // Append each pseudo-class in the linked list for (nsPseudoClassList* list = mPseudoClassList; list; list = list->mNext) { nsCSSPseudoClasses::PseudoTypeToString(list->mType, temp); // This should not be escaped since (a) the pseudo-class string // has a ":" that can't be escaped and (b) all pseudo-classes at // this point are known, and therefore we know they don't need // escaping. 
aString.Append(temp); if (list->u.mMemory) { aString.Append(PRUnichar('(')); if (nsCSSPseudoClasses::HasStringArg(list->mType)) { nsStyleUtil::AppendEscapedCSSIdent( nsDependentString(list->u.mString), aString); } else if (nsCSSPseudoClasses::HasNthPairArg(list->mType)) { int32_t a = list->u.mNumbers[0], b = list->u.mNumbers[1]; temp.Truncate(); if (a != 0) { if (a == -1) { temp.Append(PRUnichar('-')); } else if (a != 1) { temp.AppendInt(a); } temp.Append(PRUnichar('n')); } if (b != 0 || a == 0) { if (b >= 0 && a != 0) // check a != 0 for whether we printed above temp.Append(PRUnichar('+')); temp.AppendInt(b); } aString.Append(temp); } else { NS_ASSERTION(nsCSSPseudoClasses::HasSelectorListArg(list->mType), "unexpected pseudo-class"); nsString tmp; list->u.mSelectors->ToString(tmp, aSheet); aString.Append(tmp); } aString.Append(PRUnichar(')')); } } } bool nsCSSSelector::CanBeNamespaced(bool aIsNegated) const { return !aIsNegated || (!mIDList && !mClassList && !mPseudoClassList && !mAttrList); } size_t nsCSSSelector::SizeOfIncludingThis(nsMallocSizeOfFun aMallocSizeOf) const { size_t n = 0; const nsCSSSelector* s = this; while (s) { n += aMallocSizeOf(s); #define MEASURE(x) n += x ? x->SizeOfIncludingThis(aMallocSizeOf) : 0; MEASURE(s->mIDList); MEASURE(s->mClassList); MEASURE(s->mPseudoClassList); MEASURE(s->mNegations); // Measurement of the following members may be added later if DMD finds it is // worthwhile: // - s->mAttrList // // The following members aren't measured: // - s->mLowercaseTag, because it's an atom and therefore shared // - s->mCasedTag, because it's an atom and therefore shared s = s->mNext; } return n; } // -- nsCSSSelectorList ------------------------------- nsCSSSelectorList::nsCSSSelectorList(void) : mSelectors(nullptr), mWeight(0), mNext(nullptr) { MOZ_COUNT_CTOR(nsCSSSelectorList); } nsCSSSelectorList::~nsCSSSelectorList() { MOZ_COUNT_DTOR(nsCSSSelectorList); delete mSelectors; NS_CSS_DELETE_LIST_MEMBER(nsCSSSelectorList, this, mNext); } nsCSSSelector* nsCSSSelectorList::AddSelector(PRUnichar aOperator) { nsCSSSelector* newSel = new nsCSSSelector(); if (mSelectors) { NS_ASSERTION(aOperator != PRUnichar(0), "chaining without combinator"); mSelectors->SetOperator(aOperator); } else { NS_ASSERTION(aOperator == PRUnichar(0), "combinator without chaining"); } newSel->mNext = mSelectors; mSelectors = newSel; return newSel; } void nsCSSSelectorList::ToString(nsAString& aResult, nsCSSStyleSheet* aSheet) { aResult.Truncate(); nsCSSSelectorList *p = this; for (;;) { p->mSelectors->ToString(aResult, aSheet, true); p = p->mNext; if (!p) break; aResult.AppendLiteral(", "); } } nsCSSSelectorList* nsCSSSelectorList::Clone(bool aDeep) const { nsCSSSelectorList *result = new nsCSSSelectorList(); result->mWeight = mWeight; NS_IF_CLONE(mSelectors); if (aDeep) { NS_CSS_CLONE_LIST_MEMBER(nsCSSSelectorList, this, mNext, result, (false)); } return result; } size_t nsCSSSelectorList::SizeOfIncludingThis(nsMallocSizeOfFun aMallocSizeOf) const { size_t n = 0; const nsCSSSelectorList* s = this; while (s) { n += aMallocSizeOf(s); n += s->mSelectors ? 
s->mSelectors->SizeOfIncludingThis(aMallocSizeOf) : 0; s = s->mNext; } return n; } // -- ImportantRule ---------------------------------- namespace mozilla { namespace css { ImportantRule::ImportantRule(Declaration* aDeclaration) : mDeclaration(aDeclaration) { } ImportantRule::~ImportantRule() { } NS_IMPL_ISUPPORTS1(ImportantRule, nsIStyleRule) /* virtual */ void ImportantRule::MapRuleInfoInto(nsRuleData* aRuleData) { mDeclaration->MapImportantRuleInfoInto(aRuleData); } #ifdef DEBUG /* virtual */ void ImportantRule::List(FILE* out, int32_t aIndent) const { // Indent for (int32_t index = aIndent; --index >= 0; ) fputs(" ", out); fprintf(out, "! Important declaration=%p\n", static_cast<void*>(mDeclaration)); } #endif } // namespace css } // namespace mozilla // -------------------------------------------------------- namespace mozilla { namespace css { class DOMCSSStyleRule; } } class DOMCSSDeclarationImpl : public nsDOMCSSDeclaration { public: DOMCSSDeclarationImpl(css::StyleRule *aRule); virtual ~DOMCSSDeclarationImpl(void); NS_IMETHOD GetParentRule(nsIDOMCSSRule **aParent); void DropReference(void); virtual css::Declaration* GetCSSDeclaration(bool aAllocate); virtual nsresult SetCSSDeclaration(css::Declaration* aDecl); virtual void GetCSSParsingEnvironment(CSSParsingEnvironment& aCSSParseEnv); virtual nsIDocument* DocToUpdate(); // Override |AddRef| and |Release| for being a member of // |DOMCSSStyleRule|. Also, we need to forward QI for cycle // collection things to DOMCSSStyleRule. NS_DECL_ISUPPORTS_INHERITED virtual nsINode *GetParentObject() { return mRule ? mRule->GetDocument() : nullptr; } friend class css::DOMCSSStyleRule; protected: // This reference is not reference-counted. The rule object tells us // when it's about to go away. css::StyleRule *mRule; inline css::DOMCSSStyleRule* DomRule(); private: // NOT TO BE IMPLEMENTED // This object cannot be allocated on its own. It must be a member of // DOMCSSStyleRule. void* operator new(size_t size) CPP_THROW_NEW; }; namespace mozilla { namespace css { class DOMCSSStyleRule : public nsICSSStyleRuleDOMWrapper { public: DOMCSSStyleRule(StyleRule *aRule); virtual ~DOMCSSStyleRule(); NS_DECL_CYCLE_COLLECTING_ISUPPORTS NS_DECL_CYCLE_COLLECTION_SCRIPT_HOLDER_CLASS(DOMCSSStyleRule) NS_DECL_NSIDOMCSSRULE NS_DECL_NSIDOMCSSSTYLERULE // nsICSSStyleRuleDOMWrapper NS_IMETHOD GetCSSStyleRule(StyleRule **aResult); DOMCSSDeclarationImpl* DOMDeclaration() { return &mDOMDeclaration; } friend class ::DOMCSSDeclarationImpl; protected: DOMCSSDeclarationImpl mDOMDeclaration; StyleRule* Rule() { return mDOMDeclaration.mRule; } }; } // namespace css } // namespace mozilla DOMCSSDeclarationImpl::DOMCSSDeclarationImpl(css::StyleRule *aRule) : mRule(aRule) { MOZ_COUNT_CTOR(DOMCSSDeclarationImpl); } DOMCSSDeclarationImpl::~DOMCSSDeclarationImpl(void) { NS_ASSERTION(!mRule, "DropReference not called."); MOZ_COUNT_DTOR(DOMCSSDeclarationImpl); } inline css::DOMCSSStyleRule* DOMCSSDeclarationImpl::DomRule() { return reinterpret_cast<css::DOMCSSStyleRule*> (reinterpret_cast<char*>(this) - offsetof(css::DOMCSSStyleRule, mDOMDeclaration)); } NS_IMPL_ADDREF_USING_AGGREGATOR(DOMCSSDeclarationImpl, DomRule()) NS_IMPL_RELEASE_USING_AGGREGATOR(DOMCSSDeclarationImpl, DomRule()) NS_INTERFACE_MAP_BEGIN(DOMCSSDeclarationImpl) NS_WRAPPERCACHE_INTERFACE_MAP_ENTRY // We forward the cycle collection interfaces to DomRule(), which is // never null (in fact, we're part of that object!) 
if (aIID.Equals(NS_GET_IID(nsCycleCollectionISupports)) || aIID.Equals(NS_GET_IID(nsXPCOMCycleCollectionParticipant))) { return DomRule()->QueryInterface(aIID, aInstancePtr); } else NS_IMPL_QUERY_TAIL_INHERITING(nsDOMCSSDeclaration) void DOMCSSDeclarationImpl::DropReference(void) { mRule = nullptr; } css::Declaration* DOMCSSDeclarationImpl::GetCSSDeclaration(bool aAllocate) { if (mRule) { return mRule->GetDeclaration(); } else { return nullptr; } } void DOMCSSDeclarationImpl::GetCSSParsingEnvironment(CSSParsingEnvironment& aCSSParseEnv) { GetCSSParsingEnvironmentForRule(mRule, aCSSParseEnv); } NS_IMETHODIMP DOMCSSDeclarationImpl::GetParentRule(nsIDOMCSSRule **aParent) { NS_ENSURE_ARG_POINTER(aParent); if (!mRule) { *aParent = nullptr; return NS_OK; } NS_IF_ADDREF(*aParent = mRule->GetDOMRule()); return NS_OK; } nsresult DOMCSSDeclarationImpl::SetCSSDeclaration(css::Declaration* aDecl) { NS_PRECONDITION(mRule, "can only be called when |GetCSSDeclaration| returned a declaration"); nsCOMPtr<nsIDocument> owningDoc; nsCOMPtr<nsIStyleSheet> sheet = mRule->GetStyleSheet(); if (sheet) { owningDoc = sheet->GetOwningDocument(); } mozAutoDocUpdate updateBatch(owningDoc, UPDATE_STYLE, true); nsRefPtr<css::StyleRule> oldRule = mRule; mRule = oldRule->DeclarationChanged(aDecl, true).get(); if (!mRule) return NS_ERROR_OUT_OF_MEMORY; nsrefcnt cnt = mRule->Release(); if (cnt == 0) { NS_NOTREACHED("container didn't take ownership"); mRule = nullptr; return NS_ERROR_UNEXPECTED; } if (owningDoc) { owningDoc->StyleRuleChanged(sheet, oldRule, mRule); } return NS_OK; } nsIDocument* DOMCSSDeclarationImpl::DocToUpdate() { return nullptr; } // needs to be outside the namespace DOMCI_DATA(CSSStyleRule, css::DOMCSSStyleRule) namespace mozilla { namespace css { DOMCSSStyleRule::DOMCSSStyleRule(StyleRule* aRule) : mDOMDeclaration(aRule) { } DOMCSSStyleRule::~DOMCSSStyleRule() { } NS_INTERFACE_MAP_BEGIN_CYCLE_COLLECTION(DOMCSSStyleRule) NS_INTERFACE_MAP_ENTRY(nsICSSStyleRuleDOMWrapper) NS_INTERFACE_MAP_ENTRY(nsIDOMCSSStyleRule) NS_INTERFACE_MAP_ENTRY(nsIDOMCSSRule) NS_INTERFACE_MAP_ENTRY(nsISupports) NS_DOM_INTERFACE_MAP_ENTRY_CLASSINFO(CSSStyleRule) NS_INTERFACE_MAP_END NS_IMPL_CYCLE_COLLECTING_ADDREF(DOMCSSStyleRule) NS_IMPL_CYCLE_COLLECTING_RELEASE(DOMCSSStyleRule) NS_IMPL_CYCLE_COLLECTION_TRACE_BEGIN(DOMCSSStyleRule) // Trace the wrapper for our declaration. This just expands out // NS_IMPL_CYCLE_COLLECTION_TRACE_PRESERVED_WRAPPER which we can't use // directly because the wrapper is on the declaration, not on us. tmp->DOMDeclaration()->TraceWrapper(aCallbacks, aClosure); NS_IMPL_CYCLE_COLLECTION_TRACE_END NS_IMPL_CYCLE_COLLECTION_UNLINK_BEGIN(DOMCSSStyleRule) // Unlink the wrapper for our declaraton. This just expands out // NS_IMPL_CYCLE_COLLECTION_UNLINK_PRESERVED_WRAPPER which we can't use // directly because the wrapper is on the declaration, not on us. nsContentUtils::ReleaseWrapper(static_cast<nsISupports*>(p), tmp->DOMDeclaration()); NS_IMPL_CYCLE_COLLECTION_UNLINK_END NS_IMPL_CYCLE_COLLECTION_TRAVERSE_BEGIN(DOMCSSStyleRule) // Just NS_IMPL_CYCLE_COLLECTION_TRAVERSE_SCRIPT_OBJECTS here: that will call // into our Trace hook, where we do the right thing with declarations // already. 
NS_IMPL_CYCLE_COLLECTION_TRAVERSE_SCRIPT_OBJECTS
NS_IMPL_CYCLE_COLLECTION_TRAVERSE_END

NS_IMETHODIMP
DOMCSSStyleRule::GetType(uint16_t* aType)
{
  *aType = nsIDOMCSSRule::STYLE_RULE;
  return NS_OK;
}

NS_IMETHODIMP
DOMCSSStyleRule::GetCssText(nsAString& aCssText)
{
  if (!Rule()) {
    aCssText.Truncate();
  } else {
    Rule()->GetCssText(aCssText);
  }
  return NS_OK;
}

NS_IMETHODIMP
DOMCSSStyleRule::SetCssText(const nsAString& aCssText)
{
  if (Rule()) {
    Rule()->SetCssText(aCssText);
  }
  return NS_OK;
}

NS_IMETHODIMP
DOMCSSStyleRule::GetParentStyleSheet(nsIDOMCSSStyleSheet** aSheet)
{
  if (!Rule()) {
    *aSheet = nullptr;
    return NS_OK;
  }
  return Rule()->GetParentStyleSheet(aSheet);
}

NS_IMETHODIMP
DOMCSSStyleRule::GetParentRule(nsIDOMCSSRule** aParentRule)
{
  if (!Rule()) {
    *aParentRule = nullptr;
    return NS_OK;
  }
  return Rule()->GetParentRule(aParentRule);
}

NS_IMETHODIMP
DOMCSSStyleRule::GetSelectorText(nsAString& aSelectorText)
{
  if (!Rule()) {
    aSelectorText.Truncate();
  } else {
    Rule()->GetSelectorText(aSelectorText);
  }
  return NS_OK;
}

NS_IMETHODIMP
DOMCSSStyleRule::SetSelectorText(const nsAString& aSelectorText)
{
  if (Rule()) {
    Rule()->SetSelectorText(aSelectorText);
  }
  return NS_OK;
}

NS_IMETHODIMP
DOMCSSStyleRule::GetStyle(nsIDOMCSSStyleDeclaration** aStyle)
{
  *aStyle = &mDOMDeclaration;
  NS_ADDREF(*aStyle);
  return NS_OK;
}

NS_IMETHODIMP
DOMCSSStyleRule::GetCSSStyleRule(StyleRule **aResult)
{
  *aResult = Rule();
  NS_IF_ADDREF(*aResult);
  return NS_OK;
}

} // namespace css
} // namespace mozilla

// -- StyleRule ------------------------------------

namespace mozilla {
namespace css {

StyleRule::StyleRule(nsCSSSelectorList* aSelector,
                     Declaration* aDeclaration)
  : Rule(),
    mSelector(aSelector),
    mDeclaration(aDeclaration),
    mImportantRule(nullptr),
    mDOMRule(nullptr),
    mLineNumber(0),
    mColumnNumber(0),
    mWasMatched(false)
{
  NS_PRECONDITION(aDeclaration, "must have a declaration");
}

// for |Clone|
StyleRule::StyleRule(const StyleRule& aCopy)
  : Rule(aCopy),
    mSelector(aCopy.mSelector ? aCopy.mSelector->Clone() : nullptr),
    mDeclaration(new Declaration(*aCopy.mDeclaration)),
    mImportantRule(nullptr),
    mDOMRule(nullptr),
    mLineNumber(aCopy.mLineNumber),
    mColumnNumber(aCopy.mColumnNumber),
    mWasMatched(false)
{
  // rest is constructed lazily on existing data
}

// for |SetCSSDeclaration|
StyleRule::StyleRule(StyleRule& aCopy,
                     Declaration* aDeclaration)
  : Rule(aCopy),
    mSelector(aCopy.mSelector),
    mDeclaration(aDeclaration),
    mImportantRule(nullptr),
    mDOMRule(aCopy.mDOMRule),
    mLineNumber(aCopy.mLineNumber),
    mColumnNumber(aCopy.mColumnNumber),
    mWasMatched(false)
{
  // The DOM rule is replacing |aCopy| with |this|, so transfer
  // the reverse pointer as well (and transfer ownership).
  aCopy.mDOMRule = nullptr;

  // Similarly for the selector.
  aCopy.mSelector = nullptr;

  // We are probably replacing the old declaration with |aDeclaration|
  // instead of taking ownership of the old declaration; only null out
  // aCopy.mDeclaration if we are taking ownership.
  if (mDeclaration == aCopy.mDeclaration) {
    // This should only ever happen if the declaration was modifiable.
    mDeclaration->AssertMutable();
    aCopy.mDeclaration = nullptr;
  }
}

StyleRule::~StyleRule()
{
  delete mSelector;
  delete mDeclaration;
  NS_IF_RELEASE(mImportantRule);
  if (mDOMRule) {
    mDOMRule->DOMDeclaration()->DropReference();
    NS_RELEASE(mDOMRule);
  }
}

// QueryInterface implementation for StyleRule
NS_INTERFACE_MAP_BEGIN(StyleRule)
  if (aIID.Equals(NS_GET_IID(mozilla::css::StyleRule))) {
    *aInstancePtr = this;
    NS_ADDREF_THIS();
    return NS_OK;
  }
  else
  NS_INTERFACE_MAP_ENTRY(nsIStyleRule)
  NS_INTERFACE_MAP_ENTRY_AMBIGUOUS(nsISupports, nsIStyleRule)
NS_INTERFACE_MAP_END

NS_IMPL_ADDREF(StyleRule)
NS_IMPL_RELEASE(StyleRule)

void
StyleRule::RuleMatched()
{
  if (!mWasMatched) {
    NS_ABORT_IF_FALSE(!mImportantRule, "should not have important rule yet");

    mWasMatched = true;
    mDeclaration->SetImmutable();
    if (mDeclaration->HasImportantData()) {
      NS_ADDREF(mImportantRule = new ImportantRule(mDeclaration));
    }
  }
}

/* virtual */ int32_t
StyleRule::GetType() const
{
  return Rule::STYLE_RULE;
}

/* virtual */ already_AddRefed<Rule>
StyleRule::Clone() const
{
  nsRefPtr<Rule> clone = new StyleRule(*this);
  return clone.forget();
}

/* virtual */ nsIDOMCSSRule*
StyleRule::GetDOMRule()
{
  if (!mDOMRule) {
    if (!GetStyleSheet()) {
      // Inline style rules aren't supposed to have a DOM rule object, only
      // a declaration.  But if we do have one already, from a style sheet
      // rule that used to be in a document, we still want to return it.
      return nullptr;
    }
    mDOMRule = new DOMCSSStyleRule(this);
    NS_ADDREF(mDOMRule);
  }
  return mDOMRule;
}

/* virtual */ nsIDOMCSSRule*
StyleRule::GetExistingDOMRule()
{
  return mDOMRule;
}

/* virtual */ already_AddRefed<StyleRule>
StyleRule::DeclarationChanged(Declaration* aDecl,
                              bool aHandleContainer)
{
  nsRefPtr<StyleRule> clone = new StyleRule(*this, aDecl);

  if (aHandleContainer) {
    nsCSSStyleSheet* sheet = GetStyleSheet();
    if (mParentRule) {
      if (sheet) {
        sheet->ReplaceRuleInGroup(mParentRule, this, clone);
      } else {
        mParentRule->ReplaceStyleRule(this, clone);
      }
    } else if (sheet) {
      sheet->ReplaceStyleRule(this, clone);
    }
  }

  return clone.forget();
}

/* virtual */ void
StyleRule::MapRuleInfoInto(nsRuleData* aRuleData)
{
  NS_ABORT_IF_FALSE(mWasMatched,
                    "somebody forgot to call css::StyleRule::RuleMatched");
  mDeclaration->MapNormalRuleInfoInto(aRuleData);
}

#ifdef DEBUG
/* virtual */ void
StyleRule::List(FILE* out, int32_t aIndent) const
{
  // Indent
  for (int32_t index = aIndent; --index >= 0; ) fputs("  ", out);

  nsAutoString buffer;
  if (mSelector)
    mSelector->ToString(buffer, GetStyleSheet());

  buffer.AppendLiteral(" ");
  fputs(NS_LossyConvertUTF16toASCII(buffer).get(), out);
  if (nullptr != mDeclaration) {
    mDeclaration->List(out);
  } else {
    fputs("{ null declaration }", out);
  }
  fputs("\n", out);
}
#endif

void
StyleRule::GetCssText(nsAString& aCssText)
{
  if (mSelector) {
    mSelector->ToString(aCssText, GetStyleSheet());
    aCssText.Append(PRUnichar(' '));
  }
  aCssText.Append(PRUnichar('{'));
  aCssText.Append(PRUnichar(' '));
  if (mDeclaration) {
    nsAutoString tempString;
    mDeclaration->ToString(tempString);
    aCssText.Append(tempString);
  }
  aCssText.Append(PRUnichar(' '));
  aCssText.Append(PRUnichar('}'));
}

void
StyleRule::SetCssText(const nsAString& aCssText)
{
  // XXX TBI - need to re-parse rule & declaration
}

void
StyleRule::GetSelectorText(nsAString& aSelectorText)
{
  if (mSelector)
    mSelector->ToString(aSelectorText, GetStyleSheet());
  else
    aSelectorText.Truncate();
}

void
StyleRule::SetSelectorText(const nsAString& aSelectorText)
{
  // XXX TBI - get a parser and re-parse the selectors,
  // XXX then need to re-compute the cascade
  // XXX and dirty sheet
}

/* virtual */ size_t
StyleRule::SizeOfIncludingThis(nsMallocSizeOfFun aMallocSizeOf) const
{
  size_t n = aMallocSizeOf(this);
  n += mSelector ? mSelector->SizeOfIncludingThis(aMallocSizeOf) : 0;
  n += mDeclaration ? mDeclaration->SizeOfIncludingThis(aMallocSizeOf) : 0;

  // Measurement of the following members may be added later if DMD finds it is
  // worthwhile:
  // - mImportantRule;
  // - mDOMRule;

  return n;
}

} // namespace css
} // namespace mozilla
Device or Screen Resolution Specific Tags in Google Tag Manager

Google Tag Manager offers many variables for building a trigger on Some Page Views, such as Click Element, Scroll Depth, Page URL, Session Time, and more. Surprisingly, it is missing a Device Type option, which can be one of the most useful variables for firing a tag. You can see the list of variables by clicking on "Some Page Views" while creating a new trigger, under "Choose built-in variable", as shown below.

Create a New Trigger with Google's Built-in Variables

When you click on "Choose Built-in Variable", it shows a list of all the built-in variables; the list does not include any variable for screen resolution or device type.

List of Built-in Variables

In this post, we will discuss how to add a new variable for screen resolution, since it can be essential for firing a tag. For example, Browsee is a session recording tool, but suppose I want to do session recording only for my desktop pages. This can be managed easily via GTM if I create a Screen Resolution variable, which allows me to eliminate tablets and mobiles and fire the Browsee tag only on desktops.

How to add a Screen Resolution Tag?

1. Go to Variables in the left side panel and click the "New" button to add a variable.

2. Edit it to set the variable type and click on "Custom Javascript" from the options in the right panel.

3. Add the JS code below, name the variable "Screen Resolution", and save it.

```javascript
function () {
  var width = window.innerWidth,
    screenType;
  if (width <= 520) {
    screenType = "mobile";
  } else if (width <= 820) {
    screenType = "tablet";
  } else {
    screenType = "desktop";
  }
  return screenType;
}
```

4. Once you save it, you can use it for creating triggers. Coming back to the example we started with, targeting only desktop users: while creating a trigger, choose Screen Resolution in the conditions dropdown as shown below.

5. Once you have selected Screen Resolution, set it to contains desktop. You can type the value desktop in the input box after contains (all lower case). Similarly, for a mobile website, you can check for mobile.

6. You can save this trigger as Desktop Users and use it as needed.

Check out our blog on Trigger Group to use this Screen Resolution trigger with other triggers.
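As an aside that is not part of the original post: if your site already defines its breakpoints with CSS media queries, the same Custom JavaScript variable can be written with window.matchMedia, so the variable and your stylesheets share one definition of "mobile" and "tablet". The 520px/820px breakpoints below simply mirror the ones above; treat this as a sketch to adapt, not a drop-in replacement.

```javascript
// GTM Custom JavaScript variables must be an anonymous function
// that returns the variable's value.
function () {
  if (window.matchMedia && window.matchMedia('(max-width: 520px)').matches) {
    return 'mobile'; // same breakpoint as the innerWidth version above
  }
  if (window.matchMedia && window.matchMedia('(max-width: 820px)').matches) {
    return 'tablet';
  }
  return 'desktop';
}
```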
Kane

Kane. Citizen Kane. Charles Foster Kane, to be exact, Publisher extraordinaire. Rosebud.

Kane is for publishing and subscribing to topics using Google Cloud Pub/Sub.

Installation

1. Add Kane to your list of dependencies in mix.exs:

```elixir
def deps do
  [{:kane, "~> 0.7.0"}]
end
```

2. Configure Goth (Kane's underlying token storage and retrieval library) with your Google JSON credentials:

```elixir
config :goth, json: "path/to/google/json/creds.json" |> File.read!
```

3. Ensure Kane is started before your application:

```elixir
def application do
  [applications: [:kane]]
end
```

Usage

Pull, process, and acknowledge messages via a pre-existing subscription:

```elixir
subscription = %Kane.Subscription{
                 name: "my-sub",
                 topic: %Kane.Topic{
                   name: "my-topic"
                 }
               }

{:ok, messages} = Kane.Subscription.pull(subscription)

Enum.each messages, fn(mess) ->
  process_message(mess)
end

# acknowledge message receipt in bulk
Kane.Subscription.ack(subscription, messages)
```

Send a message via a pre-existing topic:

```elixir
topic = %Kane.Topic{name: "my-topic"}
message = %Kane.Message{data: %{"hello": "world"}, attributes: %{"random" => "attr"}}

result = Kane.Message.publish(message, topic)

case result do
  {:ok, _return} -> IO.puts("It worked!")
  {:error, _reason} -> IO.puts("Should we try again?")
end
```

Hints: For more details, see the documentation.
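Putting pull and ack together, a minimal worker loop could look like the sketch below. This is assembled from the calls shown above rather than taken from the Kane docs; the subscription and topic names are placeholders, and real code would add error handling, backoff, and supervision.

```elixir
defmodule MyApp.PullWorker do
  # Placeholder names; point these at your own subscription/topic.
  @sub %Kane.Subscription{name: "my-sub", topic: %Kane.Topic{name: "my-topic"}}

  def run do
    # Pull a batch, process it, then ack in bulk so unacked
    # messages are redelivered if processing crashes mid-batch.
    {:ok, messages} = Kane.Subscription.pull(@sub)

    Enum.each(messages, fn message ->
      IO.inspect(message.data) # replace with real processing
    end)

    Kane.Subscription.ack(@sub, messages)
    run()
  end
end
```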
SimCity: A Case Study in Poor Digital Rights Management

If you follow technology or gaming news, chances are you've read something about EA's newest reboot of the popular SimCity franchise recently. The game has come under heavy fire for a variety of reasons, mostly related to its always-on Digital Rights Management (DRM), a system that ostensibly protects against software piracy but quite often hampers legitimate consumers' ability to play, while pirated copies of the game work flawlessly after being cracked (having the DRM removed or circumvented), as illustrated here in .gif form.

In case you haven't read about it, the game requires a constant connection to the Internet in order to play. Moreover, players must also be able to connect to one of EA's game servers, which suffered from severe congestion from the day of launch, with players experiencing wait times upwards of a few hours to join.

Online as long as you are! A parody of the SimCity cover art created by Redditor amperages

All in all, it was not a promising launch. Despite the huge fan base of the classic city-management series, users were so frustrated with the issues that they began to demand refunds, making the product one of the all-time worst-reviewed products on Amazon; the online retailer even went so far as to halt sales of the game. The fury continued amid reports that Origin (EA's content distribution arm) was refusing to refund purchases and even threatening to ban anyone who issued a credit card charge-back. One user had to resort to the so-called "executive e-mail carpet bomb" to receive a refund. Regardless of how EA works to upgrade its server capabilities, the game is almost certain to be plagued by the specter of its release for a while yet.

Besides the DRM, players have also complained about missing game features that have been a part of the SimCity series since its inception, such as subways and railroads. Scuttlebutt on the 'net seems to indicate that these features will be packaged and sold as DLC further down the road. After all, The Sims 3 itself only costs about $25, but add the cost of all of the expansions and you're looking at at least several hundred dollars.

What happened to the good old days when you bought a game and then just owned it? Part of the change almost certainly comes from the death of physical media as the primary tool for disseminating software. After all, when I bought Commander Keen: Invasion of the Vorticons back in 1991 (after saving weeks of allowance), I was happy to receive a package containing a few 3.5″ floppies with the game on them. If I felt the need to go dig through some boxes to find them, I'd be able to install it and play, no problem.

Anyone remember these babies?

Of course, once high-speed Internet connections became a thing, companies were faced with the dilemma of having a fantastic way of distributing the game that doubled as a fantastic tool for piracy. This was the era of CD keys, such that each copy of the game came packaged with a special code that would unlock it upon installing. Even where these keys could be faked by programs designed to generate acceptable codes, they were often used as a unique identifier for online play, such that if multiple people were using the same CD key they couldn't play online simultaneously.
Today's DRM strategies, from the always-on system that has plagued SimCity's launch to the license checking used for mobile games on iOS and Android and on the ever-popular Steam platform, have finally brought us to the point where we question what we're buying. Did people who shelled out $60 for a copy of SimCity actually purchase the game? The fact is that no, they did not. A purchase of SimCity (or just about any other game or piece of software) in today's world is effectively a temporary agreement or license between you and the content creator that you can install and access the program or app. If you read the fine print (and let's face it, most of us don't), you'll invariably find that the company in question retains most of the rights, including terminating your ability to use the software at any time, often without notice or explanation. This is especially worrying in a DRM situation like SimCity's, where EA could simply turn off its servers with 30 days' notice and (legally) void the hundreds of dollars you might have spent on its content.

Companies need to begin asking themselves if DRM strategies like the one used in SimCity are really doing anything to help their bottom line. When it comes to SimCity, it seems like the answer is a resounding "No," evidenced by things like the Amazon reviews and the more than 68,000 signatures on a Change.org petition to remove DRM from games permanently. It certainly doesn't help that people, including a mysterious "Maxis insider," have revealed that, despite the creators' claims to the contrary, the game can be played without a constant connection. This has prompted even louder calls for the always-on DRM to be removed and certainly doesn't bring any positive PR points to EA Games.

So, the next time you're forking out your hard-earned dollars for something that you can't actually hold in your hands, take a moment to consider: what are you really buying?

What's really for sale?
Output Functions

Learning Outcomes

After reading this section, you will be able to:

- Invoke standard library procedures to stream data to users

Introduction

The adequate provision of a user interface is an important aspect of software development: an interface that consists of user-friendly input and user-friendly output. The output facilities of a programming language convert the data in memory into a stream of characters that is read by the user. The stdio module of the C language provides such facilities. This chapter describes two functions in the stdio module that provide formatted and unformatted buffered support for streaming output data to the user and demonstrates in detail how to format output for a user-friendly interface.

Buffering

Standard output is line buffered. A program outputs its data to a buffer. That buffer empties to the standard output device separately. When it empties, we say that the buffer flushes. Output buffering lets a program continue executing without having to wait for the output device to finish displaying the characters it has received.

The output buffer flushes if:

- it is full
- it receives a newline (\n) character
- the program terminates

Two functions in the stdio module that send characters to the output buffer are:

- putchar() - unformatted
- printf() - formatted

Unformatted Output

The putchar() function sends a single character to the output buffer. We pass the character as an argument to this function. The function returns the character sent, or EOF if an error occurred. The prototype for putchar() is:

int putchar(int);

To send the character 'a' to the display device, we write:

```c
// Single character output
// putchar.c

#include <stdio.h>

int main(void)
{
    putchar('a');
    return 0;
}
```

The above program produces the following output:

a

Formatted Output

The printf() function sends data to the output buffer under format control and returns the number of characters sent. The prototype for the printf() function is:

int printf(format, argument, ...);

format is a set of characters enclosed in double quotes that may consist of any combination of plain characters and conversion specifiers. The function sends the plain characters as-is to the buffer and uses the conversion specifiers to translate each value passed as an argument in the function call. The ellipsis indicates that the number of arguments can vary. Each conversion specifier corresponds to one argument.

Conversion Specifiers

A conversion specifier begins with a % symbol and ends with a conversion character. The conversion character defines the formatting as listed in the table below:

| Specifier | Format As | Use with Type | Common(*) |
| :--- | :--- | :--- | :--- |
| %c | character | char | * |
| %d | decimal | char, int, short, long, long long | * |
| %o | octal | char, int, short, long, long long | |
| %x | hexadecimal | char, int, short, long, long long | |
| %f | floating-point | float, double, long double | * |
| %g | general | float, double, long double | |
| %e | exponential | float, double, long double | |

For example:

```c
int i = 15;
float x = 3.141593f;
printf("i is %d; x is %f\n", i, x);
```

The above code snippet produces the following output:

i is 15; x is 3.141593

Conversion Controls

We refine the output by inserting control characters between the % symbol and the conversion character. The general form of a conversion specification is:

% flags width . precision size conversion_character

The five control characters are:

1. flags
   - the - flag prescribes left justification of the converted value in its field
   - the 0 flag pads the field width with leading zeros
2. width sets the minimum field width within which to format the value (overriding with a wider field only if necessary). Pads the converted value on the left (or right, for left alignment). The padding character is space, or 0 if the padding flag is on.
3. . separates the field's width from the field's precision
4. precision sets the number of digits to be printed after the decimal point for f conversions and the minimum number of digits to be printed for an integer (adding leading zeros if necessary). A value of 0 suppresses the printing of the decimal point in an f conversion.
5. size identifies the size of the type being output

Integral values

| Size Specifier | Use with Type |
| :--- | :--- |
| none | int |
| hh | char |
| h | short |
| l | long |
| ll | long long |

Floating-point values

| Size Specifier | Use with Type |
| :--- | :--- |
| none | float |
| l | double |
| L | long double |

Special Characters

To insert the special characters \, ', and ", we use their escape sequences. To insert the special character % into the format, we double it (%%):

```c
// Outputting special characters
// special.c

#include <stdio.h>

int main(void)
{
    printf("\\ \' \" %%\n");
    return 0;
}
```

The above program produces the following output:

\ ' " %

Reference Example

The following program produces the output listed below it for the ASCII collating sequence:

```c
// Playing with output formatting
// printf.c

#include <stdio.h>

int main(void)
{
    /* integers */
    printf("\n* ints *\n");
    printf("00000000011\n");
    printf("12345678901\n");
    printf("------------------------\n");
    printf("%d|<-- %%d\n", 4321);
    printf("%10d|<-- %%10d\n", 4321);
    printf("%010d|<-- %%010d\n", 4321);
    printf("%-10d|<-- %%-10d\n", 4321);

    /* floats */
    printf("\n* floats *\n");
    printf("00000000011\n");
    printf("12345678901\n");
    printf("------------------------\n");
    printf("%f|<-- %%f\n", 4321.9876546);

    /* doubles */
    printf("\n* doubles *\n");
    printf("00000000011\n");
    printf("12345678901\n");
    printf("------------------------\n");
    printf("%lf|<-- %%lf\n", 4321.9876546);
    printf("%10.3lf|<-- %%10.3lf\n", 4321.9876);
    printf("%010.3lf|<-- %%010.3lf\n", 4321.9876);
    printf("%-10.3lf|<-- %%-10.3lf\n", 4321.9876);

    /* characters */
    printf("\n* chars *\n");
    printf("00000000011\n");
    printf("12345678901\n");
    printf("------------------------\n");
    printf("%c|<-- %%c\n", 'd');
    printf("%d|<-- %%d\n", 'd');
    printf("%x|<-- %%x\n", 'd');

    return 0;
}
```

The above program produces the following output:

```
* ints *
00000000011
12345678901
------------------------
4321|<-- %d
      4321|<-- %10d
0000004321|<-- %010d
4321      |<-- %-10d

* floats *
00000000011
12345678901
------------------------
4321.987655|<-- %f

* doubles *
00000000011
12345678901
------------------------
4321.987655|<-- %lf
  4321.988|<-- %10.3lf
004321.988|<-- %010.3lf
4321.988  |<-- %-10.3lf

* chars *
00000000011
12345678901
------------------------
d|<-- %c
100|<-- %d
64|<-- %x
```

Note

- doubles and floats round to the requested precision before being displayed;
- double data may be displayed using %f (printf() converts float values to doubles for compatibility with legacy programs);
- character data can be displayed in various formats, including:
  - character
  - decimal
  - hexadecimal

Portability Note (Optional)

Character data is encoded on many computers using the ASCII standard, but not all computers use this sequence. A program is portable across sequences if it refers to character data in its symbolic form ('A') and to special characters - such as newline, tab, and formfeed - by their escape sequences ('\n', '\t', '\f', etc.) rather than by their decimal or hexadecimal values.
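To make the portability note concrete, here is a small sketch (not part of the original lesson) contrasting symbolic character forms with hard-coded character codes:

```c
// portable.c - symbolic character forms vs. hard-coded codes
#include <stdio.h>

int main(void)
{
    putchar('A');   // portable: symbolic form works on any collating sequence
    putchar('\n');  // portable: escape sequence for newline

    putchar(65);    // NOT portable: assumes ASCII, where 'A' is 65
    putchar(10);    // NOT portable: assumes ASCII, where '\n' is 10

    return 0;
}
```

On an ASCII machine both pairs print the same thing; on a machine with a different collating sequence, only the first pair is guaranteed to print 'A' followed by a newline.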
Internet Engineering Task Force (IETF)                        S. Hartman
Request for Comments: 7029                                  M. Wasserman
Category: Informational                                Painless Security
ISSN: 2070-1721                                                 D. Zhang
                                                                  Huawei
                                                            October 2013

  Extensible Authentication Protocol (EAP) Mutual Cryptographic Binding

Abstract

   As the Extensible Authentication Protocol (EAP) evolves, EAP peers
   rely increasingly on information received from the EAP server.  EAP
   extensions such as channel binding or network posture information
   are often carried in tunnel methods; peers are likely to rely on
   this information.  Cryptographic binding is a facility described in
   RFC 3748 that protects tunnel methods against man-in-the-middle
   attacks.  However, cryptographic binding focuses on protecting the
   server rather than the peer.  This memo explores attacks possible
   when the peer is not protected from man-in-the-middle attacks and
   recommends cryptographic binding based on an Extended Master Session
   Key, a new form of cryptographic binding that protects both peer and
   server along with other mitigations.

Status of This Memo

   This document is not an Internet Standards Track specification; it
   is published for informational purposes.

   This document is a product of the Internet Engineering Task Force
   (IETF).  It represents the consensus of the IETF community.  It has
   received public review and has been approved for publication by the
   Internet Engineering Steering Group (IESG).  Not all documents
   approved by the IESG are a candidate for any level of Internet
   Standard; see Section 2 of RFC 5741.

   Information about the current status of this document, any errata,
   and how to provide feedback on it may be obtained at
   http://www.rfc-editor.org/info/rfc7029.

Hartman, et al.               Informational                     [Page 1]
RFC 7029                  Mutual Crypto Binding             October 2013

Copyright Notice

   Copyright (c) 2013 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction ....................................................3
      1.1. Keywords for Requirement Levels ............................5
   2. An Example Problem ..............................................5
   3. The Server Insertion Attack .....................................6
      3.1. Conditions for the Attack ..................................7
      3.2. Mitigation Strategies ......................................8
           3.2.1. Server Authentication ...............................8
           3.2.2. Server Policy .......................................9
           3.2.3. Existing Cryptographic Binding .....................12
           3.2.4. Introducing EMSK-Based Cryptographic Binding .......12
           3.2.5. Mix Key into Long-Term Credentials .................14
      3.3. Intended Intermediates ....................................14
   4. Recommendations ................................................15
      4.1. Mutual Cryptographic Binding ..............................15
      4.2. State Tracking ............................................15
      4.3.
Certificate Naming ........................................16 4.4. Inner Mixing ..............................................16 5. Survey of Tunnel Methods .......................................16 5.1. Tunnel EAP (TEAP) Method ..................................16 5.2. Flexible Authentication via Secure Tunneling (FAST) .......17 5.3. EAP Tunneled Transport Layer Security (EAP-TTLS) ..........17 6. Security Considerations ........................................17 7. Acknowledgements ...............................................18 8. References .....................................................18 8.1. Normative References ......................................18 8.2. Informative References ....................................18 Hartman, et al. Informational [Page 2] RFC 7029 Mutual Crypto Binding October 2013 1. Introduction The Extensible Authentication Protocol (EAP) [RFC3748] provides authentication between a peer (a party accessing some service) and a authentication server. Traditionally, peers have not relied significantly on information received from EAP servers. However, facilities such as EAP channel binding [RFC6677] provide the peer with confirmation of information about the resource it is accessing. Other facilities such as EAP Posture Transport [PT-EAP] permit a peer and EAP server to discuss the security properties of accessed networks. Both of these facilities provide peers with information they need to rely on and provide attackers who are able to impersonate an EAP server to a peer with new opportunities for attack. Instead of adding these new facilities to all EAP methods, work has focused on adding support to tunnel methods [RFC6678]. There are numerous tunnel methods, including [RFC4851] and [RFC5281], and work on building a Standards Track tunnel method [TEAP]. These tunnel methods are extensible. By adding an extension to support a facility such as channel binding to a tunnel method, an extension can be used with any inner method carried in the tunnel. Tunnel methods need to be careful about man-in-the-middle attacks. See [RFC6678] (Sections 3.2 and 4.6.3) and [TUNNEL-MITM] for a detailed description of these attacks. For example, an attack can happen when a peer is willing to perform authentication inside and outside a tunnel. An attacker can impersonate the EAP server and offer the inner method to the peer. However, on the other side, the attacker acts as a man-in-the-middle and opens a tunnel to the real EAP server. Figure 1 illustrates this attack. At the end of the attack, the EAP server believes it is talking to the peer. At the inner method level, this is true. At the outer method level, however, the server is talking to the attacker. Hartman, et al. Informational [Page 3] RFC 7029 Mutual Crypto Binding October 2013 Peer Attacker Service AAA Server | | | | | | | | |Peer Initiates Connection to a Service | | |---------------------+-------X-------->| | | (Intercepted by an Attacker) | | | | | | | | Tunnel Establishment | | |<-------------------------------->| | | | | | |..................................| | | Tunnel | | Non-Tunneled | | | | Method | Tunneled Authentication Method | |<===================>|<================================>| | | | | | |..................................| | | | | | | Attacker |<--- MSK keys --| | | Connected as | | | | Peer | | | |<--------------->| | A classic tunnel attack where the attacker inserts an extra tunnel between the attacker and EAP server. 
Figure 1: Classic Tunnel Attack There are two mitigation strategies for this classic attack. First, security policy can be set up so that the same method is not offered by a server both inside and outside a tunnel. Second, a technical solution is available if the inner method is sufficiently strong: cryptographic binding is a security property of a tunnel method under which the EAP server confirms that the inner and outer parties are the same. Cryptographic binding is typically implemented by requiring the outer party (the other end of the tunnel) to prove knowledge of the Master Session Key (MSK) of the inner method. This proves to the server that the inner and outer exchanges are with the same party. RFC 3748's definition of cryptographic binding allows for an optional proof to the peer that the inner and outer exchanges are with the same party. As discussed below, proving knowledge of the MSK is insufficient to prove to the peer that the inner and outer exchanges are with the same party. Hartman, et al. Informational [Page 4] RFC 7029 Mutual Crypto Binding October 2013 1.1. Keywords for Requirement Levels The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119]. 2. An Example Problem The GSS-EAP (Generic Security Service Extensible Authentication Protocol) mechanism [GSS-EAP] provides application authentication using EAP. A peer could reasonably trust some applications significantly more than others. If the peer sends confidential information to some applications, an attacker may gain significant value from convincing the peer that the attacker is the trusted application. Channel bindings are used to provide information to the peer about the application service to which the peer connects. Prior to channel bindings, peers could not distinguish one Network Access Service (NAS) from another, so attacks where one NAS impersonated another were out of scope. However, channel bindings add this capability and thus expands the threat model of EAP. The GSS-EAP mechanism requires distinguishing one service from another. Consider the following example. A relatively untrusted service, say a print server, has been compromised. A user is attempting to connect to a trusted service such as a financial application. Both the print server and the financial application use an Authentication, Authorization, and Accounting protocol (AAA) to transport EAP authentication back to the user's EAP server. The print server mounts a man-in-the-middle attack on the user's connection to the financial application and claims to be the application. The print server offers a tunnel method towards the peer. The print server extracts the inner method from the tunnel and sends it on towards the AAA server. Channel binding happens at the tunnel method though. So, the print server is happy to confirm that it is the financial application. After the inner method completes, the EAP server sends the MSK to the print server over the AAA protocol. If only the MSK is needed for cryptographic binding, then the print server can successfully perform cryptographic binding and may be able to impersonate the financial application to the peer. Hartman, et al. 
Informational [Page 5] RFC 7029 Mutual Crypto Binding October 2013 Peer Attacker Service AAA Server | | | | | | | | |Peer Initiates Connection to a Service | | |---------------------+----X----------->| | | (Intercepted by an Attacker) | | | | | | | | | | | Tunnel Establishment| | | |<------------------->| | | |.....................| | | | Tunnel | | | | | | | Tunneled | Non-Tunneled | | Method | Authentication Method | |<===================>|<================================>| | |(Same as Inner Method from Tunnel)| |.....................| | | | | | | | Peer | | | | Connected to |<----------------------MSK keys --| | Attacker | | | |<------------------->| | | | | | | A modified tunnel attack when an extra server rather than extra client is inserted. Figure 2: Channel Binding Requires More than Cryptographic Binding This attack is not specific to GSS-EAP. The channel bindings specification [RFC6677] describes a number of situations where channel bindings are important for network access. In these situations, one NAS could impersonate another by using a similar attack. 3. The Server Insertion Attack The previous section described an example of the server insertion attack. In this attack, one party adds a layer of tunneling such that from the perspective of the EAP peer, there are more methods than from the perspective of the EAP server. This attack is most beneficial when the party inserting the extra tunnel is a legitimate NAS, so mitigations need to be able to prevent a legitimate NAS from inappropriately adding a layer of tunneling. Some deployments utilize an intentional intermediary that adds an extra level of EAP tunneling between the peer and the EAP server; see Section 3.3 for a discussion. Hartman, et al. Informational [Page 6] RFC 7029 Mutual Crypto Binding October 2013 3.1. Conditions for the Attack For an inserted server attack to have value, the attacker needs to gain an advantage from its attack. An attacker could gain an advantage in the following ways: o The attacker can send information to a peer that the peer would trust from the EAP server but not the attacker. Examples of this include channel-binding responses. o The peer sends information to the attacker that was intended for the EAP server. For example, the inner user identity may disclose privacy-sensitive information. The channel-binding request may disclose what service the peer wishes to connect to. o The attacker may influence session parameters. For example, if the attacker can influence the MSK, then the attacker may be able to read or influence session traffic and mount an attack on the confidentiality or integrity of the resulting session. o An attacker may impact availability of the session. In practice though, an attacker that can mount a server insertion attack is likely to be able to impact availability in other ways. For this attack to be possible, the following conditions need to hold: 1. The attacker needs to be able to establish a tunnel method with the peer over which the peer will authenticate. 2. The attacker needs to be able to respond to any inner authentication. For example, an attacker who is a legitimate NAS can forward the inner authentication over AAA towards the EAP server. Note that the inner authentication may not be EAP. 3. Typically, the attacker needs to be able to complete the tunnel method after inner authentication. This may not be necessary if the attacker is gaining advantage from information sent by the peer over the tunnel. 4. 
In some cases, the attacker may need to complete a Secure Association Protocol (SAP) or otherwise demonstrate knowledge of the MSK after the tunnel method successfully completes. Attackers who are legitimate NASes are the primary focus of this memo. Previous work has provided mitigation against attackers who are not NASes; these mitigations are briefly discussed. Hartman, et al. Informational [Page 7] RFC 7029 Mutual Crypto Binding October 2013 3.2. Mitigation Strategies 3.2.1. Server Authentication If the peer confirms the identity of the party that the tunnel method is established with, the peer prevents the first condition (attacker establishing a tunnel method). Many tunnel methods rely on Transport Layer Security (TLS) [RFC5281] [TEAP]. The specifications for these methods tend to encourage or mandate certificate checking. If the TLS certificate is validated back to a trust anchor and the identity of the tunnel method server confirmed, then the first attack condition cannot be met. Many challenges make server authentication difficult. There is not an obvious name by which to identify a tunnel method server. It is not obvious where in the tunnel server certificate the name should be found. One particularly problematic practice is to use a certificate that names the host on which the tunnel server runs. Given such a name, it is very difficult for a peer to understand whether that server is intended to be a tunnel method server for the realm. It's not clear what trust anchors to use for tunnel servers. Using commercial Certificate Authorities (CAs) is probably undesirable because tunnel servers often operate in a closed community and are often provisioned with certificates issued by that community. Using commercial CAs can be particularly problematic with peers that support hostnames in certificates. Then anyone who can obtain a certificate for any host in the domain being contacted can impersonate a tunnel server. These difficulties lead to poor deployment of good certificate validation. Many peers make it easy to disable certificate validation. Other peers validate back to trust anchors but do not check names of certificates. What name types are supported and what configuration is easy to perform depend significantly on the peer in question. Specifications also make the problem worse. For example, [RFC5281] indicates that the only impact of failing to perform certificate validation is that the inner method can be attacked. Administrators and implementors believing this claim may believe that protection from passive attacks is sufficient. In addition, some deployments such as provisioning or strong inner methods are designed to work without certificate validation. Section 3.9 of the tunnel requirements document [RFC6678] discusses this requirement. Hartman, et al. Informational [Page 8] RFC 7029 Mutual Crypto Binding October 2013 3.2.2. Server Policy Server policy can potentially prevent the second condition (attacker being able to respond to inner authentication) from being possible. If the server only performs a particular inner authentication within a tunnel, then the attacker cannot gain a response to the inner authentication without there being such a tunnel. The attacker may be able to add a second layer of tunnels; see Figure 3. The inner tunnel may limit the attacker's capabilities; for example, if channel binding is performed over tunnel t2 in the figure, then an attacker cannot observe or influence it. 
Peer Attacker Service AAA Server | | | | | | | | |Peer Initiates Connection to a Service | | |---------------------+----X----------->| | | (Intercepted by an Attacker) | | | | | | | | | | | Tunnel Establishment| | | |<------------------->| | | |.....................| | | | Tunnel t1 | | | | | | | |.......................................... .............| | Tunnel t2 | | | | | | Inner Method | |<======================================================>| | | |.......................................... .............| | | | | |.....................| | | | | | | | Peer | | | | Connected to |<----------------------MSK keys --| | Attacker | | | |<------------------->| | | | | | | A tunnel t1 from the peer to the attacker contains a tunnel t2 from the peer to the home EAP server. Inside tunnel t2 is an inner authentication. Figure 3: Multiple Layered Tunnels Hartman, et al. Informational [Page 9] RFC 7029 Mutual Crypto Binding October 2013 Peer policy can be combined with this server policy to help prevent conditions 1 (attacker can establish a tunnel the peer will use) and 2 (attacker can respond to inner authentication). If the peer requires exactly one tunnel of a particular type and the EAP server only performs inner authentication over a tunnel of this type, then the attacker cannot establish tunnel t1 in the figure above. Configuring this peer policy may be more challenging than configuring policy on the EAP server. An attacker may be able to mount a more traditional man-in-the-middle attack in this instance; see Figure 4. This policy on the peer and EAP server combined with a tunnel method that supports cryptographic binding will allow the EAP server to detect the attacker. This means the attacker cannot act as a legitimate NAS and, in particular, does not obtain the MSK. So, if the tunnel between the attacker and peer also requires cryptographic binding and if the cryptographic binding requires both the EAP server and peer to prove knowledge of the inner MSK, then the authentication will fail. If cryptographic binding is not performed, then this attack may succeed. Hartman, et al. Informational [Page 10] RFC 7029 Mutual Crypto Binding October 2013 Peer Attacker Service AAA Server | | | | | | | | |Peer Initiates Connection to a Service | | |---------------------+----X----------->| | | (Intercepted by an Attacker) | | | | | | | | | | | Tunnel Establishment| Tunnel Establishment | |<------------------->|<-------------------------------->| |.....................|.................... .............| | Tunnel t1 | Tunnel t2 | | | | | Tunneled | | | Method | Tunneled Method | |<===================>|<================================>| | | | |.....................|..................................| | | | | | Peer | | | | Connected to | | | | Attacker | | | |<------------------->| | | | | | | A tunnel t1 extends from the peer to the attacker. A tunnel t2 extends from the attacker to the home EAP server. An inner EAP authentication is forwarded unmodified by the attacker from tunnel t1 to tunnel t2. The attacker can observe this inner authentication. Figure 4: A Traditional Man-in-the-Middle Attack Cryptographic binding is only a valuable component of a defense if the inner authentication is a key-deriving EAP method. Most tunnel methods also support non-EAP inner authentication such as Microsoft CHAP version 2 [RFC2759]. This may undermine cryptographic binding in a number of ways. 
An attacker may be able to convert an EAP method into a compatible non-EAP form of the same credential to suppress cryptographic binding. In addition, an inner authentication may be available through an entirely different means. For example, a Lightweight Directory Access Protocol [RFC4510] or other directory server may provide an attacker a way to get challenges and provide responses for an authentication mechanism entirely outside of the AAA/EAP context. An attacker with this capability may be able to get around server policy requiring an inner authentication be used only in a given type of tunnel. Hartman, et al. Informational [Page 11] RFC 7029 Mutual Crypto Binding October 2013 To recap, the following policy conditions appear sufficient to prevent a server insertion attack: 1. Peer and EAP server require a particular inner EAP method used within a particular tunnel method. 2. The inner EAP method's authentication is only available within the tunnel and through no other means including non-EAP means. 3. The inner EAP method produces a key. 4. The tunnel method uses cryptographic binding and the peer requires the other end of the tunnel to prove knowledge of the inner MSK. 3.2.3. Existing Cryptographic Binding The most advanced examples of cryptographic binding today work at two levels. First, the server and peer prove to each other knowledge of the inner MSK. Then, the inner MSK is combined with some outer key material to form the tunnel's EAP keys. This is sufficient to detect an inserted server or peer provided that the attacker does not learn the inner MSK. This seems sufficient to defend against attackers who cannot act as a legitimate NAS. The definition of cryptographic binding in [RFC3748] does not require these steps. To meet that definition, it would be sufficient for a peer to prove knowledge of the inner key to the EAP server. This would open some additional attacks. For example, by indicating success, an attacker might be able to mask a cryptographic binding failure. The peer is unlikely to be able to detect the failure, especially if only the tunnel key material is used for the final keys. As discussed in the previous section, cryptographic binding is only effective when the inner method is EAP. 3.2.4. Introducing EMSK-Based Cryptographic Binding Cryptographic binding can be strengthened when the inner EAP method supports an Extended Master Session Key (EMSK). The EMSK is never disclosed to any party other than the EAP server or peer, so even a legitimate NAS cannot learn the EMSK. So, if the same techniques currently applied to the inner MSK are applied to the inner EMSK, then condition 3 (completing tunnel authentication) will not hold because the attacker cannot complete this new form of cryptographic binding. This does not prevent the attacker from learning Hartman, et al. Informational [Page 12] RFC 7029 Mutual Crypto Binding October 2013 confidential information such as a channel-binding request sent over the tunnel prior to cryptographic binding. Obviously, as with all forms of cryptographic binding, cryptographic binding only works for key-deriving inner EAP methods. Also, some deployments (see Section 3.3) insert intermediates between the peer and the EAP server. EMSK-based cryptographic binding is incompatible with these deployments because the intermediate cannot learn the EMSK. Formally, EMSK-based cryptographic binding is a security claim for EAP tunnel methods that holds when: 1. 
The peer proves to the server that the peer participating in any inner method is the same as the peer for the tunnel method. 2. The server proves to the peer that the server for any inner method is the same as the server for the tunnel method. 3. The MSK and EMSK for the tunnel depend on the MSK and EMSK of inner methods. 4. The peer MUST be able to force the authentication to fail if the peer is unable to confirm the identity of the server. 5. Proofs offered need to be secure even against attackers who know the inner method MSK. If EMSK-based cryptographic binding is not an optional facility, it provides a strong defense against server insertion attacks and other tunnel man-in-the-middle (MITM) attacks for inner methods that provide an EMSK. The strength of the defense is dependent on the strength of the inner method. EMSK-based cryptographic binding MAY be provided as an optional facility. The value of EMSK-based cryptographic binding is reduced somewhat if it is an optional feature. It permits configurations where a peer uses other means to authenticate the server if the peer has sufficient information configured to validate the certificate and identity of an EAP server while using EMSK-based cryptographic binding for deployments where that is possible. If EMSK-based cryptographic binding is an optional facility, the negotiation of whether to use it MUST be protected by the inner MSK or EMSK. Typically, the MSK will be used because the primary advantage of making EMSK-based cryptographic binding an optional facility is to permit intermediates who know only the MSK to decline to use EMSK-based cryptographic binding. The peer MUST have an Hartman, et al. Informational [Page 13] RFC 7029 Mutual Crypto Binding October 2013 opportunity to fail the authentication after the server declines to use EMSK-based cryptographic binding. 3.2.5. Mix Key into Long-Term Credentials Another defense against tunnel MITM attacks, potentially including server insertion attacks, is to use a different credential for tunneled methods from other authentications. This may prevent the second condition (attacker being able to respond to inner authentication) from taking place. For example, if key material from the tunnel is mixed into a shared secret or password that is the basis of the inner authentication, then the second condition will not hold unless the attacker already knows this shared secret. The advantage of this approach is that it seems to be the only way to strengthen non-EAP inner authentications within a tunnel. There are several disadvantages. Choosing a function to mix the tunnel key material into the inner authentication will be very dependent on the inner authentication. In addition, this appears to involve a layering violation. However, exploring the possibility of providing a solution like this seems important because it can function for inner authentications where no other approach will work. 3.3. Intended Intermediates Some deployments introduce a tunnel server separate from the EAP server; see [RFC5281] for an example of this style of deployment. The tunnel server is between the NAS and the EAP server. The only difference between such an intermediate and an attacker is that the intermediate provides some function valuable to the peer or EAP server and that the intermediate is trusted by the peer. 
If peers are configured with the necessary information to validate certificates of these intermediates and to confirm their identity, then tunnel MITM and inserted server attacks can be defended against. The intermediates need to be trusted with regard to channel binding and other services that the peer depends on. Support for trusted intermediates is not a requirement according to the tunnel method requirements. It seems reasonable to treat trusted intermediates as a special case if they are supported and to focus on the security of the case where there are not intermediates in the tunnel as the common case. Hartman, et al. Informational [Page 14] RFC 7029 Mutual Crypto Binding October 2013 4. Recommendations 4.1. Mutual Cryptographic Binding The Tunnel EAP method [TEAP] should gain support for EMSK-based cryptographic binding. As channel-binding support is added to existing EAP methods, EMSK- based cryptographic binding or some other form of cryptographic binding that protects against server insertion should also be added to these methods. Mutual cryptographic binding may also be valuable when other services are added to EAP methods that may require a peer trust an EAP server. 4.2. State Tracking Today, mutual authentication in EAP is thought of as a security claim about a method. However, in practice, it's an attribute of a particular exchange. Mutual authentication can be obtained via checking certificates, through mutual cryptographic binding, or in very controlled cases through carefully crafted peer and server policy combined with existing cryptographic binding. Using services like channel binding that involve the peer trusting the EAP server should require mutual authentication be present in the session. To accomplish this, implementations including channel binding or other peer services MUST track whether mutual authentication has happened. They SHOULD default to not permitting these peer services unless mutual authentication has happened. They SHOULD support a configuration where the peer fails to authenticate unless mutual authentication takes place. Discussion of whether this configuration should be recommended as a default is required. The Tunnel EAP method [TEAP] should permit peers to force authentication failure if they are unable to perform mutual authentication. The protocol should permit this to be deferred until after mutual cryptographic binding is considered. Services such as channel binding should be deferred until after cryptographic binding or mutual cryptographic binding. An additional complication arises when a tunnel method authenticates multiple parties such as authenticating both the peer machine and the peer user to the EAP server. Depending on how mutual authentication is achieved, only some of these parties may have confidence in it. For example, if a strong shared secret is used to mutually authenticate the user and the EAP server, the machine may not have confidence that the EAP server is the authenticated party if the Hartman, et al. Informational [Page 15] RFC 7029 Mutual Crypto Binding October 2013 machine cannot trust the user not to disclose the shared secret to an attacker. In these cases, the parties that have achieved mutual authentication need to be considered when evaluating whether to use peer services. 4.3. Certificate Naming Work is required to promote interoperable deployment of server certificate validation by peers. A standard way to name EAP servers is required. Recommendations for what name forms peers should implement is required. 4.4. 
Inner Mixing More consideration of the proposal to mix some key material into inner authentications is desired. Currently, the proposal is under- defined and fairly invasive. Are there versions of this proposal that would be valuable? Is there a way to view it as something more abstract so that it does not involve a combinatorial explosion as a result of considering specific tunnels and inner methods? 5. Survey of Tunnel Methods 5.1. Tunnel EAP (TEAP) Method The Tunnel EAP method [TEAP] provides several features designed to limit man-in-the-middle vulnerabilities and provide a safe platform for peer services. TEAP implementations support checking the Network Access Identifier (NAI) realm portion against a DNS subjectAlternativeName in the certificate of the TEAP server. TEAP supports EMSK-based cryptographic binding as a way to achieve mutual cryptographic binding. TEAP also supports MSK-based cryptographic binding for cases where the EMSK is not available; this cryptographic binding does not provide sufficient assurance for peer services. TEAP provides recommendations on conditions that need to be met prior to using peer services. These recommendations explicitly address when the MSK-based cryptographic binding is sufficient and when EMSK-based cryptographic binding is required. TEAP meets the recommendations for implementations outlined in this memo. Hartman, et al. Informational [Page 16] RFC 7029 Mutual Crypto Binding October 2013 5.2. Flexible Authentication via Secure Tunneling (FAST) EAP-FAST [RFC4851] provides MSK-based cryptographic binding. EAP-FAST requires that server certificates be validated. However, no guidance is given on how servers are named, so the specification does not provide enough guidance to interoperably enforce this requirement. EAP-FAST does not support channel binding or other peer services, although the protocol is extensible and TLVs could be defined for peer services. If the certificates are actually validated and names checked, then EAP-FAST would provide security guarantees sufficient to use these peer services. However, the cryptographic binding in EAP-FAST is not strong enough to secure peer services if the server certificate is not validated and name checked. 5.3. EAP Tunneled Transport Layer Security (EAP-TTLS) The EAP Tunneled Transport Layer Security Version 0 (EAP-TTLS) [RFC5281] does not support cryptographic binding. It also does not support peer services such as channel binding although they could be added using extensible AVPs. EAP-TTLS recommends that implementations SHOULD validate certificates but gives no guidance on how to handle naming. Even if certificates are validated, EAP-TTLS is not generally suited to peer services. As an example, EAP-TTLS does not include protected result indication. So, an unprotected EAP success packet can end the authentication. In addition, it is difficult for a peer to request services such as channel binding because the server ends the authentication as soon as authentication is successful. A variety of extensions, including EAP-TTLS version 1, improve some of these concerns. Specification and implementation issues complicate analysis of these extensions. As an example, most implementations can be tricked into using EAP-TTLS version 0. 6. Security Considerations This memo examines the security considerations of providing new classes of service within EAP methods. Traditionally, the primary focus of EAP is authenticating the peer to the network. 
However, as the peer places trust in the EAP server, mutual authentication becomes more important. This memo examines the security of mutual authentication for EAP tunnel methods.

Hartman, et al. Informational [Page 17] RFC 7029 Mutual Crypto Binding October 2013

7. Acknowledgements

The authors would like to thank Alan DeKok for helping to explore these attacks. Alan focused the discussion on the importance of inner authentications that are not EAP and proposed mixing in key material as a way to resolve these authentications. Jari Arkko provided a review of the attack and valuable context on past efforts in developing cryptographic binding.

Sam Hartman's and Margaret Wasserman's work on this memo is funded by Huawei.

8. References

8.1. Normative References

[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.

[RFC3748] Aboba, B., Blunk, L., Vollbrecht, J., Carlson, J., and H. Levkowetz, "Extensible Authentication Protocol (EAP)", RFC 3748, June 2004.

8.2. Informative References

[GSS-EAP] Hartman, S. and J. Howlett, "A GSS-API Mechanism for the Extensible Authentication Protocol", Work in Progress, August 2012.

[PT-EAP] Cam-Winget, N. and P. Sangster, "PT-EAP: Posture Transport (PT) Protocol For EAP Tunnel Methods", Work in Progress, March 2013.

[RFC2759] Zorn, G., "Microsoft PPP CHAP Extensions, Version 2", RFC 2759, January 2000.

[RFC4510] Zeilenga, K., "Lightweight Directory Access Protocol (LDAP): Technical Specification Road Map", RFC 4510, June 2006.

[RFC4851] Cam-Winget, N., McGrew, D., Salowey, J., and H. Zhou, "The Flexible Authentication via Secure Tunneling Extensible Authentication Protocol Method (EAP-FAST)", RFC 4851, May 2007.

Hartman, et al. Informational [Page 18] RFC 7029 Mutual Crypto Binding October 2013

[RFC5281] Funk, P. and S. Blake-Wilson, "Extensible Authentication Protocol Tunneled Transport Layer Security Authenticated Protocol Version 0 (EAP-TTLSv0)", RFC 5281, August 2008.

[RFC6677] Hartman, S., Clancy, T., and K. Hoeper, "Channel-Binding Support for Extensible Authentication Protocol (EAP) Methods", RFC 6677, July 2012.

[RFC6678] Hoeper, K., Hanna, S., Zhou, H., and J. Salowey, "Requirements for a Tunnel-Based Extensible Authentication Protocol (EAP) Method", RFC 6678, July 2012.

[TEAP] Zhou, H., Cam-Winget, N., Salowey, J., and S. Hanna, "Tunnel EAP Method (TEAP) Version 1", Work in Progress, September 2013.

[TUNNEL-MITM] Asokan, N., Niemi, V., and K. Nyberg, "Man-in-the-Middle in Tunnelled Authentication Protocols", Cryptology ePrint Archive: Report 2002/163, November 2002.

Authors' Addresses

Sam Hartman
Painless Security
EMail: [email protected]

Margaret Wasserman
Painless Security
EMail: [email protected]
URI: http://www.painless-security.com/

Dacheng Zhang
Huawei
EMail: [email protected]

Hartman, et al. Informational [Page 19]
Take the 2-minute tour × TeX - LaTeX Stack Exchange is a question and answer site for users of TeX, LaTeX, ConTeXt, and related typesetting systems. It's 100% free, no registration required. I have a corollary with a number of parts whose parts need to be referenced individually later on, so something like: \begin{cor}\label{cor} \begin{enumerate} \item Part one \label{partone} \item Part two \label{parttwo} \end{enumerate} \end{cor} Ideally, I want \ref{partone} to produce 2.1 (supposing it were Corollary 2 for the sake of argument), but, of course, \ref{partone} gives 1, as it's referencing the enumi counter. Is there a way to get \ref to somehow combine the enumi and cor counters? share|improve this question add comment 3 Answers up vote 5 down vote accepted This can also be achieved by using the enumitem package: \documentclass{article} \usepackage{amsthm} \usepackage{enumitem} \newtheorem{cor}{Corolary} \begin{document} \begin{cor}\label{cor} \begin{enumerate}[label={\thecor.\arabic*}] \item Part one \label{partone} \item Part two \label{parttwo} \end{enumerate} \end{cor} As we see in part~\ref{partone} \end{document} share|improve this answer   Shouldn't it be [ref={\thecor.\arabic*}]? I understand that holyland only wants the reference to be changed, not the label of the item itself. –  Michael Ummels Mar 15 '11 at 0:00   @Michael Ummels: if the change should only affect the reference, then yes, [ref={\thecor.\arabic*}] would do the job; however, that would produce an odd result, since formally you would have a reference, for example, 1.2 to an object numbered as simply 2. For the sake of consistency I suggested the change to both the label and the reference. –  Gonzalo Medina Mar 15 '11 at 0:32   I agree. Maybe, it is better to put the item number in parentheses, so the label would just say (1) and the reference would be e.g. to Theorem 2 (1). This could be done by saying [label={(\arabic*)},ref={\thecor~(\arabic*)}], I suppose. –  Michael Ummels Mar 15 '11 at 12:30   Michael's second suggestion is what I was after. We'll see what people say about consistency between labels and references (if anyone notices). My feeling is that having the labels appear as 2.1.1, 2.1.2, etc looks really cluttered, especially with 'Corollary 2.1' right above. –  hoyland Mar 15 '11 at 23:54 add comment You can use the chngcntr package to make the enumi counter depend on your cor counter. If you do this within the corollary environment, it won't affect other enumerate environments. \documentclass{article} \usepackage{amsthm} \newtheorem{cor}{Corollary} \usepackage{chngcntr} \begin{document} \begin{cor}{My Cor}\label{mycor} \counterwithin{enumi}{cor} \begin{enumerate} \item An item \label{part1} \item Another one \label{part2} \end{enumerate} \end{cor} In Part~\ref{part1} of the the corollary\ldots \begin{enumerate} \item An item \label{outside} \end{enumerate} This is a reference to Item~\ref{outside} that is outside the corollary. \end{document} Even simpler, and without using the chngcntr package, you can simply redefine \theenumi within the cor environment: \begin{cor}{My Cor}\label{mycor} \renewcommand{\theenumi}{\thecor.\arabic{enumi}} \begin{enumerate} \item An item \label{part1} \item Another one \label{part2} \end{enumerate} \end{cor} share|improve this answer 2   Whoever downvoted this answer: is there something wrong with the solution that I missed? –  Alan Munn Mar 15 '11 at 1:09   It does get the job done, as far as I can tell. 
– It does get the job done, as far as I can tell. I'm inclined to think Gonzalo's solution is more elegant, or at least allows for more flexibility. (What I was actually after was Michael's comment to Gonzalo's answer.) – hoyland Mar 15 '11 at 23:48
– @hoyland I see. But since your question didn't actually state that... Anyway, I don't particularly care about votes, but you may want to take a look at this for some other ideas on downvoting: Down Vote Etiquette – Alan Munn Mar 16 '11 at 1:37
– Ah, I wasn't the downvote; I was just speculating. I have no idea what the objection could have been, other than someone thinking it didn't answer the question, since people seemed to read it different ways. – hoyland Mar 16 '11 at 4:18

Third answer: I was going to ask the exact same question. I found the solutions given here too complicated; I especially didn't want to get new packages for such a simple job. So I kept on reading, and I am now completely satisfied with this solution:

\hyperref[partone]{\ref*{cor}.\ref*{partone}}
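For readers who want to try that last approach, here is a minimal compilable sketch (the document class, theorem setup and label names are illustrative assumptions; hyperref must be loaded, since it provides both \hyperref and the starred \ref* that suppresses the extra link):

\documentclass{article}
\usepackage{amsthm}
\usepackage{hyperref} % provides \hyperref and \ref*

\newtheorem{cor}{Corollary}

\begin{document}

\begin{cor}\label{cor}
\begin{enumerate}
\item Part one \label{partone}
\item Part two \label{parttwo}
\end{enumerate}
\end{cor}

% Prints e.g. "1.1" as a single clickable link to part one:
See \hyperref[partone]{\ref*{cor}.\ref*{partone}}.

\end{document}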
Lambda Expressions

Lambda expressions are ideally used when we need to do something simple and are more interested in getting the job done quickly than in formally naming the function. Lambda expressions are also known as anonymous functions.

Lambda expressions in Python are a short way to declare small, anonymous functions (it is not necessary to provide a name for a lambda function). Lambda functions behave just like regular functions declared with the def keyword. They come in handy when you want to define a small function in a concise way. They can contain only one expression, so they are not best suited for functions with control-flow statements.

Syntax of a Lambda Function

lambda arguments: expression

Lambda functions can have any number of arguments but only one expression.

Example code

# Lambda function to calculate square of a number
square = lambda x: x ** 2
print(square(3)) # Output: 9

# Traditional function to calculate square of a number
def square1(num):
    return num ** 2
print(square1(5)) # Output: 25

In the above lambda example, lambda x: x ** 2 yields an anonymous function object which can be associated with any name. So we associated the function object with square; from now on we can call the square object like any traditional function, for example square(10).

Examples of lambda functions

Beginner

lambda_func = lambda x: x**2 # Function that takes an integer and returns its square
lambda_func(3) # Returns 9

Intermediate

lambda_func = lambda x: True if x**2 >= 10 else False
lambda_func(3) # Returns False
lambda_func(4) # Returns True

Complex

my_dict = {"A": 1, "B": 2, "C": 3}
sorted(my_dict, key=lambda x: my_dict[x] % 3) # Returns ['C', 'A', 'B']

Use-case

Let's say you want to filter out odd numbers from a list. You could use a for loop:

my_list = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
filtered = []
for num in my_list:
    if num % 2 != 0:
        filtered.append(num)
print(filtered) # Python 2: print filtered
# [1, 3, 5, 7, 9]

Or you could write this as a one-liner with a list comprehension:

filtered = [x for x in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] if x % 2 != 0]

But you might be tempted to use the built-in filter function. Why? The first example is a bit too verbose, and the one-liner can be harder to understand. filter offers the best of both worlds. What is more, the built-in functions are usually faster.

my_list = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
filtered = filter(lambda x: x % 2 != 0, my_list)
list(filtered) # [1, 3, 5, 7, 9]

NOTE: in Python 3 the built-in functions return lazy iterator objects, so you have to call list. In Python 2, on the other hand, they return a list, tuple or string.
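To round this out, here is one more common pattern (the data here is made up for illustration): passing a lambda as the key function to sorted(), which is often cleaner than defining a named helper for a one-off sort, plus a two-argument lambda used with functools.reduce:

# Sort a list of (name, score) pairs by score, descending.
scores = [("alice", 82), ("bob", 95), ("carol", 78)]

ranked = sorted(scores, key=lambda pair: pair[1], reverse=True)
print(ranked)  # [('bob', 95), ('alice', 82), ('carol', 78)]

# Lambdas may take several arguments, e.g. an accumulator and an item:
from functools import reduce
total = reduce(lambda acc, pair: acc + pair[1], scores, 0)
print(total)  # 255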
Possible Duplicate: Python dynamic inheritance: How to choose base class upon instance creation?

I want a class to choose a base class on the fly based on a parameter in the init method.

class A():
    #...

class B():
    #...

class C():
    def __init__(self, base_type):
        if parent_type == 'A':
            #TODO: make C derive A
        else:
            #TODO: make C derive B

A and B are library classes that derive from the same base class. The answers to a similar question seemed too ugly.

Comments:
– Why would you do something like this? Although many things are possible in a dynamic language like Python, this does not seem a good way to go. With some more context, we might be able to better understand your problem and offer a solution. – cyroxx
– Do you really need to use inheritance? Could you use composition instead (e.g. have each C instance own an instance of either A or B)? Check out the second, highly rated answer to the question you linked. – Blckknght
– If you insist on not using a factory pattern, you may want to investigate metaclasses. docs.python.org/py3k/reference/datamodel.html#metaclasses – dsh
– I just added an important detail: A and B are library classes that derive from the same base class. – travis1097
– The similar question is more or less exactly the same, and provides all the answers you're going to get here... if the answer is 'ugly', your question should address what you find ugly about those solutions, not just 'give me new answers to the same question'. – Hamish

3 Answers

Accepted answer: I assume you mean base_type instead of parent_type. But the following should work:

class C():
    def __init__(self, base_type):
        if base_type == 'A':
            self.__class__ = A
        else:
            self.__class__ = B

Some more details on this approach can be found here: http://harkablog.com/dynamic-state-machines.html

Second answer: I completely agree with cyroxx; you should give us some context to your problem. As it stands now, __init__ is called after the instance of the class is created, to initialize its members. Too late to change the inheritance. Would a simple class factory be enough for you?

class MyA(ABMixin, A): pass

class MyB(ABMixin, B): pass

def factory(class_type):
    if class_type == 'A':
        return MyA()
    else:
        return MyB()

I suggest reading this SO answer about dynamic class creation in Python.

Third answer: While you can change the class in __init__, it's more appropriate to do it in __new__. The former is for initialising, the latter for construction:

class A(object): pass

class B(object): pass

class C(object):
    def __new__(cls, base_type, *args, **kwargs):
        return super(C, cls).__new__(base_type, *args, **kwargs)

assert isinstance( C(A), A )
assert isinstance( C(B), B )

With __init__, you're creating an instance of C and then modifying its type. With __new__, you're never creating an instance of C, just the required base_type.
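Following up on Blckknght's comment above, here is a sketch of the composition-based alternative (class names and methods are illustrative): instead of changing C's base class at runtime, C simply owns an instance of A or B and delegates to it:

class A:
    def greet(self):
        return "A"

class B:
    def greet(self):
        return "B"

class C:
    """Delegates to an owned A or B instead of inheriting from one."""
    def __init__(self, base_type):
        self._impl = A() if base_type == 'A' else B()

    def greet(self):
        return self._impl.greet()

print(C('A').greet())  # A
print(C('B').greet())  # B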
This has been asked here before, but that question is closed and the few replies do not provide sufficient information.

I am trying to use a contrib module (sitewide_alert) which provides its own custom entity. I am trying to figure out how to patch this module to make it fieldable (and how to do this in general). In other posts I have seen suggestions to add the following to the ContentEntityType annotation:

- fieldable = TRUE;
- bundle_entity_type = ??
- field_ui_base_route -> different suggestions, but this already exists for this module, pointing to the module's config form.

But none of those seemed to fix this. The full annotation for the entity is:

 * @ContentEntityType(
 *   id = "sitewide_alert",
 *   label = @Translation("Sitewide Alert"),
 *   label_plural = @Translation("Sitewide Alerts"),
 *   label_collection = @Translation("Sitewide Alerts"),
 *   handlers = {
 *     "storage" = "Drupal\sitewide_alert\SitewideAlertStorage",
 *     "view_builder" = "Drupal\Core\Entity\EntityViewBuilder",
 *     "list_builder" = "Drupal\sitewide_alert\SitewideAlertListBuilder",
 *     "views_data" = "Drupal\sitewide_alert\Entity\SitewideAlertViewsData",
 *     "translation" = "Drupal\sitewide_alert\SitewideAlertTranslationHandler",
 *
 *     "form" = {
 *       "default" = "Drupal\sitewide_alert\Form\SitewideAlertForm",
 *       "add" = "Drupal\sitewide_alert\Form\SitewideAlertForm",
 *       "edit" = "Drupal\sitewide_alert\Form\SitewideAlertForm",
 *       "delete" = "Drupal\sitewide_alert\Form\SitewideAlertDeleteForm",
 *     },
 *     "route_provider" = {
 *       "html" = "Drupal\sitewide_alert\SitewideAlertHtmlRouteProvider",
 *     },
 *     "access" = "Drupal\sitewide_alert\SitewideAlertAccessControlHandler",
 *   },
 *   base_table = "sitewide_alert",
 *   data_table = "sitewide_alert_field_data",
 *   revision_table = "sitewide_alert_revision",
 *   revision_data_table = "sitewide_alert_field_revision",
 *   show_revision_ui = TRUE,
 *   translatable = TRUE,
 *   admin_permission = "administer sitewide alert entities",
 *   entity_keys = {
 *     "id" = "id",
 *     "revision" = "vid",
 *     "label" = "name",
 *     "uuid" = "uuid",
 *     "uid" = "user_id",
 *     "langcode" = "langcode",
 *     "published" = "status",
 *   },
 *   revision_metadata_keys = {
 *     "revision_user" = "revision_user",
 *     "revision_created" = "revision_created",
 *     "revision_log_message" = "revision_log",
 *   },
 *   links = {
 *     "canonical" = "/admin/content/sitewide_alert/{sitewide_alert}",
 *     "add-form" = "/admin/content/sitewide_alert/add",
 *     "edit-form" = "/admin/content/sitewide_alert/{sitewide_alert}/edit",
 *     "delete-form" = "/admin/content/sitewide_alert/{sitewide_alert}/delete",
 *     "version-history" = "/admin/content/sitewide_alert/{sitewide_alert}/revisions",
 *     "revision" = "/admin/content/sitewide_alert/{sitewide_alert}/revisions/{sitewide_alert_revision}/view",
 *     "revision_revert" = "/admin/content/sitewide_alert/{sitewide_alert}/revisions/{sitewide_alert_revision}/revert",
 *     "revision_delete" = "/admin/content/sitewide_alert/{sitewide_alert}/revisions/{sitewide_alert_revision}/delete",
 *     "translation_revert" = "/admin/content/sitewide_alert/{sitewide_alert}/revisions/{sitewide_alert_revision}/revert/{langcode}",
 *     "collection" = "/admin/content/sitewide_alert",
 *   },
 *   field_ui_base_route = "sitewide_alert.settings",
 *   constraints = {
 *     "ScheduledDateProvided" = {}
 *   }
 * )
 */

In my trial/error approach I did notice I created the Manage Fields/Display/etc. UI for each entity I had previously created. So I think this is possibly related to not having a bundle defined? My use case doesn't require making new bundles (similar to the User entity).
Despite the other posted answers, my guess is that modifying the annotation is not enough to add this functionality.

1 Answer

There is nothing wrong with the entity type annotation. After you have removed this code intentionally disabling the field UI routes https://git.drupalcode.org/project/sitewide_alert/-/blob/8.x-1.6/src/Routing/RouteSubscriber.php the entity should be fieldable at admin/content/sitewide_alert/settings/fields.

This is a very cool module. After playing around a little bit I've found out the module has two settings routes: the mentioned dummy form in an odd place under content, and the official module settings form in configuration. If you attach the field UI to this form, it's easier to find, I think:

field_ui_base_route = "sitewide_alert.sitewide_alert_config_form"

Comments:
– Awesome. I hadn't even seen that route file intentionally blocking the user from getting to field management. I guess the project maintainer assumes to know every possible use case for their module. And yes, moving the field UI under the config form makes more sense, I guess. Like the User entity; although I always personally thought that was wrong to be there and should be under Structure. I'll add the missing links/tabs as well and create a patch. Thanks for your help. – liquidcms Oct 14, 2021 at 12:55
– I guess this means the predominant answer I have seen posted, that the annotation "fieldable = true" is required, is not actually required (it isn't, as I don't have it now and everything is working as expected). – liquidcms Oct 14, 2021 at 13:44
– No, this annotation doesn't exist in D8/9. For a minimal fieldable entity type see drupal.stackexchange.com/questions/283826/… – 4uk4 Oct 14, 2021 at 13:48
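For reference, the override the answer points at is a route subscriber of the kind sketched below. This is an illustrative reconstruction, not the module's exact code (the real file is at the link above), and the route name used here is hypothetical; a patch would delete the route-removal logic so that Field UI's routes survive:

<?php

namespace Drupal\sitewide_alert\Routing;

use Drupal\Core\Routing\RouteSubscriberBase;
use Symfony\Component\Routing\RouteCollection;

/**
 * Illustrative sketch: a subscriber like this can hide Field UI routes.
 */
class RouteSubscriber extends RouteSubscriberBase {

  protected function alterRoutes(RouteCollection $collection) {
    // Hypothetical route name. Removing routes here is what makes
    // "Manage fields" unreachable; deleting this logic restores them.
    if ($collection->get('entity.sitewide_alert.field_ui_fields')) {
      $collection->remove('entity.sitewide_alert.field_ui_fields');
    }
  }

}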
/*
 * mod_access - restrict access to the webserver for certain clients
 *
 * Todo:
 *     - access.redirect_url
 *
 * Author:
 *     Copyright (c) 2009 Thomas Porzelt
 * License:
 *     MIT, see COPYING file in the lighttpd 2 tree
 */

#include <lighttpd/base.h>
#include <lighttpd/radix.h>

LI_API gboolean mod_access_init(liModules *mods, liModule *mod);
LI_API gboolean mod_access_free(liModules *mods, liModule *mod);

struct access_check_data {
    liPlugin *p;
    liRadixTree *ipv4, *ipv6;
};
typedef struct access_check_data access_check_data;

enum { ACCESS_DENY = 1, ACCESS_ALLOW = 2 };
enum { OPTION_LOG_BLOCKED = 0 };
enum { OPTION_REDIRECT_URL = 0 };

static liHandlerResult access_check(liVRequest *vr, gpointer param, gpointer *context) {
    access_check_data *acd = param;
    liSockAddr *addr = vr->coninfo->remote_addr.addr;
    gboolean log_blocked = _OPTION(vr, acd->p, OPTION_LOG_BLOCKED).boolean;
    GString *redirect_url = _OPTIONPTR(vr, acd->p, OPTION_REDIRECT_URL).string;

    UNUSED(context);
    UNUSED(redirect_url);

    if (addr->plain.sa_family == AF_INET) {
        if (GINT_TO_POINTER(ACCESS_DENY) == li_radixtree_lookup(acd->ipv4, &addr->ipv4.sin_addr.s_addr, 32)) {
            if (!li_vrequest_handle_direct(vr))
                return LI_HANDLER_GO_ON;

            vr->response.http_status = 403;

            if (log_blocked)
                VR_INFO(vr, "access.check: blocked %s", vr->coninfo->remote_addr_str->str);
        }
#ifdef HAVE_IPV6
    } else if (addr->plain.sa_family == AF_INET6) {
        if (GINT_TO_POINTER(ACCESS_DENY) == li_radixtree_lookup(acd->ipv6, &addr->ipv6.sin6_addr.s6_addr, 128)) {
            if (!li_vrequest_handle_direct(vr))
                return LI_HANDLER_GO_ON;

            vr->response.http_status = 403;

            if (log_blocked)
                VR_INFO(vr, "access.check: blocked %s", vr->coninfo->remote_addr_str->str);
        }
#endif
    } else {
        VR_ERROR(vr, "%s", "access.check only supports ipv4 or ipv6 clients");
        return LI_HANDLER_ERROR;
    }

    return LI_HANDLER_GO_ON;
}

static void access_check_free(liServer *srv, gpointer param) {
    access_check_data *acd = param;

    UNUSED(srv);

    li_radixtree_free(acd->ipv4, NULL, NULL);
    li_radixtree_free(acd->ipv6, NULL, NULL);
    g_slice_free(access_check_data, acd);
}

static liAction* access_check_create(liServer *srv, liWorker *wrk, liPlugin* p, liValue *val, gpointer userdata) {
    access_check_data *acd = NULL;

    UNUSED(srv);
    UNUSED(wrk);
    UNUSED(userdata);

    val = li_value_get_single_argument(val);

    if (LI_VALUE_STRING == li_value_list_type_at(val, 0)) {
        li_value_wrap_in_list(val);
    }

    if (!li_value_list_has_len(val, 1) && !li_value_list_has_len(val, 2)) {
        ERROR(srv, "%s", "access_check expects a list of one or two string,list tuples as parameter");
        return NULL;
    }

    acd = g_slice_new0(access_check_data);
    acd->p = p;
    acd->ipv4 = li_radixtree_new();
    acd->ipv6 = li_radixtree_new();
    li_radixtree_insert(acd->ipv4, NULL, 0, GINT_TO_POINTER(ACCESS_DENY));
    li_radixtree_insert(acd->ipv6, NULL, 0, GINT_TO_POINTER(ACCESS_DENY));

    LI_VALUE_FOREACH(v, val)
        liValue *vAD, *vIPs;
        gboolean deny = FALSE;

        if (!li_value_list_has_len(v, 2)) {
            ERROR(srv, "%s", "access_check expects a list of one or two string,list tuples as parameter");
            goto failed_free_acd;
        }

        vAD = li_value_list_at(v, 0);
        if (LI_VALUE_STRING != li_value_type(vAD)) {
            ERROR(srv, "%s", "access_check expects a list of one or two string,list tuples as parameter");
            goto failed_free_acd;
        }

        if (g_str_equal(vAD->data.string->str, "allow")) {
            deny = FALSE;
        } else if (g_str_equal(vAD->data.string->str, "deny")) {
            deny = TRUE;
        } else {
            ERROR(srv, "access_check: invalid option \"%s\"", vAD->data.string->str);
            goto failed_free_acd;
        }

        vIPs = li_value_list_at(v, 1);
        if (LI_VALUE_LIST != li_value_type(vIPs)) {
            ERROR(srv, "%s", "access_check expects a list of one or two string,list tuples as parameter");
            goto failed_free_acd;
        }

        LI_VALUE_FOREACH(ip, vIPs)
            guint32 ipv4, netmaskv4;
            guint8 ipv6_addr[16];
            guint ipv6_network;

            if (LI_VALUE_STRING != li_value_type(ip)) {
                ERROR(srv, "%s", "access_check expects a list of one or two string,list tuples as parameter");
                goto failed_free_acd;
            }

            if (g_str_equal(ip->data.string->str, "all")) {
                li_radixtree_insert(acd->ipv4, NULL, 0, GINT_TO_POINTER(deny ? ACCESS_DENY : ACCESS_ALLOW));
                li_radixtree_insert(acd->ipv6, NULL, 0, GINT_TO_POINTER(deny ? ACCESS_DENY : ACCESS_ALLOW));
            } else if (li_parse_ipv4(ip->data.string->str, &ipv4, &netmaskv4, NULL)) {
                gint prefixlen;
                netmaskv4 = ntohl(netmaskv4);
                prefixlen = 32 - g_bit_nth_lsf(netmaskv4, -1);
                if (prefixlen < 0 || prefixlen > 32)
                    prefixlen = 0;
                li_radixtree_insert(acd->ipv4, &ipv4, prefixlen, GINT_TO_POINTER(deny ? ACCESS_DENY : ACCESS_ALLOW));
            } else if (li_parse_ipv6(ip->data.string->str, ipv6_addr, &ipv6_network, NULL)) {
                li_radixtree_insert(acd->ipv6, ipv6_addr, ipv6_network, GINT_TO_POINTER(deny ? ACCESS_DENY : ACCESS_ALLOW));
            } else {
                ERROR(srv, "access_check: error parsing ip: %s", ip->data.string->str);
                goto failed_free_acd;
            }
        LI_VALUE_END_FOREACH()
    LI_VALUE_END_FOREACH()

    return li_action_new_function(access_check, NULL, access_check_free, acd);

failed_free_acd:
    li_radixtree_free(acd->ipv4, NULL, NULL);
    li_radixtree_free(acd->ipv6, NULL, NULL);
    g_slice_free(access_check_data, acd);
    return NULL;
}

static liHandlerResult access_deny(liVRequest *vr, gpointer param, gpointer *context) {
    gboolean log_blocked = _OPTION(vr, ((liPlugin*)param), OPTION_LOG_BLOCKED).boolean;
    GString *redirect_url = _OPTIONPTR(vr, ((liPlugin*)param), OPTION_REDIRECT_URL).string;

    UNUSED(context);
    UNUSED(redirect_url);

    if (!li_vrequest_handle_direct(vr))
        return LI_HANDLER_GO_ON;

    vr->response.http_status = 403;

    if (log_blocked) {
        VR_INFO(vr, "access.deny: blocked %s", vr->coninfo->remote_addr_str->str);
    }

    return LI_HANDLER_GO_ON;
}

static liAction* access_deny_create(liServer *srv, liWorker *wrk, liPlugin* p, liValue *val, gpointer userdata) {
    UNUSED(srv);
    UNUSED(wrk);
    UNUSED(userdata);

    if (!li_value_is_nothing(val)) {
        ERROR(srv, "%s", "access.deny doesn't expect any parameters");
        return NULL;
    }

    return li_action_new_function(access_deny, NULL, NULL, p);
}

static const liPluginOption options[] = {
    { "access.log_blocked", LI_VALUE_BOOLEAN, 0, NULL },
    { NULL, 0, 0, NULL }
};

static const liPluginOptionPtr optionptrs[] = {
    { "access.redirect_url", LI_VALUE_STRING, NULL, NULL, NULL },
    { NULL, 0, NULL, NULL, NULL }
};

static const liPluginAction actions[] = {
    { "access.check", access_check_create, NULL },
    { "access.deny", access_deny_create, NULL },
    { NULL, NULL, NULL }
};

static const liPluginSetup setups[] = {
    { NULL, NULL, NULL }
};

static void plugin_access_init(liServer *srv, liPlugin *p, gpointer userdata) {
    UNUSED(srv);
    UNUSED(userdata);

    p->options = options;
    p->optionptrs = optionptrs;
    p->actions = actions;
    p->setups = setups;
}

gboolean mod_access_init(liModules *mods, liModule *mod) {
    UNUSED(mod);

    MODULE_VERSION_CHECK(mods);

    mod->config = li_plugin_register(mods->main, "mod_access", plugin_access_init, NULL);

    return mod->config != NULL;
}

gboolean mod_access_free(liModules *mods, liModule *mod) {
    if (mod->config)
        li_plugin_free(mods->main, mod->config);

    return TRUE;
}
The gRPC connector enables LoopBack applications to connect to gRPC data sources.

loopback-connector-grpc

The gRPC connector enables LoopBack applications to interact with gRPC services.

Installation

In your application root directory, enter:

$ npm install loopback-connector-grpc --save

This will install the module from npm and add it as a dependency to the application's package.json file.

Configuration

To interact with a gRPC API, configure a data source backed by the gRPC connector.

With code:

var ds = loopback.createDataSource('grpc', {
  connector: 'loopback-connector-grpc',
  spec: 'note.proto',
});

With JSON in datasources.json (for example, with basic authentication):

"gRPCDataSource": {
  "name": "gRPCDataSource",
  "connector": "grpc",
  "spec": "note.proto",
  "security": {
    "type": "basic",
    "username": "the user name",
    "password": "thepassword"
  }
}

Data source properties

Specify the options for the data source with the following properties.

connector — Must be 'loopback-connector-grpc' to specify the gRPC connector. (Default: None)
spec — HTTP URL or path to the gRPC specification file (with file name extension .yaml/.yml or .json). The file path must be relative to the current working directory (process.cwd()). (Default: None)
validate — When true, validates the provided spec against gRPC specification 2.0 before initializing a data source. (Default: false)
security — Security configuration for making authenticated requests to the API. (Default: None)

Authentication

Certificate-based (SSL/TLS) authentication:

security: {
  rootCerts: 'rootCerts.crt', // Path to root certs
  key: 'grpc.key',            // Path to client SSL private key
  cert: 'grpc.crt'            // Path to client SSL certificate
}

Creating a model from the gRPC data source

The gRPC connector loads the API specification document asynchronously. As a result, the data source won't be ready to create models until it is connected. For best results, use an event handler for the connected event of the data source:

ds.once('connected', function(){
  var PetService = ds.createModel('PetService', {});
  ...
});

Once the model is created, all available gRPC API operations can be accessed as model methods, for example:

...
PetService.getPetById({petId: 1}, function (err, res){
  ...
});

The model methods can also be called as promises:

PetService.getPetById({petId: 1}).then(function(res) {
  ...
}, function(err) {
  ...
});

// in async/await flavor
const res = await PetService.getPetById({petId: 1});

Extend a model to wrap/mediate API Operations

Once you define the model, you can wrap or mediate it to define new methods. The following example simplifies the getPetById operation to a method that takes petID and returns a Pet instance.

PetService.searchPet = function(petID, cb){
  PetService.getPetById({petId: petID}, function(err, res){
    if (err) cb(err, null);
    var result = res.data;
    cb(null, result);
  });
};

This custom method on the PetService model can be exposed as a REST API end-point. It uses loopback.remoteMethod to define the mappings:

PetService.remoteMethod(
  'searchPet',
  {
    accepts: [
      { arg: 'petID', type: 'string', required: true, http: { source: 'query' } }
    ],
    returns: { arg: 'result', type: 'object', root: true },
    http: { verb: 'get', path: '/searchPet' }
  }
);

Example

Coming soon…
[Figure: As the degree of the Taylor polynomial rises, it approaches the correct function. The image shows sin x and its Taylor approximations: polynomials of degree 1, 3, 5, 7, 9, 11 and 13.]

In mathematics, a Taylor series is a representation of a function as an infinite sum of terms that are calculated from the values of the function's derivatives at a single point.[1][2][3] In the West, the subject was formulated by the Scottish mathematician James Gregory and formally introduced by the English mathematician Brook Taylor in 1715. If the Taylor series is centered at zero, then that series is also called a Maclaurin series, after the Scottish mathematician Colin Maclaurin, who made extensive use of this special case of Taylor series in the 18th century.

A function can be approximated by using a finite number of terms of its Taylor series. Taylor's theorem gives quantitative estimates on the error introduced by the use of such an approximation. The polynomial formed by taking some initial terms of the Taylor series is called a Taylor polynomial. The Taylor series of a function is the limit of that function's Taylor polynomials as the degree increases, provided that the limit exists. A function may not be equal to its Taylor series, even if its Taylor series converges at every point. A function that is equal to its Taylor series in an open interval (or a disc in the complex plane) is known as an analytic function in that interval.

Definition

The Taylor series of a real or complex-valued function f(x) that is infinitely differentiable at a real or complex number a is the power series

    f(a) + \frac{f'(a)}{1!}(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \frac{f'''(a)}{3!}(x-a)^3 + \cdots

where n! denotes the factorial of n and f^{(n)}(a) denotes the nth derivative of f evaluated at the point a. In the more compact sigma notation, this can be written as

    \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!} (x-a)^n .

The derivative of order zero of f is defined to be f itself, and (x-a)^0 and 0! are both defined to be 1. When a = 0, the series is also called a Maclaurin series.[4]

Examples

The Taylor series for any polynomial is the polynomial itself.

The Maclaurin series for 1/(1-x) is the geometric series

    1 + x + x^2 + x^3 + \cdots ,

so the Taylor series for 1/x at a = 1 is

    1 - (x-1) + (x-1)^2 - (x-1)^3 + \cdots .

By integrating the above Maclaurin series, we find the Maclaurin series for log(1 − x), where log denotes the natural logarithm:

    \log(1-x) = -x - \frac{x^2}{2} - \frac{x^3}{3} - \frac{x^4}{4} - \cdots ,

and the corresponding Taylor series for log x at a = 1 is

    \log x = (x-1) - \frac{(x-1)^2}{2} + \frac{(x-1)^3}{3} - \cdots ,

and more generally, the corresponding Taylor series for log x at some a = x_0 is:

    \log x = \log x_0 + \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n\, x_0^n} (x-x_0)^n .

The Taylor series for the exponential function e^x at a = 0 is

    e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots .

The above expansion holds because the derivative of e^x with respect to x is also e^x and e^0 equals 1. This leaves the terms (x − 0)^n in the numerator and n! in the denominator for each term in the infinite sum.
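As a quick check of that last expansion (a one-line sketch): differentiating the series term by term reproduces it, consistent with d/dx e^x = e^x:

    \frac{d}{dx} \sum_{n=0}^{\infty} \frac{x^n}{n!}
      = \sum_{n=1}^{\infty} \frac{n\, x^{n-1}}{n!}
      = \sum_{n=1}^{\infty} \frac{x^{n-1}}{(n-1)!}
      = \sum_{m=0}^{\infty} \frac{x^m}{m!} = e^x .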
History

The Greek philosopher Zeno considered the problem of summing an infinite series to achieve a finite result, but rejected it as an impossibility;[5] the result was Zeno's paradox. Later, Aristotle proposed a philosophical resolution of the paradox, but the mathematical content was apparently unresolved until taken up by Archimedes, as it had been prior to Aristotle by the Presocratic Atomist Democritus. It was through Archimedes's method of exhaustion that an infinite number of progressive subdivisions could be performed to achieve a finite result.[6] Liu Hui independently employed a similar method a few centuries later.[7]

In the 14th century, the earliest examples of the use of Taylor series and closely related methods were given by Madhava of Sangamagrama.[1][2] Though no record of his work survives, writings of later Indian mathematicians suggest that he found a number of special cases of the Taylor series, including those for the trigonometric functions of sine, cosine, tangent, and arctangent. The Kerala School of Astronomy and Mathematics further expanded his works with various series expansions and rational approximations until the 16th century.

In the 17th century, James Gregory also worked in this area and published several Maclaurin series. It was not until 1715, however, that a general method for constructing these series for all functions for which they exist was finally provided by Brook Taylor,[8] after whom the series are now named.

The Maclaurin series was named after Colin Maclaurin, a professor in Edinburgh, who published the special case of the Taylor result in the 18th century.

Analytic functions

[Figure: The function e^{-1/x^2} is not analytic at x = 0: the Taylor series is identically 0, although the function is not.]

If f(x) is given by a convergent power series in an open disc (or interval in the real line) centred at b in the complex plane, it is said to be analytic in this disc. Thus for x in this disc, f is given by a convergent power series

    f(x) = \sum_{n=0}^{\infty} a_n (x-b)^n .

Differentiating by x the above formula n times, then setting x = b gives:

    \frac{f^{(n)}(b)}{n!} = a_n ,

and so the power series expansion agrees with the Taylor series. Thus a function is analytic in an open disc centred at b if and only if its Taylor series converges to the value of the function at each point of the disc.

If f(x) is equal to its Taylor series for all x in the complex plane, it is called entire. The polynomials, the exponential function e^x, and the trigonometric functions sine and cosine are examples of entire functions. Examples of functions that are not entire include the square root, the logarithm, the trigonometric function tangent, and its inverse, arctan. For these functions the Taylor series do not converge if x is far from b. That is, the Taylor series diverges at x if the distance between x and b is larger than the radius of convergence. The Taylor series can be used to calculate the value of an entire function at every point, if the value of the function, and of all of its derivatives, are known at a single point.

Uses of the Taylor series for analytic functions include:

1. The partial sums (the Taylor polynomials) of the series can be used as approximations of the function. These approximations are good if sufficiently many terms are included.
2. Differentiation and integration of power series can be performed term by term and is hence particularly easy.
3. An analytic function is uniquely extended to a holomorphic function on an open disk in the complex plane. This makes the machinery of complex analysis available.
4. The (truncated) series can be used to compute function values numerically (often by recasting the polynomial into the Chebyshev form and evaluating it with the Clenshaw algorithm).
5. Algebraic operations can be done readily on the power series representation; for instance, Euler's formula follows from Taylor series expansions for trigonometric and exponential functions (a derivation is sketched below). This result is of fundamental importance in such fields as harmonic analysis.
6. Approximations using the first few terms of a Taylor series can make otherwise unsolvable problems possible for a restricted domain; this approach is often used in physics.
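As a quick illustration of point 5, here is the standard one-line derivation of Euler's formula from the series for e^x, cos x and sin x (a sketch, stated without convergence details):

    e^{ix} = \sum_{n=0}^{\infty} \frac{(ix)^n}{n!}
           = \sum_{k=0}^{\infty} \frac{(-1)^k x^{2k}}{(2k)!}
             + i \sum_{k=0}^{\infty} \frac{(-1)^k x^{2k+1}}{(2k+1)!}
           = \cos x + i \sin x .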
Approximation error and convergence

[Figure: The sine function (blue) is closely approximated by its Taylor polynomial of degree 7 (pink) for a full period centered at the origin.]

[Figure: The Taylor polynomials for log(1 + x) only provide accurate approximations in the range −1 < x ≤ 1. For x > 1, Taylor polynomials of higher degree provide worse approximations.]

[Figure: The Taylor approximations for log(1 + x) (black). For x > 1, the approximations diverge.]

Pictured on the right is an accurate approximation of sin x around the point x = 0. The pink curve is a polynomial of degree seven:

    \sin x \approx x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} .

The error in this approximation is no more than |x|^9 / 9!. In particular, for −1 < x < 1, the error is less than 0.000003.

In contrast, also shown is a picture of the natural logarithm function log(1 + x) and some of its Taylor polynomials around a = 0. These approximations converge to the function only in the region −1 < x ≤ 1; outside of this region the higher-degree Taylor polynomials are worse approximations for the function. This is similar to Runge's phenomenon.

The error incurred in approximating a function by its nth-degree Taylor polynomial is called the remainder or residual and is denoted by the function R_n(x). Taylor's theorem can be used to obtain a bound on the size of the remainder.

In general, Taylor series need not be convergent at all. In fact, the set of functions with a convergent Taylor series is a meager set in the Fréchet space of smooth functions. Even if the Taylor series of a function f does converge, its limit need not in general be equal to the value of the function f(x). For example, the function

    f(x) = \begin{cases} e^{-1/x^2} & \text{if } x \neq 0 \\ 0 & \text{if } x = 0 \end{cases}

is infinitely differentiable at x = 0, and has all derivatives zero there. Consequently, the Taylor series of f(x) about x = 0 is identically zero. However, f(x) is not the zero function, so it does not equal its Taylor series around the origin. Thus, f(x) is an example of a non-analytic smooth function.

In real analysis, this example shows that there are infinitely differentiable functions f(x) whose Taylor series are not equal to f(x) even if they converge. By contrast, the holomorphic functions studied in complex analysis always possess a convergent Taylor series, and even the Taylor series of meromorphic functions, which might have singularities, never converge to a value different from the function itself. The complex function e^{-1/z^2}, however, does not approach 0 when z approaches 0 along the imaginary axis, so it is not continuous in the complex plane and its Taylor series is undefined at 0.

More generally, every sequence of real or complex numbers can appear as coefficients in the Taylor series of an infinitely differentiable function defined on the real line, a consequence of Borel's lemma. As a result, the radius of convergence of a Taylor series can be zero. There are even infinitely differentiable functions defined on the real line whose Taylor series have a radius of convergence 0 everywhere.[9]

A function cannot be written as a Taylor series centred at a singularity; in these cases, one can often still achieve a series expansion if one allows also negative powers of the variable x; see Laurent series. For example, f(x) = e^{-1/x^2} can be written as a Laurent series, as shown below.
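For instance (a sketch): substituting u = −1/x² into the exponential series gives the Laurent expansion, valid for all x ≠ 0,

    e^{-1/x^2} = \sum_{n=0}^{\infty} \frac{(-1)^n}{n!}\, x^{-2n}
               = 1 - \frac{1}{x^2} + \frac{1}{2!\, x^4} - \frac{1}{3!\, x^6} + \cdots .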
Generalization

There is, however, a generalization[10][11] of the Taylor series that does converge to the value of the function itself for any bounded continuous function on (0, ∞), using the calculus of finite differences. Specifically, one has the following theorem, due to Einar Hille, that for any t > 0,

    \lim_{h \to 0^+} \sum_{n=0}^{\infty} \frac{t^n}{n!} \frac{\Delta_h^n f(a)}{h^n} = f(a+t) .

Here \Delta_h^n is the nth finite difference operator with step size h. The series is precisely the Taylor series, except that divided differences appear in place of differentiation: the series is formally similar to the Newton series. When the function f is analytic at a, the terms in the series converge to the terms of the Taylor series, and in this sense it generalizes the usual Taylor series.

In general, for any infinite sequence a_i, the following power series identity holds:

    \sum_{n=0}^{\infty} \frac{u^n}{n!} \Delta^n a_i = e^{-u} \sum_{j=0}^{\infty} \frac{u^j}{j!} a_{i+j} .

So in particular,

    f(a+t) = \lim_{h \to 0^+} e^{-t/h} \sum_{j=0}^{\infty} f(a+jh) \frac{(t/h)^j}{j!} .

The series on the right is the expectation value of f(a + X), where X is a Poisson-distributed random variable that takes the value jh with probability e^{-t/h}·(t/h)^j/j!. Hence,

    f(a+t) = \lim_{h \to 0^+} \mathbb{E}\bigl[ f(a+X) \bigr] .

The law of large numbers implies that the identity holds.[12]

List of Maclaurin series of some common functions

Several important Maclaurin series expansions follow.[13] All these expansions are valid for complex arguments x.

Exponential function

[Figure: The exponential function e^x (in blue), and the sum of the first n + 1 terms of its Taylor series at 0 (in red).]

The exponential function e^x (with base e) has Maclaurin series

    e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots .

It converges for all x.

Natural logarithm

The natural logarithm (with base e) has Maclaurin series

    \log(1-x) = -\sum_{n=1}^{\infty} \frac{x^n}{n}, \qquad
    \log(1+x) = \sum_{n=1}^{\infty} (-1)^{n+1} \frac{x^n}{n} .

They converge for |x| < 1. Also, log(1 − x) converges for x = −1 and log(1 + x) converges for x = 1.

Geometric series

The geometric series and its derivatives have Maclaurin series

    \frac{1}{1-x} = \sum_{n=0}^{\infty} x^n, \qquad
    \frac{1}{(1-x)^2} = \sum_{n=1}^{\infty} n x^{n-1}, \qquad
    \frac{1}{(1-x)^3} = \sum_{n=2}^{\infty} \frac{n(n-1)}{2} x^{n-2} .

All are convergent for |x| < 1. These are special cases of the binomial series given in the next section.

Binomial series

The binomial series is the power series

    (1+x)^{\alpha} = \sum_{n=0}^{\infty} \binom{\alpha}{n} x^n ,

whose coefficients are the generalized binomial coefficients

    \binom{\alpha}{n} = \prod_{k=1}^{n} \frac{\alpha-k+1}{k} = \frac{\alpha(\alpha-1)\cdots(\alpha-n+1)}{n!} .

(If n = 0, this product is an empty product and has value 1.) It converges for |x| < 1 for any real or complex number α.

When α = −1, this is essentially the infinite geometric series mentioned in the previous section. The special cases α = 1/2 and α = −1/2 give the square root function and its inverse:

    \sqrt{1+x} = 1 + \tfrac{1}{2} x - \tfrac{1}{8} x^2 + \tfrac{1}{16} x^3 - \tfrac{5}{128} x^4 + \cdots ,
    \frac{1}{\sqrt{1+x}} = 1 - \tfrac{1}{2} x + \tfrac{3}{8} x^2 - \tfrac{5}{16} x^3 + \cdots .

When only the linear term is retained, this simplifies to the binomial approximation.

Trigonometric functions

The usual trigonometric functions and their inverses have the following Maclaurin series:

    \sin x = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n+1)!} x^{2n+1} = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots   (for all x)
    \cos x = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n)!} x^{2n} = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots   (for all x)
    \tan x = \sum_{n=1}^{\infty} \frac{B_{2n} (-4)^n (1-4^n)}{(2n)!} x^{2n-1} = x + \frac{x^3}{3} + \frac{2 x^5}{15} + \cdots   (for |x| < \pi/2)
    \sec x = \sum_{n=0}^{\infty} \frac{(-1)^n E_{2n}}{(2n)!} x^{2n} = 1 + \frac{x^2}{2} + \frac{5 x^4}{24} + \cdots   (for |x| < \pi/2)
    \arcsin x = \sum_{n=0}^{\infty} \frac{(2n)!}{4^n (n!)^2 (2n+1)} x^{2n+1} = x + \frac{x^3}{6} + \frac{3 x^5}{40} + \cdots   (for |x| \le 1)
    \arccos x = \frac{\pi}{2} - \arcsin x   (for |x| \le 1)
    \arctan x = \sum_{n=0}^{\infty} \frac{(-1)^n}{2n+1} x^{2n+1} = x - \frac{x^3}{3} + \frac{x^5}{5} - \cdots   (for |x| \le 1, x \ne \pm i)

All angles are expressed in radians. The numbers B_k appearing in the expansions of tan x are the Bernoulli numbers. The E_k in the expansion of sec x are Euler numbers.

Hyperbolic functions

The hyperbolic functions have Maclaurin series closely related to the series for the corresponding trigonometric functions:

    \sinh x = \sum_{n=0}^{\infty} \frac{x^{2n+1}}{(2n+1)!}   (for all x)
    \cosh x = \sum_{n=0}^{\infty} \frac{x^{2n}}{(2n)!}   (for all x)
    \tanh x = \sum_{n=1}^{\infty} \frac{B_{2n} 4^n (4^n - 1)}{(2n)!} x^{2n-1} = x - \frac{x^3}{3} + \frac{2 x^5}{15} - \cdots   (for |x| < \pi/2)
    \operatorname{arsinh} x = \sum_{n=0}^{\infty} \frac{(-1)^n (2n)!}{4^n (n!)^2 (2n+1)} x^{2n+1}   (for |x| \le 1)
    \operatorname{artanh} x = \sum_{n=0}^{\infty} \frac{x^{2n+1}}{2n+1}   (for |x| \le 1, x \ne \pm 1)

The numbers B_k appearing in the series for tanh x are the Bernoulli numbers.

Calculation of Taylor series

Several methods exist for the calculation of Taylor series of a large number of functions. One can attempt to use the definition of the Taylor series, though this often requires generalizing the form of the coefficients according to a readily apparent pattern. Alternatively, one can use manipulations such as substitution, multiplication or division, and addition or subtraction of standard Taylor series to construct the Taylor series of a function, by virtue of Taylor series being power series. In some cases, one can also derive the Taylor series by repeatedly applying integration by parts. Particularly convenient is the use of computer algebra systems to calculate Taylor series.
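As a concrete illustration of the computer-algebra route, here is a minimal sketch using the SymPy library; it reproduces the result of the first worked example below:

from sympy import symbols, series, ln, cos

x = symbols('x')

# 7th-degree Maclaurin polynomial of ln(cos(x)); terms of order
# x**8 and higher are collected into the O(x**8) remainder.
print(series(ln(cos(x)), x, 0, 8))
# -> -x**2/2 - x**4/12 - x**6/45 + O(x**8)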
First example

In order to compute the 7th degree Maclaurin polynomial for the function

    f(x) = \log(\cos x), \qquad x \in (-\pi/2, \pi/2),

one may first rewrite the function as

    f(x) = \log\bigl(1 + (\cos x - 1)\bigr) .

The Taylor series for the natural logarithm is (using the big O notation)

    \log(1+u) = u - \frac{u^2}{2} + \frac{u^3}{3} + O(u^4),

and for the cosine function

    \cos x - 1 = -\frac{x^2}{2} + \frac{x^4}{24} - \frac{x^6}{720} + O(x^8) .

The latter series expansion has a zero constant term, which enables us to substitute the second series into the first one and to easily omit terms of higher order than the 7th degree by using the big O notation:

    f(x) = (\cos x - 1) - \tfrac{1}{2} (\cos x - 1)^2 + \tfrac{1}{3} (\cos x - 1)^3 + O(x^8)
         = -\frac{x^2}{2} - \frac{x^4}{12} - \frac{x^6}{45} + O(x^8) .

Since the cosine is an even function, the coefficients for all the odd powers x, x^3, x^5, x^7, ... have to be zero.

Second example

Suppose we want the Taylor series at 0 of the function

    g(x) = \frac{e^x}{\cos x} .

We have for the exponential function

    e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \cdots

and, as in the first example,

    \cos x = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots .

Assume the power series is

    \frac{e^x}{\cos x} = c_0 + c_1 x + c_2 x^2 + c_3 x^3 + \cdots .

Then multiplication with the denominator and substitution of the series of the cosine yields

    e^x = (c_0 + c_1 x + c_2 x^2 + c_3 x^3 + \cdots) \left( 1 - \frac{x^2}{2} + \frac{x^4}{24} - \cdots \right)
        = c_0 + c_1 x + \left( c_2 - \frac{c_0}{2} \right) x^2 + \left( c_3 - \frac{c_1}{2} \right) x^3 + \left( c_4 - \frac{c_2}{2} + \frac{c_0}{24} \right) x^4 + \cdots

Collecting the terms up to fourth order yields

    c_0 = 1, \quad c_1 = 1, \quad c_2 - \frac{c_0}{2} = \frac{1}{2}, \quad c_3 - \frac{c_1}{2} = \frac{1}{6}, \quad c_4 - \frac{c_2}{2} + \frac{c_0}{24} = \frac{1}{24} .

The values of c_i can be found by comparison of coefficients with the top expression for e^x, yielding:

    \frac{e^x}{\cos x} = 1 + x + x^2 + \frac{2 x^3}{3} + \frac{x^4}{2} + \cdots .

Third example

Here we employ a method called "indirect expansion" to expand the given function. This method uses the known Taylor expansion of the exponential function. In order to expand (1 + x)e^x as a Taylor series in x, we use the known Taylor series of function e^x:

    e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots .

Thus,

    (1+x) e^x = e^x + x e^x
              = \sum_{n=0}^{\infty} \frac{x^n}{n!} + \sum_{n=0}^{\infty} \frac{x^{n+1}}{n!}
              = 1 + \sum_{n=1}^{\infty} \left( \frac{1}{n!} + \frac{1}{(n-1)!} \right) x^n
              = \sum_{n=0}^{\infty} \frac{n+1}{n!} x^n .

Taylor series as definitions

Classically, algebraic functions are defined by an algebraic equation, and transcendental functions (including those discussed above) are defined by some property that holds for them, such as a differential equation. For example, the exponential function is the function which is equal to its own derivative everywhere, and assumes the value 1 at the origin. However, one may equally well define an analytic function by its Taylor series.

Taylor series are used to define functions and "operators" in diverse areas of mathematics. In particular, this is true in areas where the classical definitions of functions break down. For example, using Taylor series, one may extend analytic functions to sets of matrices and operators, such as the matrix exponential or matrix logarithm.

In other areas, such as formal analysis, it is more convenient to work directly with the power series themselves. Thus one may define a solution of a differential equation as a power series which, one hopes to prove, is the Taylor series of the desired solution.

Taylor series in several variables

The Taylor series may also be generalized to functions of more than one variable with[14][15]

    T(x_1, \dots, x_d) = \sum_{n_1=0}^{\infty} \cdots \sum_{n_d=0}^{\infty}
        \frac{(x_1-a_1)^{n_1} \cdots (x_d-a_d)^{n_d}}{n_1! \cdots n_d!}
        \left( \frac{\partial^{n_1+\cdots+n_d} f}{\partial x_1^{n_1} \cdots \partial x_d^{n_d}} \right)(a_1, \dots, a_d) .

For example, for a function f(x, y) that depends on two variables, x and y, the Taylor series to second order about the point (a, b) is

    f(a,b) + (x-a)\, f_x(a,b) + (y-b)\, f_y(a,b)
    + \frac{1}{2!} \left[ (x-a)^2 f_{xx}(a,b) + 2 (x-a)(y-b)\, f_{xy}(a,b) + (y-b)^2 f_{yy}(a,b) \right] ,

where the subscripts denote the respective partial derivatives.

A second-order Taylor series expansion of a scalar-valued function of more than one variable can be written compactly as

    T(\mathbf{x}) = f(\mathbf{a}) + D f(\mathbf{a}) \cdot (\mathbf{x}-\mathbf{a})
                  + \frac{1}{2!} (\mathbf{x}-\mathbf{a})^{\mathsf T} \, D^2 f(\mathbf{a}) \, (\mathbf{x}-\mathbf{a}) + \cdots ,

where D f(a) is the gradient of f evaluated at x = a and D^2 f(a) is the Hessian matrix. Applying the multi-index notation, the Taylor series for several variables becomes

    T(\mathbf{x}) = \sum_{|\alpha| \ge 0} \frac{(\mathbf{x}-\mathbf{a})^{\alpha}}{\alpha!} \, (\partial^{\alpha} f)(\mathbf{a}) ,

which is to be understood as a still more abbreviated multi-index version of the first equation of this paragraph, with a full analogy to the single variable case.
Example

[Figure: Second-order Taylor series approximation (in orange) of the function f(x, y) = e^x log(1 + y) around the origin.]

In order to compute a second-order Taylor series expansion around point (a, b) = (0, 0) of the function

    f(x, y) = e^x \log(1+y),

one first computes all the necessary partial derivatives:

    f_x = e^x \log(1+y), \quad
    f_y = \frac{e^x}{1+y}, \quad
    f_{xx} = e^x \log(1+y), \quad
    f_{xy} = \frac{e^x}{1+y}, \quad
    f_{yy} = -\frac{e^x}{(1+y)^2} .

Evaluating these derivatives at the origin gives the Taylor coefficients

    f_x(0,0) = 0, \quad f_y(0,0) = 1, \quad f_{xx}(0,0) = 0, \quad f_{xy}(0,0) = 1, \quad f_{yy}(0,0) = -1 .

Substituting these values into the general formula produces

    f(x,y) \approx 0 + 0 \cdot x + 1 \cdot y + \frac{1}{2!} \left( 0 \cdot x^2 + 2 \cdot 1 \cdot x y + (-1) \cdot y^2 \right) + \cdots
           = y + x y - \frac{y^2}{2} + \cdots .

Since log(1 + y) is analytic in |y| < 1, we have

    e^x \log(1+y) = \sum_{n=0}^{\infty} \sum_{m=1}^{\infty} \frac{(-1)^{m+1}}{n!\, m} x^n y^m, \qquad |y| < 1 .

Comparison with Fourier series

The trigonometric Fourier series enables one to express a periodic function (or a function defined on a closed interval [a, b]) as an infinite sum of trigonometric functions (sines and cosines). In this sense, the Fourier series is analogous to Taylor series, since the latter allows one to express a function as an infinite sum of powers. Nevertheless, the two series differ from each other in several relevant issues:

- The finite truncations of the Taylor series of f(x) about the point x = a are all exactly equal to f at a. In contrast, the Fourier series is computed by integrating over an entire interval, so there is generally no such point where all the finite truncations of the series are exact.
- The computation of Taylor series requires the knowledge of the function on an arbitrarily small neighbourhood of a point, whereas the computation of the Fourier series requires knowing the function on its whole domain interval. In a certain sense one could say that the Taylor series is "local" and the Fourier series is "global".
- The Taylor series is defined for a function which has infinitely many derivatives at a single point, whereas the Fourier series is defined for any integrable function. In particular, the function could be nowhere differentiable. (For example, f(x) could be a Weierstrass function.)
- The convergence of both series has very different properties. Even if the Taylor series has positive convergence radius, the resulting series may not coincide with the function; but if the function is analytic then the series converges pointwise to the function, and uniformly on every compact subset of the convergence interval. Concerning the Fourier series, if the function is square-integrable then the series converges in quadratic mean, but additional requirements are needed to ensure the pointwise or uniform convergence (for instance, if the function is periodic and of class C1 then the convergence is uniform).
- Finally, in practice one wants to approximate the function with a finite number of terms, say with a Taylor polynomial or a partial sum of the trigonometric series, respectively. In the case of the Taylor series the error is very small in a neighbourhood of the point where it is computed, while it may be very large at a distant point. In the case of the Fourier series the error is distributed along the domain of the function.

Notes

1. "Neither Newton nor Leibniz – The Pre-History of Calculus and Celestial Mechanics in Medieval Kerala" (PDF). MAT 314. Canisius College. Archived (PDF) from the original on 2015-02-23. Retrieved 2006-07-09.
2. S. G. Dani (2012). "Ancient Indian Mathematics – A Conspectus". Resonance. 17 (3): 236–246. doi:10.1007/s12045-012-0022-y.
3. Ranjan Roy, "The Discovery of the Series Formula for π by Leibniz, Gregory and Nilakantha", Mathematics Magazine Vol. 63, No. 5 (Dec. 1990), pp. 291–306.
4. Thomas & Finney 1996, §8.9.
5. Lindberg, David (2007). The Beginnings of Western Science (2nd ed.). University of Chicago Press. p. 33. ISBN 978-0-226-48205-7.
6. Kline, M. (1990). Mathematical Thought from Ancient to Modern Times. New York: Oxford University Press. pp. 35–37. ISBN 0-19-506135-7.
7. Boyer, C.; Merzbach, U. (1991). A History of Mathematics (Second revised ed.). John Wiley and Sons. pp. 202–203. ISBN 0-471-09763-2.
8. Taylor, Brook (1715). Methodus Incrementorum Directa et Inversa [Direct and Reverse Methods of Incrementation] (in Latin). London. pp. 21–23 (Prop. VII, Thm. 3, Cor. 2). Translated into English in Struik, D. J. (1969). A Source Book in Mathematics 1200–1800. Cambridge, Massachusetts: Harvard University Press. pp. 329–332.
9. Rudin, Walter (1980). Real and Complex Analysis. New Delhi: McGraw-Hill. p. 418, Exercise 13. ISBN 0-07-099557-5.
10. Feller, William (1971). An Introduction to Probability Theory and Its Applications, Volume 2 (3rd ed.). Wiley. pp. 230–232.
11. Hille, Einar; Phillips, Ralph S. (1957). Functional Analysis and Semi-Groups. AMS Colloquium Publications, 31. American Mathematical Society. pp. 300–327.
12. Feller, William (1970). An Introduction to Probability Theory and Its Applications. 2 (3rd ed.). p. 231.
13. Most of these can be found in (Abramowitz & Stegun 1970).
14. Lars Hörmander (1990). The Analysis of Partial Differential Operators, Volume 1. Springer. Eqq. 1.1.7 and 1.1.7′.
15. Duistermaat; Kolk (2010). Distributions: Theory and Applications. Birkhäuser. Ch. 6.
The Drupal 8 plugin system - part 2

We saw in part 1 how plugins help us in writing reusable functionality in Drupal 8. There are a lot of concepts which plugins share in common with services, like:

1. Limited scope. Do one thing and do it right.
2. PHP classes which are swappable.

Which begs the question: how exactly are plugins different from services? If your interface expects implementations to yield the same behaviour, then go for services. Otherwise, you should write it as a plugin.

The Drupal 8 plugin system - part 1

Plugins are swappable pieces of code in Drupal 8. To see how different they are from hooks, let's take an example where we want to create a new field type. In Drupal 7, this involves:

1. Providing information about the field
   - hook_field_info - describes the field, adds metadata like label, default formatter and widget.
   - hook_field_schema - resides in the module's .install file. Specifies how the field data is stored in the database.

Annotations in Drupal 8

Annotations are PHP comments which hold metadata about your function or class. They do not directly affect program semantics, as they are comment blocks. They are read and parsed at runtime by an annotation engine. Annotations are already used in other PHP projects for various purposes. Symfony2 uses annotations for specifying routing rules. Doctrine uses them for adding ORM-related metadata. Though handy in various situations, their utility is debated about a lot — for example: how do you actually differentiate between annotations and actual user comments?

Drupal 8 asset management using libraries.yml

One of the things you are likely to do if you write a custom module or a theme is include third party JavaScript and/or CSS assets in it. Previously, this used to be a clumsy hook_library_info() array, but it is replaced by a YML file in D8. It makes asset management look more organized and easier to edit. Let's see how to do this for the colorbox module. The new YML file will have the naming convention modulename.libraries.yml.
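To make that concrete, here is a minimal sketch of what such a file could look like. The file paths and library name are assumptions for illustration, not the colorbox module's actual definition:

# mymodule.libraries.yml — illustrative only
colorbox:
  version: 1.x
  js:
    js/colorbox.js: {}
  css:
    theme:
      css/colorbox.css: {}
  dependencies:
    - core/jquery

The library would then be attached either from a Twig template with attach_library('mymodule/colorbox') or from PHP via the '#attached' => ['library' => ['mymodule/colorbox']] render-array key.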
Author: sebor Date: Thu Jan 26 12:56:15 2006 New Revision: 372607 URL: http://svn.apache.org/viewcvs?rev=372607&view=rev Log: 2006-01-26 Liviu Nicoara Martin Sebor STDCXX-4 * 23.deque.modifiers.cpp: New test exercising lib.deque.modifiers. * 23.deque.special.cpp: New test exercising lib.deque.special. Added: incubator/stdcxx/trunk/tests/containers/ incubator/stdcxx/trunk/tests/containers/23.deque.modifiers.cpp (with props) incubator/stdcxx/trunk/tests/containers/23.deque.special.cpp (with props) Added: incubator/stdcxx/trunk/tests/containers/23.deque.modifiers.cpp URL: http://svn.apache.org/viewcvs/incubator/stdcxx/trunk/tests/containers/23.deque.modifiers.cpp?rev=372607&view=auto ============================================================================== --- incubator/stdcxx/trunk/tests/containers/23.deque.modifiers.cpp (added) +++ incubator/stdcxx/trunk/tests/containers/23.deque.modifiers.cpp Thu Jan 26 12:56:15 2006 @@ -0,0 +1,1406 @@ +/*************************************************************************** + * + * 23.deque.modifiers.cpp - test exercising [lib.deque.modifiers] + * + * $Id$ + * + *************************************************************************** + * + * Copyright (c) 1994-2005 Quovadx, Inc., acting through its Rogue Wave + * Software division. Licensed under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance with the + * License. You may obtain a copy of the License at + * http://www.apache.org/licenses/LICENSE-2.0. Unless required by + * applicable law or agreed to in writing, software distributed under + * the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR + * CONDITIONS OF ANY KIND, either express or implied. See the License + * for the specific language governing permissions and limitations under + * the License. + * + **************************************************************************/ + +#ifdef _MSC_VER + // silence warning C4244: 'argument' : conversion from 'T' to + // 'const std::allocator<_TypeT>::value_type', possible loss of data + // issued for deque::assign(InputIterator a, InputIterator b) and + // deque::insert(iterator, InputIterator a, InputIterator b) due + // the implicit conversion of a to size_type and b to value_type + // required by DR 438: + // http://www.open-std.org/jtc1/sc22/wg21/docs/lwg-defects.html#438 +# pragma warning (disable: 4244) +#endif + +#include // for deque + +#include // for free() + +#ifndef _RWSTD_NO_REPLACEABLE_NEW_DELETE + // disabled for MSVC since it can't reliably replace the operators +# include +#endif // _RWSTD_NO_REPLACEABLE_NEW_DELETE + +#include // for X +#include // for rw_test(), ... 
+#include // for rw_asnprintf + +/**************************************************************************/ + +// Runtime options +/* extern */ int rw_opt_no_assign = 0; +/* extern */ int rw_opt_no_erase = 0; +/* extern */ int rw_opt_no_insert = 0; +/* extern */ int rw_opt_no_dr438 = 0; +/* extern */ int rw_opt_no_input_iterator = 0; +/* extern */ int rw_opt_no_forward_iterator = 0; +/* extern */ int rw_opt_no_bidirectional_iterator = 0; +/* extern */ int rw_opt_no_random_iterator = 0; +/* extern */ int rw_opt_no_right_thing = 0; + +/**************************************************************************/ + +// For konvenience +typedef unsigned char UChar; + +/**************************************************************************/ + +typedef std::deque > Deque; + +Deque::size_type new_capacity; + +namespace __rw { + +_RWSTD_SPECIALIZED_FUNCTION +inline Deque::size_type +__rw_new_capacity(Deque::size_type n, const Deque*) +{ + if (n) { + // non-zero size argument indicates a request for an increase + // in the capacity of a deque object's dynamically sizable + // vector of nodes + return n * 2; + } + + // zero size argument is a request for the initial size of a deque + // object's dynamically sizable vector of nodes or for the size of + // the objects's fixed-size buffer for elements + return new_capacity; +} + +} + +/**************************************************************************/ + +enum { + NewThrows = 0x1 /* cause operator new to throw */, + CopyCtorThrows = 0x2 /* cause element's copy ctor to throw */, + AssignmentThrows = 0x4 /* cause element's assignment to throw */ +}; + +enum MemberFunction { + Assign_n /* deque::assign (size_type, const_reference) */, + AssignRange /* deque::assign (InputIterator, InputIterator) */, + + Erase_1 /* deque::erase (iterator) */, + EraseRange /* deque::erase (iterator, iterator) */, + + Insert_1 /* deque::insert (iterator, const_reference) */, + Insert_n /* deque::insert (iterator, size_type, const_reference) */, + InsertRange /* deque::insert (iterator, InputIterator, InputIterator) */ +}; + + +// causes operator new, deque element's copy ctor, or assignment operator +// to throw an exception and iterates as long as the member function exits +// by throwing an exception; verifies that the exception had no effects +// on the container +template +void exception_loop (int line /* line number in caller*/, + MemberFunction mfun /* deque member function */, + const char *fcall /* function call string */, + int exceptions /* enabled exceptions */, + Deque &deq /* container to call function on */, + const Deque::iterator &it /* iterator into container */, + int n /* number of elements or offset */, + const X *x /* pointer to an element or 0 */, + const Iterator &first /* beginning of range */, + const Iterator &last /* end of range to insert */, + int *n_copy /* number of copy ctors */, + int *n_asgn /* number of assignments */) +{ + std::size_t throw_after = 0; + + // get the initial size of the container and its begin() iterator + // to detect illegal changes after an exception (i.e., violations + // if the strong exception guarantee) + const std::size_t size = deq.size (); + const Deque::const_iterator begin = deq.begin (); + const Deque::const_iterator end = deq.end (); + +#ifdef DEFINE_REPLACEMENT_NEW_AND_DELETE + + rwt_free_store* const pst = rwt_get_free_store (0); + +#endif // DEFINE_REPLACEMENT_NEW_AND_DELETE + + // repeatedly call the specified member function until it returns + // without throwing an exception + for ( ; ; ) { + 
+ // detect objects constructed but not destroyed after an exception + std::size_t x_count = X::count_; + + _RWSTD_ASSERT (n_copy); + _RWSTD_ASSERT (n_asgn); + + *n_copy = X::n_total_copy_ctor_; + *n_asgn = X::n_total_op_assign_; + +#ifndef _RWSTD_NO_EXCEPTIONS + + // iterate for `n=throw_after' starting at the next call to operator + // new, forcing each call to throw an exception, until the insertion + // finally succeeds (i.e, no exception is thrown) + +# ifdef DEFINE_REPLACEMENT_NEW_AND_DELETE + + if (exceptions & NewThrows) { + *pst->throw_at_calls_ [0] = pst->new_calls_ [0] + throw_after + 1; + } + +# endif // DEFINE_REPLACEMENT_NEW_AND_DELETE + + if (exceptions & CopyCtorThrows) { + X::copy_ctor_throw_count_ = X::n_total_copy_ctor_ + throw_after; + } + + if (exceptions & AssignmentThrows) { + X::op_assign_throw_count_ = X::n_total_op_assign_ + throw_after; + } + +#endif // _RWSTD_NO_EXCEPTIONS + + _TRY { + + switch (mfun) { + case Assign_n: + _RWSTD_ASSERT (x); + deq.assign (n, *x); + break; + case AssignRange: + deq.assign (first, last); + break; + + case Erase_1: + deq.erase (it); + break; + case EraseRange: { + const Deque::iterator erase_end (it + n); + deq.erase (it, erase_end); + break; + } + + case Insert_1: + _RWSTD_ASSERT (x); + deq.insert (it, *x); + break; + case Insert_n: + _RWSTD_ASSERT (x); + deq.insert (it, n, *x); + break; + case InsertRange: + deq.insert (it, first, last); + break; + } + } + _CATCH (...) { + + // verify that an exception thrown from the member function + // didn't cause a change in the state of the container + + rw_assert (deq.size () == size, 0, line, + "line %d: %s: size unexpectedly changed " + "from %zu to %zu after an exception", + __LINE__, fcall, size, deq.size ()); + + rw_assert (deq.begin () == begin, 0, line, + "line %d: %s: begin() unexpectedly " + "changed after an exception by %td", + __LINE__, fcall, deq.begin () - begin); + + rw_assert (deq.end () == end, 0, line, + "line %d: %s: end() unexpectedly " + "changed after an exception by %td", + __LINE__, fcall, deq.end () - end); + + + // count the number of objects to detect leaks + x_count = X::count_ - x_count; + rw_assert (x_count == deq.size () - size, 0, line, + "line %d: %s: leaked %zu objects after an exception", + __LINE__, fcall, x_count - (deq.size () - size)); + + if (exceptions) { + + // increment to allow this call to operator new to succeed + // and force the next one to fail, and try to insert again + ++throw_after; + } + else + break; + + continue; + } + + // count the number of objects to detect leaks + x_count = X::count_ - x_count; + rw_assert (x_count == deq.size () - size, 0, line, + "line %d: %s: leaked %zu objects " + "after a successful insertion", + __LINE__, fcall, x_count - (deq.size () - size)); + + break; + } + +#ifdef DEFINE_REPLACEMENT_NEW_AND_DELETE + + // disable exceptions from replacement operator new + *pst->throw_at_calls_ [0] = _RWSTD_SIZE_MAX; + +#endif // DEFINE_REPLACEMENT_NEW_AND_DELETE + + X::copy_ctor_throw_count_ = 0; + X::op_assign_throw_count_ = 0; + + // compute the number of calls to X copy ctor and assignment operator + // and set `n_copy' and `n_assgn' to the value of the result + *n_copy = X::n_total_copy_ctor_ - *n_copy; + *n_asgn = X::n_total_op_assign_ - *n_asgn; +} + + +// used to determine whether insert() can or cannot use +// an algorithm optimized for BidirectionalIterators +bool is_bidirectional (std::input_iterator_tag) { return false; } +bool is_bidirectional (std::bidirectional_iterator_tag) { return true; } + +// returns 
the number of invocations of the assignment operators +// for a call to deque::insert(iterator, InputIterator, InputIterator) +// (the value depends on the iterator category) +template +std::size_t insert_assignments (Iterator it, + int nelems, + std::size_t off, + std::size_t seqlen, + std::size_t inslen) +{ + if (is_bidirectional (_RWSTD_ITERATOR_CATEGORY (Iterator, it))) + return 0 == nelems ? 0 : off < seqlen - off ? off : seqlen - off; + + if (0 < nelems) + --nelems; + + if (0 == nelems || 0 == inslen) + return 0; + + // compute the number of assignments done + // to insert the first element in the sequence + const std::size_t first = off < seqlen - off ? off : seqlen - off; + + // recursively compute the numner of assignments + // for the rest of the elements in the sequence + const std::size_t rest = + insert_assignments (it, nelems, off + 1, seqlen + 1, inslen - 1); + + return first + rest; +} + + +template +void test_insert (int line, int exceptions, + const Iterator &dummy, int nelems, + const char *seq, std::size_t seqlen, std::size_t off, + const char *ins, std::size_t inslen, + const char *res, std::size_t reslen) +{ + // Ensure that xsrc, xins are always dereferenceable + const X* const xseq = X::from_char (seq, seqlen + 1); + X* const xins = X::from_char (ins, inslen + 1); + + Deque deq = seqlen ? Deque (xseq, xseq + seqlen) : Deque (); + + // offset must be valid + _RWSTD_ASSERT (off <= deq.size ()); + const Deque::iterator iter = deq.begin () + off; + + // only insert() at either end of the container is exception safe + // insertions into the middle of the container are not (i.e., the + // container may grow or may even become inconsistent) + if (off && off < deq.size ()) + exceptions = 0; + + // format a string describing the function call being exercised + // (used in diagnostic output below) + char* funcall = 0; + std::size_t len = 0; + + rw_asnprintf (&funcall, &len, "deque(\"%{X=*.*}\").insert(" + "%{?}begin(), %{:}%{?}end (), %{:}begin () + %zu%{;}%{;}" + "%{?}%d)%{:}%{?}\"%{X=*.*}\")%{:}%d, %d)%{;}%{;}", + int (seqlen), -1, xseq, 0 == off, seqlen == off, off, + nelems == -2, *ins, nelems == -1, + int (inslen), -1, xins, nelems, *ins); + + int n_copy = X::n_total_copy_ctor_; + int n_asgn = X::n_total_op_assign_; + + if (-2 == nelems) { // insert(iterator, const_reference) + + exception_loop (line, Insert_1, funcall, exceptions, + deq, iter, nelems, xins, dummy, dummy, + &n_copy, &n_asgn); + + } + else if (-1 == nelems) { // insert(iterator, Iterator, Iterator) + + if (inslen > 1) + exceptions = 0; + + const Iterator first = + make_iter (xins, xins, xins + inslen, dummy); + + const Iterator last = + make_iter (xins + inslen, xins, xins + inslen, dummy); + + exception_loop (line, InsertRange, funcall, exceptions, + deq, iter, nelems, 0, first, last, + &n_copy, &n_asgn); + + } + else { // insert(iterator, size_type, const_reference) + + if (nelems > 1) + exceptions = 0; + + exception_loop (line, Insert_n, funcall, exceptions, + deq, iter, nelems, xins, dummy, dummy, + &n_copy, &n_asgn); + + } + + // verify the expected size of the deque after insertion + rw_assert (deq.size () == reslen, __FILE__, line, + "line %d: %s: size == %zu, got %zu\n", + __LINE__, funcall, reslen, deq.size ()); + + // verify the expected contents of the deque after insertion + const Deque::const_iterator resbeg = deq.begin (); + const Deque::const_iterator resend = deq.end (); + + for (Deque::const_iterator it = resbeg; it != resend; ++it) { + if ((*it).val_ != UChar (res [it - resbeg])) { + + 
char* const got = new char [deq.size () + 1]; + + for (Deque::const_iterator i = resbeg; i != resend; ++i) { + got [i - resbeg] = char ((*i).val_); + } + + got [deq.size ()] = '\0'; + + rw_assert (false, __FILE__, line, + "line %d: %s: expected %s, got %s\n", + __LINE__, funcall, res, got); + + delete[] got; + break; + } + } + + // verify the complexity of the operation in terms of the number + // of calls to the copy ctor and assignment operator on value_type + const std::size_t expect_copy = nelems < 0 ? inslen : nelems; + + rw_assert (n_copy == int (expect_copy), + __FILE__, line, + "line %d: %s: expected %zu invocations " + "of X::X(const X&), got %d\n", + __LINE__, funcall, expect_copy, n_copy); + + // compute the number of calls to the assignment operator + const std::size_t expect_asgn = + insert_assignments (dummy, nelems, off, seqlen, inslen); + + rw_assert (n_asgn == int (expect_asgn), + __FILE__, line, + "line %d: %s: expected %zu invocations " + "of X::operator=(const X&), got %d\n", + __LINE__, funcall, expect_asgn, n_asgn); + + // Free funcall storage + std::free (funcall); + + delete[] xins; + delete[] xseq; +} + +/**************************************************************************/ + +template +void test_insert_range (const Iterator &it, const char* itname) +{ + rw_info (0, 0 ,0, + "std::deque::insert(iterator, %s, %s)", itname, itname); + +#undef TEST +#define TEST(seq, off, ins, res) \ + test_insert (__LINE__, -1, \ + it, -1, \ + seq, sizeof seq - 1, \ + std::size_t (off), \ + ins, sizeof ins - 1, \ + res, sizeof res - 1) + + // +---------------------------------------- seq + // | +--------------------------------- off + // | | +----------------------------- ins + // | | | +---------------------- res + // | | | | + // v v v v + TEST ("", +0, "", ""); + TEST ("", +0, "a", "a"); + TEST ("", +0, "ab", "ab"); + TEST ("", +0, "abc", "abc"); + TEST ("a", +0, "", "a"); + TEST ("b", +0, "a", "ab"); + TEST ("c", +0, "ab", "abc"); + TEST ("cd", +0, "ab", "abcd"); + TEST ("def", +0, "abc", "abcdef"); + + TEST ("a", +1, "", "a"); + TEST ("a", +1, "b", "ab"); + TEST ("a", +1, "bc", "abc"); + TEST ("a", +1, "bcd", "abcd"); + + TEST ("ab", +1, "", "ab"); + TEST ("ac", +1, "b", "abc"); + TEST ("acd", +1, "b", "abcd"); + + TEST ("ab", +2, "", "ab"); + TEST ("ab", +2, "c", "abc"); + TEST ("ab", +2, "cd", "abcd"); + + TEST ("abc", +2, "", "abc"); + TEST ("abd", +2, "c", "abcd"); + TEST ("abe", +2, "cd", "abcde"); + TEST ("abf", +2, "cde", "abcdef"); + + TEST ("abc", +3, "", "abc"); + TEST ("abc", +3, "d", "abcd"); + TEST ("abc", +3, "de", "abcde"); + TEST ("abc", +3, "def", "abcdef"); + + +#define UPPER "ABCDEFGHIJKLMNOPQRSTUVWXYZ" +#define LOWER "abcdefghijklmnopqrstuvwxyz" + + TEST (UPPER, +0, LOWER, "" LOWER "ABCDEFGHIJKLMNOPQRSTUVWXYZ"); + TEST (UPPER, +1, LOWER, "A" LOWER "BCDEFGHIJKLMNOPQRSTUVWXYZ"); + TEST (UPPER, +2, LOWER, "AB" LOWER "CDEFGHIJKLMNOPQRSTUVWXYZ"); + TEST (UPPER, +3, LOWER, "ABC" LOWER "DEFGHIJKLMNOPQRSTUVWXYZ"); + TEST (UPPER, +4, LOWER, "ABCD" LOWER "EFGHIJKLMNOPQRSTUVWXYZ"); + TEST (UPPER, +5, LOWER, "ABCDE" LOWER "FGHIJKLMNOPQRSTUVWXYZ"); + TEST (UPPER, +6, LOWER, "ABCDEF" LOWER "GHIJKLMNOPQRSTUVWXYZ"); + TEST (UPPER, +7, LOWER, "ABCDEFG" LOWER "HIJKLMNOPQRSTUVWXYZ"); + TEST (UPPER, +8, LOWER, "ABCDEFGH" LOWER "IJKLMNOPQRSTUVWXYZ"); + TEST (UPPER, +9, LOWER, "ABCDEFGHI" LOWER "JKLMNOPQRSTUVWXYZ"); + TEST (UPPER, +10, LOWER, "ABCDEFGHIJ" LOWER "KLMNOPQRSTUVWXYZ"); + TEST (UPPER, +11, LOWER, "ABCDEFGHIJK" LOWER "LMNOPQRSTUVWXYZ"); + TEST (UPPER, +12, LOWER, 
"ABCDEFGHIJKL" LOWER "MNOPQRSTUVWXYZ"); + TEST (UPPER, +13, LOWER, "ABCDEFGHIJKLM" LOWER "NOPQRSTUVWXYZ"); + TEST (UPPER, +14, LOWER, "ABCDEFGHIJKLMN" LOWER "OPQRSTUVWXYZ"); + TEST (UPPER, +15, LOWER, "ABCDEFGHIJKLMNO" LOWER "PQRSTUVWXYZ"); + TEST (UPPER, +16, LOWER, "ABCDEFGHIJKLMNOP" LOWER "QRSTUVWXYZ"); + TEST (UPPER, +17, LOWER, "ABCDEFGHIJKLMNOPQ" LOWER "RSTUVWXYZ"); + TEST (UPPER, +18, LOWER, "ABCDEFGHIJKLMNOPQR" LOWER "STUVWXYZ"); + TEST (UPPER, +19, LOWER, "ABCDEFGHIJKLMNOPQRS" LOWER "TUVWXYZ"); + TEST (UPPER, +20, LOWER, "ABCDEFGHIJKLMNOPQRST" LOWER "UVWXYZ"); + TEST (UPPER, +21, LOWER, "ABCDEFGHIJKLMNOPQRSTU" LOWER "VWXYZ"); + TEST (UPPER, +22, LOWER, "ABCDEFGHIJKLMNOPQRSTUV" LOWER "WXYZ"); + TEST (UPPER, +23, LOWER, "ABCDEFGHIJKLMNOPQRSTUVW" LOWER "XYZ"); + TEST (UPPER, +24, LOWER, "ABCDEFGHIJKLMNOPQRSTUVWX" LOWER "YZ"); + TEST (UPPER, +25, LOWER, "ABCDEFGHIJKLMNOPQRSTUVWXY" LOWER "Z"); + TEST (UPPER, +26, LOWER, "ABCDEFGHIJKLMNOPQRSTUVWXYZ" LOWER ""); +} + +/**************************************************************************/ + +template +void test_insert_int_range (const T&, const IntType&, + const char* t_name, const char* int_name) +{ + rw_info (0, 0, 0, + "std::deque<%s>::insert(iterator, %s, %s)", + t_name, int_name, int_name); + + std::deque d; + + typename std::deque::iterator it = d.begin (); + + // deque::insert(iterator, size_type, const_reference) + + d.insert (it, IntType (1), IntType (0)); + + rw_assert (1 == d.size (), 0, __LINE__, + "deque<%s>::insert(begin(), %s = 1, %s = 0); size() == 1," + " got %zu", t_name, int_name, int_name, d.size ()); + + it = d.begin (); + ++it; + + d.insert (it, IntType (3), IntType (2)); + + rw_assert (4 == d.size (), 0, __LINE__, + "deque<%s>::insert(begin() + 1, %s = 3, %s = 2); size() == 4," + " got %zu", t_name, int_name, int_name, d.size ()); + + it = d.begin (); + ++it; + + d.insert (it, IntType (2), IntType (1)); + + rw_assert (6 == d.size (), 0, __LINE__, + "deque<%s>::insert(begin() + 1, %s = 2, %s = 1); size() == 6," + " got %zu", t_name, int_name, int_name, d.size ()); +} + + +template +void test_insert_int_range (const T &dummy, const char* tname) +{ + test_insert_int_range (dummy, (signed char)0, tname, "signed char"); + test_insert_int_range (dummy, (unsigned char)0, tname, "unsigned char"); + test_insert_int_range (dummy, short (), tname, "short"); + test_insert_int_range (dummy, (unsigned short)0, tname, "unsigned short"); + test_insert_int_range (dummy, int (), tname, "int"); + test_insert_int_range (dummy, (unsigned int)0, tname, "unsigned int"); + test_insert_int_range (dummy, long (), tname, "long"); + test_insert_int_range (dummy, (unsigned long)0, tname, "unsigned long"); + +#ifdef _RWSTD_LONG_LONG + + test_insert_int_range (dummy, (_RWSTD_LONG_LONG)0, + tname, "long long"); + test_insert_int_range (dummy, (unsigned _RWSTD_LONG_LONG)0, + tname, "unsigned long long"); + +#endif // _RWSTD_LONG_LONG + +} + +/**************************************************************************/ + +void test_insert () +{ + ////////////////////////////////////////////////////////////////// + // exercise deque::insert(iterator, const_reference) + + rw_info (0, 0, 0, "std::deque::insert(iterator, const_reference)"); + +#undef TEST +#define TEST(seq, off, ins, res) do { \ + const char insseq [] = { ins, '\0' }; \ + test_insert (__LINE__, -1, \ + (X*)0, -2, \ + seq, sizeof seq - 1, \ + std::size_t (off), \ + insseq, 1, \ + res, sizeof res - 1); \ + } while (0) + + // +------------------- original sequence + // | +----------- 
insertion offset + // | | +------ element to insert + // | | | +-- resulting sequence + // | | | | + // V V V V + TEST ("", +0, 'a', "a"); + TEST ("b", +0, 'a', "ab"); + TEST ("bc", +0, 'a', "abc"); + TEST ("bcd", +0, 'a', "abcd"); + TEST ("bcde", +0, 'a', "abcde"); + + TEST ("a", +1, 'b', "ab"); + TEST ("ac", +1, 'b', "abc"); + TEST ("acd", +1, 'b', "abcd"); + TEST ("acde", +1, 'b', "abcde"); + + TEST ("ab", +2, 'c', "abc"); + TEST ("abd", +2, 'c', "abcd"); + TEST ("abde", +2, 'c', "abcde"); + + TEST ("abc", +3, 'd', "abcd"); + TEST ("abce", +3, 'd', "abcde"); + + TEST ("abcd", +4, 'e', "abcde"); + +#define A_to_B "AB" +#define A_to_C "ABC" +#define A_to_D "ABCD" +#define A_to_E "ABCDE" +#define A_to_F "ABCDEF" +#define A_to_G "ABCDEFG" +#define A_to_H "ABCDEFGH" +#define A_to_I "ABCDEFGHI" +#define A_to_J "ABCDEFGHIJ" +#define A_to_K "ABCDEFGHIJK" +#define A_to_L "ABCDEFGHIJKL" +#define A_to_M "ABCDEFGHIJKLM" +#define A_to_N "ABCDEFGHIJKLMN" +#define A_to_O "ABCDEFGHIJKLMNO" +#define A_to_P "ABCDEFGHIJKLMNOP" +#define A_to_Q "ABCDEFGHIJKLMNOPQ" +#define A_to_R "ABCDEFGHIJKLMNOPQR" +#define A_to_S "ABCDEFGHIJKLMNOPQRS" +#define A_to_T "ABCDEFGHIJKLMNOPQRST" +#define A_to_U "ABCDEFGHIJKLMNOPQRSTU" +#define A_to_V "ABCDEFGHIJKLMNOPQRSTUV" +#define A_to_W "ABCDEFGHIJKLMNOPQRSTUVW" +#define A_to_X "ABCDEFGHIJKLMNOPQRSTUVWX" +#define A_to_Y "ABCDEFGHIJKLMNOPQRSTUVWXY" +#define A_to_Z "ABCDEFGHIJKLMNOPQRSTUVWXYZ" +#define B_to_Z "BCDEFGHIJKLMNOPQRSTUVWXYZ" +#define C_to_Z "CDEFGHIJKLMNOPQRSTUVWXYZ" +#define D_to_Z "DEFGHIJKLMNOPQRSTUVWXYZ" +#define E_to_Z "EFGHIJKLMNOPQRSTUVWXYZ" +#define F_to_Z "FGHIJKLMNOPQRSTUVWXYZ" +#define G_to_Z "GHIJKLMNOPQRSTUVWXYZ" +#define H_to_Z "HIJKLMNOPQRSTUVWXYZ" +#define I_to_Z "IJKLMNOPQRSTUVWXYZ" +#define J_to_Z "JKLMNOPQRSTUVWXYZ" +#define K_to_Z "KLMNOPQRSTUVWXYZ" +#define L_to_Z "LMNOPQRSTUVWXYZ" +#define M_to_Z "MNOPQRSTUVWXYZ" +#define N_to_Z "NOPQRSTUVWXYZ" +#define O_to_Z "OPQRSTUVWXYZ" +#define P_to_Z "PQRSTUVWXYZ" +#define Q_to_Z "QRSTUVWXYZ" +#define R_to_Z "RSTUVWXYZ" +#define S_to_Z "STUVWXYZ" +#define T_to_Z "TUVWXYZ" +#define U_to_Z "UVWXYZ" +#define V_to_Z "VWXYZ" +#define W_to_Z "WXYZ" +#define X_to_Z "XYZ" +#define Y_to_Z "YZ" + + TEST (A_to_Z, + 0, '^', "" "^" A_to_Z); + TEST (A_to_Z, + 1, '^', "A" "^" B_to_Z); + TEST (A_to_Z, + 2, '^', A_to_B "^" C_to_Z); + TEST (A_to_Z, + 3, '^', A_to_C "^" D_to_Z); + TEST (A_to_Z, + 4, '^', A_to_D "^" E_to_Z); + TEST (A_to_Z, + 5, '^', A_to_E "^" F_to_Z); + TEST (A_to_Z, + 6, '^', A_to_F "^" G_to_Z); + TEST (A_to_Z, + 7, '^', A_to_G "^" H_to_Z); + TEST (A_to_Z, + 8, '^', A_to_H "^" I_to_Z); + TEST (A_to_Z, + 9, '^', A_to_I "^" J_to_Z); + TEST (A_to_Z, +10, '^', A_to_J "^" K_to_Z); + TEST (A_to_Z, +11, '^', A_to_K "^" L_to_Z); + TEST (A_to_Z, +12, '^', A_to_L "^" M_to_Z); + TEST (A_to_Z, +13, '^', A_to_M "^" N_to_Z); + TEST (A_to_Z, +14, '^', A_to_N "^" O_to_Z); + TEST (A_to_Z, +15, '^', A_to_O "^" P_to_Z); + TEST (A_to_Z, +16, '^', A_to_P "^" Q_to_Z); + TEST (A_to_Z, +17, '^', A_to_Q "^" R_to_Z); + TEST (A_to_Z, +18, '^', A_to_R "^" S_to_Z); + TEST (A_to_Z, +19, '^', A_to_S "^" T_to_Z); + TEST (A_to_Z, +20, '^', A_to_T "^" U_to_Z); + TEST (A_to_Z, +21, '^', A_to_U "^" V_to_Z); + TEST (A_to_Z, +22, '^', A_to_V "^" W_to_Z); + TEST (A_to_Z, +23, '^', A_to_W "^" X_to_Z); + TEST (A_to_Z, +24, '^', A_to_X "^" Y_to_Z); + TEST (A_to_Z, +25, '^', A_to_Y "^" "Z"); + TEST (A_to_Z, +26, '^', A_to_Z "^" ""); + + ////////////////////////////////////////////////////////////////// + // exercise deque::insert(iterator, 
size_type, const_reference) + + rw_info (0, 0, 0, + "std::deque::insert(iterator, size_type, " + "const_reference)"); + +#undef TEST +#define TEST(seq, off, n, ins, res) do { \ + const char insseq [] = { ins, '\0' }; \ + test_insert (__LINE__, -1, \ + (X*)0, n, \ + seq, sizeof seq - 1, \ + std::size_t (off), \ + insseq, 1, \ + res, sizeof res - 1); \ + } while (0) + + TEST ("", +0, 0, 'a', ""); + TEST ("", +0, 1, 'a', "a"); + TEST ("", +0, 2, 'b', "bb"); + TEST ("", +0, 3, 'c', "ccc"); + + TEST ("a", +0, 0, 'a', "a"); + TEST ("b", +0, 1, 'a', "ab"); + TEST ("b", +0, 2, 'a', "aab"); + TEST ("b", +0, 3, 'a', "aaab"); + + TEST ("ab", +1, 0, 'b', "ab"); + TEST ("ac", +1, 1, 'b', "abc"); + TEST ("ac", +1, 2, 'b', "abbc"); + TEST ("ac", +1, 3, 'b', "abbbc"); + + TEST ("abcd", +2, 0, 'c', "abcd"); + TEST ("abde", +2, 1, 'c', "abcde"); + TEST ("abde", +2, 2, 'c', "abccde"); + TEST ("abde", +2, 3, 'c', "abcccde"); + + ////////////////////////////////////////////////////////////////// + // exercise deque::insert(iterator, InputIterator, InputIterator) + + rw_info (0, 0, 0, + "template std::deque::" + "insert(iterator, InputIterator, InputIterator)"); + + if (0 == rw_opt_no_input_iterator) + test_insert_range (InputIter(0, 0, 0), "InputIter"); + else + rw_note (0, 0, __LINE__, + "template " + "std::deque::insert(iterator, T, T) " + "[with T = InputIterator] test disabled."); + + if (0 == rw_opt_no_forward_iterator) + test_insert_range (FwdIter(), "FwdIter"); + else + rw_note (0, 0, __LINE__, + "template " + "std::deque::insert(iterator, T, T) " + "[with T = ForwardIterator] test disabled."); + + if (0 == rw_opt_no_bidirectional_iterator) + test_insert_range (BidirIter(), "BidirIter"); + else + rw_note (0, 0, __LINE__, + "template " + "std::deque::insert(iterator, T, T) " + "[with T = BidirectionalIterator] test disabled."); + + if (0 == rw_opt_no_random_iterator) + test_insert_range (RandomAccessIter(), "RandomAccessIter"); + else + rw_note (0, 0, __LINE__, + "template " + "std::deque::insert(iterator, T, T) " + "[with T = RandomAccessIterator] test disabled."); + + ////////////////////////////////////////////////////////////////// + // exercise deque::insert(iterator, int, int) + + rw_info (0, 0, 0, + "template " + "std::deque::" + "insert(iterator, IntType, IntType)"); + + if (0 == rw_opt_no_right_thing) { + test_insert_int_range ((signed char)0, "signed char"); + test_insert_int_range ((unsigned char)0, "unsigned char"); + test_insert_int_range (short (), "short"); + test_insert_int_range ((unsigned short)0, "unsigned short"); + test_insert_int_range (int (), "int"); + test_insert_int_range ((unsigned int)0, "unsigned int"); + test_insert_int_range (long (), "long"); + test_insert_int_range ((unsigned long)0, "unsigned long"); + +#ifdef _RWSTD_LONG_LONG + + test_insert_int_range ((_RWSTD_LONG_LONG)0, + "long long"); + test_insert_int_range ((unsigned _RWSTD_LONG_LONG)0, + "unsigned long long"); + +#endif // _RWSTD_LONG_LONG + } + else + rw_note (0, 0, __LINE__, + "template " + "std::deque::insert(iterator, T, T) " + "[with T = IntegralType] tests disabled."); + +} + +/**************************************************************************/ + +template +void test_assign (int line, int exceptions, + const Iterator &dummy, int nelems, + const char *seq, std::size_t seqlen, + const char *asn, std::size_t asnlen, + const char *res, std::size_t reslen) +{ + const X* const xseq = X::from_char (seq, seqlen + 1); + X* const xasn = X::from_char (asn, asnlen + 1); + + Deque deq = seqlen ? 
Deque (xseq, xseq + seqlen) : Deque (); + + // format a string describing the function call being exercised + // (used in diagnostic output below) + char* funcall = 0; + std::size_t len = 0; + + rw_asnprintf (&funcall, &len, + "deque(\"%{X=*.*}\").assign(" + "%{?}\"%{X=*.*}\")%{:}%d, %d)%{;}", + seqlen, -1, xseq, + nelems < 0, + asnlen, -1, xasn, + nelems, *asn); + + int n_copy = X::n_total_copy_ctor_; + int n_asgn = X::n_total_op_assign_; + + // create a dummy deque iterator to pass to exception_loop + // (the object will not be used by the functiuon) + const Deque::iterator dummy_it = deq.begin (); + + if (nelems < 0) { // assign(Iterator, Iterator) + + if (asnlen > 1) + exceptions = 0; + + const Iterator first = + make_iter (xasn, xasn, xasn + asnlen, dummy); + + const Iterator last = + make_iter (xasn + asnlen, xasn, xasn + asnlen, dummy); + + exception_loop (line, AssignRange, funcall, exceptions, + deq, dummy_it, nelems, 0, first, last, + &n_copy, &n_asgn); + } + else { // assign(size_type, const_reference) + if (nelems > 1) + exceptions = 0; + + exception_loop (line, Assign_n, funcall, exceptions, + deq, dummy_it, nelems, xasn, dummy, dummy, + &n_copy, &n_asgn); + } + + // verify the expected size of the deque after assignment + rw_assert (deq.size () == reslen, 0, line, + "line %d: %s: size == %zu, got %zu\n", + __LINE__, funcall, reslen, deq.size ()); + + // verify the expected contents of the deque after assignment + const Deque::const_iterator resbeg = deq.begin (); + const Deque::const_iterator resend = deq.end (); + + for (Deque::const_iterator it = resbeg; it != resend; ++it) { + + const Deque::size_type inx = it - resbeg; + + _RWSTD_ASSERT (inx < deq.size ()); + + if ((*it).val_ != UChar (res [inx])) { + + char* const got = new char [deq.size () + 1]; + + for (Deque::const_iterator i = resbeg; i != resend; ++i) { + + const Deque::size_type inx_2 = i - resbeg; + + _RWSTD_ASSERT (inx_2 < deq.size ()); + + got [inx_2] = char ((*i).val_); + } + + got [deq.size ()] = '\0'; + + rw_assert (false, 0, line, + "line %d: %s: expected %s, got %s\n", + __LINE__, funcall, res, got); + + delete[] got; + break; + } + } + + // set asnlen to the number of elements assigned to the container + if (0 <= nelems) + asnlen = std::size_t (nelems); + + // verify the complexity of the operation in terms of the number + // of calls to the copy ctor and assignment operator on value_type + + // the number of invocations of the copy ctor and the assignment + // operator depends on whether the implementation of assign() + // strictly follows the requirements in 23.2.1.1, p7 or p8 and + // destroys the existing elements before inserting the new ones, + // or whether it assigns the new elements over the existing ones + +#ifndef _RWSTD_NO_EXT_DEQUE_ASSIGN_IN_PLACE + const std::size_t expect_copy = seqlen < asnlen ? asnlen - seqlen : 0; + const std::size_t expect_asgn = asnlen < seqlen ? 
asnlen : seqlen; +#else // if defined (_RWSTD_NO_EXT_DEQUE_ASSIGN_IN_PLACE) + const std::size_t expect_copy = asnlen; + const std::size_t expect_asgn = 0; +#endif // _RWSTD_NO_EXT_DEQUE_ASSIGN_IN_PLACE + + rw_assert (n_copy == int (expect_copy), 0, line, + "line %d: %s: expected %zu invocations " + "of X::X(const X&), got %d\n", + __LINE__, funcall, expect_copy, n_copy); + + rw_assert (n_asgn == int (expect_asgn), 0, line, + "line %d: %s: expected %zu invocations " + "of X::operator=(const X&), got %d\n", + __LINE__, funcall, expect_asgn, n_asgn); + + // Free funcall storage + std::free (funcall); + + delete[] xasn; + delete[] xseq; +} + + +template +void test_assign_range (const Iterator &it, const char* itname) +{ + rw_info (0, 0, 0, "std::deque::assign(%s, %s)", itname, itname); + + static const char seq[] = "abcdefghijklmnopqrstuvwxyz"; + static const char asn[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"; + + for (std::size_t i = 0; i != sizeof seq - 1; ++i) { + for (std::size_t j = 0; j != sizeof asn - 1; ++j) { + + test_assign (__LINE__, 0, it, -1, seq, i, asn, j, asn, j); + } + } +} + + +void test_assign () +{ + ////////////////////////////////////////////////////////////////// + // exercise + // deque::assign(size_type, const_reference) + + rw_info (0, 0, 0, "std::deque::assign(size_type, const_reference)"); + + static const char seq[] = "abcdefghijklmnopqrstuvwxyz"; + static const char res[] = "AAAAAAAAAAAAAAAAAAAAAAAAAA"; + + for (std::size_t i = 0; i != sizeof seq - 1; ++i) { + for (std::size_t j = 0; j != sizeof seq - 1; ++j) { + + test_assign (__LINE__, -1, (X*)0, int (j), seq, i, res, 1U, res, j); + } + } + + ////////////////////////////////////////////////////////////////// + // exercise + // template + // deque::assign(InputIterator, InputIterator) + + rw_info (0, 0, 0, + "template " + "std::deque::assign(InputIterator, InputIterator)"); + + if (0 == rw_opt_no_input_iterator) + test_assign_range (InputIter(0, 0, 0), "InputIter"); + else + rw_note (0, 0, __LINE__, + "template " + "std::deque::assign(T, T) [with T = InputIterator]" + "test disabled."); + + if (0 == rw_opt_no_forward_iterator) + test_assign_range (FwdIter(), "FwdIter"); + else + rw_note (0, 0, __LINE__, + "template " + "std::deque::assign(T, T) [with T = ForwardIterator]" + "test disabled."); + + if (0 == rw_opt_no_bidirectional_iterator) + test_assign_range (BidirIter(), "BidirIter"); + else + rw_note (0, 0, __LINE__, + "template " + "std::deque::assign(T, T) [with T = BidirectionalIterator]" + "test disabled."); + + if (0 == rw_opt_no_random_iterator) + test_assign_range (RandomAccessIter(), "RandomAccessIter"); + else + rw_note (0, 0, __LINE__, + "template " + "std::deque::assign(T, T) [with T = RandomAccessIterator]" + "test disabled."); +} + +/**************************************************************************/ + +void test_erase (int line, + const char *seq, std::size_t seqlen, + std::size_t begoff, std::size_t len, + const char *res, std::size_t reslen) +{ + const X* const xseq = X::from_char (seq, seqlen + 1); + + Deque deq = seqlen ? 
Deque (xseq, xseq + seqlen) : Deque (); + const Deque::iterator start = deq.begin () + begoff; + + int n_copy = X::n_total_copy_ctor_; + int n_asgn = X::n_total_op_assign_; + + char* funcall = 0; + std::size_t buflen = 0; + + if (std::size_t (-1) == len) { // erase(iterator) + + rw_asnprintf (&funcall, &buflen, + "deque(\"%{X=*.*}\").erase(%{?}end()%{:}" + "%{?}begin () + %zu%{:}begin ()%{;}%{;}", + seqlen, -1, xseq, + begoff == deq.size (), begoff, begoff); + + exception_loop (line, Erase_1, funcall, 0, + deq, start, 1, 0, (X*)0, (X*)0, + &n_copy, &n_asgn); + } + else { // assign(size_type, const_reference) + + const Deque::iterator end = start + len; + + rw_asnprintf (&funcall, &buflen, + "deque(\"%{X=*.*}\").erase(%{?}end()%{:}" + "%{?}begin () + %zu%{:}begin ()%{;}%{;}" + "%{?})%{:}%{?}, end ())%{:}%{?}, begin ())" + "%{:}begin () + %zu%{;}%{;}%{;}", + seqlen, -1, xseq, + begoff == deq.size (), begoff, begoff, + std::size_t (-1) == len, + end == deq.end (), + end == deq.begin (), + end - deq.begin ()); + + exception_loop (line, EraseRange, funcall, 0, + deq, start, len, 0, (X*)0, (X*)0, + &n_copy, &n_asgn); + + } + + // verify the expected size of the deque after erasure + rw_assert (deq.size () == reslen, 0, line, + "line %d: %s: size == %zu, got %zu\n", + __LINE__, funcall, reslen, deq.size ()); + + // verify the expected contents of the deque after assignment + const Deque::const_iterator resbeg = deq.begin (); + const Deque::const_iterator resend = deq.end (); + + for (Deque::const_iterator it = resbeg; it != resend; ++it) { + if ((*it).val_ != UChar (res [it - resbeg])) { + + char* const got = new char [deq.size () + 1]; + + for (Deque::const_iterator i = resbeg; i != resend; ++i) { + got [i - resbeg] = char ((*i).val_); + } + + got [deq.size ()] = '\0'; + + rw_assert (false, 0, line, + "line %d: %s: expected %s, got %s\n", + __LINE__, funcall, res, got); + + delete[] got; + break; + } + } + +#if 0 + // set asnlen to the number of elements assigned to the container + if (0 <= nelems) + asnlen = std::size_t (nelems); + + // verify the complexity of the operation in terms of the number + // of calls to the copy ctor and assignment operator on value_type + + // the number of invocations of the copy ctor and the assignment + // operator depends on whether the implementation of assign() + // strictly follows the requirements in 23.2.1.1, p7 or p8 and + // destroys the existing elements before inserting the new ones, + // or whether it assigns the new elements over the existing ones + +#ifndef _RWSTD_NO_EXT_DEQUE_ASSIGN_IN_PLACE + const std::size_t expect_copy = seqlen < asnlen ? asnlen - seqlen : 0; + const std::size_t expect_asgn = asnlen < seqlen ? 
asnlen : seqlen; +#else // if defined (_RWSTD_NO_EXT_DEQUE_ASSIGN_IN_PLACE) + const std::size_t expect_copy = asnlen; + const std::size_t expect_asgn = 0; +#endif // _RWSTD_NO_EXT_DEQUE_ASSIGN_IN_PLACE + + rw_assert (n_copy == int (expect_copy), 0, line, + "line %d: %s: expected %zu invocations " + "of X::X(const X&), got %d\n", + __LINE__, funcall, expect_copy, n_copy); + + rw_assert (n_asgn == int (expect_asgn), 0, line, + "line %d: %s: expected %zu invocations " + "of X::operator=(const X&), got %d\n", + __LINE__, funcall, expect_asgn, n_asgn); +#endif + + std::free (funcall); + + delete[] xseq; +} + +void test_erase () +{ + ////////////////////////////////////////////////////////////////// + // exercise deque::erase(iterator) + + rw_info (0, 0, 0, "std::deque::erase(iterator)"); + +#undef TEST +#define TEST(seq, off, res) do { \ + test_erase (__LINE__, \ + seq, sizeof seq - 1, \ + std::size_t (off), \ + std::size_t (-1), \ + res, sizeof res - 1); \ + } while (0) + + TEST ("a", 0, ""); + + TEST ("ab", 0, "b"); + TEST ("ab", 1, "a"); + + TEST ("abc", 0, "bc"); + TEST ("abc", 1, "ac"); + TEST ("abc", 2, "ab"); + + TEST ("abcd", 0, "bcd"); + TEST ("abcd", 1, "acd"); + TEST ("abcd", 2, "abd"); + TEST ("abcd", 3, "abc"); + + TEST ("abcde", 0, "bcde"); + TEST ("abcde", 1, "acde"); + TEST ("abcde", 2, "abde"); + TEST ("abcde", 3, "abce"); + TEST ("abcde", 4, "abcd"); + + TEST ("abcdef", 0, "bcdef"); + TEST ("abcdef", 1, "acdef"); + TEST ("abcdef", 2, "abdef"); + TEST ("abcdef", 3, "abcef"); + TEST ("abcdef", 4, "abcdf"); + TEST ("abcdef", 5, "abcde"); + + TEST ("abcdefg", 0, "bcdefg"); + TEST ("abcdefg", 1, "acdefg"); + TEST ("abcdefg", 2, "abdefg"); + TEST ("abcdefg", 3, "abcefg"); + TEST ("abcdefg", 4, "abcdfg"); + TEST ("abcdefg", 5, "abcdeg"); + TEST ("abcdefg", 6, "abcdef"); + + TEST ("abcdefgh", 0, "bcdefgh"); + TEST ("abcdefgh", 1, "acdefgh"); + TEST ("abcdefgh", 2, "abdefgh"); + TEST ("abcdefgh", 3, "abcefgh"); + TEST ("abcdefgh", 4, "abcdfgh"); + TEST ("abcdefgh", 5, "abcdegh"); + TEST ("abcdefgh", 6, "abcdefh"); + TEST ("abcdefgh", 7, "abcdefg"); + + ////////////////////////////////////////////////////////////////// + // exercise deque::erase(iterator, iterator) + + rw_info (0, 0, 0, "std::deque::erase(iterator, iterator)"); +} + +/**************************************************************************/ + +#ifndef _RWSTD_NO_INLINE_MEMBER_TEMPLATES +# ifndef _RWSTD_NO_EXPLICIT +# if !defined (_MSC_VER) || _MSC_VER > 1200 + +struct DR_438 +{ + static bool cast_used; + + DR_438 () { } + + explicit DR_438 (std::size_t) { cast_used = true; } + + template DR_438 (T) { } +}; + +bool DR_438::cast_used; + +# else // if MSVC <= 6.0 + // avoid an MSVC 6.0 ICE on this code +# define NO_DR_438_TEST "this version of MSVC is too broken" +# endif // !MSVC || MSVC > 6.0 +# else +# define NO_DR_438_TEST "_RWSTD_NO_EXPLICIT #defined" +# endif // _RWSTD_NO_EXPLICIT +# else +# define NO_DR_438_TEST "_RWSTD_NO_INLINE_MEMBER_TEMPLATES #defined" +#endif // _RWSTD_NO_INLINE_MEMBER_TEMPLATES + + +void test_dr_438 () +{ + ////////////////////////////////////////////////////////////////// + // exercise the resolution of DR 438: + ////////////////////////////////////////////////////////////////// + // + // For every sequence defined in clause [lib.containers] + // and in clause [lib.strings]: + + // * If the constructor + // + // template + // X (InputIterator f, InputIterator l, + // const allocator_type& a = allocator_type()) + // + // is called with a type InputIterator that does not qualify + // as an 
input iterator, then the constructor will behave + // as if the overloaded constructor: + // + // X (size_type, const value_type& = value_type(), + // const allocator_type& = allocator_type()) + // + // were called instead, with the arguments static_cast(f), + // l and a, respectively. + // + // * If the member functions of the forms: + // + // template // such as insert() + // rt fx1(iterator p, InputIterator f, InputIterator l); + // + // template // such as append(), assign() + // rt fx2(InputIterator f, InputIterator l); + // + // template // such as replace() + // rt fx3(iterator i1, iterator i2, InputIterator f, InputIterator l); + // + // are called with a type InputIterator that does not qualify + // as an input iterator, then these functions will behave + // as if the overloaded member functions: + // + // rt fx1(iterator, size_type, const value_type&); + // + // rt fx2(size_type, const value_type&); + // + // rt fx3(iterator, iterator, size_type, const value_type&); + // + // were called instead, with the same arguments. + // + // In the previous paragraph the alternative binding will fail + // if f is not implicitly convertible to X::size_type or + // if l is not implicitly convertible to X::value_type. + // + // The extent to which an implementation determines that a type + // cannot be an input iterator is unspecified, except that + // as a minimum integral types shall not qualify as input iterators. + ////////////////////////////////////////////////////////////////// + + rw_info (0, 0, 0, "resolution of DR 438"); + +#ifndef NO_DR_438_TEST + + std::deque > dq; + + dq.assign (1, 2); + + rw_assert (!DR_438::cast_used, 0, __LINE__, + "deque::assign(InputIterator, InputIterator)" + "[ with InputIterator = ] unexpectedly " + "used explicit argument conversion"); + + dq.insert (dq.begin (), 1, 2); + + rw_assert (!DR_438::cast_used, 0, __LINE__, + "deque::insert(iterator, InputIterator, InputIterator) " + "[ with InputIterator = ] unexpectedly " + "used explicit argument conversion"); +#else // if defined (NO_DR_438_TEST) + + rw_warning (0, 0, __LINE__, "%s; skipping test", NO_DR_438_TEST); + +#endif // NO_DR_438_TEST + +} + +/**************************************************************************/ + +int run_test (int, char**) +{ + if (0 == rw_opt_no_dr438) + test_dr_438 (); + + static const std::size_t caps[] = { + 2, 3, 4, 5, 16, 32 + }; + + for (std::size_t i = 0; i != sizeof caps / sizeof *caps; ++i) { + + new_capacity = caps [i]; + + rw_info (0, 0, 0, + "__rw::__rw_new_capacity >(0) = %zu", + _RW::__rw_new_capacity (0, (Deque*)0)); + + if (0 == rw_opt_no_assign) + test_assign (); + + if (0 == rw_opt_no_erase) + test_erase (); + + if (0 == rw_opt_no_insert) + test_insert (); + } + + return 0; +} + +/**************************************************************************/ + +int main (int argc, char** argv) +{ + return rw_test (argc, argv, __FILE__, + "lib.deque.modifiers", + 0 /* no comment */, run_test, + "|-no-dr438#" + "|-no-assign#" + "|-no-erase#" + "|-no-insert#" + "|-no-InputIterator#" + "|-no-ForwardIterator#" + "|-no-BidirectionalIterator#" + "|-no-RandomIterator#" + "|-no-right-thing#", + &rw_opt_no_dr438, + &rw_opt_no_assign, + &rw_opt_no_erase, + &rw_opt_no_insert, + &rw_opt_no_input_iterator, + &rw_opt_no_forward_iterator, + &rw_opt_no_bidirectional_iterator, + &rw_opt_no_random_iterator, + &rw_opt_no_right_thing); +} Propchange: incubator/stdcxx/trunk/tests/containers/23.deque.modifiers.cpp 
------------------------------------------------------------------------------
    svn:eol-style = native

Propchange: incubator/stdcxx/trunk/tests/containers/23.deque.modifiers.cpp
------------------------------------------------------------------------------
    svn:keywords = Id

Added: incubator/stdcxx/trunk/tests/containers/23.deque.special.cpp
URL: http://svn.apache.org/viewcvs/incubator/stdcxx/trunk/tests/containers/23.deque.special.cpp?rev=372607&view=auto
==============================================================================
--- incubator/stdcxx/trunk/tests/containers/23.deque.special.cpp (added)
+++ incubator/stdcxx/trunk/tests/containers/23.deque.special.cpp Thu Jan 26 12:56:15 2006
@@ -0,0 +1,271 @@
+/***************************************************************************
+ *
+ * 23.deque.special.cpp - test exercising [lib.deque.special]
+ *
+ * $Id$
+ *
+ ***************************************************************************
+ *
+ * Copyright (c) 1994-2005 Quovadx, Inc., acting through its Rogue Wave
+ * Software division. Licensed under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the
+ * License. You may obtain a copy of the License at
+ * http://www.apache.org/licenses/LICENSE-2.0. Unless required by
+ * applicable law or agreed to in writing, software distributed under
+ * the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
+ * CONDITIONS OF ANY KIND, either express or implied. See the License
+ * for the specific language governing permissions and limitations under
+ * the License.
+ *
+ **************************************************************************/
+
+#include <deque>      // for deque
+
+#include <cstddef>    // for size_t
+
+#include <alg_test.h> // for X
+#include <driver.h>   // for rw_test(), ...
+ +/**************************************************************************/ + +struct DequeValueType { }; + +typedef std::allocator DequeAllocator; +typedef std::deque DequeType; + + +int deque_swap_called; + +_RWSTD_NAMESPACE (std) { + +// define an explicit specialization of the deque::swap() member +// to verify tha the non-member swap function calls the member + +_RWSTD_SPECIALIZED_FUNCTION +void DequeType::swap (DequeType&) +{ + ++deque_swap_called; +} + +} // namespace std + +/**************************************************************************/ + +void test_std_swap () +{ + rw_info (0, 0, 0, + "Testing std::swap (std::deque&, std::deque&) " + "calls std::deque::swap"); + + // verify the signature of the function specialization + void (*pswap)(DequeType&, DequeType&) = + &std::swap; + + _RWSTD_UNUSED (pswap); + + // verify that std::swap() calls std::deque::swap() + DequeType d; + + std::swap (d, d); + + rw_assert (1 == deque_swap_called, 0, __LINE__, + "std::swap (std::deque&, std::deque&) called " + "std::deque::swap (std::deque&) exactly once; " + "got %d times", deque_swap_called); +} + +/**************************************************************************/ + +typedef std::deque > Deque; + +Deque::size_type new_capacity; + +namespace __rw { + +_RWSTD_SPECIALIZED_FUNCTION +inline Deque::size_type +__rw_new_capacity(Deque::size_type n, const Deque*) +{ + if (n) { + // non-zero size argument indicates a request for an increase + // in the capacity of a deque object's dynamically sizable + // vector of nodes + return n * 2; + } + + // zero size argument is a request for the initial size of a deque + // object's dynamically sizable vector of nodes or for the size of + // the objects's fixed-size buffer for elements + return new_capacity; +} + +} + +/**************************************************************************/ + +template +void test_swap (const T *lhs_seq, std::size_t lhs_seq_len, + const T *rhs_seq, std::size_t rhs_seq_len, + std::deque*, + const char *tname) +{ + typedef std::deque Deque; + typedef typename Deque::iterator Iterator; + typedef typename Deque::size_type SizeType; + + // create two containers from the provided sequences + Deque lhs (lhs_seq, lhs_seq + lhs_seq_len); + Deque rhs (rhs_seq, rhs_seq + rhs_seq_len); + + // save the begin and and iterators and the size + // of each container before swapping the objects + const Iterator lhs_begin_0 = lhs.begin (); + const Iterator lhs_end_0 = lhs.end (); + const SizeType lhs_size_0 = lhs.size (); + + const Iterator rhs_begin_0 = rhs.begin (); + const Iterator rhs_end_0 = rhs.end (); + const SizeType rhs_size_0 = rhs.size (); + + // swap the two containers + lhs.swap (rhs); + + // compute the begin and and iterators and the size + // of each container after swapping the objects + const Iterator lhs_begin_1 = lhs.begin (); + const Iterator lhs_end_1 = lhs.end (); + const SizeType lhs_size_1 = lhs.size (); + + const Iterator rhs_begin_1 = rhs.begin (); + const Iterator rhs_end_1 = rhs.end (); + const SizeType rhs_size_1 = rhs.size (); + + // verify that the iterators and sizes + // of the two objects were swapped + rw_assert (lhs_begin_0 == rhs_begin_1 && lhs_begin_1 == rhs_begin_0, + 0, __LINE__, + "begin() not swapped for \"%{X=*.*}\" and \"%{X=*.*}\"", + int (lhs_seq_len), -1, lhs_seq, + int (rhs_seq_len), -1, rhs_seq); + + rw_assert (lhs_end_0 == rhs_end_1 && lhs_end_1 == rhs_end_0, + 0, __LINE__, + "end() not swapped for \"%{X=*.*}\" and \"%{X=*.*}\"", + int (lhs_seq_len), -1, lhs_seq, + int 
(rhs_seq_len), -1, rhs_seq); + + rw_assert (lhs_size_0 == rhs_size_1 && lhs_size_1 == rhs_size_0, + 0, __LINE__, + "size() not swapped for \"%{X=*.*}\" and \"%{X=*.*}\"", + int (lhs_seq_len), -1, lhs_seq, + int (rhs_seq_len), -1, rhs_seq); + + // swap one of the containers with an empty unnamed temporary + // container and verify that the object is empty + { Deque ().swap (lhs); } + + const Iterator lhs_begin_2 = lhs.begin (); + const Iterator lhs_end_2 = lhs.end (); + const SizeType lhs_size_2 = lhs.size (); + + rw_assert (lhs_begin_2 == lhs_end_2, 0, __LINE__, + "deque<%s>().begin() not swapped for \"%{X=*.*}\"", + tname, int (rhs_seq_len), -1, rhs_seq); + + rw_assert (0 == lhs_size_2, 0, __LINE__, + "deque<%s>().size() not swapped for \"%{X=*.*}\"", + tname, int (rhs_seq_len), -1, rhs_seq); +} + + +template +void test_swap (const T*, const char* tname) +{ + rw_info (0, 0, 0, + "std::deque<%s>::swap(deque<%1$s>&)", tname); + + typedef std::deque > MyDeque; + typedef typename MyDeque::iterator Iterator; + + // create two empty deque objects + MyDeque empty [2]; + + // save their begin and end iterators before calling swap + const Iterator before [2][2] = { + { empty [0].begin (), empty [0].end () }, + { empty [1].begin (), empty [1].end () } + }; + + // swap the two containers + empty [0].swap (empty [1]); + + // get the new begin and end iterators + const Iterator after [2][2] = { + { empty [0].begin (), empty [0].end () }, + { empty [1].begin (), empty [1].end () } + }; + + // verify that the iterators have not been invalidated + rw_assert ( before [0][0] == after [1][0] + && before [1][0] == after [0][0], 0, __LINE__, + "deque<%s>().begin() not swapped", tname); + + rw_assert ( before [0][1] == after [1][1] + && before [1][1] == after [0][1], 0, __LINE__, + "deque<%s>().end() not swapped", tname); + + // static to zero-initialize if T is a POD type + static T seq [32]; + + const std::size_t seq_len = sizeof seq / sizeof *seq; + + for (std::size_t i = 0; i != seq_len; ++i) { + for (std::size_t j = 0; j != seq_len; ++j) { + test_swap (seq, i, seq, j, (MyDeque*)0, tname); + } + } +} + +/**************************************************************************/ + +void test_swap () +{ + test_swap ((int*)0, "int"); + test_swap ((X*)0, "X"); +} + +/**************************************************************************/ + +int run_test (int, char**) +{ + // Test std::swap calling std::deque::swap + test_std_swap (); + + static const Deque::size_type caps[] = { + 2, 3, 4, 5, 16, 32 + }; + + for (std::size_t i = 0; i != sizeof caps / sizeof *caps; ++i) { + + new_capacity = caps [i]; + + rw_info (0, 0, 0, + "__rw::__rw_new_capacity >(0) = %u", + _RW::__rw_new_capacity (0, (Deque*)0)); + + test_swap (); + } + + return 0; +} + +/**************************************************************************/ + +int main (int argc, char** argv) +{ + return rw_test (argc, argv, __FILE__, + "lib.deque.special", + 0 /* no comment */, + run_test, + 0 /* co command line options */); +} Propchange: incubator/stdcxx/trunk/tests/containers/23.deque.special.cpp ------------------------------------------------------------------------------ svn:eol-style = native Propchange: incubator/stdcxx/trunk/tests/containers/23.deque.special.cpp ------------------------------------------------------------------------------ svn:keywords = Id
Ask a Jedi: Where did Loci come from in Lighthouse Pro?

This really isn't a ColdFusion question, but I thought others may find it interesting. Jeff asks:

As I continue to evaluate and use Lighthouse Pro, something sparked my interest. You used the term "project loci", which clicked with me and made perfect sense. But my background with the term "loci" comes from genetics and biology (previous life). Do you have a biology background too?

The answer stems from the history of Lighthouse Pro, and directly relates to why there is a "Pro" in the name. Many, many, many moons ago, Nathan Dintenfass created a bug tracker. This bug tracker was very simple. It required no database, and since it was one file, you could drop it in and start using it in about 5 minutes. I loved the simplicity of it. I did some basic updates to his code and released a version named Lighthouse that was just a bit cleaned up. Anyway, "Loci" is the term Nathan had used, and it stuck.

When Macromedia approached me to write an application for the old DRK (boy, do I miss those), I proposed an update to Lighthouse that would be a bit more formal, i.e., actually use a database. Hence the "Pro".

Frankly, I know "Loci" confuses people and I keep meaning to change it to "Location" or "Area", but I forget. ;) I'd rather focus on adding features. Speaking of which, the next update to LHP will include milestones, and I'm considering a system by which you can add any field you want to a project issue, so if you want each issue to have a 'How Sucky' field, it will be possible.
Location / Organization association of Ansible roles / Salt states

Currently, an admin imports Ansible roles / Salt states, and all organizations / locations have access to these roles / states. For large companies with multiple organizations and locations, it makes sense to only show the necessary roles / states. Additionally, sometimes roles are specific to a certain organization and should not be used by another. Are there reasons why it is not currently possible to configure which roles / states can be used by each organization / location?

Nobody has an opinion? ping

Do you have an opinion @Marek_Hulan / @aruzicka ?

Sorry, I completely missed this thread somehow. I agree this would be nice. How do you suggest we do this? There can be a lot of roles, and editing each separately would be hard. Perhaps this should be inherited from the proxy we imported them from? OTOH, the host is not being associated with, e.g., an "Ansible" proxy.

Good idea to re-use the proxy association. Maybe we should have something like an "Ansible" proxy, even if it's just a "placeholder" feature. Maybe we should think about a nice React component which can do the association?
Building Modern Applications with Next.js and MongoDB

Published: Feb 07, 2020 • MongoDB • Atlas • ...

By Ado Kukic

Developers have more choices than ever before when it comes to choosing the technology stack for their next application. Developer productivity is one of the most important factors in choosing a modern stack, and I believe that Next.js coupled with MongoDB can get you up and running on the next great application in no time at all. Let's find out how and why!

If you would like to follow along with this tutorial, you can get the code from the GitHub repo. Also, be sure to sign up for a free MongoDB Atlas account to make it easier to connect your MongoDB database.

#What is Next.js

Next.js is a React-based framework for building modern web applications. The framework comes with a lot of powerful features, such as server-side rendering, automatic code splitting, and static exporting, that make it easy to build scalable and production-ready apps. Its opinionated nature means that the framework is focused on developer productivity, but it is still flexible enough to give developers plenty of choice when it comes to handling the big architectural decisions.

[Screenshot: the Next.js homepage]

For this tutorial, I'll assume that you are already familiar with React; if so, you'll be up and running with Next.js in no time at all. If you are not familiar with React, I would suggest looking at resources such as the official React docs or taking a free React starter course to get familiar with the framework first.

#What We're Building: Macro Compliance Tracker

The app we're building today is called the Macro Compliance Tracker. If you're like me, you probably had a New Years Resolution of "I'm going to get in better shape!" This year, I am taking that resolution seriously and have gotten a personal trainer and a nutritionist. One interesting thing that I learned is that while the old adage that calories in must be less than calories out to lose weight is generally true, your macronutrients play just as important a role in weight loss.

There are many great apps that help you track your calories and macros. Unfortunately, most apps do not allow you to track a range. Another interesting thing I learned in my fitness journey this year is that hitting exact daily macro targets is a challenge for many beginners, and many folks end up giving up when they fail to hit those targets consistently. For that reason, my coach suggests a target range for calories and macros rather than a hard set number.

[Screenshot: the finished Macro Compliance Tracker app]

So that's what we're building today. We'll use Next.js to build our entire application and MongoDB as our database to store our progress. Let's get into it!

#Setting up a Next.js Application

The easiest way to create a Next.js application is by using the official create-next-app npx command. To do that, we'll simply open up our Terminal window and type: npx create-next-app mct. "mct" is going to be the name of our application as well as the directory where our code is going to live.

[Screenshot: create-next-app running in the terminal]

Execute this command and a default application will be created. Once the files are created, navigate into the directory by running cd mct in the Terminal window and then execute npm run dev. This will start a development server for your Next.js application, which you'll be able to access at localhost:3000.

[Screenshot: the default Next.js welcome page]

Navigate to localhost:3000 and you should see a page very similar to the one in the above screenshot. If you see the Welcome to Next.js page you are good to go.
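As a quick aside, npm run dev works because create-next-app wires the Next.js CLI into npm scripts for you. The exact file depends on the version of the generator you run, so treat the following as a representative sketch of the generated package.json rather than its literal contents:

{
  "name": "mct",
  "version": "0.1.0",
  "scripts": {
    "dev": "next dev",
    "build": "next build",
    "start": "next start"
  },
  "dependencies": {
    "next": "latest",
    "react": "latest",
    "react-dom": "latest"
  }
}

Here, dev runs the hot-reloading development server we just started, build produces an optimized production bundle, and start serves that bundle.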
If not, I would suggest following the Next.js docs and troubleshooting tips to ensure proper setup.

#Next.js Directory Structure

Before we dive into building our application any further, let's quickly look at how Next.js structures our application. The default directory structure looks like this:

[Screenshot: the default Next.js directory structure]

The areas we're going to be focused on are the pages, components, and public directories. The .next directory contains the build artifacts for our application, and we should generally avoid making direct changes to it.

The pages directory will contain our application pages. Another way to think of these is that each file here will represent a single route in our application. Our default app only has the index.js page created, which corresponds to our home route. If we wanted to add a second page, for example, an about page, we can easily do that by just creating a new file called about.js. The name we give the file will correspond to the route. So let's go ahead and create an about.js file in the pages directory.

As I mentioned earlier, Next.js is a React-based framework, so all your React knowledge is fully transferable here. You can create components either as functions or as classes. I will be using the function-based approach. Feel free to grab the complete GitHub repo if you would like to follow along. Our About.js component will look like this:

import React from 'react'
import Head from 'next/head'
import Nav from '../components/nav'

const About = () => (
  <div>
    <Head>
      <title>About</title>
      <link rel="icon" href="/favicon.ico" />
    </Head>

    <Nav />

    <div>
      <h1>Macro Compliance Tracker!</h1>
      <p>
        This app will help you ensure your macros are within a selected
        range to help you achieve your New Years Resolution!
      </p>
    </div>
  </div>
)

export default About

Go ahead and save this file. Next.js will automatically rebuild the application and you should be able to navigate to http://localhost:3000/about now and see your new component in action.

[Screenshot: the unstyled About page]

Next.js will automatically handle all the routing plumbing and ensure the right component gets loaded. Just remember, whatever you name your file in the pages directory is what the corresponding URL will be.

#Adding Some Style with Tailwind.css

Our app is looking good, but from a design perspective, it's looking pretty bare. Let's add Tailwind.css to spruce up our design and make it a little easier on the eyes. Tailwind is a very powerful CSS framework, but for brevity we'll just import the base styles from a CDN and won't do any customizations. To do this, we'll simply add <link href="https://unpkg.com/tailwindcss@^1.0/dist/tailwind.min.css" rel="stylesheet"/> in the Head component of our pages.

Let's do this for our About component and also add some Tailwind classes to improve our design. Our next component should look like this:

import React from 'react'
import Head from 'next/head'
import Nav from '../components/nav'

const About = () => (
  <div>
    <Head>
      <title>About</title>
      <link rel="icon" href="/favicon.ico" />
      <link href="https://unpkg.com/tailwindcss@^1.0/dist/tailwind.min.css" rel="stylesheet" />
    </Head>

    <Nav />

    <div className="container mx-auto text-center">
      <h1 className="text-6xl m-12">Macro Compliance Tracker!</h1>
      <p className="text-xl">
        This app will help you ensure your macros are within a selected
        range to help you achieve your New Years Resolution!
      </p>
    </div>
  </div>
)

export default About

If we go and refresh our browser, the About page should look like this:

[Screenshot: the styled About page]

Good enough for now. If you want to learn more about Tailwind, check out their official docs here.

Note: If changes you make to your Next.js application, such as adding the classNames above, are not reflected when you refresh the page, restart the dev server.

#Creating Our Application

Now that we have our Next.js application set up and have familiarized ourselves with how creating components and pages works, let's get into building our Macro Compliance Tracker app. For our first implementation of this app, we'll put all of our logic in the main index.js page. Open the page up and delete all the existing Next.js boilerplate.

Before we write the code, let's figure out what features we'll need. We'll want to show the user their daily calorie and macro goals, as well as whether they're in compliance with their targeted range or not. Additionally, we'll want to allow the user to update their information every day. Finally, we'll want the user to be able to view previous days and see how they compare.

Let's create the UI for this first. We'll do it all in the Home component, and then start breaking it up into smaller individual components. Our code will look like this:

import React from 'react'
import Head from 'next/head'
import Nav from '../components/nav'

const Home = () => (
  <div>
    <Head>
      <title>Home</title>
      <link rel="icon" href="/favicon.ico" />
      <link href="https://unpkg.com/tailwindcss@^1.0/dist/tailwind.min.css" rel="stylesheet" />
    </Head>

    <div className="container mx-auto">
      <div className="flex text-center">
        <div className="w-full m-4">
          <h1 className="text-4xl">Macro Compliance Tracker</h1>
        </div>
      </div>

      <div className="flex text-center">
        <div className="w-1/3 bg-gray-200 p-4">Previous Day</div>
        <div className="w-1/3 p-4">1/23/2020</div>
        <div className="w-1/3 bg-gray-200 p-4">Next Day</div>
      </div>

      <div className="flex mb-4 text-center">
        <div className="w-1/4 p-4 bg-green-500 text-white">
          <h2 className="text-3xl font-bold">1850
            <div className="flex text-sm p-4">
              <div className="w-1/3">1700</div>
              <div className="w-1/3 font-bold">1850</div>
              <div className="w-1/3">2000</div>
            </div>
          </h2>
          <h3 className="text-xl">Calories</h3>
        </div>
        <div className="w-1/4 p-4 bg-red-500 text-white">
          <h2 className="text-3xl font-bold">195
            <div className="flex text-sm p-4">
              <div className="w-1/3">150</div>
              <div className="w-1/3 font-bold">160</div>
              <div className="w-1/3">170</div>
            </div>
          </h2>
          <h3 className="text-xl">Carbs</h3>
        </div>
        <div className="w-1/4 p-4 bg-green-500 text-white">
          <h2 className="text-3xl font-bold">55
            <div className="flex text-sm p-4">
              <div className="w-1/3">50</div>
              <div className="w-1/3 font-bold">60</div>
              <div className="w-1/3">70</div>
            </div>
          </h2>
          <h3 className="text-xl">Fat</h3>
        </div>
        <div className="w-1/4 p-4 bg-blue-500 text-white">
          <h2 className="text-3xl font-bold">120
            <div className="flex text-sm p-4">
              <div className="w-1/3">145</div>
              <div className="w-1/3 font-bold">160</div>
              <div className="w-1/3">175</div>
            </div>
          </h2>
className="text-xl">Protein</h3> </div> </div> <div className="flex"> <div className="w-1/3"> <h2 className="text-3xl p-4">Results</h2> <div className="p-4"> <label className="block">Calories</label> <input type="number" className="bg-gray-200 text-gray-700 border rounded py-3 px-4 mb-3 leading-tight focus:outline-none focus:bg-white"></ input> </div> <div className="p-4"> <label className="block">Carbs</label> <input type="number" className="bg-gray-200 text-gray-700 border rounded py-3 px-4 mb-3 leading-tight focus:outline-none focus:bg-white"></ input> </div> <div className="p-4"> <label className="block">Fat</label> <input type="number" className="bg-gray-200 text-gray-700 border rounded py-3 px-4 mb-3 leading-tight focus:outline-none focus:bg-white"></ input> </div> <div className="p-4"> <label className="block">Protein</label> <input type="number" className="bg-gray-200 text-gray-700 border rounded py-3 px-4 mb-3 leading-tight focus:outline-none focus:bg-white"></ input> </div> <div className="p-4"> <button className="bg-blue-500 hover:bg-blue-700 text-white font-bold py-2 px-4 rounded"> Save </button> </div> </div> <div className="w-1/3"> <h2 className="text-3xl p-4">Target</h2> <div className="p-4"> <label className="block">Calories</label> <input type="number" className="bg-gray-200 text-gray-700 border rounded py-3 px-4 mb-3 leading-tight focus:outline-none focus:bg-white"></ input> </div> <div className="p-4"> <label className="block">Carbs</label> <input type="number" className="bg-gray-200 text-gray-700 border rounded py-3 px-4 mb-3 leading-tight focus:outline-none focus:bg-white"></ input> </div> <div className="p-4"> <label className="block">Fat</label> <input type="number" className="bg-gray-200 text-gray-700 border rounded py-3 px-4 mb-3 leading-tight focus:outline-none focus:bg-white"></ input> </div> <div className="p-4"> <label className="block">Protein</label> <input type="number" className="bg-gray-200 text-gray-700 border rounded py-3 px-4 mb-3 leading-tight focus:outline-none focus:bg-white"></ input> </div> <div className="p-4"> <button className="bg-blue-500 hover:bg-blue-700 text-white font-bold py-2 px-4 rounded"> Save </button> </div> </div> <div className="w-1/3"> <h2 className="text-3xl p-4">Variance</h2> <div className="p-4"> <label className="block">Calories</label> <input type="number" className="bg-gray-200 text-gray-700 border rounded py-3 px-4 mb-3 leading-tight focus:outline-none focus:bg-white"></ input> </div> <div className="p-4"> <label className="block">Carbs</label> <input type="number" className="bg-gray-200 text-gray-700 border rounded py-3 px-4 mb-3 leading-tight focus:outline-none focus:bg-white"></ input> </div> <div className="p-4"> <label className="block">Fat</label> <input type="number" className="bg-gray-200 text-gray-700 border rounded py-3 px-4 mb-3 leading-tight focus:outline-none focus:bg-white"></ input> </div> <div className="p-4"> <label className="block">Protein</label> <input type="number" className="bg-gray-200 text-gray-700 border rounded py-3 px-4 mb-3 leading-tight focus:outline-none focus:bg-white"></ input> </div> <div className="p-4"> <button className="bg-blue-500 hover:bg-blue-700 text-white font-bold py-2 px-4 rounded"> Save </button> </div> </div> </div> </div> </div> ) export default Home And this will result in our UI looking like this: MCT App There is a bit to unwind here. So let's take a look at it piece by piece. At the very top we have a simple header that just displays the name of our application. 
Next, we have our day information and selection options. After that, we have our daily results showing whether we are in compliance or not for the selected day. If we are within the suggested range, the background is green. If we are over the range, meaning we've had too much of a particular macro, the background is red, and if we under-consumed a particular macro, the background is blue. Finally, we have our form, which allows us to update our daily results, our target calories and macros, as well as the variance for our range.

Our code right now is all in one giant component and fairly static. Next, let's break up our giant component into smaller parts and add our front-end functionality so we're at least working with non-static data. We'll create our components in the components directory and then import them into our index.js page component. Components we create in the components directory can be used across multiple pages with ease, giving us reusability if we add multiple pages to our application.

The first component that we'll create is the result component. The result component is the green, red, or blue block that displays our result as well as our target and variance ranges. Our component will look like this:

```jsx
import React, { useState, useEffect } from 'react'

const Result = ({ results }) => {
  let [bg, setBg] = useState("");

  useEffect(() => {
    setBackground()
  });

  const setBackground = () => {
    let min = results.target - results.variant;
    let max = results.target + results.variant;

    if (results.total >= min && results.total <= max) {
      setBg("bg-green-500");
    } else if (results.total < min) {
      setBg("bg-blue-500");
    } else {
      setBg("bg-red-500")
    }
  }

  return (
    <div className={bg + " w-1/4 p-4 text-white"}>
      <h2 className="text-3xl font-bold">{results.total}
        <div className="flex text-sm p-4">
          <div className="w-1/3">{results.target - results.variant}</div>
          <div className="w-1/3 font-bold">{results.target}</div>
          <div className="w-1/3">{results.target + results.variant}</div>
        </div>
      </h2>
      <h3 className="text-xl">{results.label}</h3>
    </div>
  )
}

export default Result
```

This will allow us to feed this component dynamic data, and based on the data provided, we'll display the correct background as well as target ranges for our macros. We can now simplify our index.js page component by removing all the boilerplate code and replacing it with:

```jsx
<div className="flex mb-4 text-center">
  <Result results={results.calories} />
  <Result results={results.carbs} />
  <Result results={results.fat} />
  <Result results={results.protein} />
</div>
```

Let's also go ahead and create some dummy data for now. We'll get to retrieving live data from MongoDB soon, but for now let's just create some data in-memory like so:

```jsx
const Home = () => {
  let data = {
    calories: {
      label: "Calories",
      total: 1840,
      target: 1840,
      variant: 15
    },
    carbs: {
      label: "Carbs",
      total: 190,
      target: 160,
      variant: 15
    },
    fat: {
      label: "Fat",
      total: 55,
      target: 60,
      variant: 10
    },
    protein: {
      label: "Protein",
      total: 120,
      target: 165,
      variant: 10
    }
  }

  const [results, setResults] = useState(data);

  return ( ... )
}
```

If we look at our app now, it won't look very different at all. And that's OK. All we've done so far is change how our UI is rendered, moving it from hard-coded static values to an in-memory object. Next, let's go ahead and make our form work with this in-memory data.
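One small detail before moving on: the Home component now calls the useState hook, so the hook has to be imported at the top of index.js. A one-line sketch of the adjusted import:

```jsx
import React, { useState } from 'react'
```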
Since our forms are very similar, we can create a component here as well and reuse it. We will create a new component called MCTForm, and in this component we'll pass in our data, a name for the form, and an onChange handler that will update the data dynamically as we change the values in the input boxes. Also, for simplicity, we'll remove the Save button from each form and move it outside. This will allow the user to make changes to their data in the UI, and when the user wants to lock in the changes and save them to the database, they'll hit the Save button. So our Home component will now look like this:

```jsx
const Home = () => {
  let data = {
    calories: {
      label: "Calories",
      total: 1840,
      target: 1850,
      variant: 150
    },
    carbs: {
      label: "Carbs",
      total: 190,
      target: 160,
      variant: 15
    },
    fat: {
      label: "Fat",
      total: 55,
      target: 60,
      variant: 10
    },
    protein: {
      label: "Protein",
      total: 120,
      target: 165,
      variant: 10
    }
  }

  const [results, setResults] = useState(data);

  const onChange = (e) => {
    const data = { ...results };

    let name = e.target.name;
    let resultType = name.split(" ")[0].toLowerCase();
    let resultMacro = name.split(" ")[1].toLowerCase();

    data[resultMacro][resultType] = e.target.value;

    setResults(data);
  }

  return (
    <div>
      <Head>
        <title>Home</title>
        <link rel="icon" href="/favicon.ico" />
        <link href="https://unpkg.com/tailwindcss@^1.0/dist/tailwind.min.css" rel="stylesheet" />
      </Head>

      <div className="container mx-auto">
        <div className="flex text-center">
          <div className="w-full m-4">
            <h1 className="text-4xl">Macro Compliance Tracker</h1>
          </div>
        </div>

        <div className="flex text-center">
          <div className="w-1/3 bg-gray-200 p-4">Previous Day</div>
          <div className="w-1/3 p-4">1/23/2020</div>
          <div className="w-1/3 bg-gray-200 p-4">Next Day</div>
        </div>

        <div className="flex mb-4 text-center">
          <Result results={results.calories} />
          <Result results={results.carbs} />
          <Result results={results.fat} />
          <Result results={results.protein} />
        </div>

        <div className="flex">
          <MCTForm data={results} item="Total" onChange={onChange} />
          <MCTForm data={results} item="Target" onChange={onChange} />
          <MCTForm data={results} item="Variant" onChange={onChange} />
        </div>

        <div className="flex text-center">
          <div className="w-full m-4">
            <button className="bg-blue-500 hover:bg-blue-700 text-white font-bold py-2 px-4 rounded">
              Save
            </button>
          </div>
        </div>
      </div>
    </div>
  )
}

export default Home
```

Aside from cleaning up the UI code, we also added an onChange function that will be called every time the value of one of the input boxes changes. The onChange function determines which box was changed and updates the data value accordingly, as well as re-rendering the UI to show the new changes.

Next, let's take a look at our implementation of the MCTForm component.
```jsx
import React from 'react'

const MCTForm = ({ data, item, onChange }) => {
  return (
    <div className="w-1/3">
      <h2 className="text-3xl p-4">{item}</h2>
      <div className="p-4">
        <label className="block">Calories</label>
        <input type="number" name={item + " Calories"} className="bg-gray-200 text-gray-700 border rounded py-3 px-4 mb-3 leading-tight focus:outline-none focus:bg-white" onChange={(e) => onChange(e)} />
      </div>
      <div className="p-4">
        <label className="block">Carbs</label>
        <input type="number" name={item + " Carbs"} className="bg-gray-200 text-gray-700 border rounded py-3 px-4 mb-3 leading-tight focus:outline-none focus:bg-white" onChange={(e) => onChange(e)} />
      </div>
      <div className="p-4">
        <label className="block">Fat</label>
        <input type="number" name={item + " Fat"} className="bg-gray-200 text-gray-700 border rounded py-3 px-4 mb-3 leading-tight focus:outline-none focus:bg-white" onChange={(e) => onChange(e)} />
      </div>
      <div className="p-4">
        <label className="block">Protein</label>
        <input type="number" name={item + " Protein"} className="bg-gray-200 text-gray-700 border rounded py-3 px-4 mb-3 leading-tight focus:outline-none focus:bg-white" onChange={(e) => onChange(e)} />
      </div>
    </div>
  )
}

export default MCTForm
```

As you can see, this component is in charge of rendering our forms. Since the input boxes are the same for all three types of forms, we can reuse the component multiple times and just change the type of data we are working with.

Again, if we look at our application in the browser now, it doesn't look much different. But now the form works. We can change the values, and the application will update dynamically, showing our new total calories and macros and whether or not we are in compliance with our goals. Go ahead and play around with it for a little bit to make sure it all works.

#Connecting Our Application to MongoDB

Our application is looking good. It also works. But the data is all in memory. As soon as we refresh our page, all the data is reset to the default values. In this sense, our app is not very useful. So our next step will be to connect our application to a database so that we can start seeing our progress over time. We'll use MongoDB and MongoDB Atlas to accomplish this.

#Setting Up Our MongoDB Database

Before we can save our data, we'll need a database. For this I'll use MongoDB and MongoDB Atlas to host my database. If you don't already have MongoDB Atlas, you can sign up and use it for free here; otherwise, go into an existing cluster and create a new database. Inside MongoDB Atlas, I will use an existing cluster and set up a new database called MCT. With this new database created, I will create a new collection called daily that will store my daily results, target macros, as well as allowed variants.

[Screenshot: MongoDB Atlas]

With my database set up, I will also add a few days' worth of data. Feel free to add your own data or, if you'd like the dataset I'm using, you can get it here. I will use MongoDB Compass to import and view the data, but you can import the data however you want: use the CLI, add it in manually, or use Compass.

Thanks to MongoDB's document model, I can represent the data exactly as I had it in-memory. The only additional fields in my MongoDB model are an _id field that serves as a unique identifier for the document and a date field that associates the data with a specific date. The image below shows the data model for one document in MongoDB Compass.
[Screenshot: MongoDB Compass document view]

Now that we have some real data to work with, let's go ahead and connect our Next.js application to our MongoDB database. Since Next.js is a React-based framework that runs Node server-side, we will use the excellent MongoDB Node driver to facilitate this connection.

#Connecting Next.js to MongoDB Atlas

Our pages and components directory renders both server-side on the initial load and client-side on subsequent page changes. The MongoDB Node driver works only on the server side and assumes we're working on the backend. Not to mention that our credentials to MongoDB need to be kept secure and never shared with the client.

Not to worry, though; this is where Next.js shines. In the pages directory, we can create an additional special directory called api. In this api directory, as the name implies, we can create API endpoints that are executed exclusively on the backend. The best way to see how this works is to go and create one, so let's do that next. In the pages directory, create an api directory, and there create a new file called daily.js. In the daily.js file, add the following code:

```js
export default (req, res) => {
  res.statusCode = 200
  res.setHeader('Content-Type', 'application/json')
  res.end(JSON.stringify({ message: 'Hello from the Daily route' }))
}
```

Save the file, go to your browser, and navigate to localhost:3000/api/daily. What you'll see is the JSON response {"message": "Hello from the Daily route"}. This code is only ever run server side, and the only thing the browser receives is the response we send. This seems like the perfect place to set up our connection to MongoDB.

[Screenshot: API endpoint response]

While we could set up the connection in this daily.js file, in a real-world application we are likely to have multiple API endpoints, and for that reason it's probably a better idea to establish our database connection in a middleware function that we can pass to all of our API routes. So, as a best practice, let's do that here.

Create a new directory at the root of the project structure, alongside pages and components, and call it middleware. The middleware name is not reserved, so you could technically call it whatever you want, but I'll stick to middleware for the name. In this new directory, create a file called database.js. This is where we will set up our connection to MongoDB as well as instantiate the middleware so we can use it in our API routes.

Our database.js middleware code will look like this:

```js
import { MongoClient } from 'mongodb';
import nextConnect from 'next-connect';

const client = new MongoClient('{YOUR-MONGODB-CONNECTION-STRING}', {
  useNewUrlParser: true,
  useUnifiedTopology: true,
});

async function database(req, res, next) {
  if (!client.isConnected()) await client.connect();
  req.dbClient = client;
  req.db = client.db('MCT');
  return next();
}

const middleware = nextConnect();

middleware.use(database);

export default middleware;
```

If you are following along, be sure to replace the {YOUR-MONGODB-CONNECTION-STRING} placeholder with your connection string, and ensure that the client.db argument matches the name you gave your database. Database names are case sensitive, by the way. Also be sure to run npm install --save mongodb next-connect to ensure you have all the correct dependencies.

Save this file, and now open up the daily.js file located in the pages/api directory. We will have to update this file.
Since we now want to add a piece of middleware to our function, we will no longer be using an anonymous function here. We'll utilize next-connect to give us a handler chain as well as allow us to chain middleware to the function. Let's take a look at what this will look like.

```js
import nextConnect from 'next-connect';
import middleware from '../../middleware/database';

const handler = nextConnect();

handler.use(middleware);

handler.get(async (req, res) => {
  let doc = await req.db.collection('daily').findOne()
  console.log(doc);
  res.json(doc);
});

export default handler;
```

As you can see, we now have a handler object that gives us much more flexibility. We can use different HTTP verbs, add our middleware, and more. What the code above does is connect to our MongoDB Atlas cluster, and from the MCT database and daily collection, find and return one item, which is then rendered to the screen. If we hit localhost:3000/api/daily now in our browser, we'll see this:

[Screenshot: daily API response]

Woohoo! We have our data, and the data model matches our in-memory data model, so our next step will be to use this real data instead of our in-memory sample. To do that, we'll open up the index.js page.

Our main component is currently instantiated with an in-memory data model that the rest of our app acts upon. Let's change this. Next.js gives us a couple of different ways to do this. We can always get the data asynchronously from our React component, and if you've used React in the past this should be second nature, but since we're using Next.js I think there is a different and perhaps better way to do it.

Each Next.js page component allows us to fetch data server-side thanks to a function called getStaticProps. When this function is called, the initial page load is rendered server-side, which is great for SEO. The page doesn't render until this function completes. In index.js, we'll make the following changes:

```js
import fetch from 'isomorphic-unfetch'

const Home = ({ data }) => { ... }

export async function getStaticProps(context) {
  const res = await fetch("http://localhost:3000/api/daily");
  const json = await res.json();
  return {
    props: {
      data: json,
    },
  };
}

export default Home
```

Install the isomorphic-unfetch library by running npm install --save isomorphic-unfetch, then below your Home component add the getStaticProps method. In this method we're just making a fetch call to our daily API endpoint and storing that JSON data in a prop called data. Since we created a data prop, we then pass it into our Home component, and at this point we can go and remove our in-memory data variable. Do that, save the file, and refresh your browser.

Congrats! Your data is now coming live from MongoDB. But at the moment, it's only giving us one result. Let's make a few final tweaks so that we can see daily results, as well as update the data and save it in the database.

#View Macro Compliance Tracker Data By Day

The first thing we'll do is add the ability to hit the Previous Day and Next Day buttons and display the corresponding data. We won't be creating a new endpoint, since I think our daily API endpoint can do the job; we'll just have to make a few enhancements. Let's do those first.
Our new daily.js API file will look as such:

```js
// Note: ObjectID comes from the mongodb package, so make sure it is imported
// at the top of the file: import { ObjectID } from 'mongodb';
handler.get(async (req, res) => {
  const { date } = req.query;

  const dataModel = {
    "_id": new ObjectID(),
    "date": date,
    "calories": { "label": "Calories", "total": 0, "target": 0, "variant": 0 },
    "carbs": { "label": "Carbs", "total": 0, "target": 0, "variant": 0 },
    "fat": { "label": "Fat", "total": 0, "target": 0, "variant": 0 },
    "protein": { "label": "Protein", "total": 0, "target": 0, "variant": 0 }
  }

  let doc = {}

  if (date) {
    doc = await req.db.collection('daily').findOne({ date: new Date(date) })
  } else {
    doc = await req.db.collection('daily').findOne()
  }

  if (doc == null) {
    doc = dataModel
  }

  res.json(doc)
});
```

We made a couple of changes here, so let's go through them one by one. First, we look for a date query parameter to see if one was passed to us. If a date parameter was not passed, we just pick a random item using the findOne method. But if we did receive a date, we query our MongoDB database against that date and return the data for that specified date.

Next, as our data set is not exhaustive, if we go too far forwards or backwards we'll eventually run out of data to display, so we create an empty in-memory object that serves as our data model. If we don't have data for a specified date in our database, we just set everything to 0 and serve that. This way we don't have to do a whole lot of error handling on the front end and can always count on our backend to serve some type of data.

Now, open up the index.js page and let's add the functionality to see the previous and next days. We'll make use of dayjs to handle our dates, so install it by running npm install --save dayjs first. Then make the following changes to your index.js page:

```js
// Other imports ...
import dayjs from 'dayjs'

const Home = ({ data }) => {
  const [results, setResults] = useState(data);

  const onChange = (e) => {
    // unchanged from before
  }

  const getDataForPreviousDay = async () => {
    let currentDate = dayjs(results.date);
    let newDate = currentDate.subtract(1, 'day').format('YYYY-MM-DDTHH:mm:ss')
    const res = await fetch('http://localhost:3000/api/daily?date=' + newDate)
    const json = await res.json()

    setResults(json);
  }

  const getDataForNextDay = async () => {
    let currentDate = dayjs(results.date);
    let newDate = currentDate.add(1, 'day').format('YYYY-MM-DDTHH:mm:ss')
    const res = await fetch('http://localhost:3000/api/daily?date=' + newDate)
    const json = await res.json()

    setResults(json);
  }

  return (
    <div className="flex text-center">
      <div className="w-1/3 bg-gray-200 p-4"><button onClick={getDataForPreviousDay}>Previous Day</button></div>
      <div className="w-1/3 p-4">{dayjs(results.date).format('MM/DD/YYYY')}</div>
      <div className="w-1/3 bg-gray-200 p-4"><button onClick={getDataForNextDay}>Next Day</button></div>
    </div>
  )
}
```

We added two new methods: one to get the data from the previous day and one to get the data from the following day. In our UI, we also made the date label dynamic so that it displays and tells us which day we are currently looking at. With these changes, go ahead and refresh your browser, and you should be able to see the data for the days you have entered in your database. If a particular date does not exist, it will show 0's for everything.
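Before wiring anything further into the UI, you can sanity-check the date handling by hitting the endpoint directly. A quick sketch you could paste into the browser console (the date value here is just an example; use one that exists in your own collection):

```js
// Fetch the document for a specific day from the API route
const res = await fetch('http://localhost:3000/api/daily?date=2020-01-23T00:00:00');
console.log(await res.json()); // either a stored document or the zeroed-out data model
```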
[Screenshot: MCT app showing a day with no data]

#Saving and Updating Data In MongoDB

Finally, let's close out this tutorial by adding the last piece of functionality to our app, which will be to make updates and save new data into our MongoDB database. Again, I don't think we need a new endpoint for this, so we'll use our existing daily.js API. Since we're using the handler convention and currently just handle the GET verb, let's extend it by adding logic to handle a POST to the endpoint.

```js
handler.post(async (req, res) => {
  let data = req.body;
  data = JSON.parse(data);
  data.date = new Date(data.date);

  let doc = await req.db.collection('daily').updateOne({ date: new Date(data.date) }, { $set: data }, { upsert: true })

  res.json({ message: 'ok' });
})
```

The code is pretty straightforward. We get our data in the body of the request, parse it, and then save it to our MongoDB daily collection using the updateOne() method. Let's take a closer look at the values we're passing into the updateOne() method.

The first value we pass is what we match against, so if we find that the specific date already has data in our collection, we'll update it. The second value is the data we are setting, and in our case we just set whatever the front-end client sends us. Finally, we set the upsert value to true. What this does is: if we cannot match on an existing date, meaning we don't have data for that date already, we go ahead and create a new record.

With our backend implementation complete, let's add the functionality on our front end so that when the user hits the Save button, the data gets properly updated. Open up the index.js file and make the following changes:

```js
const Home = ({ data }) => {
  const updateMacros = async () => {
    const res = await fetch('http://localhost:3000/api/daily', {
      method: 'post',
      body: JSON.stringify(results)
    })
  }

  return (
    <div className="flex text-center">
      <div className="w-full m-4">
        <button className="bg-blue-500 hover:bg-blue-700 text-white font-bold py-2 px-4 rounded" onClick={updateMacros}>
          Save
        </button>
      </div>
    </div>
  )
}
```

Our new updateMacros method makes a POST request to our daily API endpoint with the new data. Try it now! You should be able to update existing macros or create data for new days that you don't already have any data for. We did it!

#Putting It All Together

We went through a lot in today's tutorial. Next.js is a powerful framework for building modern web applications, and having a flexible database powered by MongoDB made it possible to build a fully fledged application in no time at all. There were a couple of items we omitted for brevity, such as error handling and deployment, but feel free to clone the application from GitHub, sign up for MongoDB Atlas for free, and build on top of this foundation.
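Error handling was one of the items omitted for brevity. As a starting point, here is one possible sketch of how updateMacros could surface the outcome of the save to the user (the message state and its rendering are assumptions, not part of the tutorial code):

```js
const [message, setMessage] = useState('');  // hypothetical feedback state

const updateMacros = async () => {
  try {
    const res = await fetch('http://localhost:3000/api/daily', {
      method: 'post',
      body: JSON.stringify(results)
    });
    if (!res.ok) {
      throw new Error('Request failed with status ' + res.status);
    }
    setMessage('Saved!');
  } catch (err) {
    setMessage('Could not save your data: ' + err.message);
  }
};
```

Rendering {message} somewhere near the Save button would then give users immediate feedback instead of a silent failure.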
Question (asked by "visual process"): How to enable ProGuard obfuscation in Android Studio?

I have to protect my app by enabling ProGuard obfuscation in Android Studio. I have searched for the process of how to apply it, but I did not get any clear solution; when I try it, I always get an error. So can anyone tell me the clear steps to apply it in my app?

I am doing this with the following steps:

1. In Android Studio, open up an Android project.
2. Change to Project View.
3. Change the following line in the module's build.gradle: minifyEnabled false to minifyEnabled true.
4. Set ProGuard rules (optional):
   4.1 In Project View, select the proguard-rules.pro file.
   4.2 Add the following lines to tell ProGuard not to obfuscate certain classes:

```
-keepclassmembers class com.dom925.xxxx {
    public *;
}
```

The error that I am getting by following these steps is:

```
Error: Execution failed for task ':app:packageRelease'.
Unable to compute hash of D:\Android\Pojectname\app\build\intermediates\classes-proguard\release\classes.jar
```
__label__pos
0.985402
Often there are multiple reasonable ways to organize science data within files, and files within an HLSP collection. This article provides advice and some best practices to make the process of incorporating your data in MAST go smoothly.  On this page... File Organization The best organization for files delivered to MAST for an HLSP collection depends mostly upon the number of files, and secondarily on the nature of the products. For small collections consisting of perhaps a few dozen files: it is acceptable to put all files in a single directory. For larger collections it may be better to organize the files in a directory tree, with subfolders named (for instance) by target or field identifier. If organizing by some directory structure, please keep files that apply to the full collection (i.e., the README, the project summary for the Web home page, etc.) in the root directory so that MAST staff can locate them easily.  If you are uploading products for a new data release (new and/or updated products), please place them in a sub-folder of the delivery area, with a name like "/dr2" to indicates the data release ID associated with those products. The arrangement of files into a directory tree is mostly for the convenience of the contributing team in preparing the collection, and the MAST team in validating and moving the products to our mass storage devices. The presentation of the collection products in MAST interfaces (e.g., the Portal) does not depend upon the submitted file structure. Data Organization There is typically more than one reasonable way to organize data within or among files. In the absence of community standards, the following guidelines will help to ensure that users can: • retrieve data from MAST without technical problems • use the products with widely available community tools • identify the various components of data products (e.g., science vs. error arrays) easily While many of the guidelines in the following subsections for science data are described in the context of FITS-format, most apply to other formats as well.  Images For organizing data within images, consider the following strategies: • Concomitant data: Put pixel-level arrays of error/uncertainty, data quality flags, exposure maps, etc. in the same file with the science arrays.  • For image data, put the science array in a FITS image extension that includes the keyword EXTNAME = SCI, and put concomitant data in additional extensions with appropriate extension names like "ERR", "DQ", "EXP_MAP", and "WEIGHT". • If data are placed in FITS extensions, do not place any data in the Primary header-data unit (PHDU).  • File size: While there is no hard upper-limit, it is often best to keep the size of individual files under about 1 GB. This will facilitate downloads for users with poor internet connectivity. This advice may be at odds with storing concomitant data in the same file as the science pixels; if so, consider storing concomitant data in separate files if doing so doesn't unduly increase the complexity of the file organization.  • Be sure that the data types of your arrays are consistent with the required precision. For example, 64-bit floating point precision is rarely needed for any quantity other than values of coordinates or timestamps. Similarly, data quality masks may require only 16-bit (short integer) precision.  • Null data: Try to avoid creating arrays with large numbers of missing or null pixels. 
For combined images, this may be as simple as choosing an orientation for the combined array that naturally captures the footprints of all contributing images with minimal dead area; the world coordinate system (WCS) keywords will let downstream applications know the physical orientation without wasting memory.  • Image maps: if you have created multiple spatial maps of physical quantities for a given target (e.g., reddening, temperature, star formation rate) for a given target, consider putting them in image extensions within a single file. This will keep information about each target together, and also make it easier to follow the file naming requirements Spectra There are two main approaches to storing spectra in files: in images or in tables. Here, spectral data includes pixel-level science and concomitant data, including arrays of: flux(density), uncertainty, data quality (DQ) flags, weights, and wavelength (if tabulated).  Science-ready spectra have a variety of types, including • Spectral image cubes, such as those generated with IFUs, stored as cubes with one dispersion axis and two spatial axes • Long-slit spectra, with one dispersion and one spatial axis • Time-series spectra, with one dispersion and one temporal axis HSLP contributors may wish to provide more than one type of spectrum, e.g., long-slit and a reference 1-D extraction. It is generally better to provide separate types of products in separate files. The strategies for arranging data are summarized below. Spectra in tables Two of the most common community conventions for storing one-dimensional extracted spectra in FITS files are:  1. One spectrum per BINTABLE extension, such that 1-D arrays are stored in separate fields, one (wavelength, flux, err, dq) tuple per row. 2. Multiple spectra per BINTABLE extension, with one spectrum per table row. In this case each cell of (wavelength, flux, err, dq) contains an array of the same length.  A variation on the above options is to express the wavelength array with a function in FITS keywords. If every spectrum in an extension has the same wavelength array, you can use single-valued WCS keywords to describe the function. If the WCS function changes as a function of row in a BINTABLE, you can expand these WCS keywords into BINTABLE columns. This strategy works well for simple 1-D spectra, separate orders of echelle spectra, and Multi-object spectra (from MOS spectrograms of separate targets in a small field of view). Multi-dimensional spectra can also be stored in tables, but it becomes more complicated to describe the WCS in a compact way.  Spectra in images Many spectra derive from spectrograms that are multi-dimensional, where the other dimension(s) may be spatial or temporal. These data are sometimes represented as images with two or more dimensions; the dispersion is most commonly expressed as a function, rather than tabulated in a separate array. Examples include:  • Long-slit spectra are stored as images with one dispersion and one spatial (cross-dispersion) axis. (Note: it is possible in this case to use an additional, degenerate spatial axis to provide equatorial coordinates (RA, Dec) at all spatial  positions in a long-slit spectrum. Consult MAST staff for details.) • Spectral image cubes (sometimes called hyperspectral cubes), where the arrays have one spectral and two spatial axes. In this case the WCS is commonly characterized with a function rather than a separate, tabulated array (in a separate extension). The concomitant data are stored in separate extensions.  
• Spectral time series, with a dispersion axis and a temporal axis. The spectral coordinate is commonly characterized with a function; the time coordinate may also be included in the WCS if the temporal sampling is regular.  Descriptions of spectra Consider the following organizational strategies: • Concomitant data: Put pixel-level arrays of error/uncertainty, data quality flags, data quality, etc. in separate columns within the same extension. • Use suggestive column names, e.g. FLUX, WAVELENGTHERR, DQ, WEIGHT • Constant data: Scalar, date, or categorical data that vary among spectra should be stored in separate columns. Scalar/categorical data that are common to all spectra in the extension may instead be stored in the extension header. Consult MAST staff for details.  Catalogs Source catalogs are commonly stored as binary tables (e.g. FITS BINTABLE extensions), with one row per source and columns to contain various quantities (source name, world coordinates, brightness measurements, errors, etc.). It is critical for users that the fields (columns) be properly annotated with units (where applicable), and also with the Virtual Observatory uniform content descriptors (UCD) designations for each column of quantities provided. Metadata that apply to the full catalog should be provided in the FITS primary or extension header (see Required Metadata: Catalog Metadata).  In some cases the catalog data are complex, and can be best expressed as relationships between data in multiple tables. FITS format does not capture such complicated data well; a better choice is SQLite, which is a serverless database. There are community tools for creating and operating on these data, including the SQLite DB Browser, and python libraries support access to data in this format. Consult MAST staff for details.  Metadata within files In order for MAST to provide search interfaces for HLSP data, metadata within files needs to specify the spatial, spectral, temporal, and energy coverage of the data product. Metadata must also specify enough provenance and other information for a user to understand the data product. See Required Metadata for details.  Where to store metadata For data products stored in FITS files, metadata take the form of header keywords. But which keywords go in which FITS extension? The following advice will help users and applications discover and use important metadata in your products:  • Store metadata that are applicable to every extension in the primary header (PHDU).  • DOI, HLSPID, HLSPLEAD, HLSPNAME, HLSPVER, LICENSELICENURL OBSERVAT, TELESCOP, etc. • If you have metadata that are required to interpret the data inside extensions, store these metadata within each such extension; one cannot assume that FITS readers will associate metadata in the primary header with metadata in extension headers. • WCS keywords CDi_j, CRPIXj, CRVALi, RADESYS, WCSAXES, etc. • Coordinate reference systems: RADESYS, TIMESYS • Store metadata that document the various products that were combined to make the final product in a separate BINTABLE extension, with EXTNAME= 'PROVENANCE'. See Provenance Metadata for details.  The required metadata should also appear in data files that are not in FITS format (such as ASCII or ASDF), but the form that they take may differ. It is important to update metadata for combined products, and to delete metadata that are no longer applicable. 
For example, keywords such as DATE-OBS may be inherited from files that contribute to a product, in which case the value (if retained) should reflect the date of the first observation. Units Units are specified in data files with ASCII strings, and appear in FITS header keywords such as BUNIT (in image extensions) and TUNIT (for columns in table extensions). They are composed of a set of unit substrings; the concept of unit substrings is defined in the FITS v4 Standard (see Sect. 4.3, tables 3, 4, and 5). The Standard allows for valid unit substrings to be combined in multiple ways, but it is best to use simpler syntax when possible, e.g., use "erg/cm^2/s" or "erg cm-2 s-1" rather than "erg*cm**(-2)*s**(-1)". Group substrings with parentheses in cases where necessary to clarify the meaning. A few common unit strings and our recommended FITS-style expressions are given in the table below; for additional examples of allowed units see Sect 2.4 of Units in the Virtual Observatory QuantityUnit StringMeaning plane angledegdegree of arc arcsecsecond of arc, 1/3600 deg masmilli-second of arc, 1/3600000 deg flux densityerg/cm^2/s/Angstromerg cm-2 s-1 Å-1 Jyjansky mag(stellar) magnitude eventaduanalog-to-digital unit electroncount of electrons† ct or countcount ph or photonphoton lengthAUastronomical unit pcparsec mass ratesolMass/yrsolar mass per year surface brightnessMJy/srmega-jansky per steradian mag/arcsec^2magnitude per square arcsecond timedday ssecond yrJulian year †Counts in units of electrons does not appear in standards documents, but is nevertheless widely used.  Use standard scientific prefixes for (sub)multiples of quantities, e.g., kpc (kilo-parsec), Mpc (mega-parsec), mmag (milli-magnitude), and uJy (micro-Jansky).  For Further Reading... • No labels Data Use | Acknowledgements | DOI | Privacy Send comments & corrections on this MAST document to: [email protected]
__label__pos
0.763693
Skip to main content Version: 20.10 Administration Update When upgrading to 20.04, all data of Host Discovery feature will be lost: • Discovery tasks, • Saved parameters/credentials. This is due to the new hardened way credentials are stored in this version. Discovered hosts through those tasks will remain. Upgrading to 20.10 will keep all data stored since 20.04. To update the module, run the following command: yum update -y centreon-auto-discovery-server Connect to the Centreon's web interface using an account allowed to administer products and go to the Administration > Extensions > Manager menu. Make sure that License Manager and Plugin Packs Manager modules are up-to-date before updating Auto Discovery module. Click on the update icon corresponding to the Auto Discovery module: image The module is now updated: image Uninstallation Connect to the Centreon’s web interface using an account allowed to administer products and go to the Administration > Extensions > Manager menu. Click on the delete icon corresponding to the Auto Discovery module: image A confirmation popup will appear, confirm the action: image The module is now uninstalled: image Uninstalling the module will also remove all the associated data. Data won't be restorable unless a database backup has been made. Gorgone module configuration The Auto Discovery module brings a specific configuration for the Gorgone service on the Central server. The default configuration is /etc/centreon-gorgone/config.d/41-autodiscovery.yaml. A maximum duration for hosts discovery jobs is set globally. If its necessary to change it (large subnet SNMP discovery for example), edit the configuration and add the global_timeout directive. If mail notifications are enabled in service discovery rules, mail parameters can be defined to choose the sender, subject or mail command. Example of configuration: gorgone: modules: - name: autodiscovery package: "gorgone::modules::centreon::autodiscovery::hooks" enable: true # Host Discovery check_interval: 15 global_timeout: 300 # Service Discovery mail_subject: Centreon Auto Discovery mail_from: centreon-autodisco mail_command: /bin/mail Be sure to restart Gorgone service after any configuration modification: systemctl restart gorgoned Distributed architecture The hosts and services discoveries both rely on Gorgone to perform discoveries on both Central and Remote Server or Pollers. It is necessary to have a ZMQ communication between the Central server and a Remote Server to perform a discovery on a Poller attached to this Remote Server. Look at the section presenting the differente communication types to know more. Service Discovery scheduled job All the active discovery rules are periodically executed through a scheduled job managed by Gorgone's cron module. The Auto Discovery module brings a cron definition in the following file: /etc/centreon-gorgone/config.d/cron.d/41-service-discovery.yaml. - id: service_discovery timespec: "30 22 * * *" action: LAUNCHSERVICEDISCOVERY The default configuration runs the discovery every day at 10:30 PM. If you had changed the legacy crond configuration file to adapt the schedule you must apply changes to the new configuration file. 
It is also possible to run multiple service discoveries with different parameters: - id: service_discovery_poller_1 timespec: "15 9 * * *" action: LAUNCHSERVICEDISCOVERY parameters: filter_pollers: - Poller-1 - id: service_discovery_poller_2_linux timespec: "30 9 * * *" action: LAUNCHSERVICEDISCOVERY parameters: filter_pollers: - Poller-2 filter_rules: - OS-Linux-SNMP-Disk-Name - OS-Linux-SNMP-Traffic-Name - id: service_discovery_poller_2_windows timespec: "45 9 * * *" action: LAUNCHSERVICEDISCOVERY parameters: filter_pollers: - Poller-2 filter_rules: - OS-Windows-SNMP-Disk-Name - OS-Windows-SNMP-Traffic-Name Here is the list of all available parameters: KeyValue filter_rulesArray of rules to use for discovery (empty means all) force_ruleRun disabled rules ('0': not forced, '1': forced) filter_hostsArray of hosts which will run the discovery (empty means all) filter_pollersArray of pollers for which linked hosts will undergo discovery (empty means all) dry_runRun discovery without configuration changes ('0': changes, '1': dry run) no_generate_configNo configuration generation (even if there are some changes) ('0': generation, '1': no generation) API accesses When installing Gorgone, a default configuration to access the Centreon APIs is located at /etc/centreon-gorgone/config.d/31-centreon-api.yaml. It defines accesses to both Centreon CLAPI and RestAPI to allow discovery to communicate with Centreon. Example of configuration: gorgone: tpapi: - name: centreonv2 base_url: "http://127.0.0.1/centreon/api/beta/" username: api password: bpltc4aY - name: clapi username: cli password: PYNM5kcc Access to RestAPI, represented by centreonv2, requires credentials of a user with Reach API Configuration access. It is used for Host Discovery. Access to CLAPI requires credentials of an Admin user. It is used for Service Discovery. One user can be used for both accesses. Furthermore, users don't need access to the Centreon UI.
__label__pos
0.585227
18 So i'm trying to enable datepicker for android versions bellow 11. for that i'm using support library v4. I import all the thing necessary: import android.support.v4.app.*; import android.support.v4.app.FragmentManager; import android.support.v4.app.Fragment; import android.support.v4.app.FragmentActivity; import android.support.v4.app.FragmentTransaction; import android.support.v4.app.DialogFragment; And i created a class: import java.text.SimpleDateFormat; import java.util.Calendar; import java.util.Date; import android.app.DatePickerDialog; import android.app.Dialog; import android.os.Bundle; import android.support.v4.app.DialogFragment; import android.widget.EditText; public class DatePicker extends DialogFragment implements DatePickerDialog.OnDateSetListener { public EditText textField; @Override public Dialog onCreateDialog(Bundle savedInstanceState) { final Calendar c = Calendar.getInstance(); int year = c.get(Calendar.YEAR); int month = c.get(Calendar.MONTH); int day = c.get(Calendar.DAY_OF_MONTH); return new DatePickerDialog(getActivity(), this, year, month, day); } public EditText getTextField() { return textField; } public void setTextField(EditText textField) { this.textField = textField; } public void onDateSet(DatePicker view, int year, int month, int day) { textField.setText(day+"."+(month+1)+"."+year); } @Override public void onDateSet(android.widget.DatePicker arg0, int arg1, int arg2,int arg3) { textField.setText(arg3+"."+(arg2+1)+"."+arg1); } } So class compile ok. But the problem is when i try to use it. I have an onclick method for edittext that looks like that: public void showDatePicker(View v) { DialogFragment selectDate = (DialogFragment) new DatePicker(); EditText edit=(EditText)v; ((DatePicker) selectDate).setTextField(edit); selectDate.show(getSupportFragmentManager(), "datePicker"); } however in last line i get the error: The method getSupportFragmentManager() is undefined for the type MainActivity Any ideas how to resolve that? btw i don't have imported anything like android.app.Fragment; So that is not the case here :S 2 • You should actually accept the answer for others indicating that this is a solved problem – Rafael T Jan 27, 2014 at 17:59 • So I did, luckily I'm passed that now :D – gabrjan Jan 28, 2014 at 10:10 1 Answer 1 67 My guess is that your MainActivity is not extending FragmentActivity! In the SupportPackage an Activity must inherit from FragmentActivity to get Methods like getSupportedFragmentManager(). EDIT: Since your Activity is inheriting from another class, you can try to implement the Behavior of one of these classes and kind of merge them. I.e here you'll find the code for FragmentActivity: FragmentActivity Source 8 • wow that's true but i'm activity is extending SherlockMapActivity, and since java only enable to extend one activity i don't have idea how to fix that :S – gabrjan Oct 29, 2012 at 12:20 • 3 @gabrjan: You cannot readily use the Android Support package's fragments with the Maps SDK add-on. Hence, AFAIK, SherlockMapActivity only works on API Level 11+ and inherits from MapActivity, so you would use getFragmentManager(), not getSupportFragmentManager(). Oct 29, 2012 at 12:21 • well that's not true sherlockmapActivity works just fine on api level 8 ... since i tried it and it's just working ok. – gabrjan Oct 29, 2012 at 12:22 • so no way to do that? I understand why it's working because i wasn't using fragments anywhere... 
– gabrjan Oct 29, 2012 at 12:26 Your Answer By clicking “Post Your Answer”, you agree to our terms of service, privacy policy and cookie policy Not the answer you're looking for? Browse other questions tagged or ask your own question.
__label__pos
0.87378
Apa Itu Vagrant? Mungkin diantara kita masih belum familiar dengan nama Vagrant, Secara definisi Vagrant adalah sebuah software yang menggunakan teknologi virtual machine dimana kita dapat membuat lingkungan development secara portable, konsisten dan lebih fleksible. Dikarenakan vagrant menggunakan teknologi virtual machine maka kita membutuhkan software seperti virtual box dan VmWare. Tujuannya adalah kita ingin membuat sebuah lingkungan development secara portable, contohnya misalnya pada saat production kita akan menggunakan sistem operasi ubuntu maka pada saat development kita akan menggunakan ubuntu sebagai sistem operasi sehingga pada saat proses deploy ke production diharapkan tidak ada lagi permasalahan yang muncul. Install Virtual Box Kita membutuhkan Virtual Box untuk menjalankan VM yang dibuat oleh Vagrant, untuk menginstall Virtual Box gunakan perintah sudo apt install virtualbox Install Vagrant Selanjutnya kita install vagrant dengan perintah sudo apt install vagrant Sampai disini proses instal Vagrant sudah selesai, kita bisa mencoba membuat Virtual Machine. Membuat Virtual Machine Sekarang kita coba membuat sebuah VM dengan OS Centos 7 Buat folder bernama centos7 mkdir centos7 cd centos7 Selanjutnya kita melakukan inisiasi dengan perintah vagrant init centos/7 Setelah menjalan perintah di atas maka di folder centos7 akan ada sebuah file Vagrantfile berisi konfigurasi Virtual Machine kita nantinya. Selanjutnya kita menghidupkan VM yang kita buat tadi dengan perintah vagrant up Karena ini adalah proses pertama, biasanya vagrant akan mendownload sebuah file box / image terlebih dahulu. Kalian bisa mencari file image / box lainnya di situs resmi vagrant https://app.vagrantup.com/boxes/search Jika proses sudah selesai kita bisa mengecek apakah VM sudah benar-benar hidup dengan perintah vagrant status Lalu untuk login ke VM kita, jalankan vagrant ssh Perintah Lainnya untuk mematikan VM : vagrant halt Untuk menambahkan/mendownload box : vagrant box add namabox Selamat mencoba!
__label__pos
0.982625
Looking at Java 21: String Templates  · 11 min Dim Hou on Unsplash Java’s String type is a ubiquitous type that’s used in every program out there. It’s so omnipresent, we often don’t think much about it, and take it as a given. However, over the years, it received many improvements, like better optimization possibilities and multi-line blocks. And now, another exciting improvement is coming that makes String safer and easier to use: String templates. How to compose Strings in Java To better understand and evaluate String templates, let’s first look at how we can compose String without them. So far, we have several mechanisms and types that work with String literals and instances built right into the language/JDK to that: • The + (plus) operator • StringBuffer and StringBuilder • String::format and String::formatted • java.text.MessageFormat Each of them has use cases, but also particular downsides. The + (plus) Operator The operator is built right into the language to concat String literals or variables: java var name = "Ben"; var tempC = 28; var greeting = "Hello " + name + ", how are you?\nIt's " + tempC + "°C today!"; It’s easy to use, we can even throw non-String values into the mix. However, the resulting code isn’t really pretty or fun to write. And the biggest downside is that a new String gets allocated each time we use the + operator. In our case here, that means 5 String get allocated, which might not seem much, but how about using the operator in a loop? Behind the scenes, the JVM has multiple optimization strategies to reduce String allocation, like replacing the operator with a StringBuilder or using invokedynamic. Even though these optimizations are quite nice, it’s still better to not rely solely on possible optimizations and choose a more appropriate approach in the first place. StringBuffer and StringBuilder The two types java.lang.StringBuffer and java.lang.StringBuilder are special tools built for String concatenation, plus they also have additional methods for inserting, replacing, and finding a String. StringBuffer is thread-safe and available since Java’s inception, whereas StringBuilder got added in Java 5 as an “API compatible more performant but not thread-safe” alternative. Their major downside is their verbosity, especially for simpler Strings: java var greeting = new StringBuilder().append("Hello ") .append(name) .append(", how are you?\nIt's") .append(tempC) .append("°C today!") .toString(); Although StringBuilder offers great performance, that’s why it’s used by the JVM for optimizing the + operator, we shouldn’t replace all String manipulation with a StringBuilder automatically. Performance characteristics are fickle beasts and depend on the size of a String, the kind of manipulation we’re doing, hardware constraints, etc. If we might have performance issues and are in doubt about what to do, benchmark it, to verify that it offers a significant performance improvement. String::format and String::formatted The String type has three methods for formatting: • static String format(String format, Object... args) • static String format(Locale locale, String format, Object... args) • String formatted(Object... 
args) (Java 15+) They allow for reusable templates, but they require format specifiers and provide the variables in the correct order: java var format = "Hello %s, how are you?\nIt's %d°C today!"; var greeting = String.format(format, name, tempC); // Java 15+ var greeting = format.formatter(name, tempC); As you can imagine, using format specifiers requires creating a Formatter for the template String. Even though you save on the number of String allocations, now the JVM has to parse/validate the template String. java.text.MessageFormat The java.text.MessageFormat type is like the older sibling of String::format, as it uses the same approach of using format String container specifiers. However, it’s more verbose and its syntax is unfamiliar to many devs these days. The following example is the most simplistic variant, without additional formattings, like leading zeros: java var format = new MessageFormat("Hello {0}, how are you?\nIt's {1}°C today!"); var greeting = format.format(name, tempC); It shares the same downsides with String::format. However, it has a few additional tricks up its sleeves, like handling plurals. String Interpolation to the Rescue The discussed techniques for composing Strings so far were all about concatenation. However, many languages, like Groovy or Swift, also support interpolation directly in String literals by wrapping variables or expressions into special constructs: groovy def greeting = "Hello ${this.user.firstname()}, how are you?\nIt's ${tempC}°C today!"; Especially paired with multi-line Strings, interpolation trumps any concatenation approach in readability and simplicity: groovy def json = """ { "user": "${this.user.firstname()}", "temperatureCelsius: ${tempC} } """ Seems like a simple feature and should be easy enough to add to a language. Just define which syntax the wrapper should have, like ${} (Groovy) or \() (Swift), and add it to String literal parsing. However, there’s a big downside to such a simplistic approach to interpolation. The Dangers of String Interpolation Most languages implement String interpolation in the following way: 1. Evaluate expression/variable 2. Convert to String value if needed 3. Insert String representation into the original String literal Don’t get me wrong, this approach is already immensely helpful, but it has a major drawback… what if replacing the result of the interpolation would create an invalid overall String literal? This is especially dangerous if the result is used without previous validation or correct escaping of values. XKCD 327: Exploits of a Mom XKCD 327: Exploits of a Mom (Source) That’s why the designers of Java asked themselves if they can do better than just adding String interpolation. Java String Templates One critique I often hear about Java is its verbosity and lack of certain “simple” features or making things more complicated than it needs to be. On the surface, there’s some truth to it. But if you dig just a little deeper, you will realize that there are good reasons why some features take quite some time to be added to the language, or features aren’t as “simple or concise” compared to other languages. The main reason behind this is that Java’s language designers are perfectly willing to forgo a certain degree of functionality or convenience if that means a feature is safer to use but still useful, or even provides more usability than a simpler default approach. 
In the case of String templates, their goal was to provide the clarity of interpolation with safer out-of-the-box result, plus the options to extend and bend the feature to our needs if required. The result I want to show you here might be different from other languages’ simple String interpolation, but in return, it gives us a more flexible and versatile scaffold. Template Expressions The new way to work with Strings in Java is called template expression, a programmable way of safely interpolating expressions in String literals. And even better than just interpolating, we can turn structured text into any object, not just a String. The create a template expression, we need two things: • A template processor • A template containing wrapped expressions like \{name} These two requirements are combined by a dot, almost like a method call. Using one of the previous examples, it looks like this: java var name = "Ben"; var tempC = 28; var greeting = STR."Hello \{this.user.firstname()}, how are you?\nIt's \{tempC}°C today!"; The first question you might have is: where does STR come from? As String -> String templates are most likely the default use case for String templates, the template processor STR is automagically imported into every Java source file. So all the inconvenience added by Java’s approach is 4 additional characters. Multi-Line Templates and Expressions Template expressions also work with text blocks (Java 15+): java var json = STR.""" { "user": "\{this.user.firstname()}", "temperatureCelsius: \{tempC} } """; Not only the template itself can be multi-line, expressions can be too, including comments! java var json = STR.""" { "user": "\{ // We only want to use the firstname this.user.firstname() }", "temperatureCelsius: \{tempC} } """; Be aware though, that the expression still needs to be like a single-line lambda, not a code block. More Than Just String The main advantage of Java’s implementation over other languages in my opinion is the possibility of using another template processor than a String -> String one. Look at the JSON example, again. Wouldn’t it be nicer if the interpolation could return a JSONObject and not a String? So let’s do that! Creating Your Own Template Processor Template processing is built upon the newly added nested functional interface java.lang.StringTemplate.Processor: java @FunctionalInterface public interface Processor<R, E extends Throwable> { R process(StringTemplate stringTemplate) throws E; static <T> Processor<T, RuntimeException> of(Function<? super StringTemplate, ? extends T> process) { return process::apply; } // ... } How the processing works is that the String literal containing the expression is converted to a StringTemplate and given to a Processor. If we want to create a JSONObject, we need to interpolate the String literal first, and then create the new instance of the desired return type. Using the static helper Processor.of makes this quite easy: java /// CREATE NEW TEMPLATE PROCESSOR var JSON = StringTemplate.Processor.of( (StringTemplate template) -> new JSONObject(template.interpolate()) ); // USE IT LIKE BEFORE JSONObject json = JSON.""" { "user": "\{ // We only want to use the firstname this.user.firstname() }", "temperatureCelsius: \{tempC} } """; But that’s not the real power of a custom Processor. The StringTemplate gives us more than just an argument-less interpolate method. We have access to the expression results and can manipulate them! 
That means the template can be simplified, as the Processor will be responsible for handling the values correctly, like escaping double quotes in the user value, etc. This is the desired template we try to use:

java
JSONObject json = JSON."""
{
    "user": \{this.user.firstname()},
    "temperatureCelsius": \{tempC}
}
""";

To achieve this, the Processor evaluates the results of the expressions (template.values()) and creates new replacements to be matched with fragment literals (template.fragments()):

java
StringTemplate.Processor<JSONObject, JSONException> JSON = template -> {
    String quote = "\"";
    List<Object> newValues = new ArrayList<>();

    for (Object value : template.values()) {
        if (value instanceof String str) {
            // SANITIZE STRINGS
            // the many backslashes look weird, but it's the correct regex
            str = str.replaceAll(quote, "\\\\\"");
            newValues.add(quote + str + quote);
        }
        else if (value instanceof Number || value instanceof Boolean) {
            newValues.add(value);
        }
        // TODO: support more types
        else {
            throw new JSONException("Invalid value type");
        }
    }

    var json = StringTemplate.interpolate(template.fragments(), newValues);
    return new JSONObject(json);
};

That's it! All the logic required to build a JSONObject from a String template in a single place, and we can safely use any expression in a JSON template and don't need to think about quoting or not.

Endless Possibilities

As we have access to the fragments and values, we can create whatever we want. The previous "Bobby Tables" fiasco can be avoided by composing SQL queries with a sanitized Processor. Or a Processor that has access to the current Locale could be used for i18n purposes.

Whenever we have a String-based template that requires transformation, validation, or sanitizing, Java's String templates will give us a built-in simplistic template engine without requiring a third-party dependency.

To not start from zero, the Java platform provides two additional template processors besides STR.

Be aware that the additional template processors seem to be missing-in-action in Java 21.ea.27. At least I didn't get them to work in my test setup.

The processor FMT combines the interpolation power of STR with the format specifiers defined in java.util.Formatter:

java
record Shape(String name, int corners) { }

var shapes = new Shape[] {
    new Shape("Circle", 0),
    new Shape("Triangle", 3),
    new Shape("Dodecagon", 12)
};

var table = FMT."""
    Name         Corners
    %-12s\{shapes[0].name()} %3d\{shapes[0].corners()}
    %-12s\{shapes[1].name()} %3d\{shapes[1].corners()}
    %-12s\{shapes[2].name()} %3d\{shapes[2].corners()}
    \{" ".repeat(7)} Total corners %d\{
        shapes[0].corners() + shapes[1].corners() + shapes[2].corners()
    }
    """;

// OUTPUT:
// Name         Corners
// Circle         0
// Triangle       3
// Dodecagon     12
// Total: 15

The third Processor provided by the Java platform is RAW, which doesn't interpolate and returns a StringTemplate instead.

Should I use a Preview Feature?

In my opinion, it highly depends on where you want to use it. Preview features are always subject to change, so you might need to fix your code after each new JDK release. Be prepared that there might be bugs or that it's not complete yet!

However, in internal code, I don't see a big issue in trying out new features. But remember that anyone using such code is required to enable the preview features, too, so don't "force" this decision onto them.

Conclusion

Java's String Templates are another prime example of Java "doing its thing" and giving us a missing feature compared to other languages but with a twist!
Instead of simply copying another language to provide the most convenient variant for the most obvious use cases, Java sacrifices a little bit of convenience by requiring four additional characters for the "default" case. In return, however, we received a flexible and easy-to-use simplistic template engine that's built right into the JDK.

I, for one, can't wait for the feature to leave preview status.

Resources

Looking at Java 21
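Since RAW only gets a one-line mention above, here is a small sketch of what working with an uninterpolated template might look like. This is based on my reading of the JEP 430 preview API, so treat the exact calls as provisional — names and behavior can change while the feature is in preview:

java
import java.util.List;

var name = "Ben";

// RAW hands back the template itself instead of an interpolated String,
// exposing its literal fragments and the evaluated expression values.
StringTemplate st = StringTemplate.RAW."Hello \{name}, how are you?";

List<String> fragments = st.fragments(); // ["Hello ", ", how are you?"]
List<Object> values    = st.values();    // ["Ben"]

// Interpolate whenever we're ready, e.g. after inspecting the parts:
String greeting = st.interpolate();      // "Hello Ben, how are you?"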
File I/O problem

This is a discussion on File I/O problem within the C++ Programming forums, part of the General Programming Boards category.

#1 Linker.exe64 (Mar 2005, Croatia, Pozega)

File I/O problem

I have a little problem with file I/O. I want to make a program that can open a .cpp file, modify something and save it in out.cpp. The problem is that when I save the file it is saved without indentation. I think that the problem is in this line:

Code:
ostream_iterator<string> os (file_output, "\n");

When I change "\n" to " ", there are no newlines; the whole code is in one line. How can I save the file without losing formatting?

Here is the source of the function that opens the file, outputs the file...

PS: Sorry for the mess, I am using tabs, and this board has some problems with tabs...

Code:
/*============================================================
[function]  : insertInfoHeader
[arguments] : 1. name of file to open
              2. name of output file
              3. flags class
[Purpose]   : Inserts header like this above all functions
              in specified file.
============================================================*/
void insertInfoHeader (char inputFile[], char outputFile[], Flags &infoHeaderFlags)
{
    ifstream file_input;
    ofstream file_output;

    // vector for saving contents of file...
    vector <string> file_buffer;
    // and its iterator
    vector <string>::const_iterator file_bufferBegin;
    //string text;

    try
    {
        cout << "\nOpenning " << inputFile << "...";
        file_input.open (inputFile, ios::in);
        file_output.open (outputFile, ios::trunc);

        if (!(file_input))
            reportError (FileErr ("Input"), OPEN_FAIL);
        if (!(file_output))
            reportError (FileErr ("Output"), OPEN_FAIL);

        const istream_iterator<string> is (file_input);
        const istream_iterator<string> eof;
        ostream_iterator<string> os (file_output, "\n");

        copy (is, eof, back_inserter (file_buffer));

        // can do stuff to vector now
        // -

        copy (file_buffer.begin(), file_buffer.end(), os);

        // iterate through buffer vector
        for (file_bufferBegin = file_buffer.begin();
             file_bufferBegin != file_buffer.end();
             ++file_bufferBegin)
        {
            cout << "\n->\t" << file_buffer.back ();
            file_buffer.pop_back ();
        }
    }
    catch (exception &exc)
    {
        cout << "\nException catched!\n" << __DATE__ << "\t" << __TIME__ <<" ."
             << "\nreport : " << exc.what ();
    }

    // close opened files
    file_output.close();
    file_input.close ();
}

... and here are the files:

Input:

Code:
Unit::Unit(int Moral, int HP, int Ammo, int ID):_ID(1000)
{
    try
    {
        Unit::_Moral = Moral;
        Unit::_Ammo = Ammo;
        Unit::_HP = HP;
        Unit::_ID = ID;
        incID();
        Unit::_numOf++;
        cout << "\n+\tUnit,\tID| " << Unit::_ID;
        cout << " |.\t total | " << Unit::_numOf << " | units.\t+"
             << "\n______________________________________________________________";
    }
    catch(exception &exc)
    {
        cerr << exc.what();
    }
}

... and output:

Code:
Unit::Unit(int Moral, int HP, int Ammo, int ID):_ID(1000)
{
try
{
Unit::_Moral = Moral;
Unit::_Ammo = Ammo;
Unit::_HP = HP;
Unit::_ID = ID;
incID();
Unit::_numOf++;
cout << "\n+\tUnit,\tID| " << Unit::_ID;
cout << " |.\t total | " << Unit::_numOf << " | units.\t+"
<< "\n______________________________________________________________";
}
catch(exception &exc)
{
cerr << exc.what();
}
}

*Sorry for the lengthy post....
#2 hk_mp5kpdw (Registered User, Jan 2002, Northern Virginia/Washington DC Metropolitan Area)

The problem is because of how you are reading through the input file. The istream_iterator uses the stream extraction operator (>>) to parse through the input file, which has the behavior of splitting all the input along whitespace (space/newlines/tabs). Therefore your first line of input:

Code:
Unit::Unit(int Moral, int HP, int Ammo, int ID):_ID(1000)

gets split up into many separate pieces according to the whitespace and stored in your vector as:

Code:
Unit::Unit(int
Moral,
int
HP,
int
Ammo,
int
ID):_ID(1000)

Which is therefore also how it shows up when you want to output it. All your formatting will get destroyed because the istream_iterator skips that formatting (tabs, etc...). I would suggest not using the copy function and istream_iterators in this case to read through the file and instead use a simple loop along with the getline function to read in entire lines of data from the input file at once and push those entire lines of data onto your vector. You should then still be able to use the copy function and the ostream_iterator to output the file.

#3 Linker.exe64

Thanks for the help. It works fine now. Here's the changed code:

Code:
.
.
.
string text
.
.
.

while (!std::getline(file_input, text).eof())
{
    file_buffer.push_back (text);
}

#4 Linker.exe64

There is one problem with this code... If the last line in the file is not blank it runs into an infinite loop. How can this be solved?

#5 ILoveVectors (Banned, Jun 2005)

Code:
while ( getline( file_input, text, '\n' ) )

would let you read in a line at a time, and it will stop when it reaches the end of input — no need to check eof. Also, when you're writing, just write the string and then a newline; then all the rest of the formatting will be kept.

#6 Linker.exe64

Is there some way for us lazy programmers?

#7 Narf (Nonconformist, Aug 2005)

Quote Originally Posted by Linker.exe64
Is there some way for us lazy programmers?

Lazy programmers use Python.
If you want to use C++ then ILoveVectors's code will work just fine, and it's simpler than yours. If you want it even simpler, you can rely on the fact that getline always defaults to '\n'--widen('\n') actually--as the delimiter:

Code:
while (std::getline(file_input, text))
    file_buffer.push_back(text);

if (!file_input.eof())
{
    // Fatal input error?
}

I added the test for eof after the loop so that you can tell how the loop broke. If it broke on end of file then you're solid. If not, some funky error happened--like a device error or something equally devastating--and you may need to deal with it. Most people don't bother with that though and just assume that when the loop breaks, it's because of end of file.

#8 Linker.exe64

Python? Weakest language I have ever programmed in.... Thank you both. I will try and see what is better for me...

#9 Linker.exe64

ILoveVectors's code works now. There was some problem in my code the first time so it didn't work, but now it works OK.

#10 ILoveVectors

Quote Originally Posted by Narf
I added the test for eof after the loop so that you can tell how the loop broke. If it broke on end of file then you're solid. If not, some funky error happened--like a device error or something equally devastating--and you may need to deal with it. Most people don't bother with that though and just assume that when the loop breaks, it's because of end of file.

I'm sure if that happens the last concern they will have is whether the program completed successfully before everything crashed.

#11 Narf

Quote Originally Posted by ILoveVectors
I'm sure if that happens the last concern they will have is whether the program completed successfully before everything crashed.

You do realize that there's an error or two between 'perfect operation' and 'thermonuclear meltdown' that would take full advantage of such a test?

#12 ILoveVectors

I don't see your point. My point is, if the hardware stops functioning due to a device driver causing a crash or not working anymore, or the hardware actually breaking down, the file is not going to be read, and the program may not even continue to run.

#13 Narf

Quote Originally Posted by ILoveVectors
I don't see your point.

Clearly.

Quote Originally Posted by ILoveVectors
My point is, if the hardware stops functioning due to a device driver causing a crash or not working anymore, or the hardware actually breaking down, the file is not going to be read, and the program may not even continue to run.

Okay, let's think of it in a light that's easier to follow. Say the stream operates on a file that's on a floppy, and the floppy is removed during an input or output operation.
This won't cause the program to crash, but it will cause the badbit to be set for the stream. The loop will end, and if you assume it's because of end of file, data might be lost even though the program is still running. I'm sure you'll agree that data loss is a bad thing. And if you don't know about it, you can't fix it.
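For anyone landing on this thread later: a minimal sketch that consolidates the advice above into the full read/modify/write cycle. The file names are placeholders, and it uses a range-for from newer compilers, but the logic is exactly what was described in the replies:

Code:
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

int main()
{
    std::ifstream file_input("input.cpp");   // placeholder name
    std::vector<std::string> file_buffer;
    std::string text;

    // getline() returns the stream itself, so the loop stops cleanly on
    // EOF or error - even when the last line has no trailing newline.
    while (std::getline(file_input, text))
        file_buffer.push_back(text);

    if (!file_input.eof())
        std::cerr << "stream error before end of file - data may be incomplete\n";

    // ...modify file_buffer here...

    std::ofstream file_output("out.cpp");    // placeholder name
    for (const std::string& line : file_buffer)
        file_output << line << '\n';          // restore the newline getline strips
    return 0;
}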
Add members to Telegram channel

If you have already created a Telegram channel and want to add people to it, you can do this in a few different ways, depending on whether the channel is public or private. In this article, we will teach you how to add members to a Telegram channel.

Introduction to adding members to a Telegram channel

As you know, there are two different types of Telegram channels. The first is the most common, the public channel, and the second is the private channel. In this article, we will discuss how to add people to both public and private Telegram channels. If you need further clarification on any of the steps described, you can contact us.

Adding people to a public Telegram channel

If you plan to add people to your public channel on Telegram, you can try three different methods. The first method applies when the user you want to invite to join the channel is in your contact list. To add such a user to your public channel's members:

1. Run the Telegram app.
2. Open the channel.
3. Touch the channel name at the top of the screen to display the channel information menu.
4. Tap Members.
5. Select the option to add members.
6. Select the user you want from the contact list.
7. Select the OK option displayed in the pop-up window.

This will add your target audience to your public Telegram channel.

Another way to add people to public Telegram channels is as follows:

1. Run the Telegram app.
2. Open the channel.
3. Touch the channel name at the top of the screen to display the channel information menu.
4. At the bottom of the section marked with the @ symbol, you will see a phrase that starts with t.me/. Take note of this phrase.
5. By typing it into the address bar of a browser, any user can access your public channel.

Therefore, using a Telegram public channel invitation link that begins with t.me/ is another way to add people to a public Telegram channel. Please note that this method only applies to public channels.

The third way to add people to public Telegram channels

In the third method, people who want to join your channel search for the channel's username in Telegram. Follow these steps to find this unique phrase:

1. Run the Telegram app.
2. Open the channel.
3. Touch the channel name at the top of the screen to display the channel information menu.
4. The phrase you see next to the @ symbol is your channel's unique username. Any Telegram user who enters the @ symbol followed by this phrase can find and access your channel.

So now you are familiar with the methods for adding people to a public Telegram channel.

How to add members to private Telegram channels

There are two ways to invite new members to a private Telegram channel. The first method is exactly the same as the public channel method described above:

1. Run the Telegram app.
2. Open the channel.
3. Touch the channel name at the top of the screen to display the channel information menu.
4. Tap Members.
5. Select the option to add members.
6. Select the user you want from your account's contact list.
7. Select the OK option displayed in the pop-up window.

The second way to add people to Telegram private channels is to use an invitation link. To access this link for the private channel, follow these steps:

1. Run the Telegram app.
2. Open the channel.
3. Touch the channel name at the top of the screen to display the channel information menu.
4. Tap Members.
5. Select the invitation link option.
6. Copy the invitation link.
7. Send this link to anyone you want to subscribe to your private channel.
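For channel admins who prefer to automate the invite-link method, here is a small sketch using the Telegram Bot API's exportChatInviteLink method. It assumes you have a bot token and that the bot is an administrator of the channel; the token and channel name below are placeholders:

import requests

BOT_TOKEN = "123456:ABC-DEF"   # placeholder - your bot's token
CHAT_ID = "@my_channel"        # placeholder - channel username or numeric id

# Ask Telegram for a fresh primary invite link for the channel.
resp = requests.get(
    f"https://api.telegram.org/bot{BOT_TOKEN}/exportChatInviteLink",
    params={"chat_id": CHAT_ID},
    timeout=10,
)
resp.raise_for_status()
invite_link = resp.json()["result"]  # e.g. "https://t.me/+AbCdEf..."
print(invite_link)                   # send this link to the people you want to add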
Splunk Cloud Platform Knowledge Manager Manual

View and update a table dataset

After you define the initial data for your table dataset, you can continue to use Table Views to refine it and maintain it. You also use Table Views to make changes to existing table datasets.

Table Views includes several table dataset editing tools:

• Work with your table in two modes:
• Rows, which renders the dataset in a standard table format.
• Summary, which displays statistical information for each of the fields in your table and their values.
• Click directly on your table to make edits to your dataset. Move field columns, change field names, fix field type mismatches, and update field values.
• Apply actions to the table that filter events, add fields, edit field names and field values, perform statistical data aggregations, and more. You can apply actions through menu selections, or by making edits directly to table elements.
• Use a command history feature to review, edit, and undo actions that were applied to the table.
• Click SPL to see the search language generated for each of your commands.

Get to Table Views

There are three ways to get to Table Views.

• When you define initial data for a new table dataset: see Define initial data for a new table dataset.
• When you edit an existing table dataset: see Edit a table dataset in Manage table datasets.
• When you extend an existing dataset as a new table dataset: see Extend a dataset as a new table dataset in Manage table datasets.

Table Views modes

You can edit your table in two modes: Rows mode and Summary mode.

Rows mode

Rows mode is the default Table Views mode. It displays your table dataset as a table, with fields as columns, values in cells, and sample events in rows. It displays 50 sample events from your dataset. It does not represent the results from any particular time range.

You can edit your table by applying actions to it, either by making menu selections or by making edits directly to the table.

In the context of Table Views, the Rows mode is an editing tool rather than a search tool. It does not provide a time range picker. If you want to see a table-formatted set of results from a specific time range, see Explore a dataset.

Summary mode

Click Summary to see analytical details about the fields in the table. You can see top value distributions, null value percentages, numeric value statistics, and more.

You can apply some menu actions and commands to your table while you are in the Summary mode. You can also apply actions through direct edits, such as moving columns, renaming fields, fixing field type mismatches, and editing field values.

When you are in the Summary mode, you can view field analytics for a specific range of time using the time range picker. The time range picker shows events from the last 24 hours by default. If your dataset has no events from the last 24 hours, it has no statistics when you open this view. To fix this, adjust the time range picker to a range where events are present.

The time range picker gives you a variety of time range definition options. You can choose a preset time range, or you can define a custom time range. For help with the time range picker, see Select time ranges to apply to your search in the Search Manual.

Table element selection options

Availability of menu actions depends on the table elements that you select. For example, some actions are only available when you select a field column.
You have the same selection options in the Rows and Summary views.

• Table — applies the action to the entire dataset. To select it, click the asterisk header at the top of the leftmost column.
• Column — applies the action to a field. To select it, click a column header.
• Multi-Column — applies the action to two or more fields. To select multiple nonadjacent columns, hold the CTRL or CMD key and click the header row of each column you wish to select. Deselect columns by clicking them while holding CTRL or CMD. To select a range of adjacent columns, click the header row of the first column, hold SHIFT, and click the header row of the last column.
• Cell — applies the action to a field value. To select it, click a cell.
• Text — applies the action to a portion of text within a field value. Click and drag to select text. You can select text for text and IPv4 field types.

Field types

Each field belongs to a type. There are five field types. Some actions and commands can only be applied to fields of specific types. For example, you can apply the Round Values and Map Ranges actions only to numeric fields.

• String — a field whose values are text strings. It can include a mix of text and numbers. Its icon is the letter a in an italic font.
• Number — a field whose values are purely numerical. Does not include IPv4 addresses. Its icon is a hash symbol.
• Boolean — a field whose values are either true or false. Alternate value pairs such as 1 and 0 or Yes and No can also be used. Its icon is a large dot surrounded by a circle.
• IPv4 — a field whose value is an IPv4 address such as 192.0.2.1. Its icon is the acronym IP in all caps.
• Epoch Time — a field whose value is a timestamp. Its icon is a simple representation of a clock face.

Table Views automatically assigns types to fields when you define initial data for a dataset. It can also assign types to fields when you add fields to those datasets. If a field is assigned the wrong type, you can change the type by selecting the column header and using the Edit action menu. See Apply actions through direct table edits.

Apply actions through menu selections

You can apply actions to your table or elements of your table by making selections from the action menus just above it. Many of these actions can be performed only while you are in the Rows mode, but some can be performed in either view.

The actions and commands that you can apply to your table are categorized into the following menus.

• Edit — contains basic editorial actions, like changing field types, renaming fields, and moving or deleting fields.
• Sort — sorts rows by the values of a selected field.
• Filter — provides actions that let you filter rows out of your dataset.
• Clean — features actions that fix or change field values.
• Summarize — performs statistical aggregations on your dataset.
• Add new — gives you different ways to add fields to your dataset.

Apply actions through direct table edits

You can make edits to your table dataset by clicking it. Move field columns, change field names, replace field values, and fix field type mismatches.

Move a field column

You can drag field columns to new positions in your table.

1. Select the column that you want to move.
2. Click on the column header cell and drag the column to a new location in your table.
3. Drop the column in its new location.

This action is not recorded in the command history sidebar.

Change a field name

1. Double-click on the column header cell that contains the name of the field that you want to change.
2. Enter the new field name.
Field names cannot be blank, start with an underscore, or contain quotes, backslashes, or spaces. 3. Click outside of the cell to complete the field name change. Table Views records this change in the command history sidebar as a Rename field action. Replace field values Select a field value and replace every instance of it in its column with a new value. For example, if your dataset has an action field with a value of addtocart, you can replace that value with add to cart. You can use this method to fill null or empty field values. You cannot make field value replacements on an event by event basis. When you use this method to replace a value in one event in your dataset, that value is changed for that field throughout your dataset. For example, if you have an event where the city field has a value of New York, you cannot change that value to Los Angeles just for that one event. If you change it to Los Angeles, every instance of New York in the city column also changes to Los Angeles. 1. Double-click on a cell that contains the field value that you want to change. 2. Edit the value or replace it entirely. 3. Click outside of the cell to complete the field replacement. Every instance of the field value in the field's column is changed. Table Views records this change in the command history sidebar as a Replace value action. Fix field type mismatches Sometimes fields have type mismatches. For example, a string field that has a lot of values with numbers in them might be mistyped as a numeric field. You can give a field the correct type by clicking on the type symbol in its column header cell. You cannot change the type of the _time or _raw fields. 1. Find the column header cell of the mistyped field and hover over its type icon. The cursor changes to a pointing finger. 2. Click on the type icon. 3. Select the type that is most appropriate for the field. This action is not recorded in the command history sidebar. Use the command history sidebar The command history sidebar keeps track of the commands you apply as you apply them. You can click on a command record to reopen its command editor and change the values entered there. When you click on a command that is not the most recent command applied, Table Views shows you how the table looked at that point in the command history. You can edit the details of any command record in the command history. You can also delete any command in the history by clicking the X on its record. When you edit or delete a command record, you potentially can break commands that follow it. If this happens, the command history sidebar will notify you. Click SPL to see the search processing language behind your commands. When you have SPL selected you can click Open in Search to run a search using this SPL in the Search & Reporting app. Save a new table dataset When you finish editing a table dataset you can click Save As to save it as a new table dataset. When you create table datasets, always give them unique names. If you have more than one table dataset with the same name in your system you risk experiencing object name collision issues that are difficult to resolve. For example, say you have two table datasets named Store Sales, and you share one at the global level, but leave the other one private. If you then extend the global Store Sales dataset, the dataset that is created through that extension will display the table from the private Store Sales dataset instead. 1. Click Save As to save your table. 2. Give your dataset a unique Name. 3. 
(Optional) Enter or update the Table ID. This value can contain only letters, numbers and underscores. It cannot be changed later.
4. (Optional) Add a dataset Description. Table dataset descriptions are visible in two places:
• The Dataset listing page, when you expand the table dataset row.
• The Explorer view of the table dataset, under the dataset name.
You can edit the description through the Datasets page or the Explorer view by selecting Edit > Edit description.
5. Click Save to save your changes.

After you save a new table dataset, you can choose one of three options.

• Close — returns you to Table Views, where you can keep editing the dataset.
• View Table — opens the dataset in the Explorer view.
• View Listings — takes you to the Datasets listing page.

Last modified on 25 June, 2021

This documentation applies to the following versions of Splunk Cloud Platform: 8.0.2007, 8.1.2009, 8.1.2011, 8.1.2012, 8.1.2101, 8.1.2103, 8.2.2104, 8.2.2105 (latest FedRAMP release), 8.2.2106, 8.2.2107, 8.2.2109
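As a purely hypothetical illustration of the SPL mentioned in the command history section — the field names here are invented, and the exact search language Table Views generates for your dataset will differ — a command history consisting of a rename, a filter, and a summarize might correspond to a pipeline like:

| rename uid AS user_id
| where isnotnull(user_id)
| stats count BY user_id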
How to set up GitHub Actions to run my Python script on a schedule?

I have a GitHub repo which analyzes data of COVID-19 cases from an API which updates every day. I have a run.py script which clones the data, but I don't know how to set up GitHub Actions to run the script daily (automatically) so that the data gets updated daily. And one more thing: I also want to export my Analysis.ipynb as index.html daily at the same time. I don't have any idea how to set up GitHub Actions to achieve the above tasks. GitHub Repo

Hi @piyushke ,

Thank you for being here! To answer your query:
1. You need to create a workflow yaml, doc here: https://help.github.com/en/actions/configuring-and-managing-workflows/configuring-a-workflow#creating-a-workflow-file
2. In the yaml, set the 'schedule' event. Details here.

Code sample as below:

name: py

on:
  schedule:
    - cron: "0 0 * * *" # runs at 00:00 UTC every day

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: checkout repo content
        uses: actions/checkout@v2 # checkout the repository content to github runner.
      - name: setup python
        uses: actions/setup-python@v2
        with:
          python-version: 3.8 # install the python needed
      - name: execute py script # run the run.py to get the latest data
        run: |
          python run.py
        env:
          key: ${{ secrets.key }} # if run.py requires passwords..etc, set it as secrets
      - name: export index .... # use corresponding script or actions to help export.

Thanks.

Hi @piyushke, May I know where you store the run.py file? Is it under the .github/workflows folder?
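To fill in the export step the sample workflow above leaves open (name: export index ....), one common approach — sketched here under the assumption that the notebook sits at the repo root as Analysis.ipynb and that you want the refreshed files committed back to the repo — is jupyter nbconvert plus a commit step (drop --execute if run.py already executes the notebook):

      - name: install notebook tooling
        run: pip install jupyter nbconvert
      - name: export notebook to index.html
        run: jupyter nbconvert --to html --execute Analysis.ipynb --output index.html
      - name: commit refreshed data and page
        run: |
          git config user.name "github-actions"
          git config user.email "[email protected]"
          git add -A
          git commit -m "daily data refresh" || echo "nothing to commit"
          git push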
This vignette demonstrates the mediation()-function. Before we start, we fit some models, including a mediation-object from the mediation-package and a structural equation modelling approach with the lavaan-package, both of which we use for comparison with brms and rstanarm.

Mediation Analysis in brms and rstanarm

library(bayestestR)
library(mediation)
library(brms)
library(rstanarm)

# load sample data
data(jobs)

set.seed(123)

# linear models, for mediation analysis
b1 <- lm(job_seek ~ treat + econ_hard + sex + age, data = jobs)
b2 <- lm(depress2 ~ treat + job_seek + econ_hard + sex + age, data = jobs)

# mediation analysis, for comparison with brms
m1 <- mediate(b1, b2, sims = 1000, treat = "treat", mediator = "job_seek")

# Fit Bayesian mediation model in brms
f1 <- bf(job_seek ~ treat + econ_hard + sex + age)
f2 <- bf(depress2 ~ treat + job_seek + econ_hard + sex + age)
m2 <- brm(f1 + f2 + set_rescor(FALSE), data = jobs, cores = 4)

# Fit Bayesian mediation model in rstanarm
m3 <- stan_mvmer(
  list(job_seek ~ treat + econ_hard + sex + age + (1 | occp),
       depress2 ~ treat + job_seek + econ_hard + sex + age + (1 | occp)),
  data = jobs,
  cores = 4,
  refresh = 0
)

mediation() is a summary function, especially for mediation analysis, i.e. for multivariate response models with causal mediation effects.

In the models m2 and m3, treat is the treatment effect and job_seek is the mediator effect. For the brms model (m2), f1 describes the mediator model and f2 describes the outcome model. This is similar for the rstanarm model.

mediation() returns a data frame with information on the direct effect (median value of posterior samples from treatment of the outcome model), mediator effect (median value of posterior samples from mediator of the outcome model), indirect effect (median value of the multiplication of the posterior samples from mediator of the outcome model and the posterior samples from treatment of the mediation model) and the total effect (median value of sums of posterior samples used for the direct and indirect effect). The proportion mediated is the indirect effect divided by the total effect.

The simplest call just needs the model-object.

# for brms
mediation(m2)
#> # Causal Mediation Analysis for Stan Model
#>
#> Treatment: treat
#> Mediator : job_seek
#> Response : depress2
#>
#> Effect                 | Estimate |          89% ETI
#> ----------------------------------------------------
#> Direct Effect (ADE)    |   -0.040 | [-0.110,  0.031]
#> Indirect Effect (ACME) |   -0.015 | [-0.036,  0.004]
#> Mediator Effect        |   -0.240 | [-0.285, -0.195]
#> Total Effect           |   -0.055 | [-0.129,  0.018]
#>
#> Proportion mediated: 28.14% [-71.11%, 127.40%]

# for rstanarm
mediation(m3)
#> # Causal Mediation Analysis for Stan Model
#>
#> Treatment: treat
#> Mediator : job_seek
#> Response : depress2
#>
#> Effect                 | Estimate |          89% ETI
#> ----------------------------------------------------
#> Direct Effect (ADE)    |   -0.040 | [-0.111,  0.031]
#> Indirect Effect (ACME) |   -0.018 | [-0.037,  0.002]
#> Mediator Effect        |   -0.241 | [-0.286, -0.197]
#> Total Effect           |   -0.057 | [-0.130,  0.017]
#>
#> Proportion mediated: 30.59% [-75.65%, 136.82%]

Typically, mediation() finds the treatment and mediator variables automatically. If this does not work, use the treatment and mediator arguments to specify the related variable names. For all values, the 89% credible intervals are calculated by default. Use ci to calculate a different interval.

Comparison to the mediation package

Here is a comparison with the mediation package.
Note that the summary()-output of the mediation package shows the indirect effect first, followed by the direct effect.

summary(m1)
#>
#> Causal Mediation Analysis
#>
#> Quasi-Bayesian Confidence Intervals
#>
#>                Estimate 95% CI Lower 95% CI Upper p-value
#> ACME            -0.0157      -0.0387         0.01    0.19
#> ADE             -0.0438      -0.1315         0.04    0.35
#> Total Effect    -0.0595      -0.1530         0.02    0.21
#> Prop. Mediated   0.2137      -2.0277         2.70    0.32
#>
#> Sample Size Used: 899
#>
#>
#> Simulations: 1000

mediation(m2, ci = .95)
#> # Causal Mediation Analysis for Stan Model
#>
#> Treatment: treat
#> Mediator : job_seek
#> Response : depress2
#>
#> Effect                 | Estimate |          95% ETI
#> ----------------------------------------------------
#> Direct Effect (ADE)    |   -0.040 | [-0.124,  0.046]
#> Indirect Effect (ACME) |   -0.015 | [-0.041,  0.008]
#> Mediator Effect        |   -0.240 | [-0.294, -0.185]
#> Total Effect           |   -0.055 | [-0.145,  0.034]
#>
#> Proportion mediated: 28.14% [-181.46%, 237.75%]

mediation(m3, ci = .95)
#> # Causal Mediation Analysis for Stan Model
#>
#> Treatment: treat
#> Mediator : job_seek
#> Response : depress2
#>
#> Effect                 | Estimate |          95% ETI
#> ----------------------------------------------------
#> Direct Effect (ADE)    |   -0.040 | [-0.129,  0.048]
#> Indirect Effect (ACME) |   -0.018 | [-0.042,  0.006]
#> Mediator Effect        |   -0.241 | [-0.296, -0.187]
#> Total Effect           |   -0.057 | [-0.151,  0.033]
#>
#> Proportion mediated: 30.59% [-221.09%, 282.26%]

If you want to calculate mean instead of median values from the posterior samples, use the centrality-argument. Furthermore, there is a print()-method, which allows you to print more digits.

m <- mediation(m2, centrality = "mean", ci = .95)
print(m, digits = 4)
#> # Causal Mediation Analysis for Stan Model
#>
#> Treatment: treat
#> Mediator : job_seek
#> Response : depress2
#>
#> Effect                 | Estimate |            95% ETI
#> ------------------------------------------------------
#> Direct Effect (ADE)    |  -0.0395 | [-0.1237,  0.0456]
#> Indirect Effect (ACME) |  -0.0158 | [-0.0405,  0.0083]
#> Mediator Effect        |  -0.2401 | [-0.2944, -0.1846]
#> Total Effect           |  -0.0553 | [-0.1454,  0.0341]
#>
#> Proportion mediated: 28.60% [-181.01%, 238.20%]

As you can see, the results are similar to what the mediation package produces for non-Bayesian models.

Comparison to SEM from the lavaan package

Finally, we also compare the results to a SEM model, using lavaan. This example should demonstrate how to "translate" the same model in different packages or modeling approaches.
library(lavaan) data(jobs) set.seed(1234) model <- ' # direct effects depress2 ~ c1*treat + c2*econ_hard + c3*sex + c4*age + b*job_seek # mediation job_seek ~ a1*treat + a2*econ_hard + a3*sex + a4*age # indirect effects (a*b) indirect_treat := a1*b indirect_econ_hard := a2*b indirect_sex := a3*b indirect_age := a4*b # total effects total_treat := c1 + (a1*b) total_econ_hard := c2 + (a2*b) total_sex := c3 + (a3*b) total_age := c4 + (a4*b) ' m4 <- sem(model, data = jobs) summary(m4) #> lavaan 0.6-7 ended normally after 25 iterations #> #> Estimator ML #> Optimization method NLMINB #> Number of free parameters 11 #> #> Number of observations 899 #> #> Model Test User Model: #> #> Test statistic 0.000 #> Degrees of freedom 0 #> #> Parameter Estimates: #> #> Standard errors Standard #> Information Expected #> Information saturated (h1) model Structured #> #> Regressions: #> Estimate Std.Err z-value P(>|z|) #> depress2 ~ #> treat (c1) -0.040 0.043 -0.929 0.353 #> econ_hard (c2) 0.149 0.021 7.156 0.000 #> sex (c3) 0.107 0.041 2.604 0.009 #> age (c4) 0.001 0.002 0.332 0.740 #> job_seek (b) -0.240 0.028 -8.524 0.000 #> job_seek ~ #> treat (a1) 0.066 0.051 1.278 0.201 #> econ_hard (a2) 0.053 0.025 2.167 0.030 #> sex (a3) -0.008 0.049 -0.157 0.875 #> age (a4) 0.005 0.002 1.983 0.047 #> #> Variances: #> Estimate Std.Err z-value P(>|z|) #> .depress2 0.373 0.018 21.201 0.000 #> .job_seek 0.524 0.025 21.201 0.000 #> #> Defined Parameters: #> Estimate Std.Err z-value P(>|z|) #> indirect_treat -0.016 0.012 -1.264 0.206 #> indirct_cn_hrd -0.013 0.006 -2.100 0.036 #> indirect_sex 0.002 0.012 0.157 0.875 #> indirect_age -0.001 0.001 -1.932 0.053 #> total_treat -0.056 0.045 -1.244 0.214 #> total_econ_hrd 0.136 0.022 6.309 0.000 #> total_sex 0.109 0.043 2.548 0.011 #> total_age -0.000 0.002 -0.223 0.824 # just to have the numbers right at hand and you don't need to scroll up mediation(m2, ci = .95) #> # Causal Mediation Analysis for Stan Model #> #> Treatment: treat #> Mediator : job_seek #> Response : depress2 #> #> Effect | Estimate | 95% ETI #> ---------------------------------------------------- #> Direct Effect (ADE) | -0.040 | [-0.124, 0.046] #> Indirect Effect (ACME) | -0.015 | [-0.041, 0.008] #> Mediator Effect | -0.240 | [-0.294, -0.185] #> Total Effect | -0.055 | [-0.145, 0.034] #> #> Proportion mediated: 28.14% [-181.46%, 237.75%] The summary output from lavaan is longer, but we can find the related numbers quite easily: • the direct effect of treatment is treat (c1), which is -0.040 • the indirect effect of treatment is indirect_treat, which is -0.016 • the mediator effect of job_seek is job_seek (b), which is -0.240 • the total effect is total_treat, which is -0.056
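As a final cross-check, the indirect effect can also be computed by hand from the posterior draws of the brms model — it is simply the median of the product of the two relevant coefficients. A sketch follows; note that the b_* column names below follow brms's naming convention for multivariate models (underscores are stripped from response names) and may differ across brms versions:

draws <- as.data.frame(m2)

# ACME = posterior product of (treat -> job_seek) and (job_seek -> depress2)
indirect <- draws$b_jobseek_treat * draws$b_depress2_job_seek
median(indirect)
# should land close to the -0.015 reported by mediation(m2)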
In the example below, one can see that the placement of a comma or a period following a word on the baseline in a TikZ environment depends on how far the node above the word stretches towards the right. How can I change this so that the comma and the period don't care about the node above the word, but only about the word itself?

\documentclass{article}
\usepackage{tikz}

\newlength{\Aheight}
\setlength{\Aheight}{\fontcharht\font`A}

\newcommand{\phraselabel}[2]{%
  \begin{tikzpicture}[%
    baseline = (word.base),
    txt/.style = {inner sep = 0pt, text height = \Aheight, draw},
    above/.style = {inner sep = 0pt, text depth = 0pt, draw}%
  ]
    \node[txt] (word) {#1};
    \node[above] at (word.north) {\footnotesize{#2}};
  \end{tikzpicture}%
}

\begin{document}
\phraselabel{bb}{xxxxx}, \phraselabel{bb}{xxxxx}.
\end{document}

• Do you want the comma just after the bb? – Sigur Apr 10 '15 at 20:53

The following example lets \phraselabel look ahead to see whether a comma or period follows. If the punctuation char is found, it is read as an argument and put behind the word. The first line of the example uses the boxed version, but the punctuation char is left outside, because it does not belong to the word. For the case that the punctuation char should be inside the box, or that the boxes are just for debugging, the second line shows the simplified version without boxes.

Another feature is implemented: the handling of the space factor. In order to add a larger space after full stops with \nonfrenchspacing, TeX keeps track of a space factor. The example saves the space factor after the word inside the node and restores it after the tikzpicture. See the larger space between "cc." and "New".

Full example:

\documentclass{article}
\usepackage{ltxcmds}
\usepackage{tikz}

\newlength{\Aheight}
\setlength{\Aheight}{\fontcharht\font`A}

\makeatletter
\newcommand{\phraselabel}[2]{%
  \ltx@ifnextchar@nospace,{\@phraselabel{#1}{#2}}{%
    \ltx@ifnextchar@nospace.{\@phraselabel{#1}{#2}}{%
      \ltx@ifnextchar@nospace;{\@phraselabel{#1}{#2}}{%
        \ltx@ifnextchar@nospace!{\@phraselabel{#1}{#2}}{%
          \ltx@ifnextchar@nospace?{\@phraselabel{#1}{#2}}{%
            \@phraselabel{#1}{#2}{}%
          }}}}}%
}
\newcommand*{\@phraselabel}[3]{%
  \begin{tikzpicture}[%
    baseline = (word.base),
    txt/.style = {inner sep = 0pt, text height = \Aheight, draw},
    above/.style = {inner sep = 0pt, text depth = 0pt, draw}%
  ]
    \node[txt] (word) {#1\phrase@save@spacefactor};
    \ifx\\#3\\
    \else
      \node[anchor=base, right, inner sep=0pt] at (word.base east)
        {\phrase@set@spacefactor#3\phrase@save@spacefactor};
    \fi
    \node[above] at (word.north) {\footnotesize{#2}};
  \end{tikzpicture}%
  \phrase@set@spacefactor
}
\newcount\phrase@spacefactor
\newcommand*{\phrase@save@spacefactor}{%
  \global\phrase@spacefactor=\spacefactor
}
\newcommand*{\phrase@set@spacefactor}{%
  \spacefactor=\phrase@spacefactor
}
\makeatother

\begin{document}

% With boxes
\phraselabel{aa}{xxxxx} \phraselabel{bb}{yyyyy}, \phraselabel{cc}{zzzzz}.
\phraselabel{New}{xxxxx} \phraselabel{sentence}{yyyyy}.
% Without boxes
\makeatletter
\renewcommand*{\@phraselabel}[3]{%
  \begin{tikzpicture}[%
    baseline = (word.base),
    txt/.style = {inner sep = 0pt, text height = \Aheight},
    above/.style = {inner sep = 0pt, text depth = 0pt}%
  ]
    \node[txt] (word) {#1#3\phrase@save@spacefactor};
    \node[above] at (word.north) {\footnotesize{#2}};
  \end{tikzpicture}%
  \phrase@set@spacefactor
}
\makeatother

\phraselabel{aa}{xxxxx} \phraselabel{bb}{yyyyy}, \phraselabel{cc}{zzzzz}.
\phraselabel{New}{xxxxx} \phraselabel{sentence}{yyyyy}.

\end{document}

Result

Remark: In contrast to LaTeX's \@ifnextchar, the \ltx@ifnextchar@nospace macro of package ltxcmds, which is used for the look ahead, does not gobble spaces when looking ahead.

• I'd like to also do this when the following character is ! or ?. I tried to simply add \ltx@ifnextchar@nospace!{\@phraselabel{#1}{#2}}{% and \ltx@ifnextchar@nospace?{\@phraselabel{#1}{#2}}{% below your other \ltx@ifnextchar@nospace commands, but I get an error Runaway argument? {\ltx@ifnextchar@nospace ,{\@phraselabel {##1}{##2}}{\ltx@ifnextchar@nospace \E TC. Could you please include commands for ! and ? in your code? – Sverre Apr 11 '15 at 13:03
• @Sverre Probably only some closing curly braces were missing. I have now added support for ;, !, and ?. – Heiko Oberdiek Apr 11 '15 at 13:55

You can use \rlap{}:

\documentclass{article}
\usepackage{tikz}

\newlength{\Aheight}
\setlength{\Aheight}{\fontcharht\font`A}

\newcommand{\phraselabel}[2]{%
  \begin{tikzpicture}[%
    baseline = (word.base),%
    txt/.style = {inner sep = 0pt, text height = \Aheight},%
    tag/.style = {above=0.75ex, inner sep = 0pt, text depth = 0pt}%
  ]%
    \node[txt] (word) {#1};%
    \node[tag] at (word.north) {\footnotesize{#2}};%
  \end{tikzpicture}%
}

\begin{document}
\phraselabel{gg}{xxxxx} \phraselabel{aa}{jjjjj} \phraselabel{tt}{xxxxx} \phraselabel{bb\rlap{.}}{xxxxx}
\end{document}

• Relevant links for people wanting to know what \rlap does are here and here. – Sverre Apr 10 '15 at 21:23
• Sorry, yes, I had to run before explaining how it works. – Jason Zentz Apr 10 '15 at 21:54

As a quick hack you could just use \hphantom{}:

\documentclass{article}
\usepackage{tikz}

\newlength{\Aheight}
\setlength{\Aheight}{\fontcharht\font`A}

\newcommand{\phraselabel}[2]{%
  \begin{tikzpicture}[%
    baseline = (word.base),
    txt/.style = {inner sep = 0pt, text height = \Aheight, draw},
    above/.style = {inner sep = 0pt, text depth = 0pt, draw}%
  ]
    \node[txt] (word) {#1};
    \node[above] at (word.north) {\footnotesize{#2}};
  \end{tikzpicture}%
}

\begin{document}
\phraselabel{\hphantom{,}bb,}{xxxxx}

\phraselabel{\hphantom{.}bb.}{xxxxx}
\end{document}

• Yes, I have, but the problem then is that the node above will no longer be centered with respect to bb, but to bb + comma/period. – Sverre Apr 10 '15 at 21:17
• I have updated my problem with a solution using \hphantom{} to balance the left and the right side. \hphantom{} and \vphantom{} can often be used for stuff like this for quick fixes. (Edit) I just thought I would also mention if you don't like seeing \hphantom{} in \phraselabel{} you can write a macro to auto balance it on either side with something like \newcommand{\bal}[2]{\hphantom{#1} #2 #1} – MrBrightside Apr 10 '15 at 21:18
Each person from a group of 3 people can choose his dish from a menu of 5 options. Knowing that each person eats only 1 dish, what is the number of different orders the waiter can ask the chef?

3 Answers

This is a problem of combinations with repetitions. See for example the discussion on Wikipedia (which lacks a proof right now).

• A proof is here: en.wikipedia.org/wiki/Stars_and_bars_(probability) – Douglas S. Stones Oct 20 '10 at 23:38

If we assume that the chef doesn't care which person ordered which dish, just how many of each dish to make, the problem is equivalent to placing 3 balls (each representing a dish the chef needs to make) in 5 urns (representing the 5 possible dishes). This type of balls/urns problem (or pieces of identical candy to children, donuts of different types, etc.) can be solved with a technique sometimes called "stars and bars": represent the balls by stars in a line:

* * *

Now, to divide these balls among 5 urns, add 4 bars, denoting the breaking points between different urns, e.g.:

* * * | | | |    all the balls in the first urn
* | | * * | |    1 ball in the first urn, 2 in the third urn, none in the rest

The number of possible ways to do this is the number of ways to rearrange the line of 3 *s and 4 |s, which is the number of ways to choose which 3 of the 7 symbols in the line should be *s, so ${7\choose 3}$. In general, for $k$ balls into $n$ urns, there are $k$ *s, $n-1$ |s, $k+n-1$ total symbols in the line, so ${k+n-1\choose k}$.

${5+2\choose 3}={7\choose 3}=35$, not ${5\choose 3}$. The 35 different orders are {{1, 1, 1}, {1, 1, 2}, {1, 1, 3}, {1, 1, 4}, {1, 1, 5}, {1, 2, 2}, {1, 2, 3}, {1, 2, 4}, {1, 2, 5}, {1, 3, 3}, {1, 3, 4}, {1, 3, 5}, {1, 4, 4}, {1, 4, 5}, {1, 5, 5}, {2, 2, 2}, {2, 2, 3}, {2, 2, 4}, {2, 2, 5}, {2, 3, 3}, {2, 3, 4}, {2, 3, 5}, {2, 4, 4}, {2, 4, 5}, {2, 5, 5}, {3, 3, 3}, {3, 3, 4}, {3, 3, 5}, {3, 4, 4}, {3, 4, 5}, {3, 5, 5}, {4, 4, 4}, {4, 4, 5}, {4, 5, 5}, {5, 5, 5}}. The notation and formula were from http://www.oeis.org/A000292
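Putting the two answers together as a worked check, with $n = 5$ dishes and $k = 3$ people:

$$\binom{n+k-1}{k} = \binom{5+3-1}{3} = \binom{7}{3} = \frac{7 \cdot 6 \cdot 5}{3 \cdot 2 \cdot 1} = 35,$$

which matches the 35 multisets enumerated in the last answer.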
What does DGX stand for?

1. Stands for NVIDIA DGX Systems

Overview
NVIDIA DGX Systems (DGX) are advanced computing platforms designed by NVIDIA for artificial intelligence (AI) and deep learning workloads. These systems combine powerful GPUs with optimized software to accelerate AI research and development.

Importance of DGX
NVIDIA DGX Systems are crucial for:
• Accelerating AI and deep learning research.
• Providing high-performance computing capabilities.
• Enabling large-scale data analysis and training of complex models.
• Supporting innovation in AI across various industries.

Components of DGX
1. Powerful GPUs: Equipped with NVIDIA GPUs designed for AI workloads.
2. Optimized Software: Includes NVIDIA's deep learning frameworks and tools.
3. High-Performance Computing: Provides the computational power needed for intensive AI tasks.
4. Scalability: Can be scaled to meet the demands of growing AI projects.
5. Enterprise Support: Offers support services to ensure optimal performance and reliability.

Implementing DGX
Organizations implement NVIDIA DGX Systems by integrating them into their AI research and development workflows, leveraging the powerful GPUs and optimized software to accelerate model training, data analysis, and deployment of AI solutions.

2. Stands for Digital Graphics Exchange

Overview
Digital Graphics Exchange (DGX) refers to platforms or standards for exchanging digital graphics and multimedia content. These exchanges facilitate the sharing, trading, and distribution of digital artwork, designs, and media assets.

Importance of DGX
Digital Graphics Exchange is crucial for:
• Enabling the seamless exchange of digital graphics and multimedia content.
• Supporting collaboration among designers, artists, and developers.
• Providing access to a wide range of digital assets.
• Promoting the monetization and licensing of digital content.

Components of DGX
1. Content Marketplace: Platforms for buying, selling, and trading digital graphics.
2. Collaboration Tools: Features that support collaborative design and development.
3. Licensing and Rights Management: Ensuring proper licensing and usage rights for digital assets.
4. Secure Transactions: Secure payment and transaction processes.
5. Community Engagement: Fostering a community of creators and consumers.

Implementing DGX
Organizations and platforms implement Digital Graphics Exchange by developing marketplaces for digital assets, providing collaboration tools, ensuring secure transactions, managing licensing rights, and engaging with a community of users to facilitate the exchange of digital graphics.

3. Stands for Data Governance Exchange

Overview
Data Governance Exchange (DGX) is a framework or platform for managing and exchanging data governance practices and policies across organizations. This exchange aims to standardize data governance and ensure compliance with data regulations.

Importance of DGX
Data Governance Exchange is crucial for:
• Standardizing data governance practices across organizations.
• Ensuring compliance with data protection regulations.
• Promoting data quality and integrity.
• Facilitating collaboration and knowledge sharing on data governance.

Components of DGX
1. Governance Frameworks: Standardized policies and practices for data governance.
2. Compliance Tools: Tools to ensure compliance with data regulations.
3. Quality Management: Processes to maintain and improve data quality.
4. Collaboration Platforms: Platforms for sharing knowledge and best practices.
5. Training and Support: Resources for training and supporting data governance professionals.

Implementing DGX
Organizations implement Data Governance Exchange by adopting standardized governance frameworks, using compliance tools, maintaining data quality, providing collaboration platforms, and offering training and support to ensure effective data governance.

4. Stands for Distributed Gaming Experience

Overview
Distributed Gaming Experience (DGX) refers to gaming platforms or networks that provide a seamless and immersive gaming experience across distributed environments. This includes cloud gaming, multiplayer networks, and cross-platform play.

Importance of DGX
Distributed Gaming Experience is crucial for:
• Enhancing the gaming experience through seamless connectivity and performance.
• Supporting multiplayer and cross-platform gaming.
• Reducing latency and improving game responsiveness.
• Enabling access to games from various devices and locations.

Components of DGX
1. Cloud Gaming: Streaming games from the cloud to various devices.
2. Multiplayer Networks: Platforms that support multiplayer gaming across different regions.
3. Cross-Platform Play: Allowing players to play together regardless of the platform they use.
4. Latency Reduction: Technologies to minimize latency and improve game performance.
5. User Accessibility: Ensuring games are accessible from multiple devices and locations.

Implementing DGX
Gaming companies implement Distributed Gaming Experience by leveraging cloud gaming technology, developing robust multiplayer networks, supporting cross-platform play, reducing latency, and ensuring accessibility to provide an enhanced gaming experience.

5. Stands for Doctor of Global Health

Overview
Doctor of Global Health (DGX) is an advanced academic degree focused on addressing global health issues. This program prepares professionals to work on health challenges worldwide, including infectious diseases, health policy, and health equity.

Importance of DGX
Doctor of Global Health is crucial for:
• Addressing critical global health challenges.
• Promoting health equity and access to healthcare.
• Developing and implementing effective health policies.
• Enhancing research and innovation in global health.

Components of DGX
1. Advanced Curriculum: Comprehensive coursework on global health issues and solutions.
2. Field Research: Conducting research on health challenges in various global contexts.
3. Policy Development: Creating and implementing health policies to improve health outcomes.
4. Community Engagement: Working with communities to address health needs and promote health equity.
5. Interdisciplinary Approach: Integrating perspectives from public health, medicine, sociology, and economics.

Implementing DGX
Universities implement the Doctor of Global Health program by offering advanced coursework, supporting field research, developing health policies, engaging with communities, and promoting an interdisciplinary approach to prepare professionals for global health leadership.

6. Stands for Dynamic Grid Exchange

Overview
Dynamic Grid Exchange (DGX) refers to the exchange of energy resources and information within a dynamic and interconnected electrical grid. This system supports the integration of renewable energy sources and enhances grid efficiency and reliability.

Importance of DGX
Dynamic Grid Exchange is crucial for:
• Supporting the integration of renewable energy sources into the grid.
• Enhancing the efficiency and reliability of the electrical grid.
• Facilitating real-time exchange of energy resources and information.
• Promoting sustainable energy practices.

Components of DGX
1. Renewable Integration: Connecting renewable energy sources to the grid.
2. Real-Time Monitoring: Systems for real-time data collection and analysis.
3. Energy Storage: Solutions for storing and balancing energy supply and demand.
4. Smart Grid Technology: Using advanced technologies to optimize grid performance.
5. Regulatory Compliance: Ensuring adherence to energy regulations and standards.

Implementing DGX
Energy providers implement Dynamic Grid Exchange by integrating renewable energy sources, using real-time monitoring systems, deploying energy storage solutions, leveraging smart grid technology, and ensuring regulatory compliance to enhance grid performance and sustainability.

7. Stands for Digital Governance Exchange

Overview
Digital Governance Exchange (DGX) refers to platforms or frameworks that facilitate the exchange of digital governance practices and policies. These exchanges aim to standardize digital governance and promote transparency, accountability, and efficiency in public sector operations.

Importance of DGX
Digital Governance Exchange is crucial for:
• Standardizing digital governance practices across the public sector.
• Promoting transparency and accountability in government operations.
• Enhancing the efficiency and effectiveness of public services.
• Facilitating collaboration and knowledge sharing on digital governance.

Components of DGX
1. Governance Frameworks: Standardized policies and practices for digital governance.
2. Transparency Tools: Tools to enhance transparency and accountability.
3. Efficiency Enhancements: Processes to improve the efficiency of public services.
4. Collaboration Platforms: Platforms for sharing knowledge and best practices.
5. Training and Support: Resources for training and supporting digital governance professionals.

Implementing DGX
Governments implement Digital Governance Exchange by adopting standardized governance frameworks, using transparency tools, enhancing service efficiency, providing collaboration platforms, and offering training and support to ensure effective digital governance.

8. Stands for Distributed Genome Analysis

Overview
Distributed Genome Analysis (DGX) refers to the use of distributed computing resources to analyze genomic data. This approach leverages high-performance computing to manage and process large-scale genomic datasets efficiently.

Importance of DGX
Distributed Genome Analysis is crucial for:
• Managing and analyzing large-scale genomic data efficiently.
• Accelerating research in genomics and personalized medicine.
• Supporting the discovery of genetic markers and treatments.
• Enhancing the scalability of genomic data analysis.

Components of DGX
1. High-Performance Computing: Using powerful computing resources to process genomic data.
2. Data Integration: Integrating data from various genomic databases.
3. Analysis Tools: Software and algorithms for genomic data analysis.
4. Scalability: Ensuring the system can handle increasing amounts of data.
5. Collaborative Research: Facilitating collaboration among researchers and institutions.

Implementing DGX
Research institutions implement Distributed Genome Analysis by leveraging high-performance computing, integrating genomic data, using advanced analysis tools, ensuring scalability, and promoting collaborative research to accelerate discoveries in genomics.

9. Stands for Doctor of Geospatial Science

Overview
Doctor of Geospatial Science (DGX) is an advanced academic degree focusing on the study and application of geospatial technologies. This program prepares professionals to work with geographic information systems (GIS), remote sensing, and spatial analysis.

Importance of DGX
Doctor of Geospatial Science is crucial for:
• Advancing the field of geospatial science and technology.
• Supporting spatial data analysis and decision-making.
• Enhancing research and innovation in GIS and remote sensing.
• Promoting the use of geospatial technologies in various industries.

Components of DGX
1. Advanced Curriculum: Comprehensive coursework on geospatial technologies and applications.
2. Field Research: Conducting research on geospatial data collection and analysis.
3. Technology Integration: Using GIS, remote sensing, and other geospatial tools.
4. Spatial Analysis: Techniques for analyzing spatial data and patterns.
5. Industry Applications: Applying geospatial technologies in fields such as urban planning, environmental management, and transportation.

Implementing DGX
Universities implement the Doctor of Geospatial Science program by offering advanced coursework, supporting field research, integrating geospatial technologies, teaching spatial analysis techniques, and promoting industry applications to prepare professionals for careers in geospatial science.

10. Stands for Dynamic Growth Exchange

Overview
Dynamic Growth Exchange (DGX) refers to a platform or network that supports the exchange of growth strategies, resources, and best practices among businesses and entrepreneurs. This exchange aims to foster innovation and drive business growth through collaboration.

Importance of DGX
Dynamic Growth Exchange is crucial for:
• Supporting business growth and innovation.
• Providing access to resources and best practices.
• Facilitating collaboration and knowledge sharing among businesses.
• Enhancing the competitiveness and scalability of businesses.

Components of DGX
1. Resource Exchange: Platforms for sharing resources and tools for business growth.
2. Best Practices: Sharing strategies and practices that have proven successful.
3. Collaboration Networks: Networks for businesses to connect and collaborate.
4. Innovation Support: Programs and initiatives to foster innovation.
5. Performance Metrics: Tracking and assessing business performance and growth.

Implementing DGX
Businesses and organizations implement Dynamic Growth Exchange by developing platforms for resource exchange, sharing best practices, creating collaboration networks, supporting innovation initiatives, and using performance metrics to drive growth and competitiveness.
KIS Bridging Loans
Presented by KIS Finance

How to choose a safe password and help protect your devices

Fraudsters can guess two out of three passwords easily!

Fraudsters can gain access to many accounts simply because they have been able to guess the correct login details, using software that generates common passwords. Fraudsters with so-called 'Brute Force' software can make 8 million password guesses a second! The software is very sophisticated: it uses words from English dictionaries as well as other languages, and even re-tests each word with common substitutions, such as changing 'a' to '@'. Despite adding symbols in this way to try to make our passwords safer, around two thirds of our passwords can be guessed quickly!

Fernando Corbato, the man who invented the password back in the early 1960s, said that passwords have become "kind of a nightmare".

Don't use just one real word

Forget passwords, think passphrase. To make it harder for fraudsters to crack your passwords and gain access to your accounts, create a long password, one that isn't simply a dictionary word, and mix it up with capitals and numbers. This sounds like you must create a password that will later be impossible to remember. However, if you use one of the following methods you should easily be able to come up with some safe passwords you'll never forget.

Bill Burr, the man who invented password rules in 2003, said "Much of what I did I now regret".

1. Person-Action-Object method

Usually when coming up with a new password, people choose a specific word which may be meaningful to them, or, if the system requires a combination of numbers and a capital letter, they will simply use the word with a capital at the front and a number afterwards. Using this method, instead of thinking of a word, think of a memorable scenario described by three words in the order person-action-object. For example:

Doris plays bingo

Use the first three letters of each word to create your password. The password will look something like this:

DorPlaBin

The thought put into this will even help you remember the password without having to write it down.

Top 10 Most Popular Passwords
1. 123456
2. 123456789
3. qwerty
4. 12345678
5. 111111
6. 1234567890
7. 1234567
8. password
9. 123123
10. 987654321

2. Sentences about you

A secure password should be at least 12 digits long, or more if possible. Since using a single word isn't a good idea, think of a short sentence about you, and expand on the information a little; then take the first letter of each word. For example:

The first flat I lived in was in Golden Mile View. The rent was £525

The above sentence will give you:

TffIliwiGMV.Trw£5

3. Make it backwards

If you want to use words rather than a phrase, spell it backwards so it won't be guessed when trying dictionary words. For example:

Alfiethecat

Becomes...

tacehteiflA

After using some of the ideas above, you can test your password to see how safe it is using our password checker.
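To make the Person-Action-Object idea concrete, here is a small sketch (an addition, not part of the article) that builds a password from the first three letters of each word in a phrase. It reproduces the "Doris plays bingo" example; a real password should still be lengthened and mixed with digits and symbols as described above.

import secrets

def passphrase_to_password(phrase: str, letters_per_word: int = 3) -> str:
    """Take the first few letters of each word, capitalised,
    as in 'Doris plays bingo' -> 'DorPlaBin'."""
    return "".join(w[:letters_per_word].capitalize() for w in phrase.split())

base = passphrase_to_password("Doris plays bingo")
print(base)                               # DorPlaBin
# Optionally append a random digit to satisfy "must contain a number" rules:
print(base + str(secrets.randbelow(10)))  # e.g. DorPlaBin7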
How do I change my resolution to 2560×1440 in Windows 10?

How do I change the resolution on Windows 10?

View display settings in Windows 10:
1. Select Start > Settings > System > Display.
2. If you want to change the size of your text and apps, choose an option from the drop-down menu under Scale and layout.
3. To change your screen resolution, use the drop-down menu under Display resolution.

How do I force a high resolution on Windows 10?

How to set a custom resolution on Windows 10:
1. Right-click on your desktop and choose NVIDIA Control Panel.
2. In the left side-panel, under Display, click on Change resolution.
3. In the right section, scroll a bit, and under Choose the resolution click the Customize button.

Can my PC run 2560×1440?

Yes, you can.

How do I change my screen resolution manually in Windows 10?

In the left pane, click on Display. In the right pane, scroll down and click Advanced display settings. If you have more than one monitor connected to your computer, select the monitor on which you want to change the screen resolution. Click the Resolution drop-down menu, and then select a screen resolution.

Why can't I change my screen resolution in Windows 10?

When you can't change the display resolution on Windows 10, it usually means that your drivers are missing some updates. If you can't change the display resolution, try installing the drivers in compatibility mode. Applying some settings manually in the AMD Catalyst Control Center is another useful fix.

How do I change my resolution to 1920×1080?

These are the steps:
1. Open the Settings app using the Win+I hotkey.
2. Access the System category.
3. Scroll down to the Display resolution section available on the right part of the Display page.
4. Use the drop-down menu available for Display resolution to select the 1920×1080 resolution.
5. Press the Keep changes button.

How do I force my screen resolution to increase?

To change your screen resolution, open Control Panel, and then, under Appearance and Personalization, click Adjust screen resolution. Click the drop-down list next to Resolution, move the slider to the resolution you want, and then click Apply.

How do you force a resolution change?

How to set a custom resolution in Windows 10 with Intel graphics:
1. Right-click on your desktop and select "Intel Graphics Settings".
2. For simple display settings, you can stay on the General Settings page and adjust the Resolution drop-down menu.

How do I force my screen resolution?

In the Control Panel app, go to Control Panel > Appearance and Personalization > Display > Screen Resolution and click Advanced Settings. This will open the display adapter's settings. The rest of the process remains unchanged: click the 'List all modes' button on the Adapter tab, select a resolution, and apply it.

Is 1440p better than 1080p?

Comparing 1440p with 1080p, 1440p is the better resolution: it provides a larger workspace, sharper image definition, and more screen real estate. Note, though, that a 32″ 1440p monitor has roughly the same "sharpness" (pixel density) as a 24″ 1080p monitor.

How do I know if my monitor is 1080p or 1440p?

Just as 1920×1080 is shortened to 1080p, 2560×1440 gets shortened to 1440p. The letter after the number, a 'p' in this case, refers to how the resolution is drawn on the monitor, indicating whether it is progressive (1440p) or interlaced (1440i).

Why can't I change my resolution?

Open Settings, where you get to change the screen resolution: go to Settings > System > Display. See if you can change it to the resolution you want, or at least one better than the current setting. Sometimes, because of some issue, the display drivers automatically change the screen resolution.

Why can't I make a custom resolution?

Make sure you have installed the latest driver for both your monitor and your Nvidia GeForce GPU, as sometimes this can make higher resolutions available to you in the Windows Display settings. Restart the machine, and if the resolution you require is still not shown, continue to the next step to create a custom resolution.

How do I fix the resolution on Windows 10?

How to fix Windows 10 display size and resolution issues:
1. Determine the native resolution of your display and switch to it.
2. Double-check your hardware.
3. Check in-app settings.
4. Install, reinstall, or update your display drivers.
5. Roll back drivers.
6. Set the correct multi-display mode.
7. Use your GPU utility to set the resolution.
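If you just want to confirm which resolution Windows is actually using, a small script can read it. This is a minimal sketch (an addition to the article) assuming Windows and Python; it only reads the primary display's size via the Win32 GetSystemMetrics call and does not change the resolution.

import ctypes

# Ask Windows not to report scaled dimensions on high-DPI displays.
ctypes.windll.user32.SetProcessDPIAware()

width = ctypes.windll.user32.GetSystemMetrics(0)   # SM_CXSCREEN
height = ctypes.windll.user32.GetSystemMetrics(1)  # SM_CYSCREEN
print(f"Primary display: {width}x{height}")

Actually changing the mode programmatically is possible through the Win32 ChangeDisplaySettings API, but for most users the Settings and driver control panels described above are the safer route.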
Yichuan Wang — Python Question

Number of rows in numpy array

I know that a numpy array has a method called shape that returns [No. of rows, No. of columns], and shape[0] gives you the number of rows, shape[1] gives you the number of columns.

a = numpy.array([[1,2,3,4], [2,3,4,5]])
a.shape
>> [2,4]
a.shape[0]
>> 2
a.shape[1]
>> 4

However, if my array only has one row, then it returns [No. of columns, ], and shape[1] will be out of the index. For example

a = numpy.array([1,2,3,4])
a.shape
>> [4,]
a.shape[0]
>> 4   // this is the number of columns
a.shape[1]
>> Error out of index

Now how do I get the number of rows of a numpy array if the array may have only one row?

Thank you

Answer

The concept of rows and columns applies when you have a 2D array. However, the array numpy.array([1,2,3,4]) is a 1D array and so has only one dimension, therefore shape rightly returns a single-valued iterable.

For a 2D version of the same array, consider the following instead:

>>> a = numpy.array([[1,2,3,4]]) # notice the extra square braces
>>> a.shape
(1, 4)
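If you need a row count that works whether the input is 1-D or 2-D, one common pattern (an addition, not part of the answer above) is to promote the array to 2-D first with numpy.atleast_2d, so a lone row counts as one row:

import numpy as np

def num_rows(a) -> int:
    # atleast_2d turns shape (4,) into (1, 4), so shape[0] is always defined.
    return np.atleast_2d(np.asarray(a)).shape[0]

print(num_rows(np.array([1, 2, 3, 4])))                  # 1
print(num_rows(np.array([[1, 2, 3, 4], [2, 3, 4, 5]])))  # 2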
Inventory Help
This is a discussion on Inventory Help within the C Programming forums, part of the General Programming Boards category.

1. #1 Fekore — Registered User (Join Date: Apr 2011, Posts: 12)

Inventory Help

Hello everyone, I'm currently in the midst of writing an inventory program for a Car database using File I/O and structures. I have six functions that will be used for:

1. Displaying a list of all car records
2. Adding a new record
3. Modifying an existing record
4. Showing sales and profits (including total profits for all cars)
5. Sorting the list in alphabetical order (based on Company Name)
6. Quitting from the program

I have been successful with the first 3 functions, but the last two, 4. and 5., are giving me quite a dilemma (I don't consider quitting from the program to really need a function, but nevertheless it has its own place). I've tried brainstorming ideas about how to go about finding the total profit for ALL cars, but I just can't seem to get the logic behind it to work! No matter what I've tried, it just displays the last car entry and that profit. I also have the 5th function to write, and I would normally read the car entries from the file and use bubble sort to arrange them in descending order. Would I be able to read the car entries into an array and then sort them?

Any and all brainstorming would be much appreciated! Below you will find my code. I'm sorry for the lack of commenting, especially in a fairly large program.

Code:
#include <stdio.h>
#include <conio.h>
#include <stdlib.h>
#include <string.h>

struct car {
    char company[30];
    char model[10];
    char colour[10];
    float ucost;
    float sellprice;
    float totalval;
    int stock;
    int numofcarsold;
    float profit;
    float totalprofit;
    float oldprofit;
};

int menu(void);
void list(void);
void add_record(void);
void modify_record(void);
void sales_and_profits(void);
void quit(void);

#define N 10

int main() {
    struct car cars[N];
    char choice;
    do {
        choice = menu();
        switch(choice) {
            case 1: list(); break;
            case 2: add_record(); break;
            case 3: modify_record(); break;
            case 4: sales_and_profits(); break;
            case 5: /*option 5 for sorting by alphabetical order to be added later*/ break;
            case 6: quit(); break;
        }
    } while(choice != 6); /*can fix this up to make it cleaner later*/
    return 0;
}

int menu(void) {
    int userchoice = 0;
    system("cls");
    printf("\t\t\t Car Sales Inventory\n\n\n\n");
    printf("\t\t1.\tDisplay the list of all car records\n");
    printf("\t\t2.\tAdd a new car record\n");
    printf("\t\t3.\tModify an existing car record\n");
    printf("\t\t4.\tShow sales and profits\n");
    printf("\t\t5.\tSort the list of cars\n");
    printf("\t\t6.\tExit");
    printf("\n\n\t\tPlease select your choice ==> ");
    fflush(stdin);
    scanf("%d", &userchoice);
    return userchoice;
}

void list(void) {
    FILE *fp;
    struct car list;
    int i = 0;
    if ((fp = fopen("H:\\SummerProject.txt", "rb")) == NULL) {
        system("cls");
        printf("\n\n\n\n\n\n\n\n\n\n\n\t\t There are currently no car records!\n\n");
        printf("\t\t Please choose option 2. to add a record.");
        getch();
        exit(0);
    }
    system("cls");
    printf("Company\tModel\tColour\tCost\tPrice\tValue\tStock\tSold\tProfit\n\n");
    while (fread(&list, sizeof(struct car), 1, fp) == 1) {
        printf("%s\t%s\t%s\t%.2f\t%.2f\t%.2f\t%d\t%d\t%.2f\n", list.company, list.model,
               list.colour, list.ucost, list.sellprice, list.totalval, list.stock,
               list.numofcarsold, list.profit);
    }
    fclose(fp);
    getch();
}

void add_record(void) {
    FILE *fp;
    struct car addcar;
    fp = fopen("H:\\SummerProject.txt", "ab");
    system("cls");
    printf("Enter the Company name: ");
    fflush(stdin);
    scanf("%s", addcar.company);
    printf("\nEnter the Car model: ");
    fflush(stdin);
    scanf("%s", addcar.model);
    printf("\nEnter the Car colour: ");
    fflush(stdin);
    scanf("%s", addcar.colour);
    printf("\nEnter the Unit cost: ");
    fflush(stdin);
    scanf("%f", &addcar.ucost);
    printf("\nEnter the Sell price: ");
    fflush(stdin);
    scanf("%f", &addcar.sellprice);
    addcar.totalval = (addcar.sellprice - addcar.ucost);
    printf("\nEnter the Stock of the car: ");
    fflush(stdin);
    scanf("%d", &addcar.stock);
    printf("\nEnter the number of cars sold: ");
    fflush(stdin);
    scanf("%d", &addcar.numofcarsold);
    addcar.profit = (addcar.sellprice * addcar.numofcarsold);
    fwrite(&addcar, sizeof(struct car), 1, fp);
    fclose(fp);
}

void modify_record(void) {
    FILE *fp;
    int line_number = 0;
    struct car modifycar;
    system("cls");
    if ((fp = fopen("H:\\SummerProject.txt", "r+b")) == NULL) {
        system("cls");
        printf("\n\n\n\n\n\n\n\n\n\n\n\t\t There are currently no car records to modify!\n\n");
        printf("\t\t Please choose option 2. to add a record.");
        getch();
        exit(0);
    }
    printf("Which record would you like to modify? ");
    scanf("%d", &line_number);
    printf("\nEnter the new Company name: ");
    fflush(stdin);
    scanf("%s", modifycar.company);
    printf("\nEnter the new Car model: ");
    fflush(stdin);
    scanf("%s", modifycar.model);
    printf("\nEnter the new Car colour: ");
    fflush(stdin);
    scanf("%s", modifycar.colour);
    printf("\nEnter the new Unit cost: ");
    fflush(stdin);
    scanf("%f", &modifycar.ucost);
    printf("\nEnter the new Sell price: ");
    fflush(stdin);
    scanf("%f", &modifycar.sellprice);
    printf("\nEnter the new Total value: ");
    fflush(stdin);
    scanf("%f", &modifycar.totalval);
    printf("\nEnter the new Stock of the car: ");
    fflush(stdin);
    scanf("%d", &modifycar.stock);
    printf("\nEnter the new number of cars sold: ");
    fflush(stdin);
    scanf("%d", &modifycar.numofcarsold);
    if (modifycar.profit < 0) {
        modifycar.profit = 0.0;
    } else {
        modifycar.profit = ((modifycar.sellprice * modifycar.numofcarsold) - modifycar.totalval);
    }
    fseek(fp, ((sizeof(struct car)) * (line_number - 1)), 0);
    fwrite(&modifycar, sizeof(struct car), 1, fp);
    fclose(fp);
}

void sales_and_profits(void) {
    FILE *fp;
    struct car showcarinfo;
    int i, lines = 0;
    char c;
    system("cls");
    if ((fp = fopen("H:\\SummerProject.txt", "r+b")) == NULL) {
        system("cls");
        printf("\n\n\n\n\n\n\n\n\n\n\n\t There are currently no car records to show profit for!\n\n");
        printf("\t\t Please choose option 2. to add a record.");
        getch();
        exit(0);
    }
    printf("Car Model\tNo. of Car\tCar Sold\tPrice\t\tProfit\n");
    while (fread(&showcarinfo, sizeof(struct car), 1, fp) == 1) {
        showcarinfo.totalprofit = showcarinfo.totalprofit + showcarinfo.profit;
        /*whilst printing the information below, calculate the totalprofit for all cars
          by repeatedly reading and adding that float value to itself to accumulate
          the total value*/
        printf("\n%s\t\t%d\t\t%d\t\t%.2f\t\t%.2f", showcarinfo.model, showcarinfo.stock,
               showcarinfo.numofcarsold, showcarinfo.sellprice, showcarinfo.profit);
    }
    printf("\n\nThe total profit for all cars is: $%.2f.", showcarinfo.totalprofit); /*prints the total profit for ALL cars*/
    getch();
    fclose(fp);
}

void quit(void) {
    system("cls");
    printf("\n\n\n\n\n\n\n\n\n\n\t Thank you for using my Car Sales Program!");
    getch();
}

2. #2 Adak — Registered User (Join Date: Sep 2006, Posts: 8,868)

In your car struct, you have things that are not intrinsic to any one car. Those should be removed from the struct, and handled as either another struct, or as a separate variable: numofcarsold (a sum is it not?), newprofit, oldprofit. Summary cost and profit variables should be local variables inside the sales_and_profits function.

Your do while() and switch statement in main() should go inside your menu function; that's where you loop around and around.

fflush(stdin) doesn't work. fflush() works on OUTWARD streams, not input streams. You're trying to flush your kitchen faucet like a toilet! Delete them all. You will manage your input buffer directly, using spaces: scanf("%d ", &choice) instead of scanf("%d", &choice). The space after the d makes a big difference.

Your menu is returning an int, and your main() loop is expecting a char, so that is certainly a problem.

#defines should go right below the last #include file line of code.

Why not use a text based program? Binary makes it more difficult, and in a program like this, adds nothing that I can see for a benefit.

To sort char arrays, you need to use strcmp(). For such a small array, a bubble sort is fine.

3. #3 Fekore — Registered User (Join Date: Apr 2011, Posts: 12)

My apologies for leaving out some information. The "numofcarsold" member of my struct car is required; you could overwrite it if more cars of a specific type are sold, but it is rather inconvenient to re-enter the entire record (I'll look into asking the user if they want to modify a specific entry later on, as it is extra). I will change the fflush(); I wasn't aware that it didn't work with input streams! Didn't see that I was returning an integer to a char, but that's a simple fix!

I am still having some difficulty trying to figure out how to logically go about accumulating the total profit for all cars. Whilst writing the entire structure to the file (after calculating the profit for the respective record), I could add each profit to a new structure member called

Code:
float totalprofitforallcars;

I was thinking next that if I were to modify an existing value then the profit would more than likely change, something I have to take into account, so I would also place each car profit (at the time of being calculated) into another structure member called

Code:
float oldprofit;

However, I have had no luck with getting this to work. I thought static variables would somehow help, but nothing happened there. I can get it to accumulate the total profit for all cars with some static variables (ONLY when a new file is created and the records are entered one by one), but as you know, when the program exits those values are lost. I guess I should try and also write that total profit for all cars when the file is first created and written to.

EDIT: Got the sales_and_profits() function working!

Code:
void sales_and_profits(void) {
    FILE *fp;
    int i = 0;
    struct car showcarinfo;
    float totalprofitforallcars = 0.0;

    system("cls");
    if ((fp = fopen("H:\\SummerProject.txt", "r+b")) == NULL) {
        system("cls");
        printf("\n\n\n\n\n\n\n\n\n\n\n\t There are currently no car records to show profit for!\n\n");
        printf("\t\t Please choose option 2. to add a record.");
        getch();
        exit(0);
    }
    printf("Car Model\tNo. of Car\tCar Sold\tPrice\t\tProfit\n");
    while (fread(&showcarinfo, sizeof(struct car), 1, fp) == 1) {
        totalprofitforallcars = totalprofitforallcars + showcarinfo.profit;
        printf("\n%s\t\t%d\t\t%d\t\t%.2f\t\t%.2f", showcarinfo.model, showcarinfo.stock,
               showcarinfo.numofcarsold, showcarinfo.sellprice, showcarinfo.profit);
        i++;
    }
    printf("\n\nThe total profit for all cars is: $%.2f.", totalprofitforallcars); /*prints the total profit for ALL cars*/
    getch();
    fclose(fp);
}

Now the last one shouldn't be too hard!

[Quote of post #2 by Adak snipped]

4. #4 Fekore — Registered User (Join Date: Apr 2011, Posts: 12)

So for my function that sorts the cars by Company Name, I have used fscanf() to read the names and input them into a two-dimensional array, and then proceed to use bubble sort on them. My problem is that I'm only printing out the first Company Name (in this case it's a test record with the company name as "J") and it isn't printing any more. I have a feeling it's with fscanf, because since my data is stored in a structure I'll need to properly position my file pointer after every record to gain access to each Company Name. How would I go about doing that? Would using a for() loop with fseek() (to position the file pointer to each record and then copy the Company name into an array for sorting) be suitable?

Code:
void sort_car_list(void) {
    FILE *fp;
    int i = 0, j, last;
    char names[50][15];
    char temp[15];

    system("cls");
    if ((fp = fopen("H:\\SummerProject.txt", "r+b")) == NULL) {
        system("cls");
        printf("\n\n\n\n\n\n\n\n\n\n\n\t There are currently no car records to sort!\n\n");
        printf("\t\t Please choose option 2. to add records.");
        getch();
        exit(0);
    }
    for (i = 0; !feof(fp); i++) {
        fscanf(fp, "%s", names[i]);
    }
    last = i - 1;
    fclose(fp);

    for (i = 0; i < last; i++) {
        for (j = i + 1; j <= i; j++) {
            if (strcmp(names[i], names[j - 1]) > 0) {
                strcpy(temp, names[j]);
                strcpy(names[j], names[j - 1]);
                strcpy(names[j - 1], temp);
            }
        }
    }
    printf("The Company names sorted are:\n");
    for (i = 0; i <= last; i++) {
        printf("\n%s", names[i]);
    }
    getch();
}

Last edited by Fekore; 07-28-2012 at 10:40 PM. Reason: Adding code

5. #5 Adak — Registered User (Join Date: Sep 2006, Posts: 8,868)

Structs are the C version of "object" programming. A struct for a student, for instance, might typically include: first name, last name, middle initial, id number, major, gpa, advisor, emergency telephone number, personal mobile phone number. It would NOT have a struct member for average height of the students in the school, since that is not a part of any student "object", but belongs to the whole school's student body data.

I'll post up a little example for you, in about 20 mins.

6. #6 Adak — Registered User (Join Date: Sep 2006, Posts: 8,868)

Amazing discovery! I can code much better when I'm awake! A little example program:

Code:
#include <stdio.h>
#include <string.h>

#define SIZE 5

struct video {
    char title[80];
    int rating;
};

void menu(struct video videos[SIZE]);
void sortIt(struct video videos[SIZE], int keyNum);
void printIt(struct video videos[SIZE]);

int main(void) {
    struct video videos[SIZE];
    char intake[150];
    FILE *fp;
    int i, j;

    if ((fp = fopen("videoDB.txt", "rt")) == NULL) {
        printf("Error! File Not Found\n");
        return 1;
    }
    i = 0;
    while ((fgets(intake, 149, fp)) != NULL) {
        //printf("%s",intake); getchar();
        //printf("%s j: %d*\n", intake, j); getchar();
        j = strlen(intake);
        while (intake[j] != ' ')
            --j;
        intake[j++] = '\0';
        //printf("%s*\n",intake); getchar();
        strcpy(videos[i].title, intake);
        sscanf(intake + j, "%d ", &videos[i].rating);
        printf("%s %d\n", videos[i].title, videos[i].rating);
        ++i;
    }
    fclose(fp);
    menu(videos);

    printf("\n");
    return 0;
}

void menu(struct video videos[SIZE]) {
    int choice;
    do {
        printf("\n 1. Display All Videos (n/a)  2. Search For a Video (n/a)\n"\
               "\n 3. Sort Videos By Title      4. Quit\n");
        printf("\nEnter your selection: ");
        scanf("%d", &choice);
        printf("\n");
        switch (choice) {
            case 1: printIt(videos); break;
            case 2: printf("Choice not available yet\n"); break;
            case 3: sortIt(videos, choice); printIt(videos); break;
            case 4: printf("Goodbye\n\n"); break;
            default: printf("try again, please\n");
        }
    } while (choice != 4);
}

/* later on, keyNum could be used to direct the sort to the
   proper struct member for that sort */
void sortIt(struct video videos[SIZE], int keyNum) { //this is insertion sort
    int i, j;
    struct video temp;

    for (i = 1; i < SIZE; i++) {
        j = i;
        while (j > 0 && strcmp(videos[j].title, videos[j-1].title) < 0) {
            temp = videos[j];
            videos[j] = videos[j-1];
            videos[j-1] = temp;
            --j;
        }
    }
}

void printIt(struct video videos[SIZE]) {
    int i;
    printf("\n      Title                       Rating\n"\
           "================================================\n");
    for (i = 0; i < SIZE; i++)
        printf("%3d) %-28s %2d\n", i+1, videos[i].title, videos[i].rating);
}

Where the videoDB.txt file consisted of:

Code:
The Ten Commandments 5
Shooter 4
Click 1
The Bourne Identity 4
Taken 4

7. #7 Fekore — Registered User (Join Date: Apr 2011, Posts: 12)

I completely understand your rationale about some members of my structure not being relevant; I'll be having a chat with my professor about it when I can. Thank you for the example program, it certainly got me thinking about my last function and other parts of my code I should polish up.

I'm still having a little trouble. I'm trying to read and write my car records from the existing file into an array of structures of type car; this way I can use strcmp() and bubble sort. After reading the records from my file I should be able to sort and then print based on descending Company name, but it isn't working. I'm going to plug in a few printfs to see if I'm actually reading the records from my file and storing them accordingly, but in the meantime here's my code:

Code:
void sort_car_list(void) {
    FILE *fp;
    int i = 0, j, counter = 0;
    struct car readcar;
    struct car sortcars[SIZE];
    struct car temp = {};

    if ((fp = fopen("H:\\SummerProject.txt", "r+b")) == NULL) {
        system("cls");
        printf("\n\n\n\n\n\n\n\n\n\n\n\t There are currently no car records to sort!\n\n");
        printf("\t\t Please choose option 2. to add records.");
        getch();
        exit(0);
    }
    while (fread(&readcar, sizeof(struct car), 1, fp) == 1) {
        sortcars[i] = readcar;
        i++;
        counter++;
    }
    system("cls");
    for (i = 0; i < counter; i) {
        for (j = i + 1; j <= counter; j++) {
            if (strcmp(sortcars[i].company, sortcars[j].company) > 0) {
                sortcars[i] = sortcars[j];
                sortcars[j] = temp;
                temp = sortcars[i];
            }
        }
    }
    printf("The cars sorted in alphabetical order are:\n");
    for (i = 0; i < counter; i++) {
        printf("%d\t%s\n", (i + 1), sortcars[i].company);
    }
    fclose(fp);
}

[Quote of post #6 by Adak snipped]

8. #8 Adak — Registered User (Join Date: Sep 2006, Posts: 8,868)

Your swap code is entirely wacked!

Code:
for (i = 0; i < counter; i) {
    for (j = i + 1; j <= counter; j++) {
        if (strcmp(sortcars[i].company, sortcars[j].company) > 0) {
            /* this section is nuts! */
            sortcars[i] = sortcars[j];
            sortcars[j] = temp;
            temp = sortcars[i];
            /* re-do it. Look at my program's swap code */
        }
    }
}

9. #9 AndiPersti — Registered User (Join Date: May 2012, Posts: 1,066)

Quote Originally Posted by Fekore: "Thank you for the example program it certainly got me thinking about my last function and other parts of my code I should polish up."

A big candidate for polishing up are the following lines:

Code:
if ((fp = fopen("H:\\SummerProject.txt", "r+b")) == NULL) {
    system("cls");
    printf("\n\n\n\n\n\n\n\n\n\n\n\t There are currently no car records to sort!\n\n");
    printf("\t\t Please choose option 2. to add records.");
    getch();
    exit(0);
}

You repeat them in 4 out of 5 functions (leaving aside quit). Ever thought about moving them into a function?

Bye, Andreas

10. #10 Fekore — Registered User (Join Date: Apr 2011, Posts: 12)

I shortly figured out that I was doing the wrong assigning with my bubble sort, not to mention I forgot to increment "i" in my outer for() loop. I must've been out of it! Got the sort function to work, but I do still have some questions. Is there any way I can exit appropriately from any function (except quit) for which there is no file to modify/sort/list cars for? Right now I'm exiting with

Code:
exit(0)

but it quits out of the entire program and that's not something that is user-friendly. Are there any commands/functions that will let me safely avoid an error like this?

11. #11 Adak — Registered User (Join Date: Sep 2006, Posts: 8,868)

Yes. You use your function's return. You should exit() or return from your program, from main(). Here's an example from my example program above. There was no file to read, so the program returns to the operating system:

Code:
if ((fp = fopen("videoDB.txt", "rt")) == NULL) {
    printf("Error! File Not Found\n");
    return 1;
}

If you are in a function that was called by menu(), then you would return to menu(), at which point you could use other parts of the program, or quit elegantly.

12. #12 Registered User (Join Date: Dec 2011, Posts: 795)

I would change the following:

Code:
while ((fgets(intake, 149, fp)) != NULL) {
    //printf("%s",intake); getchar();
    //printf("%s j: %d*\n", intake, j); getchar();
    j = strlen(intake);
    while (intake[j] != ' ')
        --j;
    intake[j++] = '\0';
    strcpy(videos[i].title, intake);
    sscanf(intake + j, "%d ", &videos[i].rating);
    ++i;

to something a lot simpler, for example:

Code:
while (fgets(intake, sizeof(intake), fp) != NULL) {
    if (sscanf(intake, "%s %d", videos[i].title, &videos[i].rating) != 2)
        break;
    i++;
}

13. #13 Click_here — TEIAM - problem solved (Join Date: Apr 2012, Location: Melbourne Australia, Posts: 1,508)

Have you thought about using a linked list?

14. #14 Fekore — Registered User (Join Date: Apr 2011, Posts: 12)

I haven't been taught about linked lists as yet, but I'm sure it looks like a viable option. Thanks for all of the help, Adak and everyone else!
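For readers skimming the thread: the objection in post #8 is to the swap in posts #4 and #7, where temp is assigned after it is used and the outer loop never increments i. A minimal corrected bubble-sort swap, reusing the thread's own sortcars array, counter variable, and struct car type (a sketch only, not code posted in the thread), might look like this:

Code:
/* Sketch of a corrected bubble sort over the thread's sortcars[]/counter.
   Copy into temp first, then overwrite, then restore from temp. */
struct car temp;
int i, j;
for (i = 0; i < counter - 1; i++) {           /* note: i++ was missing */
    for (j = 0; j < counter - 1 - i; j++) {
        if (strcmp(sortcars[j].company, sortcars[j + 1].company) > 0) {
            temp            = sortcars[j];
            sortcars[j]     = sortcars[j + 1];
            sortcars[j + 1] = temp;
        }
    }
}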
Inspecting Behavior

We may also wish to inspect the behavior of our program that could lead to a particular error. Specifically, we may need to know what set of function calls and classes led to the error itself. In that case, we'll need a way to see what code was executed before the bug was reached.

Stack Trace

One of the most useful ways to inspect the behavior of our application is to look at the call stack or stack trace of the program when it reaches an exception. The call stack will list all of the functions currently being executed, even including the individual line numbers of the currently executed piece of code.

For example, consider this code:

public class Test {

    public void functionA() throws Exception {
        this.functionB();
    }

    public void functionB() throws Exception {
        this.functionC();
    }

    public void functionC() throws Exception {
        throw new Exception("Test Exception");
    }

    public static void main(String[] args) throws Exception {
        Test test = new Test();
        test.functionA();
    }
}

class Test:
    def function_a(self) -> None:
        self.function_b()

    def function_b(self) -> None:
        self.function_c()

    def function_c(self) -> None:
        raise Exception("Test Exception")

Test().function_a()

This code includes a chain of three functions, and the innermost function will throw an exception. When we run this code, we'll get the following error messages:

Exception in thread "main" java.lang.Exception: Test Exception
    at Test.functionC(Test.java:12)
    at Test.functionB(Test.java:8)
    at Test.functionA(Test.java:4)
    at Test.main(Test.java:17)

Traceback (most recent call last):
  File "Test.py", line 11, in <module>
    Test().function_a()
  File "Test.py", line 3, in function_a
    self.function_b()
  File "Test.py", line 6, in function_b
    self.function_c()
  File "Test.py", line 9, in function_c
    raise Exception("Test Exception")
Exception: Test Exception

As we can see, both Java and Python will automatically print a stack trace of the exact functions and lines of code that we executed when we were reaching the error. Recall that this relates to the call stack in memory that is created while this program is executed:

[Figure: Call Stack]

As we can see, Java will print the innermost call at the top of the call stack, whereas Python will invert the order and put the innermost call at the end. So, you'll have to read carefully to make sure you are interpreting the call stack correctly.

What if we want to get a call stack without crashing our program? Both Java and Python support a method for this:

Thread.dumpStack();

traceback.print_stack()

In both instances, we just need to import the appropriate library, and we have a method for examining the complex behaviors of our programs at our fingertips. Of course, as we'll see in a bit, both debuggers and loggers can be used in conjunction with these methods to get even more information from our program.
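Building on the closing remark about loggers, here is a short sketch (an addition to the lesson) showing how Python's standard logging and traceback modules can record the same stack information instead of just printing it:

import logging
import traceback

logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)

def function_c():
    raise Exception("Test Exception")

try:
    function_c()
except Exception:
    # logging.exception records the message plus the full traceback.
    log.exception("Something went wrong")

# Outside of error handling, we can capture the current call stack
# as a string rather than printing it directly:
stack_text = "".join(traceback.format_stack())
log.info("Current stack:\n%s", stack_text)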
Sourcecode: csound — filebuilding.cpp

#include "filebuilding.h"
#include <stdlib.h>
#include <string>
#include <vector>
#include <map>

#ifdef MACOSX
#define gcvt(val,dig,buf) snprintf(buf,dig,"%f",val)
#endif

/* Per-instance storage for a CSD file being built in memory. */
struct CsoundFile_ {
  std::string options;
  std::string orchestra;
  std::vector<std::string> score;
};

static std::map<CSOUND *, CsoundFile_> files;

#ifdef __cplusplus
extern "C" {
#endif

uintptr_t perfthread(void *data){
  CSOUND *cs = (CSOUND *)data;
  int res = 0;
  while(res == 0)
    res = csoundPerformKsmps(cs);
  return 0;
}

PUBLIC void csoundNewCSD(char *path) {
  char *argv[2];
  CSOUND *instance;
  argv[0] = (char *)malloc(7);
  argv[1] = (char *)malloc(strlen(path)+1);
  strcpy(argv[0], "csound");
  strcpy(argv[1], path);
  //argv[0] = "csound";
  //argv[1] = path;
  printf("%s \n", argv[1]);
  instance = csoundCreate(NULL);
  csoundCompile(instance,2,argv);
  perfthread((void *) instance);
  csoundReset(instance);
  // csoundDestroy(instance);
  free(argv[0]);
  free(argv[1]);
}

PUBLIC int csoundPerformLoop(CSOUND *cs){
  csoundCreateThread(perfthread, (void *)cs);
  return 1;
}

PUBLIC void csoundCsdCreate(CSOUND *csound)
{
  CsoundFile_ csoundFile;
  files[csound] = csoundFile;
}

PUBLIC void csoundCsdSetOptions(CSOUND *csound, char *options)
{
  files[csound].options = options;
}

PUBLIC const char* csoundCsdGetOptions(CSOUND *csound)
{
  return files[csound].options.c_str();
}

PUBLIC void csoundCsdSetOrchestra(CSOUND *csound, char *orchestra)
{
  files[csound].orchestra = orchestra;
}

PUBLIC const char* csoundCsdGetOrchestra(CSOUND *csound)
{
  return files[csound].orchestra.c_str();
}

PUBLIC void csoundCsdAddScoreLine(CSOUND *csound, char *line)
{
  files[csound].score.push_back(line);
}

PUBLIC void csoundCsdAddEvent11(CSOUND *csound, double p1, double p2, double p3, double p4, double p5,
                                double p6, double p7, double p8, double p9, double p10, double p11)
{
  char note[0x100];
  sprintf(note, "i %-.10g %-.10g %-.10g %-.10g %-.10g %-.10g %-.10g %-.10g %-.10g %-.10g %-.10g",
          p1, p2, p3, p4, p5, p6, p7, p8, p9, p10, p11);
  files[csound].score.push_back(note);
}

PUBLIC void csoundCsdAddEvent10(CSOUND *csound, double p1, double p2, double p3, double p4, double p5,
                                double p6, double p7, double p8, double p9, double p10)
{
  char note[0x100];
  sprintf(note, "i %-.10g %-.10g %-.10g %-.10g %-.10g %-.10g %-.10g %-.10g %-.10g %-.10g",
          p1, p2, p3, p4, p5, p6, p7, p8, p9, p10);
  files[csound].score.push_back(note);
}

PUBLIC void csoundCsdAddEvent9(CSOUND *csound, double p1, double p2, double p3, double p4, double p5,
                               double p6, double p7, double p8, double p9)
{
  char note[0x100];
  sprintf(note, "i %-.10g %-.10g %-.10g %-.10g %-.10g %-.10g %-.10g %-.10g %-.10g",
          p1, p2, p3, p4, p5, p6, p7, p8, p9);
  files[csound].score.push_back(note);
}

PUBLIC void csoundCsdAddEvent8(CSOUND *csound, double p1, double p2, double p3, double p4, double p5,
                               double p6, double p7, double p8)
{
  char note[0x100];
  sprintf(note, "i %-.10g %-.10g %-.10g %-.10g %-.10g %-.10g %-.10g %-.10g",
          p1, p2, p3, p4, p5, p6, p7, p8);
  files[csound].score.push_back(note);
}

PUBLIC void csoundCsdAddEvent7(CSOUND *csound, double p1, double p2, double p3, double p4, double p5,
                               double p6, double p7)
{
  char note[0x100];
  sprintf(note, "i %-.10g %-.10g %-.10g %-.10g %-.10g %-.10g %-.10g",
          p1, p2, p3, p4, p5, p6, p7);
  files[csound].score.push_back(note);
}

PUBLIC void csoundCsdAddEvent6(CSOUND *csound, double p1, double p2, double p3, double p4, double p5,
                               double p6)
{
  char note[0x100];
  sprintf(note, "i %-.10g %-.10g %-.10g %-.10g %-.10g %-.10g",
          p1, p2, p3, p4, p5, p6);
  files[csound].score.push_back(note);
}

PUBLIC void csoundCsdAddEvent5(CSOUND *csound, double p1, double p2, double p3, double p4, double p5)
{
  char note[0x100];
  sprintf(note, "i %-.10g %-.10g %-.10g %-.10g %-.10g", p1, p2, p3, p4, p5);
  files[csound].score.push_back(note);
}

PUBLIC void csoundCsdAddEvent4(CSOUND *csound, double p1, double p2, double p3, double p4)
{
  char note[0x100];
  sprintf(note, "i %-.10g %-.10g %-.10g %-.10g", p1, p2, p3, p4);
  files[csound].score.push_back(note);
}

PUBLIC void csoundCsdAddEvent3(CSOUND *csound, double p1, double p2, double p3)
{
  char note[0x100];
  sprintf(note, "i %-.10g %-.10g %-.10g", p1, p2, p3);
  files[csound].score.push_back(note);
}

PUBLIC int csoundCsdSave(CSOUND *csound, char *filename)
{
  CsoundFile_ &csoundFile = files[csound];
  FILE *file = fopen(filename, "w+");
  fprintf(file, "<CsoundSynthesizer>");
  fprintf(file, "<CsOptions>");
  fprintf(file, "%s", csoundFile.options.c_str());
  fprintf(file, "</CsOptions>");   /* close the options section */
  fprintf(file, "<CsInstruments>");
  fprintf(file, "%s", csoundFile.orchestra.c_str());
  fprintf(file, "</CsInstruments>");
  fprintf(file, "<CsScore>");
  for (std::vector<std::string>::iterator it = csoundFile.score.begin();
       it != csoundFile.score.end(); ++it) {
    fprintf(file, "%s", it->c_str());  /* never pass data as the format string */
  }
  fprintf(file, "</CsScore>");
  fprintf(file, "</CsoundSynthesizer>");
  return fclose(file);
}

PUBLIC int csoundCsdCompile(CSOUND *csound, char *filename)
{
  csoundCsdSave(csound, filename);
  return csoundCompileCsd(csound, filename);
}

PUBLIC int csoundCsdPerform(CSOUND *csound, char *filename)
{
  csoundCsdSave(csound, filename);
  return csoundPerformCsd(csound, filename);
}

PUBLIC int csoundCompileCsd(CSOUND *csound, char *csdFilename)
{
  char *argv[2];
  argv[0] = (char*)"csound";
  argv[1] = csdFilename;
  return csoundCompile(csound, 2, argv);
}

PUBLIC int csoundPerformCsd(CSOUND *csound, char *csdFilename)
{
  int retval = csoundCompileCsd(csound, csdFilename);
  if (!retval)
    retval = csoundPerform(csound);
  csoundCleanup(csound);
  return (retval >= 0 ? 0 : retval);
}

#ifdef __cplusplus
}
#endif
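A hypothetical caller of the CSD-building API above might look like the following sketch. The option string, orchestra text, note parameters and output file name are illustrative assumptions, not taken from the csound sources; error handling is omitted for brevity.

/* Illustrative sketch: build and perform a CSD with the API above. */
#include "filebuilding.h"

int main(void)
{
    CSOUND *csound = csoundCreate(NULL);
    csoundCsdCreate(csound);
    csoundCsdSetOptions(csound, (char *)"-odac");
    csoundCsdSetOrchestra(csound, (char *)
        "sr=44100\nksmps=32\nnchnls=2\n"
        "instr 1\n a1 oscils 0.2, p4, 0\n out a1\nendin\n");
    /* "i 1 0 2 440": instrument 1, start 0, duration 2, frequency p4=440 */
    csoundCsdAddEvent4(csound, 1, 0, 2, 440);
    csoundCsdPerform(csound, (char *)"generated.csd");  /* saves, compiles, performs */
    csoundDestroy(csound);
    return 0;
}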
Chegg Textbook Solutions for Calculus: Multivariable, 5th Edition: Chapter 12.5

Chapter: Problem:

Step 1 of 4

(a) Consider: [function not reproduced in the source]. Match the function with the figures of level surfaces.

Following is the input given in Maple:

x = y^2+z^2, 1 = y^2+z^2, 2 = y^2+z^2

Step 2 of 4

Press on Plot Builder. Then the following output comes in Maple:

plots[:-display](plots[:-implicitplot3d](x = y^2+z^2, x = -2 .. 2, y = -2 .. 2, z = -2 .. 2), plots[:-implicitplot3d](1 = y^2+z^2, x = -2 .. 2, y = -2 .. 2, z = -2 .. 2), plots[:-implicitplot3d](2 = y^2+z^2, x = -2 .. 2, y = -2 .. 2, z = -2 .. 2))

[Figure: plot of the level surfaces]

The above figure matches figure I.

Step 3 of 4

(b) Consider: [function not reproduced in the source]. Match the function with the figures of level surfaces.

Following is the input given in Maple:

y = x^2+z^2, 1 = x^2+z^2, 2 = x^2+z^2

Step 4 of 4

Press on Plot Builder. Then the following output comes in Maple:

plots[:-display](plots[:-implicitplot3d](y = x^2+z^2, x = -2 .. 2, y = -2 .. 2, z = -2 .. 2), plots[:-implicitplot3d](1 = x^2+z^2, x = -2 .. 2, y = -2 .. 2, z = -2 .. 2), plots[:-implicitplot3d](2 = x^2+z^2, x = -2 .. 2, y = -2 .. 2, z = -2 .. 2))

[Figure: plot of the level surfaces]

The above figure matches figure II.
LaTeX/Presentations

From Wikibooks, open books for an open world

LaTeX can be used for creating presentations. There are several packages for the task, including the beamer package.

The Beamer package

The beamer package is provided with most LaTeX distributions, but is also available from CTAN. If you use MikTeX, all you have to do is to include the beamer package and let LaTeX download all wanted packages automatically. The documentation explains the features in great detail. You can also have a look at the PracTeX article "Beamer by example".[1]

The beamer package also loads many useful packages, including hyperref.

Introductory example

The beamer package is loaded by calling the beamer class:

\documentclass{beamer}

The usual header information may then be specified. Note that if you are compiling with XeTeX then you should use

\documentclass[xetex,mathserif,serif]{beamer}

Inside the usual document environment, multiple frame environments specify the content to be put on each slide. The frametitle command specifies the title for each slide (see image):

\begin{document}

\begin{frame}
\frametitle{This is the first slide}
%Content goes here
\end{frame}

\begin{frame}
\frametitle{This is the second slide}
\framesubtitle{A bit more information about this}
%More content goes here
\end{frame}

% etc
\end{document}

[Figure: slides produced with the frametitle command]

The usual environments (itemize, enumerate, equation, etc.) may be used. Inside frames, you can use environments like block, theorem, proof, and so on. Also, \maketitle can be used to create the front page, if title and author are set.

Trick: Instead of using \begin{frame}...\end{frame}, you can also use \frame{...}.

For the actual talk, if you can compile it with pdflatex then you could use a PDF reader with a fullscreen mode, such as Okular, Evince or Adobe Reader. If you want to navigate in your presentation, you can use the almost invisible links in the bottom right corner without leaving the fullscreen mode.
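Assembling the snippets above, a minimal complete file might look like the following sketch; the theme, title and author are placeholder values, not part of the original example. Compile it with pdflatex to get a two-page PDF.

\documentclass{beamer}
\usetheme{Warsaw} % optional, see the Themes section below

\title{A Minimal Example}
\author{A. Author}

\begin{document}

\frame{\titlepage}

\begin{frame}
\frametitle{First slide}
Content goes here.
\end{frame}

\end{document}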
Document Structure

Title page and information

You give information about authors, titles and dates in the preamble:

\title[Crisis] % (optional, only for long titles)
{The Economics of Financial Crisis}
\subtitle{Evidence from India}
\author[Author, Anders] % (optional, for multiple authors)
{F.~Author\inst{1} \and S.~Anders\inst{2}}
\institute[Universities Here and There] % (optional)
{
  \inst{1}%
  Institute of Computer Science\\
  University Here
  \and
  \inst{2}%
  Institute of Theoretical Philosophy\\
  University There
}
\date[KPT 2004] % (optional)
{Conference on Presentation Techniques, 2004}
\subject{Computer Science}

In the document, you add the title page:

\frame{\titlepage}

Table of Contents

The table of contents, with the current section highlighted, is displayed by:

\begin{frame}
\frametitle{Table of Contents}
\tableofcontents[currentsection]
\end{frame}

This can be done automatically at the beginning of each section using the following code in the preamble:

\AtBeginSection[]
{
  \begin{frame}
  \frametitle{Table of Contents}
  \tableofcontents[currentsection]
  \end{frame}
}

Or for subsections:

\AtBeginSubsection[]
{
  \begin{frame}
  \frametitle{Table of Contents}
  \tableofcontents[currentsection,currentsubsection]
  \end{frame}
}

References (Beamer)

Beamer does not officially support BibTeX. Instead, bibliography items will need to be partly set "by hand" (see beameruserguide.pdf, section 3.20). The following example shows a references slide containing two entries:

\begin{frame}[allowframebreaks]
\frametitle<presentation>{Further Reading}
\begin{thebibliography}{10}
\beamertemplatebookbibitems
\bibitem{Autor1990}
A.~Autor.
\newblock {\em Introduction to Giving Presentations}.
\newblock Klein-Verlag, 1990.
\beamertemplatearticlebibitems
\bibitem{Jemand2000}
S.~Jemand.
\newblock On this and that.
\newblock {\em Journal of This and That}, 2(1):50--100, 2000.
\end{thebibliography}
\end{frame}

As the reference list grows, the reference slide will divide into two and so on, through use of the allowframebreaks option. Individual items can be cited after adding an 'optional' label to the relevant bibitem stanza. The citation call is simply \cite. Beamer also supports limited customization of the way references are presented (see the manual). Those who wish to use natbib, for example, with beamer may need to troubleshoot both their document setup and the relevant BibTeX style file.

Style

Themes

The first solution is to use a built-in theme such as Warsaw, Berlin, etc. The second solution is to specify colors, inner themes and outer themes.

The built-in solution

To the preamble you can add the following line:

\usetheme{Warsaw}

to use the "Warsaw" theme. Beamer has several themes, many of which are named after cities (e.g. Barcelona, Madrid, Berlin, etc.). The Theme Matrix contains the various theme and color combinations included with beamer. For more customizing options, have a look at the official documentation included in your distribution of beamer, particularly the part "Change the way it looks".
The full list of themes is:
• Antibes
• Bergen
• Berkeley
• Berlin
• Copenhagen
• Darmstadt
• Dresden
• Frankfurt
• Goettingen
• Hannover
• Ilmenau
• JuanLesPins
• Luebeck
• Madrid
• Malmoe
• Marburg
• Montpellier
• PaloAlto
• Pittsburgh
• Rochester
• Singapore
• Szeged
• Warsaw
• boxes
• default

Color themes, typically with animal names, can be specified with

\usecolortheme{beaver}

The full list of color themes is:
• default
• albatross
• beaver
• beetle
• crane
• dolphin
• dove
• fly
• lily
• orchid
• rose
• seagull
• seahorse
• whale
• wolverine

The do it yourself solution

First you can specify the outer theme. The outer theme defines the head and the footline of each slide:

\useoutertheme{infolines}

Here is a list of all available outer themes:
• infolines
• miniframes
• shadow
• sidebar
• smoothbars
• smoothtree
• split
• tree

Then you can add the inner theme:

\useinnertheme{rectangles}

Here is a list of all available inner themes:
• rectangles
• circles
• inmargin
• rounded

You can define the color of every element:

\setbeamercolor{alerted text}{fg=orange}
\setbeamercolor{background canvas}{bg=white}
\setbeamercolor{block body alerted}{bg=normal text.bg!90!black}
\setbeamercolor{block body}{bg=normal text.bg!90!black}
\setbeamercolor{block body example}{bg=normal text.bg!90!black}
\setbeamercolor{block title alerted}{use={normal text,alerted text},fg=alerted text.fg!75!normal text.fg,bg=normal text.bg!75!black}
\setbeamercolor{block title}{bg=blue}
\setbeamercolor{block title example}{use={normal text,example text},fg=example text.fg!75!normal text.fg,bg=normal text.bg!75!black}
\setbeamercolor{fine separation line}{}
\setbeamercolor{frametitle}{fg=brown}
\setbeamercolor{item projected}{fg=black}
\setbeamercolor{normal text}{bg=black,fg=yellow}
\setbeamercolor{palette sidebar primary}{use=normal text,fg=normal text.fg}
\setbeamercolor{palette sidebar quaternary}{use=structure,fg=structure.fg}
\setbeamercolor{palette sidebar secondary}{use=structure,fg=structure.fg}
\setbeamercolor{palette sidebar tertiary}{use=normal text,fg=normal text.fg}
\setbeamercolor{section in sidebar}{fg=brown}
\setbeamercolor{section in sidebar shaded}{fg=grey}
\setbeamercolor{separation line}{}
\setbeamercolor{sidebar}{bg=red}
\setbeamercolor{sidebar}{parent=palette primary}
\setbeamercolor{structure}{bg=black, fg=green}
\setbeamercolor{subsection in sidebar}{fg=brown}
\setbeamercolor{subsection in sidebar shaded}{fg=grey}
\setbeamercolor{title}{fg=brown}
\setbeamercolor{titlelike}{fg=brown}

Colors can be defined as usual:

\definecolor{chocolate}{RGB}{33,33,33}

Block styles can also be defined:

\setbeamertemplate{blocks}[rounded][shadow=true]
\setbeamertemplate{background canvas}[vertical shading][bottom=white,top=structure.fg!25]
\setbeamertemplate{sidebar canvas left}[horizontal shading][left=white!40!black,right=black]

You can also suppress the navigation bar:

\beamertemplatenavigationsymbolsempty
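Putting the commands from this section together, a preamble might combine a presentation theme with a color theme, inner and outer themes, like the following sketch. The particular choices here are arbitrary examples, not recommendations:

\documentclass{beamer}
\usetheme{Madrid}           % base theme from the list above
\usecolortheme{beaver}      % color theme
\useinnertheme{circles}     % inner theme
\useoutertheme{infolines}   % outer theme
\beamertemplatenavigationsymbolsempty % hide the navigation bar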
If you wanted the title of the presentation as rendered by \frame{\titlepage} to appear in a serif font instead of the default sans serif, you would use:

\setbeamerfont{title}{family=\rm}

You could take this a step further if you are using OpenType fonts with Xe(La)TeX and specify a serif font with increased size and proportional lining number glyphs:

\setbeamerfont{title}{family=\rm\addfontfeatures{Scale=1.18, Numbers={Lining, Proportional}}}

Math fonts

The default settings for beamer use a different set of math fonts than one would expect from a plain math article. One quick fix is to pass the mathserif option at the beginning of the file:

\documentclass[mathserif]{beamer}

Others have proposed using the command

\usefonttheme[onlymath]{serif}

but it is not clear whether this works for absolutely every math character.

Frame options

The plain option. Sometimes you need to include a large figure or a large table and you don't want the headline and footline to take up space. In that case, use the plain option:

\frame[plain]{
  % ...
}

If you want to include lots of text on a slide, use the shrink option:

\frame[shrink]{
  % ...
}

The allowframebreaks option will auto-create new frames if there is too much content to be displayed on one:

\frame[allowframebreaks]{
  % ...
}

Before using any verbatim environment (like listings), you should pass the fragile option to the frame environment, as verbatim environments need to be typeset differently. Usually, the form fragile=singleslide is usable (for details see the manual). Note that the fragile option may not be used with \frame commands, since it expects to encounter a \end{frame} that is alone on a single line.

\begin{frame}[fragile]
  \frametitle{Source code}
  \begin{lstlisting}[caption=First C example]
int main()
{
  printf("Hello World!");
  return 0;
}
  \end{lstlisting}
\end{frame}

Hyperlink navigation

Internal and external hyperlinks can be used in beamer to assist navigation, and clean-looking buttons can also be added.

To do:
- add information about hyperref
- add information about beamerbutton and friends

Animations

The following is merely an introduction to the possibilities in beamer. Chapter 8 of the beamer manual provides much more detail, on many more features.

Making items appear on a slide is possible by simply using the \pause statement:

\begin{frame}
  \frametitle{Some background}
  We start our discussion with some concepts.
  \pause
  The first concept we introduce originates with Erd\H os.
\end{frame}

Text or figures after \pause will display after one of the following events (which may vary between PDF viewers): pressing space, return, or page down on the keyboard, or using the mouse to scroll down or click the next-slide button. \pause can also be used within itemize and similar environments.

Text animations

For text animations, for example in the itemize environment, it is possible to specify the appearance and disappearance of text by using <a-b>, where a and b are the numbers of the events the item is to be displayed for (inclusive). For example:

\begin{itemize}
  \item This one is always shown
  \item<1-> The first time (i.e., as soon as the slide loads)
  \item<2-> The second time
  \item<1-> Also the first time
  \only<1-1>{This one is shown at the first time, but it will hide
  soon (on the next event after the slide loads).}
\end{itemize}

A simpler approach for revealing one item per click is to use \begin{itemize}[<+->].
\begin{frame}
  \frametitle{`Hidden higher-order concepts?'}
  \begin{itemize}[<+->]
    \item The truths of arithmetic which are independent of PA in some
      sense themselves `{contain} essentially {\color{blue}{hidden
      higher-order}}, or infinitary, concepts'???
    \item `Truths in the language of arithmetic which \ldots
    \item That suggests a stronger version of Isaacson's thesis.
  \end{itemize}
\end{frame}

In all these cases, pressing page up, scrolling up, or clicking the previous-slide button in the navigation bar will backtrack through the sequence.

Handout mode

In the beamer class, the default mode is presentation, which makes the slides. However, you can work in a different mode, called handout, by setting this option when calling the class:

\documentclass[12pt,handout]{beamer}

This mode is useful to see each slide only one time, with all of its content on it, making any \begin{itemize}[<+->] environments visible all at once (for instance, for a printable version). Nevertheless, this creates an issue when working with the \only command, because its purpose is to show only some text or figures at a time, not all of them together.

If you want to solve this, you can add a statement that specifies precisely the behavior of \only commands in handout mode. Suppose you have code like this:

\only<1>{\includegraphics{pic1.eps}}
\only<2>{\includegraphics{pic2.eps}}

These pictures are completely different, so you want them both in the handout, but they cannot both be on the same slide since they are large. The solution is to add the handout specification, giving the following:

\only<1| handout:1>{\includegraphics{pic1.eps}}
\only<2| handout:2>{\includegraphics{pic2.eps}}

This ensures the handout will make a slide for each picture.

Now imagine you still have your two pictures with the \only statements, but the second one shows the first one plus some other graphs, and you don't need the first one to appear in the handout. You can thus tell handout mode not to include some \only commands with:

\only<1| handout:0>{\includegraphics{pic1.eps}}
\only<2>{\includegraphics{pic2.eps}}

The same specification can be used to hide whole frames, e.g.

\begin{frame}<handout:0>

or even, if you have written a frame that you don't want anymore but may need later, you can write

\begin{frame}<0| handout:0>

and this will hide your slide in both modes. (The order matters: don't put handout:0|beamer:0 or it won't work.)

A last word about handout mode concerns notes. The full syntax for a frame is

\begin{frame}
  ...
\end{frame}
\note{...}
\note{...}
...

and you can write your notes about a frame in the \note fields (as many as needed). Using this, you can add an option to the class call, either

\documentclass[12pt,handout,notes=only]{beamer}

or

\documentclass[12pt,handout,notes=show]{beamer}

The first is useful when you give a presentation, to have only the notes you need, while the second could be given to those who followed your presentation, or those who missed it, so that they have both the slides and what you said. Note that the handout option in the \documentclass line suppresses all the animations.

Important: the notes=only mode literally produces only the notes. This means there will be no output file but the DVI, so it requires you to have run the compilation in another mode before. If you use separate files for a better distinction between the modes, you may need to copy the .aux file from the handout compilation with the slides (without the notes). A small wrapper file, sketched below, is one way to keep such separate compilations tidy.
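A common way to build the handout alongside the presentation without editing the main file is a small wrapper document. This is only a sketch: it assumes your main presentation lives in a hypothetical file called talk.tex, and relies on \PassOptionsToClass being issued before the \documentclass line in the input file takes effect:

% handout.tex -- compile this file to get the handout version;
% compile talk.tex itself to get the presentation version.
\PassOptionsToClass{handout}{beamer}
\input{talk.tex}

Each version then keeps its own .aux and output files, since they are compiled from differently named top-level files.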
Columns and Blocks

There are two handy environments for structuring a slide: "blocks", which divide the slide (horizontally) into headed sections, and "columns", which divide a slide (vertically) into columns. Blocks and columns can be used inside each other.

Columns

Example:

\begin{frame}{Example of columns 1}
  \begin{columns}[c] % the "c" option specifies center vertical alignment
    \column{.5\textwidth} % column designated by a command
    Contents of the first column
    \column{.5\textwidth}
    Contents split \\ into two lines
  \end{columns}
\end{frame}

\begin{frame}{Example of columns 2}
  \begin{columns}[T] % contents are top vertically aligned
    \begin{column}[T]{5cm} % each column can also be its own environment
      Contents of first column \\ split into two lines
    \end{column}
    \begin{column}[T]{5cm} % alternative top-align that's better for graphics
      \includegraphics[height=3cm]{graphic.png}
    \end{column}
  \end{columns}
\end{frame}

[Image: Example of columns in Beamer]

Blocks

Enclosing text in the block environment creates a distinct, headed block of text (a blank heading can be used). This allows parts of a slide to be visually distinguished easily. There are three basic types of block; their formatting depends on the theme being used.

Simple:

\begin{frame}
  \begin{block}{This is a Block}
    This is important information
  \end{block}
  \begin{alertblock}{This is an Alert block}
    This is an important alert
  \end{alertblock}
  \begin{exampleblock}{This is an Example block}
    This is an example
  \end{exampleblock}
\end{frame}

[Image: Example of blocks in a Beamer presentation]

PDF options

You can specify the default options of your PDF.[2]

\hypersetup{pdfstartview={Fit}} % fits the presentation to the window when first displayed

The powerdot package

The powerdot package is available from CTAN. The documentation explains its features in great detail. The powerdot package is loaded by calling the powerdot class:

\documentclass{powerdot}

The usual header information may then be specified. Inside the usual document environment, multiple slide environments specify the content to be put on each slide:

\begin{document}
  \begin{slide}{This is the first slide}
    %Content goes here
  \end{slide}
  \begin{slide}{This is the second slide}
    %More content goes here
  \end{slide}
  % etc
\end{document}

References

1. Andrew Mertz and William Slough, "Beamer by Example".
2. Other possible values are defined in the hyperref manual.
How to Delete a Group

Groups can be deleted within VisualVault by users who have VaultAccess privileges. A group can only be deleted when it is not assigned to any workflow, and it cannot be deleted while users still exist in it. VisualVault cleans up the group's information when the group is deleted.

To delete a group:

1. Navigate to Control Panel - Administration Tools - Groups.
2. Select the check box to the left of the group you want to delete.
3. Select the Delete Selected Groups button.

If the group cannot be deleted for any reason, a message box appears indicating that the group cannot be deleted.
Linking Between Forms

Overview

Case management allows you to save data from one form and use that data later in another form. Here's how to set this up.

Say you want to use the answer to "What is your edd?" from Registration later on in your Followup form.

1. Set up case management in your application.
2. Create a question in Registration for "What is your edd?". It doesn't matter what the question ID is.
3. Go to the Case Management section for the Registration form:
   1. Select "What is your edd?" from the dropdown.
   2. Write "edd" in the case property box.
   3. Hit "save".
   The answer to "What is your edd?" is now saved to the case as the case property "edd".
4. In the Followup form, reference that case property by typing #case/ and choosing the property you want to reference.

NOTE: When data is saved to the case as properties, it is always saved as text. If you reference a case property in a form, it is still really a string, even if you set the data type to "date" or "int" or whatever is appropriate. If you want to use it in a calculate or comparison expression, you have to "cast" it to the desired data type. For example, if you reference a property into the hidden value "date_question", you have to reference it as date(#case/date_question). Other available cast operators are int(), boolean(), number(), and string(). A couple of sketches follow.
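As a brief illustration (the property names edd and visit_count here are hypothetical, and the expressions are untested sketches), a hidden value's calculate expression that checks whether the saved EDD has already passed could cast the property back to a date:

date(#case/edd) < today()

and a display condition on a stored counter could cast back to a number:

int(#case/visit_count) >= 3

In both cases the cast converts the stored text back into the type the comparison needs.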
Track conversation topics

Overview

Conversation Topics tracks the topics you define based on keywords and phrases. Tracking topics can provide insight into trends in your chatters' behavior. Understanding these trends helps you build better bot content for an enhanced customer experience, and can even inform the development of your products and services.

Use Conversation Topics

The Topics view is found under Conversations on the Ada dashboard. From there, you can create, track, and analyze the topics your chatters discuss.

[Image: Dashboard_view.png]

Create a topic

To create a new topic:

1. On the Ada dashboard, go to Conversations > Topics.
2. In the Topics view, click New Topic.
3. In the Define a new Topic dialog, add a name and description for your topic, then click Next to proceed to the topic setup page.
4. On the topic setup page, in the Keywords to track field, add the keywords and phrases you want to track.
   - Place a comma after each keyword or phrase, even if you include only one keyword. The text becomes a pill icon. Example: if you were tracking board game titles, you would enter Monopoly, Stratego, Risk,.
   - Use quotation marks to track an exact match only. Example: if you were tracking board games with phrases for titles, you would enter "Snakes and Ladders", "Mouse Trap",.
   [Image: adding_keywords_convotopics.gif]
5. [Optional] To exclude keywords, select the Exclude Keywords checkbox. In the field that appears, enter the keywords to exclude from topic tracking.
6. [Optional] On the Additional Conditions panel, create filters to refine which conversations the topic applies to. You can filter by Answer, language, or variable. To add more filters, click Add new filter.
7. Click Save. The dialog closes and your new topic is added to the topics list.

Multilingual support

Conversation Topics is a multilingual feature: it supports every language enabled for your bot. Conversation Topics does not perform translations, however. If you want to track a topic across multiple languages, you need to manually add the translation of that topic in every language you want to include in your search.

Example: let's say you have a bot that supports French and Spanish, as well as English, and you create a Conversation Topic called Pricing. In addition to the keyword price, you would also include prix and precio to track the keyword in French and Spanish.

Tracking keywords in specific languages means you can filter a topic for those languages. This helps you identify any trends that might be language-specific.

Note: Though Conversation Topics is multilingual, conversation summaries, found on the lower half of the conversation topic card, are generated solely in English. This is especially important to note for topics tracking only non-English keywords; for those topics, you won't see any relevant conversation summaries.

Individual topic insights

The Topics view contains a column of topic cards for each of the topics you are tracking. Each topic card displays the number of conversations in which the topic was triggered, the number of those conversations that resulted in a handoff, and the last person to update the topic settings. The progress bar for each topic shows the volume of that topic relative to all the other topics you are tracking.

Click any topic card in the list to view a more detailed breakdown of the data collected for that topic. This topic insight view is made up of three panels.
Use them to:

- Explore volume, handoff, and customer satisfaction (CSAT) trends to help understand the topic's performance
- Review conversation summaries and CSAT comments to gain insight into your customers' sentiment
- Uncover opportunities for content and training improvements based on unanswered questions related to the topic
- Filter and download insights

[Image: Topics_view.png]

Conversation Insight

The bottommost panel on the topic page contains the Conversation Insights. Use the left-hand menu to select which insight you wish to see. The following descriptions explain what each item measures.

- Conversation summaries uses Ada's language engine to distill a conversation down to its key points. This feature helps you understand what your chatter is looking to achieve. The listed conversation summaries are identified by chatter ID; click a summary to open the entire conversation associated with it.
- Unanswered questions lists the unanswered questions from conversations in which the topic was triggered. This helps you uncover content gaps and improvement opportunities for your bot. To view training suggestions for your topic, click View all Questions. This opens the Unanswered view under Improve on the Ada dashboard.
- CSAT Comments displays your chatters' ratings and comments. This provides a targeted way to review CSAT comments related to a topic and identify opportunities to improve your products and services based on user feedback. This data is only available for bots using Ada's CSAT feature.

Have any questions? Contact your Ada team, or email us at [email protected].
RFC 6234 - US Secure Hash Algorithms (SHA and SHA-based HMAC and HKDF)
Part 2 of 5 (pages 12 to 30)

6. Computing the Message Digest

   The output of each of the secure hash functions, after being applied
   to a message of N blocks, is the hash quantity H(N).  For SHA-224
   and SHA-256, H(i) can be considered to be eight 32-bit words, H(i)0,
   H(i)1, ... H(i)7.  For SHA-384 and SHA-512, it can be considered to
   be eight 64-bit words, H(i)0, H(i)1, ..., H(i)7.

   As described below, the hash words are initialized, modified as each
   message block is processed, and finally concatenated after
   processing the last block to yield the output.  For SHA-256 and
   SHA-512, all of the H(N) variables are concatenated while the
   SHA-224 and SHA-384 hashes are produced by omitting some from the
   final concatenation.

6.1.  SHA-224 and SHA-256 Initialization

   For SHA-224, the initial hash value, H(0), consists of the following
   32-bit words in hex:

      H(0)0 = c1059ed8
      H(0)1 = 367cd507
      H(0)2 = 3070dd17
      H(0)3 = f70e5939
      H(0)4 = ffc00b31
      H(0)5 = 68581511
      H(0)6 = 64f98fa7
      H(0)7 = befa4fa4

   For SHA-256, the initial hash value, H(0), consists of the following
   eight 32-bit words, in hex.  These words were obtained by taking the
   first 32 bits of the fractional parts of the square roots of the
   first eight prime numbers.

      H(0)0 = 6a09e667
      H(0)1 = bb67ae85
      H(0)2 = 3c6ef372
      H(0)3 = a54ff53a
      H(0)4 = 510e527f
      H(0)5 = 9b05688c
      H(0)6 = 1f83d9ab
      H(0)7 = 5be0cd19

6.2.  SHA-224 and SHA-256 Processing

   SHA-224 and SHA-256 perform identical processing on message blocks
   and differ only in how H(0) is initialized and how they produce
   their final output.  They may be used to hash a message, M, having a
   length of L bits, where 0 <= L < 2^64.  The algorithm uses (1) a
   message schedule of sixty-four 32-bit words, (2) eight working
   variables of 32 bits each, and (3) a hash value of eight 32-bit
   words.

   The words of the message schedule are labeled W0, W1, ..., W63.  The
   eight working variables are labeled a, b, c, d, e, f, g, and h.  The
   words of the hash value are labeled H(i)0, H(i)1, ..., H(i)7, which
   will hold the initial hash value, H(0), replaced by each successive
   intermediate hash value (after each message block is processed),
   H(i), and ending with the final hash value, H(N), after all N blocks
   are processed.  They also use two temporary words, T1 and T2.

   The input message is padded as described in Section 4.1 above, then
   parsed into 512-bit blocks that are considered to be composed of
   sixteen 32-bit words M(i)0, M(i)1, ..., M(i)15.  The following
   computations are then performed for each of the N message blocks.
   All addition is performed modulo 2^32.

   For i = 1 to N

      1. Prepare the message schedule W:
         For t = 0 to 15
            Wt = M(i)t
         For t = 16 to 63
            Wt = SSIG1(W(t-2)) + W(t-7) + SSIG0(W(t-15)) + W(t-16)

      2. Initialize the working variables:
         a = H(i-1)0
         b = H(i-1)1
         c = H(i-1)2
         d = H(i-1)3
         e = H(i-1)4
         f = H(i-1)5
         g = H(i-1)6
         h = H(i-1)7

      3. Perform the main hash computation:
         For t = 0 to 63
            T1 = h + BSIG1(e) + CH(e,f,g) + Kt + Wt
            T2 = BSIG0(a) + MAJ(a,b,c)
            h = g
            g = f
            f = e
            e = d + T1
            d = c
            c = b
            b = a
            a = T1 + T2

      4. Compute the intermediate hash value H(i):
         H(i)0 = a + H(i-1)0
         H(i)1 = b + H(i-1)1
         H(i)2 = c + H(i-1)2
         H(i)3 = d + H(i-1)3
         H(i)4 = e + H(i-1)4
         H(i)5 = f + H(i-1)5
         H(i)6 = g + H(i-1)6
         H(i)7 = h + H(i-1)7

   After the above computations have been sequentially performed for
   all of the blocks in the message, the final output is calculated.
   For SHA-256, this is the concatenation of all of H(N)0, H(N)1,
   through H(N)7.  For SHA-224, this is the concatenation of H(N)0,
   H(N)1, through H(N)6.

6.3.  SHA-384 and SHA-512 Initialization

   For SHA-384, the initial hash value, H(0), consists of the following
   eight 64-bit words, in hex.  These words were obtained by taking the
   first 64 bits of the fractional parts of the square roots of the
   ninth through sixteenth prime numbers.

      H(0)0 = cbbb9d5dc1059ed8
      H(0)1 = 629a292a367cd507
      H(0)2 = 9159015a3070dd17
      H(0)3 = 152fecd8f70e5939
      H(0)4 = 67332667ffc00b31
      H(0)5 = 8eb44a8768581511
      H(0)6 = db0c2e0d64f98fa7
      H(0)7 = 47b5481dbefa4fa4

   For SHA-512, the initial hash value, H(0), consists of the following
   eight 64-bit words, in hex.  These words were obtained by taking the
   first 64 bits of the fractional parts of the square roots of the
   first eight prime numbers.

      H(0)0 = 6a09e667f3bcc908
      H(0)1 = bb67ae8584caa73b
      H(0)2 = 3c6ef372fe94f82b
      H(0)3 = a54ff53a5f1d36f1
      H(0)4 = 510e527fade682d1
      H(0)5 = 9b05688c2b3e6c1f
      H(0)6 = 1f83d9abfb41bd6b
      H(0)7 = 5be0cd19137e2179

6.4.  SHA-384 and SHA-512 Processing

   SHA-384 and SHA-512 perform identical processing on message blocks
   and differ only in how H(0) is initialized and how they produce
   their final output.  They may be used to hash a message, M, having a
   length of L bits, where 0 <= L < 2^128.  The algorithm uses (1) a
   message schedule of eighty 64-bit words, (2) eight working variables
   of 64 bits each, and (3) a hash value of eight 64-bit words.

   The words of the message schedule are labeled W0, W1, ..., W79.  The
   eight working variables are labeled a, b, c, d, e, f, g, and h.  The
   words of the hash value are labeled H(i)0, H(i)1, ..., H(i)7, which
   will hold the initial hash value, H(0), replaced by each successive
   intermediate hash value (after each message block is processed),
   H(i), and ending with the final hash value, H(N), after all N blocks
   are processed.

   The input message is padded as described in Section 4.2 above, then
   parsed into 1024-bit blocks that are considered to be composed of
   sixteen 64-bit words M(i)0, M(i)1, ..., M(i)15.  The following
   computations are then performed for each of the N message blocks.
   All addition is performed modulo 2^64.

   For i = 1 to N

      1. Prepare the message schedule W:
         For t = 0 to 15
            Wt = M(i)t
         For t = 16 to 79
            Wt = SSIG1(W(t-2)) + W(t-7) + SSIG0(W(t-15)) + W(t-16)

      2. Initialize the working variables:
         a = H(i-1)0
         b = H(i-1)1
         c = H(i-1)2
         d = H(i-1)3
         e = H(i-1)4
         f = H(i-1)5
         g = H(i-1)6
         h = H(i-1)7

      3. Perform the main hash computation:
         For t = 0 to 79
            T1 = h + BSIG1(e) + CH(e,f,g) + Kt + Wt
            T2 = BSIG0(a) + MAJ(a,b,c)
            h = g
            g = f
            f = e
            e = d + T1
            d = c
            c = b
            b = a
            a = T1 + T2

      4. Compute the intermediate hash value H(i):
         H(i)0 = a + H(i-1)0
         H(i)1 = b + H(i-1)1
         H(i)2 = c + H(i-1)2
         H(i)3 = d + H(i-1)3
         H(i)4 = e + H(i-1)4
         H(i)5 = f + H(i-1)5
         H(i)6 = g + H(i-1)6
         H(i)7 = h + H(i-1)7

   After the above computations have been sequentially performed for
   all of the blocks in the message, the final output is calculated.
   For SHA-512, this is the concatenation of all of H(N)0, H(N)1,
   through H(N)7.  For SHA-384, this is the concatenation of H(N)0,
   H(N)1, through H(N)5.

7.  HKDF- and SHA-Based HMACs

   Below are brief descriptions and pointers to more complete
   descriptions and code for (1) SHA-based HMACs and (2) an HMAC-based
   extract-and-expand key derivation function.  Both HKDF and HMAC were
   devised by Hugo Krawczyk.

7.1.  SHA-Based HMACs

   HMAC is a method for computing a keyed MAC (Message Authentication
   Code) using a hash function as described in [RFC2104].  It uses a
   key to mix in with the input text to produce the final hash.

   Sample code is also provided, in Section 8.3 below, to perform HMAC
   based on any of the SHA algorithms described herein.

   The sample code found in [RFC2104] was written in terms of a
   specified text size.  Since SHA is defined in terms of an arbitrary
   number of bits, the sample HMAC code has been written to allow the
   text input to HMAC to have an arbitrary number of octets and bits.
   A fixed-length interface is also provided.

7.2.  HKDF

   HKDF is a specific Key Derivation Function (KDF), that is, a
   function of initial keying material from which the KDF derives one
   or more cryptographically strong secret keys.  HKDF, which is
   described in [RFC5869], is based on HMAC.  Sample code for HKDF is
   provided in Section 8.4 below.

8.  C Code for SHAs, HMAC, and HKDF

   Below is a demonstration implementation of these secure hash
   functions in C.  Section 8.1 contains the header file sha.h that
   declares all constants, structures, and functions used by the SHA
   and HMAC functions.  It includes conditionals based on the state of
   definition of USE_32BIT_ONLY that, if that symbol is defined at
   compile time, avoids 64-bit operations.  It also contains
   sha-private.h that provides some declarations common to all the SHA
   functions.  Section 8.2 contains the C code for sha1.c,
   sha224-256.c, sha384-512.c, and usha.c.  Section 8.3 contains the C
   code for the HMAC functions, and Section 8.4 contains the C code for
   HKDF.  Section 8.5 contains a test driver to exercise the code.

   For each of the digest lengths $$$, there is the following set of
   constants, a structure, and functions:

   Constants:
      SHA$$$HashSize              number of octets in the hash
      SHA$$$HashSizeBits          number of bits in the hash
      SHA$$$_Message_Block_Size   number of octets used in the
                                  intermediate message blocks

   Most functions return an enum value that is one of:
      shaSuccess(0)        on success
      shaNull(1)           when presented with a null pointer parameter
      shaInputTooLong(2)   when the input data is too long
      shaStateError(3)     when SHA$$$Input is called after
                           SHA$$$FinalBits or SHA$$$Result

   Structure:
      typedef SHA$$$Context  an opaque structure holding the complete
                             state for producing the hash

   Functions:
      int SHA$$$Reset(SHA$$$Context *context);
         Reset the hash context state.
      int SHA$$$Input(SHA$$$Context *context, const uint8_t *octets,
                      unsigned int bytecount);
         Incorporate bytecount octets into the hash.
      int SHA$$$FinalBits(SHA$$$Context *, const uint8_t octet,
                          unsigned int bitcount);
         Incorporate bitcount bits into the hash.  The bits are in the
         upper portion of the octet.  SHA$$$Input() cannot be called
         after this.
      int SHA$$$Result(SHA$$$Context *,
                       uint8_t Message_Digest[SHA$$$HashSize]);
         Do the final calculations on the hash and copy the value into
         Message_Digest.

   In addition, functions with the prefix USHA are provided that take a
   SHAversion value (SHA$$$) to select the SHA function suite.  They
   add the following constants, structure, and functions:

   Constants:
      shaBadParam(4)    constant returned by USHA functions when
                        presented with a bad SHAversion (SHA$$$)
                        parameter or other illegal parameter values
      USHAMaxHashSize   maximum of the SHA hash sizes
      SHA$$$            SHAversion enumeration values, used by USHA,
                        HMAC, and HKDF functions to select the SHA
                        function suite

   Structure:
      typedef USHAContext  an opaque structure holding the complete
                           state for producing the hash

   Functions:
      int USHAReset(USHAContext *context, SHAversion whichSha);
         Reset the hash context state.
      int USHAInput(USHAContext *context, const uint8_t *bytes,
                    unsigned int bytecount);
         Incorporate bytecount octets into the hash.
      int USHAFinalBits(USHAContext *context, const uint8_t bits,
                        unsigned int bitcount);
         Incorporate bitcount bits into the hash.
      int USHAResult(USHAContext *context,
                     uint8_t Message_Digest[USHAMaxHashSize]);
         Do the final calculations on the hash and copy the value into
         Message_Digest.  Octets in Message_Digest beyond
         USHAHashSize(whichSha) are left untouched.
      int USHAHashSize(enum SHAversion whichSha);
         The number of octets in the given hash.
      int USHAHashSizeBits(enum SHAversion whichSha);
         The number of bits in the given hash.
      int USHABlockSize(enum SHAversion whichSha);
         The internal block size for the given hash.
      const char *USHAHashName(enum SHAversion whichSha);
         This function will return the name of the given SHA algorithm
         as a string.

   The HMAC functions follow the same pattern to allow any length of
   text input to be used.

   Structure:
      typedef HMACContext  an opaque structure holding the complete
                           state for producing the keyed message
                           digest (MAC)

   Functions:
      int hmacReset(HMACContext *ctx, enum SHAversion whichSha,
                    const unsigned char *key, int key_len);
         Reset the MAC context state.
      int hmacInput(HMACContext *ctx, const unsigned char *text,
                    int text_len);
         Incorporate text_len octets into the MAC.
      int hmacFinalBits(HMACContext *ctx, const uint8_t bits,
                        unsigned int bitcount);
         Incorporate bitcount bits into the MAC.
      int hmacResult(HMACContext *ctx,
                     uint8_t Message_Digest[USHAMaxHashSize]);
         Do the final calculations on the MAC and copy the value into
         Message_Digest.  Octets in Message_Digest beyond
         USHAHashSize(whichSha) are left untouched.

   In addition, a combined interface is provided, similar to that shown
   in [RFC2104], that allows a fixed-length text input to be used.

      int hmac(SHAversion whichSha, const unsigned char *text,
               int text_len, const unsigned char *key, int key_len,
               uint8_t Message_Digest[USHAMaxHashSize]);
         Calculate the given digest for the given text and key, and
         return the resulting MAC.  Octets in Message_Digest beyond
         USHAHashSize(whichSha) are left untouched.

   The HKDF functions follow the same pattern to allow any length of
   text input to be used.

   Structure:
      typedef HKDFContext  an opaque structure holding the complete
                           state for producing the keying material

   Functions:
      int hkdfReset(HKDFContext *context, enum SHAversion whichSha,
                    const unsigned char *salt, int salt_len)
         Reset the key derivation state and initialize it with the
         salt_len octets of the optional salt.
      int hkdfInput(HKDFContext *context, const unsigned char *ikm,
                    int ikm_len)
         Incorporate ikm_len octets into the entropy extractor.
      int hkdfFinalBits(HKDFContext *context, uint8_t ikm_bits,
                        unsigned int ikm_bit_count)
         Incorporate ikm_bit_count bits into the entropy extractor.
      int hkdfResult(HKDFContext *context, uint8_t prk[USHAMaxHashSize],
                     const unsigned char *info, int info_len,
                     uint8_t okm[ ], int okm_len)
         Finish the HKDF extraction and perform the final HKDF
         expansion, storing the okm_len octets into output keying
         material (okm).  Optionally store the pseudo-random key (prk)
         that is generated internally.

   In addition, combined interfaces are provided, similar to that shown
   in [RFC5869], that allow a fixed-length text input to be used.

      int hkdfExtract(SHAversion whichSha, const unsigned char *salt,
                      int salt_len, const unsigned char *ikm,
                      int ikm_len, uint8_t prk[USHAMaxHashSize])
         Perform HKDF extraction, combining the salt_len octets of the
         optional salt with the ikm_len octets of the input keying
         material (ikm) to form the pseudo-random key prk.  The output
         prk must be large enough to hold the octets appropriate for
         the given hash type.
      int hkdfExpand(SHAversion whichSha, const uint8_t prk[ ],
                     int prk_len, const unsigned char *info,
                     int info_len, uint8_t okm[ ], int okm_len)
         Perform HKDF expansion, combining the prk_len octets of the
         pseudo-random key prk with the info_len octets of info to form
         the okm_len octets stored in okm.
      int hkdf(SHAversion whichSha, const unsigned char *salt,
               int salt_len, const unsigned char *ikm, int ikm_len,
               const unsigned char *info, int info_len,
               uint8_t okm[ ], int okm_len)
         This combined interface performs both HKDF extraction and
         expansion.  The variables are the same as in hkdfExtract() and
         hkdfExpand().

8.1.  The Header Files

8.1.1.  The .h file

   The following sha.h file, as stated in the comments within the file,
   assumes that <stdint.h> is available on your system.  If it is not,
   you should change to including <stdint-example.h>, provided in
   Section 8.1.2, or the like.

/**************************** sha.h ****************************/
/***************** See RFC 6234 for details. *******************/
/*
   Copyright (c) 2011 IETF Trust and the persons identified as
   authors of the code.  All rights reserved.

   Redistribution and use in source and binary forms, with or
   without modification, are permitted provided that the following
   conditions are met:

   - Redistributions of source code must retain the above
     copyright notice, this list of conditions and
     the following disclaimer.

   - Redistributions in binary form must reproduce the above
     copyright notice, this list of conditions and the following
     disclaimer in the documentation and/or other materials provided
     with the distribution.

   - Neither the name of Internet Society, IETF or IETF Trust, nor
     the names of specific contributors, may be used to endorse or
     promote products derived from this software without specific
     prior written permission.

   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
   FOR A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE
   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
   BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
   LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
   CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
   LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
   ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
   POSSIBILITY OF SUCH DAMAGE.
*/

#ifndef _SHA_H_
#define _SHA_H_

/*
 *  Description:
 *      This file implements the Secure Hash Algorithms
 *      as defined in the U.S. National Institute of Standards
 *      and Technology Federal Information Processing Standards
 *      Publication (FIPS PUB) 180-3 published in October 2008
 *      and formerly defined in its predecessors, FIPS PUB 180-1
 *      and FIPS PUB 180-2.
 *
 *      A combined document showing all algorithms is available at
 *              http://csrc.nist.gov/publications/fips/
 *                     fips180-3/fips180-3_final.pdf
 *
 *      The five hashes are defined in these sizes:
 *              SHA-1           20 byte / 160 bit
 *              SHA-224         28 byte / 224 bit
 *              SHA-256         32 byte / 256 bit
 *              SHA-384         48 byte / 384 bit
 *              SHA-512         64 byte / 512 bit
 *
 *  Compilation Note:
 *      These files may be compiled with two options:
 *        USE_32BIT_ONLY - use 32-bit arithmetic only, for systems
 *                         without 64-bit integers
 *
 *        USE_MODIFIED_MACROS - use alternate form of the SHA_Ch()
 *                         and SHA_Maj() macros that are equivalent
 *                         and potentially faster on many systems
 */

#include <stdint.h>
/*
 * If you do not have the ISO standard stdint.h header file, then you
 * must typedef the following:
 *    name              meaning
 *  uint64_t         unsigned 64-bit integer
 *  uint32_t         unsigned 32-bit integer
 *  uint8_t          unsigned 8-bit integer (i.e., unsigned char)
 *  int_least16_t    integer of >= 16 bits
 *
 * See stdint-example.h
 */

#ifndef _SHA_enum_
#define _SHA_enum_
/*
 *  All SHA functions return one of these values.
 */
enum {
    shaSuccess = 0,
    shaNull,            /* Null pointer parameter */
    shaInputTooLong,    /* input data too long */
    shaStateError,      /* called Input after FinalBits or Result */
    shaBadParam         /* passed a bad parameter */
};
#endif /* _SHA_enum_ */

/*
 *  These constants hold size information for each of the SHA
 *  hashing operations
 */
enum {
    SHA1_Message_Block_Size = 64, SHA224_Message_Block_Size = 64,
    SHA256_Message_Block_Size = 64, SHA384_Message_Block_Size = 128,
    SHA512_Message_Block_Size = 128,
    USHA_Max_Message_Block_Size = SHA512_Message_Block_Size,

    SHA1HashSize = 20, SHA224HashSize = 28, SHA256HashSize = 32,
    SHA384HashSize = 48, SHA512HashSize = 64,
    USHAMaxHashSize = SHA512HashSize,

    SHA1HashSizeBits = 160, SHA224HashSizeBits = 224,
    SHA256HashSizeBits = 256, SHA384HashSizeBits = 384,
    SHA512HashSizeBits = 512, USHAMaxHashSizeBits = SHA512HashSizeBits
};

/*
 *  These constants are used in the USHA (Unified SHA) functions.
 */
typedef enum SHAversion {
    SHA1, SHA224, SHA256, SHA384, SHA512
} SHAversion;

/*
 *  This structure will hold context information for the SHA-1
 *  hashing operation.
 */
typedef struct SHA1Context {
    uint32_t Intermediate_Hash[SHA1HashSize/4]; /* Message Digest */

    uint32_t Length_High;               /* Message length in bits */
    uint32_t Length_Low;                /* Message length in bits */

    int_least16_t Message_Block_Index;  /* Message_Block array index */
                                        /* 512-bit message blocks */
    uint8_t Message_Block[SHA1_Message_Block_Size];

    int Computed;                       /* Is the hash computed? */
    int Corrupted;                      /* Cumulative corruption code */
} SHA1Context;

/*
 *  This structure will hold context information for the SHA-256
 *  hashing operation.
 */
typedef struct SHA256Context {
    uint32_t Intermediate_Hash[SHA256HashSize/4]; /* Message Digest */

    uint32_t Length_High;               /* Message length in bits */
    uint32_t Length_Low;                /* Message length in bits */

    int_least16_t Message_Block_Index;  /* Message_Block array index */
                                        /* 512-bit message blocks */
    uint8_t Message_Block[SHA256_Message_Block_Size];

    int Computed;                       /* Is the hash computed? */
    int Corrupted;                      /* Cumulative corruption code */
} SHA256Context;

/*
 *  This structure will hold context information for the SHA-512
 *  hashing operation.
 */
typedef struct SHA512Context {
#ifdef USE_32BIT_ONLY
    uint32_t Intermediate_Hash[SHA512HashSize/4]; /* Message Digest */
    uint32_t Length[4];                 /* Message length in bits */
#else /* !USE_32BIT_ONLY */
    uint64_t Intermediate_Hash[SHA512HashSize/8]; /* Message Digest */
    uint64_t Length_High, Length_Low;   /* Message length in bits */
#endif /* USE_32BIT_ONLY */

    int_least16_t Message_Block_Index;  /* Message_Block array index */
                                        /* 1024-bit message blocks */
    uint8_t Message_Block[SHA512_Message_Block_Size];

    int Computed;                       /* Is the hash computed? */
    int Corrupted;                      /* Cumulative corruption code */
} SHA512Context;

/*
 *  This structure will hold context information for the SHA-224
 *  hashing operation.  It uses the SHA-256 structure for computation.
 */
typedef struct SHA256Context SHA224Context;

/*
 *  This structure will hold context information for the SHA-384
 *  hashing operation.  It uses the SHA-512 structure for computation.
 */
typedef struct SHA512Context SHA384Context;

/*
 *  This structure holds context information for all SHA
 *  hashing operations.
 */
typedef struct USHAContext {
    int whichSha;               /* which SHA is being used */
    union {
      SHA1Context sha1Context;
      SHA224Context sha224Context; SHA256Context sha256Context;
      SHA384Context sha384Context; SHA512Context sha512Context;
    } ctx;
} USHAContext;

/*
 *  This structure will hold context information for the HMAC
 *  keyed-hashing operation.
 */
typedef struct HMACContext {
    int whichSha;               /* which SHA is being used */
    int hashSize;               /* hash size of SHA being used */
    int blockSize;              /* block size of SHA being used */
    USHAContext shaContext;     /* SHA context */
    unsigned char k_opad[USHA_Max_Message_Block_Size];
                                /* outer padding - key XORd with opad */
    int Computed;               /* Is the MAC computed? */
    int Corrupted;              /* Cumulative corruption code */
} HMACContext;

/*
 *  This structure will hold context information for the HKDF
 *  extract-and-expand Key Derivation Functions.
 */
typedef struct HKDFContext {
    int whichSha;               /* which SHA is being used */
    HMACContext hmacContext;
    int hashSize;               /* hash size of SHA being used */
    unsigned char prk[USHAMaxHashSize];
                                /* pseudo-random key - output of
                                   hkdfInput */
    int Computed;               /* Is the key material computed? */
    int Corrupted;              /* Cumulative corruption code */
} HKDFContext;

/*
 *  Function Prototypes
 */

/* SHA-1 */
extern int SHA1Reset(SHA1Context *);
extern int SHA1Input(SHA1Context *, const uint8_t *bytes,
                     unsigned int bytecount);
extern int SHA1FinalBits(SHA1Context *, uint8_t bits,
                         unsigned int bit_count);
extern int SHA1Result(SHA1Context *,
                      uint8_t Message_Digest[SHA1HashSize]);

/* SHA-224 */
extern int SHA224Reset(SHA224Context *);
extern int SHA224Input(SHA224Context *, const uint8_t *bytes,
                       unsigned int bytecount);
extern int SHA224FinalBits(SHA224Context *, uint8_t bits,
                           unsigned int bit_count);
extern int SHA224Result(SHA224Context *,
                        uint8_t Message_Digest[SHA224HashSize]);

/* SHA-256 */
extern int SHA256Reset(SHA256Context *);
extern int SHA256Input(SHA256Context *, const uint8_t *bytes,
                       unsigned int bytecount);
extern int SHA256FinalBits(SHA256Context *, uint8_t bits,
                           unsigned int bit_count);
extern int SHA256Result(SHA256Context *,
                        uint8_t Message_Digest[SHA256HashSize]);

/* SHA-384 */
extern int SHA384Reset(SHA384Context *);
extern int SHA384Input(SHA384Context *, const uint8_t *bytes,
                       unsigned int bytecount);
extern int SHA384FinalBits(SHA384Context *, uint8_t bits,
                           unsigned int bit_count);
extern int SHA384Result(SHA384Context *,
                        uint8_t Message_Digest[SHA384HashSize]);

/* SHA-512 */
extern int SHA512Reset(SHA512Context *);
extern int SHA512Input(SHA512Context *, const uint8_t *bytes,
                       unsigned int bytecount);
extern int SHA512FinalBits(SHA512Context *, uint8_t bits,
                           unsigned int bit_count);
extern int SHA512Result(SHA512Context *,
                        uint8_t Message_Digest[SHA512HashSize]);

/* Unified SHA functions, chosen by whichSha */
extern int USHAReset(USHAContext *context, SHAversion whichSha);
extern int USHAInput(USHAContext *context,
                     const uint8_t *bytes, unsigned int bytecount);
extern int USHAFinalBits(USHAContext *context,
                         uint8_t bits, unsigned int bit_count);
extern int USHAResult(USHAContext *context,
                      uint8_t Message_Digest[USHAMaxHashSize]);
extern int USHABlockSize(enum SHAversion whichSha);
extern int USHAHashSize(enum SHAversion whichSha);
extern int USHAHashSizeBits(enum SHAversion whichSha);
extern const char *USHAHashName(enum SHAversion whichSha);

/*
 * HMAC Keyed-Hashing for Message Authentication, RFC 2104,
 * for all SHAs.
 * This interface allows a fixed-length text input to be used.
 */
extern int hmac(SHAversion whichSha, /* which SHA algorithm to use */
    const unsigned char *text,      /* pointer to data stream */
    int text_len,                   /* length of data stream */
    const unsigned char *key,       /* pointer to authentication key */
    int key_len,                    /* length of authentication key */
    uint8_t digest[USHAMaxHashSize]); /* caller digest to fill in */

/*
 * HMAC Keyed-Hashing for Message Authentication, RFC 2104,
 * for all SHAs.
 * This interface allows any length of text input to be used.
 */
extern int hmacReset(HMACContext *context, enum SHAversion whichSha,
                     const unsigned char *key, int key_len);
extern int hmacInput(HMACContext *context, const unsigned char *text,
                     int text_len);
extern int hmacFinalBits(HMACContext *context, uint8_t bits,
                         unsigned int bit_count);
extern int hmacResult(HMACContext *context,
                      uint8_t digest[USHAMaxHashSize]);

/*
 * HKDF HMAC-based Extract-and-Expand Key Derivation Function,
 * RFC 5869, for all SHAs.
 */
extern int hkdf(SHAversion whichSha, const unsigned char *salt,
                int salt_len, const unsigned char *ikm, int ikm_len,
                const unsigned char *info, int info_len,
                uint8_t okm[ ], int okm_len);
extern int hkdfExtract(SHAversion whichSha, const unsigned char *salt,
                       int salt_len, const unsigned char *ikm,
                       int ikm_len, uint8_t prk[USHAMaxHashSize]);
extern int hkdfExpand(SHAversion whichSha, const uint8_t prk[ ],
                      int prk_len, const unsigned char *info,
                      int info_len, uint8_t okm[ ], int okm_len);

/*
 * HKDF HMAC-based Extract-and-Expand Key Derivation Function,
 * RFC 5869, for all SHAs.
 * This interface allows any length of text input to be used.
 */
extern int hkdfReset(HKDFContext *context, enum SHAversion whichSha,
                     const unsigned char *salt, int salt_len);
extern int hkdfInput(HKDFContext *context, const unsigned char *ikm,
                     int ikm_len);
extern int hkdfFinalBits(HKDFContext *context, uint8_t ikm_bits,
                         unsigned int ikm_bit_count);
extern int hkdfResult(HKDFContext *context,
                      uint8_t prk[USHAMaxHashSize],
                      const unsigned char *info, int info_len,
                      uint8_t okm[USHAMaxHashSize], int okm_len);

#endif /* _SHA_H_ */

8.1.2.  stdint-example.h

   If your system does not have <stdint.h>, the following should be
   adequate as a substitute for compiling the other code in this
   document.

/*********************** stdint-example.h **********************/
/**** Use this file if your system does not have a stdint.h. ***/
/***************** See RFC 6234 for details. *******************/

#ifndef STDINT_H
#define STDINT_H

typedef unsigned long long uint64_t; /* unsigned 64-bit integer */
typedef unsigned int uint32_t;       /* unsigned 32-bit integer */
typedef unsigned char uint8_t;       /* unsigned 8-bit integer */
                                     /* (i.e., unsigned char) */

typedef int int_least32_t;           /* integer of >= 32 bits */
typedef short int_least16_t;         /* integer of >= 16 bits */

#endif /* STDINT_H */

8.1.3.  sha-private.h

   The sha-private.h header file contains definitions that should only
   be needed internally in the other code in this document.  These
   definitions should not be needed in application code calling the
   code provided in this document.

/************************ sha-private.h ************************/
/***************** See RFC 6234 for details. *******************/

#ifndef _SHA_PRIVATE__H
#define _SHA_PRIVATE__H

/*
 * These definitions are defined in FIPS 180-3, section 4.1.
 * Ch() and Maj() are defined identically in sections 4.1.1,
 * 4.1.2, and 4.1.3.
 *
 * The definitions used in FIPS 180-3 are as follows:
 */
#ifndef USE_MODIFIED_MACROS
#define SHA_Ch(x,y,z)   (((x) & (y)) ^ ((~(x)) & (z)))
#define SHA_Maj(x,y,z)  (((x) & (y)) ^ ((x) & (z)) ^ ((y) & (z)))
#else /* USE_MODIFIED_MACROS */
/*
 * The following definitions are equivalent and potentially faster.
 */
#define SHA_Ch(x, y, z)   (((x) & ((y) ^ (z))) ^ (z))
#define SHA_Maj(x, y, z)  (((x) & ((y) | (z))) | ((y) & (z)))
#endif /* USE_MODIFIED_MACROS */

#define SHA_Parity(x, y, z)  ((x) ^ (y) ^ (z))

#endif /* _SHA_PRIVATE__H */
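The two snippets below are non-normative illustrations added for orientation; they are not part of RFC 6234 itself. The first connects the Section 6.2 pseudocode to C: it is one possible rendering of the SHA-256 main hash computation (steps 3 and 4), restating the 32-bit word operations from the specification as macros.

/* Illustrative only -- not part of RFC 6234.  One rendering of the
 * SHA-256 main hash computation (Section 6.2, steps 3 and 4).
 * W[64] is the prepared message schedule and K[64] the SHA-256
 * constants; state[8] holds H(i-1) on entry and H(i) on return.
 * uint32_t arithmetic gives the required reduction modulo 2^32. */
#include <stdint.h>

#define ROTR(x,n)  (((x) >> (n)) | ((x) << (32 - (n))))
#define BSIG0(x)   (ROTR(x,2) ^ ROTR(x,13) ^ ROTR(x,22))
#define BSIG1(x)   (ROTR(x,6) ^ ROTR(x,11) ^ ROTR(x,25))
#define CH(x,y,z)  (((x) & (y)) ^ ((~(x)) & (z)))
#define MAJ(x,y,z) (((x) & (y)) ^ ((x) & (z)) ^ ((y) & (z)))

void sha256_rounds(uint32_t state[8], const uint32_t W[64],
                   const uint32_t K[64])
{
    uint32_t a = state[0], b = state[1], c = state[2], d = state[3];
    uint32_t e = state[4], f = state[5], g = state[6], h = state[7];
    uint32_t T1, T2;
    int t;

    for (t = 0; t < 64; t++) {           /* step 3: 64 rounds */
        T1 = h + BSIG1(e) + CH(e, f, g) + K[t] + W[t];
        T2 = BSIG0(a) + MAJ(a, b, c);
        h = g;  g = f;  f = e;  e = d + T1;
        d = c;  c = b;  b = a;  a = T1 + T2;
    }

    /* step 4: compute the intermediate hash value H(i) */
    state[0] += a;  state[1] += b;  state[2] += c;  state[3] += d;
    state[4] += e;  state[5] += f;  state[6] += g;  state[7] += h;
}

The second sketch shows how the one-shot hmac() interface declared in sha.h above might be called. It assumes the implementation files from Sections 8.2 and 8.3 (not included in this part) are compiled and linked alongside.

/* Illustrative only -- not part of RFC 6234.  Computes and prints
 * HMAC-SHA-256 of a short text via the hmac() declaration above. */
#include <stdio.h>
#include "sha.h"

int main(void)
{
    const unsigned char key[]  = "key";
    const unsigned char text[] = "The quick brown fox";
    uint8_t digest[USHAMaxHashSize];
    int i;

    if (hmac(SHA256, text, (int)(sizeof(text) - 1),
             key, (int)(sizeof(key) - 1), digest) != shaSuccess) {
        fprintf(stderr, "hmac failed\n");
        return 1;
    }
    for (i = 0; i < USHAHashSize(SHA256); i++)
        printf("%02x", digest[i]);
    printf("\n");
    return 0;
}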
Connecting Secure Shell to the UD Network

Note:
- IT no longer distributes Secure Shell at UD. This page will no longer be updated.
- IT now uses WinSCP. Visit IT's new help pages for information about connecting and using WinSCP for file transfers.

November 18, 2014

Secure Shell is a program that allows you to log in to a remote UNIX computer over the Internet. It provides a secure connection between the computers by encrypting passwords and other data.

Starting a Secure Shell session

1. Open the SSH Secure Shell Client program (either from the Start menu or a desktop shortcut).
2. When the SSH Secure Shell dialog box appears, press ENTER.
3. In the Connect to Remote Host dialog box, type the following:
   - In the Host Name field, type copland.udel.edu
   - In the User Name field, type your UDelNet ID.
4. Click Connect.
5. In the Enter Password dialog box, type your UNIX password.
6. Click OK.
7. If the Host Identification dialog box appears, and you are using your own computer, click Yes. If you are using a computer in a public site or someone else's computer, click No.
   NOTE: This dialog box will only appear if you have never logged in to copland.udel.edu before.
8. In the next window, you should see the UNIX prompt for copland.udel.edu (i.e., copland.udel.edu%). You can now work with your UNIX account.

Printing from a Secure Shell session

Because Secure Shell establishes a connection to a remote UNIX server, you can print files and email messages on the University's central printers using standard UNIX commands. If, however, you wish to print email messages or files using a local printer (e.g., one directly connected to your computer), follow these steps:

1. If you are using Pine to read email, use the Pine E (export) command to save the message to a file.
2. Exit or suspend the Pine program.
3. Use the UNIX cat command to display the contents of the file on your screen. For example, if the file is named "katmessage", type this command at the UNIX prompt and press the RETURN key to display the file:
   cat katmessage
4. If necessary, use the scrollbar to the right of the Secure Shell window to scroll backwards to the point in your session at which the beginning of the file is displayed.
5. Highlight the entire display of the file you wish to print.
6. With the text highlighted, click the printer icon on the Secure Shell window.
7. In the resulting print window, make sure the button next to Selection is selected.
8. Click OK to print the file or message.

Ending your Secure Shell session

To end your Secure Shell session, follow these steps:

1. At the UNIX server's prompt, type exit and press the ENTER key. You are now logged out of the UNIX server.
2. Quit the Secure Shell program.
Simple bubble sort in C#

int[] arr = {800,11,50,771,649,770,240, 9};

int temp = 0;

for (int write = 0; write < arr.Length; write++)
{
    for (int sort = 0; sort < arr.Length - 1; sort++)
    {
        if (arr[sort] > arr[sort + 1])
        {
            temp = arr[sort + 1];
            arr[sort + 1] = arr[sort];
            arr[sort] = temp;
        }
    }
    Console.Write("{0} ", arr[write]);
}

All I am trying to do is a simple bubble sort with this array, and I am trying to figure out why the sorting comes out wrong. For example, with the array {800,11,50,771,649,770,240, 9}, this is what gets displayed:

11, 50, 649, 9, 649, 770, 771, 800

I think I may be missing something in the comparison.

Solution

No, your algorithm works, but your Write operation is misplaced inside the outer loop:

int[] arr = { 800, 11, 50, 771, 649, 770, 240, 9 };
int temp = 0;

for (int write = 0; write < arr.Length; write++)
{
    for (int sort = 0; sort < arr.Length - 1; sort++)
    {
        if (arr[sort] > arr[sort + 1])
        {
            temp = arr[sort + 1];
            arr[sort + 1] = arr[sort];
            arr[sort] = temp;
        }
    }
}

for (int i = 0; i < arr.Length; i++)
    Console.Write(arr[i] + " ");
Console.ReadKey();

This works for me:

public static int[] SortArray(int[] array)
{
    int length = array.Length;
    int temp = array[0];
    for (int i = 0; i < length; i++)
    {
        for (int j = i + 1; j < length; j++)
        {
            if (array[i] > array[j])
            {
                temp = array[i];
                array[i] = array[j];
                array[j] = temp;
            }
        }
    }
    return array;
}

public static void BubbleSort(int[] a)
{
    for (int i = 1; i <= a.Length - 1; ++i)
        for (int j = 0; j < a.Length - i; ++j)
            if (a[j] > a[j + 1])
                Swap(ref a[j], ref a[j + 1]);
}

public static void Swap(ref int x, ref int y)
{
    int temp = x;
    x = y;
    y = temp;
}

I saw someone use this example as part of a job application test. My feedback was that it never escapes the outer loop when the array is mostly sorted. Consider what happens in this case:

int[] arr = {1,2,3,4,5,6,7,8};

Here is something that makes more sense:

int[] arr = {1,2,3,4,5,6,7,8};
int temp = 0;
int loopCount = 0;
bool doBreak = true;
for (int write = 0; write < arr.Length; write++)
{
    doBreak = true;
    for (int sort = 0; sort < arr.Length - 1; sort++)
    {
        if (arr[sort] > arr[sort + 1])
        {
            temp = arr[sort + 1];
            arr[sort + 1] = arr[sort];
            arr[sort] = temp;
            doBreak = false;
        }
        loopCount++;
    }
    if (doBreak) { break; /* early escape */ }
}
Console.WriteLine(loopCount);
for (int i = 0; i < arr.Length; i++)
    Console.Write(arr[i] + " ");

int[] arr = { 800, 11, 50, 771, 649, 770, 240, 9 };
int temp = 0;

for (int write = 0; write < arr.Length; write++)
{
    for (int sort = 0; sort < arr.Length - 1 - write; sort++)
    {
        if (arr[sort] > arr[sort + 1])
        {
            temp = arr[sort + 1];
            arr[sort + 1] = arr[sort];
            arr[sort] = temp;
        }
    }
}

for (int i = 0; i < arr.Length; i++)
    Console.Write(arr[i] + " ");
Console.ReadKey();

static bool BubbleSort(ref List<int> myList, int number)
{
    if (number == 1)
        return true;

    for (int i = 0; i < number; i++)
    {
        if ((i + 1 < number) && (myList[i] > myList[i + 1]))
        {
            int temp = myList[i];
            myList[i] = myList[i + 1];
            myList[i + 1] = temp;
        }
        else
            continue;
    }
    return BubbleSort(ref myList, number - 1);
}

Just another example, but with a better WHILE loop instead of a FOR:

public static void Bubble()
{
    int[] data = { 5, 4, 3, 2, 1 };
    bool newLoopNeeded = false;
    int temp;
    int loop = 0;

    while (!newLoopNeeded)
    {
        newLoopNeeded = true;
        for (int i = 0; i < data.Length - 1; i++)
        {
            if (data[i + 1] < data[i])
            {
                temp = data[i];
                data[i] = data[i + 1];
                data[i + 1] = temp;
                newLoopNeeded = false;
            }
            loop++;
        }
    }
}

public static int[] BubbleSort(int[] arr)
{
    int length = arr.Length;
    while (length > 0)
    {
        int newLength = 0;
        for (int i = 1; i < length; i++)
        {
            if (arr[i - 1] > arr[i])
            {
                Swap(ref arr[i - 1], ref arr[i]);
                newLength = i;
            }
        }
        length = newLength;
    }
    return arr;
}

public static void Swap(ref int x, ref int y)
{
    int temp = y;
    y = x;
    x = temp;
}

Bubble sort with a sort direction:

using System;

public class Program
{
    public static void Main(string[] args)
    {
        var input = new[] { 800, 11, 50, 771, 649, 770, 240, 9 };
        BubbleSort(input);
        Array.ForEach(input, Console.WriteLine);
        Console.ReadKey();
    }

    public enum Direction
    {
        Ascending = 0,
        Descending
    }

    public static void BubbleSort(int[] input, Direction direction = Direction.Ascending)
    {
        bool swapped;
        var length = input.Length;
        do
        {
            swapped = false;
            for (var index = 0; index < length - 1; index++)
            {
                var needSwap = direction == Direction.Ascending
                    ? input[index] > input[index + 1]
                    : input[index] < input[index + 1];
                if (needSwap)
                {
                    var temp = input[index];
                    input[index] = input[index + 1];
                    input[index + 1] = temp;
                    swapped = true;
                }
            }
        } while (swapped);
    }
}

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace Practice
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Enter the size");
            int n = Convert.ToInt32(Console.ReadLine());
            int[] mynum = new int[n];
            Console.WriteLine("Enter the Numbers");
            for (int p = 0; p < n; p++)
            {
                mynum[p] = Convert.ToInt32(Console.ReadLine());
            }
            Console.WriteLine("The numbers are");
            foreach (int p in mynum)
            {
                Console.WriteLine(p);
            }
            for (int i = 0; i < n; i++)
            {
                for (int j = i + 1; j < n; j++)
                {
                    if (mynum[i] > mynum[j])
                    {
                        int x = mynum[j];
                        mynum[j] = mynum[i];
                        mynum[i] = x;
                    }
                }
            }
            Console.WriteLine("Sorted data is:");
            foreach (int p in mynum)
            {
                Console.WriteLine(p);
            }
            Console.ReadLine();
        }
    }
}

int[] arr = { 800, 11, 50, 771, 649, 770, 240, 9 };
for (int i = 0; i < arr.Length; i++)
{
    for (int j = i; j < arr.Length; j++)
    {
        if (arr[j] < arr[i])
        {
            int temp = arr[i];
            arr[i] = arr[j];
            arr[j] = temp;
        }
    }
}
Console.ReadLine();

public void BubbleSortNum()
{
    int[] a = { 10, 5, 30, 25, 40, 20 };
    int length = a.Length;
    int temp = 0;
    for (int i = 0; i < length; i++)
    {
        for (int j = i + 1; j < length; j++)
        {
            if (a[i] > a[j])
            {
                temp = a[j];
                a[j] = a[i];
                a[i] = temp;
            }
        }
        Console.WriteLine(a[i]);
    }
}
1
View Javadoc 1 /* 2 * Copyright (c) 1994, 2003, Oracle and/or its affiliates. All rights reserved. 3 * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER. 4 * 5 * This code is free software; you can redistribute it and/or modify it 6 * under the terms of the GNU General Public License version 2 only, as 7 * published by the Free Software Foundation. Oracle designates this 8 * particular file as subject to the "Classpath" exception as provided 9 * by Oracle in the LICENSE file that accompanied this code. 10 * 11 * This code is distributed in the hope that it will be useful, but WITHOUT 12 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 13 * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License 14 * version 2 for more details (a copy is included in the LICENSE file that 15 * accompanied this code). 16 * 17 * You should have received a copy of the GNU General Public License version 18 * 2 along with this work; if not, write to the Free Software Foundation, 19 * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. 20 * 21 * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA 22 * or visit www.oracle.com if you need additional information or have any 23 * questions. 24 */ 25 26 package sun.tools.tree; 27 28 import sun.tools.java.*; 29 import sun.tools.asm.Assembler; 30 import java.io.PrintStream; 31 32 /** 33 * WARNING: The contents of this source file are not part of any 34 * supported API. Code that depends on them does so at its own risk: 35 * they are subject to change or removal without notice. 36 */ 37 public 38 class StringExpression extends ConstantExpression { 39 String value; 40 41 /** 42 * Constructor 43 */ 44 public StringExpression(long where, String value) { 45 super(STRINGVAL, where, Type.tString); 46 this.value = value; 47 } 48 49 public boolean equals(String s) { 50 return value.equals(s); 51 } 52 public boolean isNonNull() { 53 return true; // string literal is never null 54 } 55 56 /** 57 * Code 58 */ 59 public void codeValue(Environment env, Context ctx, Assembler asm) { 60 asm.add(where, opc_ldc, this); 61 } 62 63 /** 64 * Get the value 65 */ 66 public Object getValue() { 67 return value; 68 } 69 70 /** 71 * Hashcode 72 */ 73 public int hashCode() { 74 return value.hashCode() ^ 3213; 75 } 76 77 /** 78 * Equality 79 */ 80 public boolean equals(Object obj) { 81 if ((obj != null) && (obj instanceof StringExpression)) { 82 return value.equals(((StringExpression)obj).value); 83 } 84 return false; 85 } 86 87 /** 88 * Print 89 */ 90 public void print(PrintStream out) { 91 out.print("\"" + value + "\""); 92 } 93 }
Insert JSON Array into Table with Stored Procedure Parameter

Introduction

In order to save JSON array data passed as a parameter to a stored procedure and insert it into a table, you'll need to use a programming language that your database supports for writing stored procedures. I'll provide an example using SQL Server's T-SQL, but the concept can be adapted to other database systems with slight variations.

Let's assume you have a table called MyTable with columns ID, Name, and Data. You want to pass a JSON array as a parameter to a stored procedure and insert each element of the JSON array into the MyTable table. Here's how you can create such a stored procedure in SQL Server.

-- Create a table to store the data
CREATE TABLE MyTable (
    ID INT IDENTITY(1,1) PRIMARY KEY,
    Name NVARCHAR(255),
    Data NVARCHAR(MAX)
);

-- Create a stored procedure to insert JSON array data
CREATE PROCEDURE InsertJsonData
    @jsonData NVARCHAR(MAX)
AS
BEGIN
    -- Use OPENJSON to parse the JSON array
    INSERT INTO MyTable (Name, Data)
    SELECT 'ItemName', [value] -- You can replace 'ItemName' with a specific name or retrieve it from JSON
    FROM OPENJSON(@jsonData)
END;

In this example:

1. Create a table MyTable to store the data.
2. Create a stored procedure InsertJsonData that takes @jsonData as a parameter, which should be a JSON array.
3. Inside the stored procedure, we use the OPENJSON function to parse the JSON array and insert each element into the MyTable table.

You can call this stored procedure and pass your JSON array as a parameter like this.

DECLARE @json NVARCHAR(MAX);
SET @json = '[{"value": "Value1"}, {"value": "Value2"}, {"value": "Value3"}]';
EXEC InsertJsonData @jsonData = @json;

Replace the JSON array (@json) with your actual JSON data, and the stored procedure will insert each element into the MyTable table.

Keep in mind that the actual implementation may vary depending on your database system, but the general idea of parsing the JSON array and inserting its elements into a table should be similar across different database systems with JSON support.
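One caveat worth flagging (an observation about OPENJSON, not part of the original article): called without a schema, OPENJSON returns each array element in [value] as a whole JSON fragment — for the sample input above, the string {"value": "Value1"} rather than Value1. If you want only the inner property, a WITH clause can project it. A sketch against the same table, with a hypothetical procedure name:

-- Sketch: project each element's inner "value" property to a typed column (SQL Server 2016+)
CREATE PROCEDURE InsertJsonDataTyped
    @jsonData NVARCHAR(MAX)
AS
BEGIN
    INSERT INTO MyTable (Name, Data)
    SELECT 'ItemName', j.val
    FROM OPENJSON(@jsonData)
         WITH (val NVARCHAR(MAX) '$.value') AS j;
END;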
E-mails Limitation?

You are allowed to send out a specific number of e-mails per hour per hosting account, depending on the plan you are on. This limit exists to avoid spamming. We also do not allow bulk e-mails that may be considered SPAM, as this may lead to the server's IP being banned.

Please be informed that if you are identified or reported to our support desk as a spammer, we will conduct an investigation and reserve the right to suspend your account.
setText does not work with Appium v1.5.2

I am writing a test for my Android application. I have a text field into which I must type some input. To do that, I wrote this code:

$els = $this->element($this->using('class name')->value('android.widget.EditText'));
$els->click();
$els->setText("govin");

However, when I run this test, I get this error:

Something unexpected happened: 'Parameters were incorrect. We wanted {"required":["value"]} and you sent ["elementId","value"]

I found out that "setText" does not work in Appium v1.5.2. Does anyone have an alternative? Thanks.

Answer: I am not sure this will work in PHP — I have only started learning Appium with Java on Windows — but you can try sendKeys("string"); or look through this: https://gist.github.com/aczietlow/7c4834f79a7afd920d8f

Another answer: If you are running Appium v1.5.2, "setText" is not supported; you can use the "value" function instead.
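For reference, the sendKeys route the first answer mentions looks like this with the Java client (a sketch, not from the thread; the element lookup mirrors the PHP code above, and `driver` is assumed to be an already-initialized Appium driver):

import org.openqa.selenium.By;
import org.openqa.selenium.WebElement;

// Locate the same EditText the PHP test targets, then type into it.
WebElement field = driver.findElement(By.className("android.widget.EditText"));
field.click();
field.sendKeys("govin"); // replaces the unsupported setText call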
Kotlin Getting Started
clxhhsy, published 2017/06/02

Getting Started

Basic syntax

Defining packages

The package definition should go at the top of the source file:

package my.demo
import java.util.*
// ....

The package name does not have to match the directory: source files can be placed anywhere in the file system.

Defining functions

- A function with return type Int and two Int parameters:

fun sum(a: Int, b: Int): Int {
    return a + b
}

- A function with an expression body and an inferred return type:

fun sum(a: Int, b: Int) = a + b

- A function returning no meaningful value:

fun printSum(a: Int, b: Int): Unit {
    println("sum of $a and $b is ${a + b}")
}

Unit can be omitted:

fun printSum(a: Int, b: Int) {
    println("sum of $a and $b is ${a + b}")
}

Defining local variables

Constants:

val a: Int = 1  // initialized immediately
val b = 2       // Int type inferred
val c: Int      // a type is required when there is no initializer
c = 3           // assignment

Variables:

var x = 5  // Int type inferred
x += 1

See Properties and Fields for more.

Comments

Like Java and JavaScript, Kotlin supports line comments and block comments.

// line comment

/* block
   comment */

Unlike Java, block comments in Kotlin can be nested.

See Documenting Kotlin Code to learn more about the documentation comment syntax.

Using string templates

var a = 1
val s1 = "a is $a"
a = 2
val s2 = "${s1.replace("is", "was")}, but now is $a"

Using conditional expressions

fun maxOf(a: Int, b: Int): Int {
    if (a > b) {
        return a
    } else {
        return b
    }
}

Using if as an expression:

fun maxOf(a: Int, b: Int) = if (a > b) a else b

Using nullable variables and null checks

A reference must be explicitly marked as nullable when a null value is possible.

Return null if str does not hold an integer:

fun parseInt(str: String): Int? {
    // ...
}

Using a function that returns a nullable value:

fun parseInt(str: String): Int? {
    return str.toIntOrNull()
}

fun printProduct(arg1: String, arg2: String) {
    val x = parseInt(arg1)
    val y = parseInt(arg2)
    if (x != null && y != null) {
        println(x * y)
    } else {
        println("either '$arg1' or '$arg2' is not a number")
    }
}

fun main(args: Array<String>) {
    printProduct("6", "7")
    printProduct("a", "7")
    printProduct("a", "b")
}

or

// ...
if (x == null) {
    println("Wrong number format in arg1: '$arg1'")
    return
}
if (y == null) {
    println("Wrong number format in arg2: '$arg2'")
    return
}
println(x * y)

Using type checks and automatic casts

The is operator checks whether an expression is an instance of some type. If an immutable local variable or property has already been type-checked, there is no need for an explicit cast.

fun getStringLength(obj: Any): Int? {
    if (obj is String) {
        return obj.length // obj is automatically cast to String
    }
    return null
}

fun main(args: Array<String>) {
    fun printLength(obj: Any) {
        println("'$obj' string length is ${getStringLength(obj) ?: "...err, not a string"}")
    }
    printLength("Incomprehensibilities")
    printLength(1000)
    printLength(listOf(Any()))
}

or

fun getStringLength(obj: Any): Int? {
    if (obj !is String) return null
    return obj.length // obj is automatically cast to String
}

or

fun getStringLength(obj: Any): Int? {
    if (obj is String && obj.length > 0) { // obj is automatically cast to String
        return obj.length
    }
    return null
}

Using for loops

fun main(args: Array<String>) {
    val items = listOf("apple", "banana", "kiwi")
    for (item in items) {
        println(item)
    }
}

or

fun main(args: Array<String>) {
    val items = listOf("apple", "banana", "kiwi")
    for (index in items.indices) {
        println("item at $index is ${items[index]}")
    }
}

Using while loops

fun main(args: Array<String>) {
    val items = listOf("apple", "banana", "kiwi")
    var index = 0
    while (index < items.size) {
        println("item at $index is ${items[index]}")
        index++
    }
}

Using the when expression

fun describe(obj: Any): String = when (obj) {
    1          -> "One"
    "Hello"    -> "Greeting"
    is Long    -> "Long"
    !is String -> "Not a string"
    else       -> "Unknown"
}

fun main(args: Array<String>) {
    println(describe(1))
    println(describe("Hello"))
    println(describe(1000L))
    println(describe(2))
    println(describe("other"))
}

Using ranges

Use the in operator to check whether a number is inside a range:

fun main(args: Array<String>) {
    val x = 10
    val y = 9
    if (x in 1..y + 1) {
        println("fits in range")
    }
}

Check whether a number is outside a range:

fun main(args: Array<String>) {
    val list = listOf("a", "b", "c")
    if (-1 !in 0..list.lastIndex) {
        println("-1 is out of range")
    }
    if (list.size !in list.indices) {
        println("list size is out of valid list indices range too")
    }
}

Iterate over a range:

fun main(args: Array<String>) {
    for (x in 1..5) {
        println(x)
    }
}

Or with a step:

fun main(args: Array<String>) {
    for (x in 1..10 step 2) {
        println(x)
    }
    for (x in 9 downTo 0 step 3) {
        println(x)
    }
}

Using collections

Iterating over a collection:

fun main(args: Array<String>) {
    val items = listOf("apple", "banana", "kiwi")
    for (item in items) {
        println(item)
    }
}

Use the in operator to check whether a collection contains an object:

fun main(args: Array<String>) {
    val items = setOf("apple", "banana", "kiwi")
    when {
        "orange" in items -> println("juicy")
        "apple" in items  -> println("apple is fine too")
    }
}

Filtering and mapping a collection with lambda expressions:

fun main(args: Array<String>) {
    val fruits = listOf("banana", "avocado", "apple", "kiwi")
    fruits
        .filter { it.startsWith("a") }
        .sortedBy { it }
        .map { it.toUpperCase() }
        .forEach { println(it) }
}

Idioms

Below are some frequently used Kotlin idioms.

Creating DTOs (POJOs/POCOs):

data class Customer(val name: String, val email: String)

This gives the Customer class the following:
- getters for all properties (and setters for var properties)
- equals()
- hashCode()
- toString()
- copy()
- component1(), component2()

Default parameter values:

fun foo(a: Int = 0, b: String = "") { ... }

Filtering a list:

val positives = list.filter { x -> x > 0 }

or

val positives = list.filter { it > 0 }

String interpolation:

println("Name $name")

Instance checks:

when (x) {
    is Foo -> ...
    is Bar -> ...
    else   -> ...
}

Traversing a map (or a list of pairs):

for ((k, v) in map) {
    println("$k -> $v")
}

Using ranges:

for (i in 1..100) { ... }      // closed range, includes 100
for (i in 1 until 100) { ... } // half-open range, excludes 100
for (x in 2..10 step 2) { ... }
for (x in 10 downTo 1) { ... }
if (x in 1..10) { ... }

A read-only list:

val list = listOf("a", "b", "c")

A read-only map:

val map = mapOf("a" to 1, "b" to 2, "c" to 3)

Accessing a map:

println(map["key"])
map["key"] = value

A lazy property:

val p: String by lazy {
    // compute the string
}

Extension functions:

fun String.spaceToCamelCase() { ... }

"Convert this to camelcase".spaceToCamelCase()

Creating a singleton:

object Resource {
    val name = "Name"
}

Shorthand for "if not null":

val files = File("Test").listFiles()
println(files?.size)

Shorthand for "if not null ... else ...":

val files = File("Test").listFiles()
println(files?.size ?: "empty")

Executing a statement if a value is null:

val data = ...
val email = data["email"] ?: throw IllegalStateException("Email is missing!")

Executing a statement if a value is not null:

val data = ...
data?.let {
    // execute this block if data is not null
}

Returning from a when statement:

fun transform(color: String): Int {
    return when (color) {
        "Red"   -> 0
        "Green" -> 1
        "Blue"  -> 2
        else    -> throw IllegalArgumentException("invalid color param value")
    }
}

A try/catch expression:

fun test() {
    var result = try {
        count()
    } catch (e: ArithmeticException) {
        throw java.lang.IllegalArgumentException(e)
    }
    // handle result
}

An if expression:

fun foo(param: Int) {
    val result = if (param == 1) {
        "one"
    } else if (param == 2) {
        "two"
    } else {
        "three"
    }
}

Builder-style usage of a method that returns Unit:

fun arrayOfMinusOnes(size: Int): IntArray {
    return IntArray(size).apply {
        fill(-1)
    }
}

Single-expression functions:

fun theAnswer() = 42

is equivalent to

fun theAnswer(): Int {
    return 42
}

This combines with other idioms to give shorter, more effective code, e.g. with the when expression:

fun transform(color: String): Int = when (color) {
    "Red"   -> 0
    "Green" -> 1
    "Blue"  -> 2
    else    -> throw java.lang.IllegalArgumentException("invalid color param value")
}

Calling multiple methods on one object with with:

class Turtle {
    fun penDown()
    fun penUp()
    fun turn(degrees: Double)
    fun forward(pixels: Double)
}

val myTurtle = Turtle()
with(myTurtle) {
    penDown()
    for (i in 1..4) {
        forward(100.0)
        turn(90.0)
    }
    penUp()
}

Java 7's try-with-resources:

val stream = Files.newInputStream(Paths.get("/some/file.txt"))
stream.buffered().reader().use { reader ->
    println(reader.readText())
}

A convenient form for a generic function that needs generic type information:

// public final class Gson {
//     ...
//     public <T> T fromJson(JsonElement json, Class<T> classOfT) throws JsonSyntaxException {

inline fun <reified T : Any> Gson.fromJson(json: JsonElement): T = this.fromJson(json, T::class.java)

Using a nullable Boolean:

val b: Boolean? = ...
if (b == true) {
    ...
} else {
    // 'b' is false or null
}

Coding conventions

This page lists the current coding style of the Kotlin language.

Naming style

When in doubt, default to the Java coding conventions, for example:
- use camelCase (avoid underscores in names)
- capitalize the first letter of class names
- start method and property names with a lowercase letter
- indent with 4 spaces
- public methods should carry documentation, so that it shows up in Kotlin's generated docs

Colon

There is a space before a colon that separates a subtype from its supertype, and no space before a colon that separates an instance from its type:

interface Foo<out T : Any> : Bar {
    fun foo(a: Int): T
}

Lambdas

In lambda expressions, put spaces around the curly braces and around the arrow that separates the parameters from the body. Whenever possible, a lambda should be passed outside the parentheses:

list.filter { it > 10 }.map { element -> element * 2 }

In short, non-nested lambdas, it is recommended to use it rather than declaring the parameter explicitly. In nested lambdas with parameters, parameters should always be declared explicitly.

Class header formatting

A class with only a few parameters can be written on a single line:

class Person(id: Int, name: String)

A class with a longer header should be formatted so that each constructor parameter sits on its own indented line, with the closing parenthesis on a new line. If inheritance is used, the superclass constructor call or the list of implemented interfaces should be on the same line as the parenthesis:

class Person(
    id: Int,
    name: String,
    surname: String
) : Human(id, name) {
    // ...
}

With multiple interfaces, the superclass constructor call comes first, and then each interface goes on its own line:

class Person(
    id: Int,
    name: String,
    surname: String
) : Human(id, name),
    KotlinMaker {
    // ...
}

Constructor parameters may use either regular indentation or continuation indentation (double the regular indentation).

Unit

If a function returns Unit, the return type should be omitted:

fun foo() {
    // ": Unit" is omitted here
}

Functions vs. properties

In some cases a function with no arguments is interchangeable with a read-only property. Although the semantics are similar, there are stylistic conventions for when to prefer one over the other.

Prefer a property over a function when the underlying algorithm (see the sketch below):
- does not need to throw exceptions
- has O(1) complexity
- is cheap to compute (or the result is cached on the first run)
- returns the same result across invocations
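As a sketch of that rule of thumb (my example, not from the original guide): a cheap, exception-free, O(1) derived value reads naturally as a property, while anything with real work behind it is better signalled as a function call.

class Circle(val radius: Double) {
    // Cheap, exception-free, O(1), stable result -> reads best as a property
    val diameter: Double
        get() = radius * 2

    // Noticeable work behind it -> better expressed as a function
    fun approximateArea(samples: Int = 1_000_000): Double {
        var inside = 0
        repeat(samples) {
            val x = Math.random() * radius
            val y = Math.random() * radius
            if (x * x + y * y <= radius * radius) inside++
        }
        // Monte Carlo estimate of pi*r^2 from a quarter circle
        return 4.0 * inside / samples * radius * radius
    }
}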
3.8: Chapter 3 Homework

3.1 Terminology

72. [Figure \(\PageIndex{17}\): a bar graph with three bars for each category on the x-axis — age groups, gender, and total. The first bar shows the number of people in the category, the second bar the percent in the category that approve, and the third bar the percent that disapprove. The y-axis runs from 0 to 1,200 in intervals of 200.]

The graph in Figure \(\PageIndex{17}\) displays the sample sizes and percentages of people in different age and gender groups who were polled concerning their approval of Mayor Ford's actions in office. The total number in the sample of all the age groups is 1,045.

1. Define three events in the graph.
2. Describe in words what the entry 40 means.
3. Describe in words the complement of the entry in question 2.
4. Describe in words what the entry 30 means.
5. Out of the males and females, what percent are males?
6. Out of the females, what percent disapprove of Mayor Ford?
7. Out of all the age groups, what percent approve of Mayor Ford?
8. Find P(Approve|Male).
9. Out of the age groups, what percent are more than 44 years old?
10. Find P(Approve|Age < 35).

73. Explain what is wrong with the following statements. Use complete sentences.
1. If there is a 60% chance of rain on Saturday and a 70% chance of rain on Sunday, then there is a 130% chance of rain over the weekend.
2. The probability that a baseball player hits a home run is greater than the probability that he gets a successful hit.

3.2 Independent and Mutually Exclusive Events

Use the following information to answer the next 12 exercises. The graph shown is based on more than 170,000 interviews done by Gallup that took place from January through December 2012. The sample consists of employed Americans 18 years of age or older. The Emotional Health Index Scores are the sample space. We randomly sample one Emotional Health Index Score.

[Figure \(\PageIndex{18}\): Emotional Health Index Scores by occupation.]

74. Find the probability that an Emotional Health Index Score is 82.7.
75. Find the probability that an Emotional Health Index Score is 81.0.
76. Find the probability that an Emotional Health Index Score is more than 81.
77. Find the probability that an Emotional Health Index Score is between 80.5 and 82.
78. If we know an Emotional Health Index Score is 81.5 or more, what is the probability that it is 82.7?
79. What is the probability that an Emotional Health Index Score is 80.7 or 82.7?
80. What is the probability that an Emotional Health Index Score is less than 80.2, given that it is already less than 81?
81. What occupation has the highest emotional index score?
82. What occupation has the lowest emotional index score?
83. What is the range of the data?
84. Compute the average EHIS.
85. If all occupations are equally likely for a certain individual, what is the probability that he or she will have an occupation with lower than average EHIS?

3.3 Two Basic Rules of Probability

86. On February 28, 2013, a Field Poll Survey reported that 61% of California registered voters approved of allowing two people of the same gender to marry and have regular marriage laws apply to them. Among 18 to 39 year olds (California registered voters), the approval rating was 78%. Six in ten California registered voters said that the upcoming Supreme Court's ruling about the constitutionality of California's Proposition 8 was either very or somewhat important to them.
Out of those CA registered voters who support same-sex marriage, 75% say the ruling is important to them. In this problem, let:
• C = California registered voters who support same-sex marriage.
• B = California registered voters who say the Supreme Court's ruling about the constitutionality of California's Proposition 8 is very or somewhat important to them
• A = California registered voters who are 18 to 39 years old.
1. Find \(P(C)\).
2. Find \(P(B)\).
3. Find \(P(C|A)\).
4. Find \(P(B|C)\).
5. In words, what is \(C|A\)?
6. In words, what is \(B|C\)?
7. Find \(P(C \cap B)\).
8. In words, what is \(C \cap B\)?
9. Find \(P(C \cup B)\).
10. Are C and B mutually exclusive events? Show why or why not.

87. After Rob Ford, the mayor of Toronto, announced his plans to cut budget costs in late 2011, the Forum Research polled 1,046 people to measure the mayor's popularity. Everyone polled expressed either approval or disapproval. These are the results their poll produced:
• In early 2011, 60 percent of the population approved of Mayor Ford's actions in office.
• In mid-2011, 57 percent of the population approved of his actions.
• In late 2011, the percentage of popular approval was measured at 42 percent.
1. What is the sample size for this study?
2. What proportion in the poll disapproved of Mayor Ford, according to the results from late 2011?
3. How many people polled responded that they approved of Mayor Ford in late 2011?
4. What is the probability that a person supported Mayor Ford, based on the data collected in mid-2011?
5. What is the probability that a person supported Mayor Ford, based on the data collected in early 2011?

Use the following information to answer the next three exercises. The casino game, roulette, allows the gambler to bet on the probability of a ball, which spins in the roulette wheel, landing on a particular color, number, or range of numbers. The table used to place bets contains 38 numbers, and each number is assigned to a color and a range.

[Figure \(\PageIndex{19}\): a roulette table. (credit: film8ker/wikibooks)]

88.
1. List the sample space of the 38 possible outcomes in roulette.
2. You bet on red. Find P(red).
3. You bet on -1st 12- (1st Dozen). Find P(-1st 12-).
4. You bet on an even number. Find P(even number).
5. Is getting an odd number the complement of getting an even number? Why?
6. Find two mutually exclusive events.
7. Are the events Even and 1st Dozen independent?

89. Compute the probability of winning the following types of bets:
1. Betting on two lines that touch each other on the table, as in 1-2-3-4-5-6
2. Betting on three numbers in a line, as in 1-2-3
3. Betting on one number
4. Betting on four numbers that touch each other to form a square, as in 10-11-13-14
5. Betting on two numbers that touch each other on the table, as in 10-11 or 10-13
6. Betting on 0-00-1-2-3
7. Betting on 0-1-2; or 0-00-2; or 00-2-3

90. Compute the probability of winning the following types of bets:
1. Betting on a color
2. Betting on one of the dozen groups
3. Betting on the range of numbers from 1 to 18
4. Betting on the range of numbers 19–36
5. Betting on one of the columns
6. Betting on an even or odd number (excluding zero)

91. Suppose that you have eight cards. Five are green and three are yellow. The five green cards are numbered 1, 2, 3, 4, and 5. The three yellow cards are numbered 1, 2, and 3. The cards are well shuffled. You randomly draw one card.
• G = card drawn is green
• E = card drawn is even-numbered
1. List the sample space.
2. \(P(G) =\) _____
3. \(P(G|E) =\) _____
4. \(P(G \cap E) =\) _____
5. \(P(G \cup E) =\) _____
6. Are G and E mutually exclusive? Justify your answer numerically.

92. Roll two fair dice separately. Each die has six faces.
1. List the sample space.
2. Let A be the event that either a three or four is rolled first, followed by an even number. Find \(P(A)\).
3. Let B be the event that the sum of the two rolls is at most seven. Find \(P(B)\).
4. In words, explain what "\(P(A|B)\)" represents. Find \(P(A|B)\).
5. Are A and B mutually exclusive events? Explain your answer in one to three complete sentences, including numerical justification.
6. Are A and B independent events? Explain your answer in one to three complete sentences, including numerical justification.

93. A special deck of cards has ten cards. Four are green, three are blue, and three are red. When a card is picked, its color is recorded. An experiment consists of first picking a card and then tossing a coin.
1. List the sample space.
2. Let A be the event that a blue card is picked first, followed by landing a head on the coin toss. Find P(A).
3. Let B be the event that a red or green is picked, followed by landing a head on the coin toss. Are the events A and B mutually exclusive? Explain your answer in one to three complete sentences, including numerical justification.
4. Let C be the event that a red or blue is picked, followed by landing a head on the coin toss. Are the events A and C mutually exclusive? Explain your answer in one to three complete sentences, including numerical justification.

94. An experiment consists of first rolling a die and then tossing a coin.
1. List the sample space.
2. Let A be the event that either a three or a four is rolled first, followed by landing a head on the coin toss. Find P(A).
3. Let B be the event that the first and second tosses land on heads. Are the events A and B mutually exclusive? Explain your answer in one to three complete sentences, including numerical justification.

95. An experiment consists of tossing a nickel, a dime, and a quarter. Of interest is the side the coin lands on.
1. List the sample space.
2. Let A be the event that there are at least two tails. Find P(A).
3. Let B be the event that the first and second tosses land on heads. Are the events A and B mutually exclusive? Explain your answer in one to three complete sentences, including justification.

96. Consider the following scenario: Let \(P(C) = 0.4\). Let \(P(D) = 0.5\). Let \(P(C|D) = 0.6\).
1. Find \(P(C \cap D)\).
2. Are C and D mutually exclusive? Why or why not?
3. Are C and D independent events? Why or why not?
4. Find \(P(C \cup D)\).
5. Find \(P(D|C)\).

97. Y and Z are independent events.
1. Rewrite the basic Addition Rule \(P(Y \cup Z) = P(Y) + P(Z) - P(Y \cap Z)\) using the information that Y and Z are independent events.
2. Use the rewritten rule to find \(P(Z)\) if \(P(Y \cup Z) = 0.71\) and \(P(Y) = 0.42\).

98. G and H are mutually exclusive events. \(P(G) = 0.5\), \(P(H) = 0.3\).
1. Explain why the following statement MUST be false: \(P(H|G) = 0.4\).
2. Find \(P(H \cup G)\).
3. Are G and H independent or dependent events? Explain in a complete sentence.

99. Approximately 281,000,000 people over age five live in the United States. Of these people, 55,000,000 speak a language other than English at home. Of those who speak another language at home, 62.3% speak Spanish.
Let: E = speaks English at home; E′ = speaks another language at home; S = speaks Spanish.

Finish each probability statement by matching the correct answer.

Probability statement | Answer
a. \(P(E′) =\) | i. 0.8043
b. \(P(E) =\) | ii. 0.623
c. \(P(S \cap E′) =\) | iii. 0.1957
d. \(P(S|E′) =\) | iv. 0.1219

Table \(\PageIndex{14}\)

100. In 1994, the U.S. government held a lottery to issue 55,000 Green Cards (permits for non-citizens to work legally in the U.S.). Renate Deutsch, from Germany, was one of approximately 6.5 million people who entered this lottery. Let G = won green card.
1. What was Renate's chance of winning a Green Card? Write your answer as a probability statement.
2. In the summer of 1994, Renate received a letter stating she was one of 110,000 finalists chosen. Once the finalists were chosen, assuming that each finalist had an equal chance to win, what was Renate's chance of winning a Green Card? Write your answer as a conditional probability statement. Let F = was a finalist.
3. Are G and F independent or dependent events? Justify your answer numerically and also explain why.
4. Are G and F mutually exclusive events? Justify your answer numerically and explain why.

101. Three professors at George Washington University did an experiment to determine if economists are more selfish than other people. They dropped 64 stamped, addressed envelopes with $10 cash in different classrooms on the George Washington campus. 44% were returned overall. From the economics classes 56% of the envelopes were returned. From the business, psychology, and history classes 31% were returned. Let: R = money returned; E = economics classes; O = other classes.
1. Write a probability statement for the overall percent of money returned.
2. Write a probability statement for the percent of money returned out of the economics classes.
3. Write a probability statement for the percent of money returned out of the other classes.
4. Is money being returned independent of the class? Justify your answer numerically and explain it.
5. Based upon this study, do you think that economists are more selfish than other people? Explain why or why not. Include numbers to justify your answer.

102. The following table of data obtained from www.baseball-almanac.com shows hit information for four players. Suppose that one hit from the table is randomly selected.

Name | Single | Double | Triple | Home run | Total hits
Babe Ruth | 1,517 | 506 | 136 | 714 | 2,873
Jackie Robinson | 1,054 | 273 | 54 | 137 | 1,518
Ty Cobb | 3,603 | 174 | 295 | 114 | 4,189
Hank Aaron | 2,294 | 624 | 98 | 755 | 3,771
Total | 8,471 | 1,577 | 583 | 1,720 | 12,351

Table \(\PageIndex{15}\)

Are "the hit being made by Hank Aaron" and "the hit being a double" independent events?
1. Yes, because P(hit by Hank Aaron|hit is a double) = P(hit by Hank Aaron)
2. No, because P(hit by Hank Aaron|hit is a double) ≠ P(hit is a double)
3. No, because P(hit is by Hank Aaron|hit is a double) ≠ P(hit by Hank Aaron)
4. Yes, because P(hit is by Hank Aaron|hit is a double) = P(hit is a double)

103. United Blood Services is a blood bank that serves more than 500 hospitals in 18 states. According to their website, a person with type O blood and a negative Rh factor (Rh-) can donate blood to any person with any blood type. Their data show that 43% of people have type O blood and 15% of people have Rh- factor; 52% of people have type O or Rh- factor.
1. Find the probability that a person has both type O blood and the Rh- factor.
2. Find the probability that a person does NOT have both type O blood and the Rh- factor.
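These exercises lean repeatedly on a handful of identities; stated once here for reference:

\[ P(A \mid B) = \frac{P(A \cap B)}{P(B)}, \qquad P(A \cup B) = P(A) + P(B) - P(A \cap B), \]
\[ A, B \text{ independent} \iff P(A \cap B) = P(A)\,P(B), \qquad A, B \text{ mutually exclusive} \iff P(A \cap B) = 0. \]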
104. At a college, 72% of courses have final exams and 46% of courses require research papers. Suppose that 32% of courses have a research paper and a final exam. Let F be the event that a course has a final exam. Let R be the event that a course requires a research paper.
1. Find the probability that a course has a final exam or a research project.
2. Find the probability that a course has NEITHER of these two requirements.

105. In a box of assorted cookies, 36% contain chocolate and 12% contain nuts. Of those, 8% contain both chocolate and nuts. Sean is allergic to both chocolate and nuts.
1. Find the probability that a cookie contains chocolate or nuts (he can't eat it).
2. Find the probability that a cookie does not contain chocolate or nuts (he can eat it).

106. A college finds that 10% of students have taken a distance learning class and that 40% of students are part time students. Of the part time students, 20% have taken a distance learning class. Let D = event that a student takes a distance learning class and E = event that a student is a part time student.
1. Find \(P(D \cap E)\).
2. Find \(P(E|D)\).
3. Find \(P(D \cup E)\).
4. Using an appropriate test, show whether D and E are independent.
5. Using an appropriate test, show whether D and E are mutually exclusive.

3.5 Venn Diagrams

Use the information in Table \(\PageIndex{16}\) to answer the next eight exercises. The table shows the political party affiliation of each of 67 members of the US Senate in June 2012, and when they are up for reelection.

Up for reelection: | Democratic party | Republican party | Other | Total
November 2014 | 20 | 13 | 0 |
November 2016 | 10 | 24 | 0 |
Total | | | |

Table \(\PageIndex{16}\)

107. What is the probability that a randomly selected senator has an "Other" affiliation?
108. What is the probability that a randomly selected senator is up for reelection in November 2016?
109. What is the probability that a randomly selected senator is a Democrat and up for reelection in November 2016?
110. What is the probability that a randomly selected senator is a Republican or is up for reelection in November 2014?
111. Suppose that a member of the US Senate is randomly selected. Given that the randomly selected senator is up for reelection in November 2016, what is the probability that this senator is a Democrat?
112. Suppose that a member of the US Senate is randomly selected. What is the probability that the senator is up for reelection in November 2014, knowing that this senator is a Republican?
113. The events "Republican" and "Up for reelection in 2016" are ________
1. mutually exclusive.
2. independent.
3. both mutually exclusive and independent.
4. neither mutually exclusive nor independent.
114. The events "Other" and "Up for reelection in November 2016" are ________
1. mutually exclusive.
2. independent.
3. both mutually exclusive and independent.
4. neither mutually exclusive nor independent.

115. Table \(\PageIndex{17}\) gives the number of participants in the recent National Health Interview Survey who had been treated for cancer in the previous 12 months. The results are sorted by age, race (black or white), and sex. We are interested in possible relationships between age, race, and sex. We will let this group of cancer patients be our population.
Race and sex | 15–24 | 25–40 | 41–65 | Over 65 | TOTALS
White, male | 1,165 | 2,036 | 3,703 | | 8,395
White, female | 1,076 | 2,242 | 4,060 | | 9,129
Black, male | 142 | 194 | 384 | | 824
Black, female | 131 | 290 | 486 | | 1,061
All others | | | | |
TOTALS | 2,792 | 5,279 | 9,354 | | 21,081

Table \(\PageIndex{17}\)

Do not include "all others" for parts f and g.
a. Fill in the column for cancer treatment for individuals over age 65.
b. Fill in the row for all other races.
c. Find the probability that a randomly selected individual was a white male.
d. Find the probability that a randomly selected individual was a black female.
e. Find the probability that a randomly selected individual was black.
f. Find the probability that a randomly selected individual was male.
g. Out of the individuals over age 65, find the probability that a randomly selected individual was a black or white male.

Use the following information to answer the next two exercises. The table of data obtained from www.baseball-almanac.com shows hit information for four well known baseball players. Suppose that one hit from the table is randomly selected.

Name | Single | Double | Triple | Home run | TOTAL HITS
Babe Ruth | 1,517 | 506 | 136 | 714 | 2,873
Jackie Robinson | 1,054 | 273 | 54 | 137 | 1,518
Ty Cobb | 3,603 | 174 | 295 | 114 | 4,189
Hank Aaron | 2,294 | 624 | 98 | 755 | 3,771
TOTAL | 8,471 | 1,577 | 583 | 1,720 | 12,351

Table \(\PageIndex{18}\)

116. Find P(hit was made by Babe Ruth).
1. \(\frac{1518}{2873}\)
2. \(\frac{2873}{12351}\)
3. \(\frac{583}{12351}\)
4. \(\frac{4189}{12351}\)
117. Find P(hit was made by Ty Cobb|The hit was a Home Run).
1. \(\frac{4189}{12351}\)
2. \(\frac{114}{1720}\)
3. \(\frac{1720}{4189}\)
4. \(\frac{114}{12351}\)

118. Table \(\PageIndex{19}\) identifies a group of children by one of four hair colors, and by type of hair.

Hair type | Brown | Blond | Black | Red | Totals
Wavy | 20 | | 15 | 3 | 43
Straight | 80 | 15 | | 12 |
Totals | | 20 | | | 215

Table \(\PageIndex{19}\)

1. Complete the table.
2. What is the probability that a randomly selected child will have wavy hair?
3. What is the probability that a randomly selected child will have either brown or blond hair?
4. What is the probability that a randomly selected child will have wavy brown hair?
5. What is the probability that a randomly selected child will have red hair, given that he or she has straight hair?
6. If B is the event of a child having brown hair, find the probability of the complement of B.
7. In words, what does the complement of B represent?

119. In a previous year, the weights of the members of the San Francisco 49ers and the Dallas Cowboys were published in the San Jose Mercury News. The factual data were compiled into the following table.

Shirt # | ≤ 210 | 211–250 | 251–290 | > 290
1–33 | 21 | 5 | 0 | 0
34–66 | 6 | 18 | 7 | 4
66–99 | 6 | 12 | 22 | 5

Table \(\PageIndex{20}\)

For the following, suppose that you randomly select one player from the 49ers or Cowboys.
1. Find the probability that his shirt number is from 1 to 33.
2. Find the probability that he weighs at most 210 pounds.
3. Find the probability that his shirt number is from 1 to 33 AND he weighs at most 210 pounds.
4. Find the probability that his shirt number is from 1 to 33 OR he weighs at most 210 pounds.
5. Find the probability that his shirt number is from 1 to 33 GIVEN that he weighs at most 210 pounds.

Use the following information to answer the next two exercises. This tree diagram shows the tossing of an unfair coin followed by drawing one bead from a cup containing three red (R), four yellow (Y) and five blue (B) beads.
For the coin, P(H) = \(\frac{2}{3}\) and P(T) = \(\frac{1}{3}\), where H is heads and T is tails.

[Figure \(\PageIndex{20}\): a tree diagram with two levels — the first level has branches H = 2/3 and T = 1/3; under each, the second level has branches R = 3/12, Y = 4/12, and B = 5/12.]

120. Find P(tossing a Head on the coin AND a Red bead).
1. \(\frac{2}{3}\)
2. \(\frac{5}{15}\)
3. \(\frac{6}{36}\)
4. \(\frac{5}{36}\)
121. Find P(Blue bead).
1. \(\frac{15}{36}\)
2. \(\frac{10}{36}\)
3. \(\frac{10}{12}\)
4. \(\frac{6}{36}\)
122. A box of cookies contains three chocolate and seven butter cookies. Miguel randomly selects a cookie and eats it. Then he randomly selects another cookie and eats it. (How many cookies did he take?)
1. Draw the tree that represents the possibilities for the cookie selections. Write the probabilities along each branch of the tree.
2. Are the probabilities for the flavor of the SECOND cookie that Miguel selects independent of his first selection? Explain.
3. For each complete path through the tree, write the event it represents and find the probabilities.
4. Let S be the event that both cookies selected were the same flavor. Find P(S).
5. Let T be the event that the cookies selected were different flavors. Find P(T) by two different methods: by using the complement rule and by using the branches of the tree. Your answers should be the same with both methods.
6. Let U be the event that the second cookie selected is a butter cookie. Find P(U).
News: IBM Watson computer to compete on US game show
Discussion in 'Article Discussion', started by Tim S, 27 Apr 2009.

steveo_mcg: Get it on Weakest Link — it couldn't possibly do any worse than some of the half wits that go on that.

zimbloggy: "understand the questions and provide answers that make sense" — Don't you mean "understand the answers and provide the questions that make sense"?

Gunsmith: would you like to play a game ?

Skiddywinks: Interesting. Looking forward to seeing how it performs.

biebiep: Classic :') Edit: Also, after the show, the world will pause and SkyNet will launch... :p

Zut: Meh! Question answering systems are much dumber than they look. The sad truth is that statistical methods and Google-style brute force processing are very effective. "Critical thinking" has nothing to do with it!

thehippoz: is that Weakest Link show still around? I liked watching that red headed lady, she was pretty witty :D and yeah.. writing a brute force google app which probably presses I feel lucky on top of it XD

HourBeforeDawn: so wait, it's going to be on that type of show — how is this fair? I mean, if it has wifi access it will be able to google all the answers lol

jsheff: "understand the questions and provide answers that make sense" / "Don't you mean 'understand the answers and provide the questions that make sense'?" — Don't you mean "Understand the answers and provide the questions that are right"? It can't win if all it's doing is "making sense"!

KingofthePaupers: Jct: "Doing this ends inflation of money?" is one question I'm the only human who claims to have solved, and I'd bet Watson cannot.
NOIP 2007 Senior Division — Solution Report

Talk less, do more. :)

1. Counting numbers (统计数字)

An O(n log n) sort plus an O(n) counting pass; easy points.

var i,n,count:longint;
    a:array[1..200000]of longint;

procedure qsort(l,r:longint);
var i,j,x,t:longint;
begin
  i:=l;j:=r;x:=a[(l+r)shr 1];
  repeat
    while a[i]<x do inc(i);
    while a[j]>x do dec(j);
    if i<=j then begin
      t:=a[i];a[i]:=a[j];a[j]:=t;
      inc(i);dec(j);
    end;
  until i>=j;
  if i<r then qsort(i,r);
  if l<j then qsort(l,j);
end;

begin
  readln(n);
  for i:=1 to n do readln(a[i]);
  qsort(1,n);
  count:=1;
  for i:=2 to n do
    if a[i]<>a[i-1] then begin
      writeln(a[i-1],' ',count);
      count:=1;
    end else inc(count);
  writeln(a[n],' ',count);
end.

2. Expanding the string (字符串的展开)

A simulation problem; there is nothing special to say — just follow the statement. Longer code does not necessarily mean a higher chance of passing on the first submission; shorter code may actually be easier to get accepted in one go. This time I got AC on my first submission, whereas back in junior high I always had to submit n times before passing.

var p1,p2,p3,i:longint;
    c:char;
    s:string;

{ The original paste of this program lost several characters; the main loop
  and writech are reconstructed from the problem statement. writech prints
  the filler character c, p2 times: lowercase for p1=1, uppercase for p1=2,
  '*' for p1=3. }
procedure writech(c:char);
var j:longint;
begin
  for j:=1 to p2 do
    case p1 of
      1: write(c);          { input letters are already lowercase }
      2: write(upcase(c));  { digits are unaffected by upcase }
      3: write('*');
    end;
end;

begin
  assign(input,'expand.in');reset(input);
  assign(output,'expand.out');rewrite(output);
  readln(p1,p2,p3);
  readln(s);
  for i:=1 to length(s) do
    if (s[i]='-')and(i>1)and(i<length(s))
       and(((s[i-1]in['a'..'z'])and(s[i+1]in['a'..'z']))
        or ((s[i-1]in['0'..'9'])and(s[i+1]in['0'..'9'])))
       and(s[i-1]<s[i+1]) then begin
      if p3=1 then
        for c:=succ(s[i-1]) to pred(s[i+1]) do writech(c)
      else
        for c:=pred(s[i+1]) downto succ(s[i-1]) do writech(c);
    end else
      write(s[i]);
  writeln;
  close(input);close(output);
end.

3. The matrix game (矩阵取数游戏)

The problem can be thought of this way: handle each row independently, where at each step you may only take the head or the tail of the row; process the n rows one after another like this.

Consider row i, and let f[l,r] be the maximum score for taking the interval [l,r]. Clearly f[l,r] can only be the larger of f[l+1,r] + a[i,l] and f[l,r-1] + a[i,r], with each newly taken value weighted by the appropriate power of two — the code multiplies by pow[m-right+left]; see the recap below. One row costs O(m²), so the total complexity is O(n·m²).

Note that the answer needs big integers.

type bigint=array[0..40]of longint;

var n,m,i,j,x:longint;
    tmp,ans:bigint;
    pow:array[1..80]of bigint;
    f,a:array[1..80,1..80]of bigint;

operator +(const a,b:bigint) c:bigint;
var i,x:longint;
begin
  fillchar(c,sizeof(c),0);
  if a[0]>b[0] then c[0]:=a[0] else c[0]:=b[0];
  x:=0;
  for i:=1 to c[0] do begin
    x:=a[i]+b[i]+x div 10;
    c[i]:=x mod 10;
  end;
  if x>9 then begin
    inc(c[0]);
    c[c[0]]:=1;
  end;
end;

operator *(const a,b:bigint) c:bigint;
var i,j,x:longint;
begin
  fillchar(c,sizeof(c),0);
  for i:=1 to a[0] do begin
    x:=0;
    for j:=1 to b[0] do begin
      x:=a[i]*b[j]+x+c[i+j-1];
      c[i+j-1]:=x mod 10;
      x:=x div 10;
    end;
    c[i+j]:=x;
  end;
  c[0]:=a[0]+b[0];
  while (c[0]>0)and(c[c[0]]=0) do dec(c[0]);
end;

operator <(const a,b:bigint) ans:boolean;
var i:longint;
begin
  if a[0]<>b[0] then exit(a[0]<b[0]);
  for i:=a[0] downto 1 do
    if a[i]<>b[i] then exit(a[i]<b[i]);
  exit(false);
end;

procedure calc(left,right:longint);
var tmp1,tmp2:bigint;
begin
  if f[left,right][0]>0 then exit;
  if left<right then begin
    calc(left+1,right);
    tmp1:=f[left+1,right]+pow[m-right+left]*a[i,left];
    calc(left,right-1);
    tmp2:=f[left,right-1]+pow[m-right+left]*a[i,right];
    if tmp1<tmp2 then f[left,right]:=tmp2
    else f[left,right]:=tmp1;
  end else
    f[left,right]:=pow[m]*a[i,left];
end;

begin
  assign(input,'game.in');reset(input);
  assign(output,'game.out');rewrite(output);
  pow[1][0]:=1;pow[1][1]:=2;
  for i:=2 to 80 do pow[i]:=pow[i-1]*pow[1];
  readln(n,m);
  for i:=1 to n do
    for j:=1 to m do begin
      read(x);
      while x>0 do begin
        inc(a[i,j][0]);
        a[i,j][a[i,j][0]]:=x mod 10;
        x:=x div 10;
      end;
    end;
  ans[0]:=1;
  for i:=1 to n do begin
    fillchar(f,sizeof(f),0);
    calc(1,m);
    ans:=ans+f[1,m];
  end;
  for i:=ans[0] downto 1 do write(ans[i]);
  close(input);close(output);
end.

4. The core of a tree network (树网的核)

Here is a simple method:

Spend O(n³) running Floyd to compute the distance d[i,j] between every pair of points (or run SPFA from every node, which brings the complexity down to roughly O(n²)). Then, in O(n²), scan d[st,ed] to find the length of the tree's diameter (alternatively, two DFS passes find a diameter in 2n). After that, enumerate the paths i→j on a diameter st→ed — that is, enumerate the candidate cores (time O(n²)) — and evaluate Ecc(i→j) in O(1) each.

For a core i→j on the diameter st→ed (note: provided it is a core), Ecc(i→j) = max(min(dist[st,i], dist[st,j]), min(dist[i,ed], dist[j,ed])). Only these four values need to be considered, because i (respectively j) is farther from st (respectively ed) than any other point — if that were not so, st→ed would not be a diameter.

Moreover, it can be shown that only the optimal core eccentricity on a single diameter needs to be considered (as for how to prove this, I honestly don't know), so examining one diameter is enough.

Total complexity = O(n³ + n² + n²) = O(n³). With SPFA this drops to roughly O(n²). For n ≤ 300 it passes easily.

For further study, see:
http://tieba.baidu.com/f?kz=842504583
An O(n) algorithm: http://www.cnblogs.com/yymore/archive/2011/07/01/2095962.html

{$inline on}
var top,ecc,ans,st,ed,i,j,w,n,m,dia:longint;
    s:array[1..300]of boolean;
    q:array[1..300*300]of longint;
    dist:array[1..300,1..300]of longint;
    g:array[1..300+300*300]of record v,w,next:longint;end;

function min(a,b:longint):longint;inline;begin if(a<b)then exit(a)else exit(b)end;
function max(a,b:longint):longint;inline;begin if(a>b)then exit(a)else exit(b)end;

procedure join(x,y,w:longint);inline;
begin
  inc(top);
  g[top].v:=y;
  g[top].w:=w;
  g[top].next:=g[x].next;
  g[x].next:=top;
end;

procedure SPFA(source:longint);
var u,v,head,tail:longint;
begin
  fillchar(dist[source],sizeof(dist[source]),63);
  fillchar(s,sizeof(s),0);
  head:=0;tail:=1;
  q[tail]:=source;
  dist[source,source]:=0;
  s[source]:=true;
  repeat
    inc(head);
    u:=q[head];
    v:=g[u].next;
    while v<>-1 do begin
      if dist[source,g[v].v]>dist[source,u]+g[v].w then begin
        dist[source,g[v].v]:=dist[source,u]+g[v].w;
        if not s[g[v].v] then begin
          s[g[v].v]:=true;
          inc(tail);
          q[tail]:=g[v].v;
        end;
      end;
      v:=g[v].next;
    end;
    s[u]:=false;
  until head>=tail;
end;

begin
  assign(input,'core.in');reset(input);
  assign(output,'core.out');rewrite(output);
  readln(n,m);
  for i:=1 to n do g[i].next:=-1;
  top:=n;
  for i:=1 to n-1 do begin
    readln(st,ed,w);
    join(st,ed,w);
    join(ed,st,w);
  end;
  for i:=1 to n do SPFA(i);
  dia:=0;
  for i:=1 to n-1 do
    for j:=i+1 to n do
      if dist[i,j]>dia then dia:=dist[i,j];
  for st:=1 to n-1 do
    for ed:=st+1 to n do
      if dist[st,ed]=dia then begin
        ans:=maxlongint;
        for i:=1 to n do
          if dist[st,i]+dist[i,ed]=dia then
            for j:=1 to n do
              if (dist[st,j]+dist[j,ed]=dia)and(dist[i,j]<=m) then begin
                ecc:=max(min(dist[st,i],dist[st,j]),min(dist[i,ed],dist[j,ed]));
                ans:=min(ans,ecc);
              end;
        writeln(ans);
        close(input);close(output);
        halt;
      end;
  close(input);close(output);
end.
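To state the two key formulas of problems 3 and 4 explicitly (read directly off the code above — here a_{i,l} is the l-th element of row i, m the row length, and s, e the endpoints of the diameter):

\[ f(l, r) = \max\Bigl( f(l+1, r) + 2^{\,m-(r-l)}\, a_{i,l},\; f(l, r-1) + 2^{\,m-(r-l)}\, a_{i,r} \Bigr), \qquad f(l, l) = 2^{m}\, a_{i,l} \]
\[ \mathrm{Ecc}(i \to j) = \max\bigl( \min(d_{s,i},\, d_{s,j}),\; \min(d_{i,e},\, d_{j,e}) \bigr) \]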
Question: The following crosstabulation shows household income by educational level of the head of household (Statistical Abstract of the United States, 2008).

a. Develop a joint probability table.
b. What is the probability of a head of household not being a high school graduate?
c. What is the probability of a head of household having a bachelor's degree or more education?
d. What is the probability of a household headed by someone with a bachelor's degree earning $100,000 or more?
e. What is the probability of a household having income below $25,000?
f. What is the probability of a household headed by someone with a bachelor's degree earning less than $25,000?
g. Is household income independent of educational level?
First, here's the concise summary of the question: Is it possible to run an INSERT statement conditionally? Something akin to this:

IF(expression) INSERT...

Now, I know I can do this with a stored procedure. My question is: can I do this in my query?

Now, why would I want to do that? Let's assume we have the following 2 tables:

products: id, qty_on_hand
orders: id, product_id, qty

Now, let's say an order for 20 Voodoo Dolls (product id 2) comes in. We first check if there's enough Quantity On Hand:

SELECT IF(
    ( SELECT SUM(qty) FROM orders WHERE product_id = 2 ) + 20
    <= ( SELECT qty_on_hand FROM products WHERE id = 2 ),
    'true', 'false');

Then, if it evaluates to true, we run an INSERT query. So far so good.

However, there's a problem with concurrency. If 2 orders come in at the exact same time, they might both read the quantity-on-hand before any one of them has entered the order. They'll then both place the order, thus exceeding the qty_on_hand.

So, back to the root of the question: Is it possible to run an INSERT statement conditionally, so that we can combine both these queries into one? I searched around a lot, and the only type of conditional INSERT statement that I could find was ON DUPLICATE KEY, which obviously does not apply here.

Accepted answer:

INSERT INTO TABLE
SELECT value_for_column1, value_for_column2, ...
FROM wherever
WHERE your_special_condition

If no rows are returned from the select (because your special condition is false) no insert happens.

Using your schema from question (assuming your id column is auto_increment):

insert into orders (product_id, qty)
select 2, 20
where (SELECT qty_on_hand FROM products WHERE id = 2) > 20;

This will insert no rows if there's not enough stock on hand, otherwise it will create the order row. Nice idea btw!

Comments on the accepted answer:
- "See edited answer..." – Bohemian
- "@Bohemian: No need for two SELECT statements. One will suffice. Take a look at my answer. :)" – Shef
- "True, but I was trying to match my general pattern. I like your answer though... +1" – Bohemian
- "@Joseph Silber: To do what??? Where is the subtraction there? What does subtraction have to do with normalization? :D Do you understand that the above query is composed of two SELECT statements compared to mine, which runs with one? (Nothing against your answer, or you, Bohemian)." – Shef
- "@Shef: In your code, after INSERT, if affected rows = 1, you'll UPDATE ... qty_on_hand = qty_on_hand - 20. That's how you keep track of stock. However, the way I do it (see my code examples above) is that qty_on_hand never changes. It is always set to the amount of stock in the warehouse, regardless of how many have sold. Then, when I want to check if a product is available, I compare SUM(qty) FROM orders to qty_on_hand FROM products. This is only possible with a subquery (granted, @Bohemian didn't really do it so either, but we're on the same page as far as requiring a subquery)." – Joseph Silber

Another answer:

Try:

INSERT INTO orders(product_id, qty)
SELECT 2, 20
FROM products
WHERE id = 2 AND qty_on_hand >= 20

If a product with id equal to 2 exists and the qty_on_hand is greater or equal to 20 for this product, then an insert will occur with the values product_id = 2, and qty = 20. Otherwise, no insert will occur.

Note: If your product ids are not unique, you might want to add a LIMIT clause at the end of the SELECT statement.

Another answer:

You're probably solving the problem the wrong way. If you're afraid two read-operations will occur at the same time and thus one will work with stale data, the solution is to use locks or transactions.

Have the query do this:
- lock table for read
- read table
- update table
- release lock

Comment: "I'm not so sure locking is the best solution. It might cause serious performance issues." – Joseph Silber

Another answer:

Not sure about concurrency — you'll need to read up on locking in MySQL — but this will let you be sure that you only take 20 items if 20 items are available:

update products
set qty_on_hand = qty_on_hand - 20
where qty_on_hand >= 20
and id = 2

You can then check how many rows were affected. If none were affected, you did not have enough stock. If 1 row was affected, you have effectively consumed the stock.
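For completeness, the lock-based route the later answers gesture at can also be done per-row in InnoDB with SELECT ... FOR UPDATE inside a transaction — a sketch against the schema from the question, not code from the thread:

START TRANSACTION;

-- Lock the product row so concurrent orders serialize on it
SELECT qty_on_hand
FROM products
WHERE id = 2
FOR UPDATE;

-- The application checks the returned value; if 20 more items fit:
INSERT INTO orders (product_id, qty) VALUES (2, 20);

COMMIT;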
Send HSM Template from Make (ex Integromat)
Updated by Ana

Introduction

In this article, we will see how to send an HSM Template using the Landbot API and Make. A common use case is when your opt-in leads are stored in an external database and you need to send the template using that already-stored data.

In the following example, we will imagine that our opt-in leads are stored in Google Sheets and that we are going to send a template every time a new row is completed, so first we will use the Watch Changes module.

1. Let's jump to what concerns us. Create a "Make an HTTP request" module. Then we will follow the information we get from the Message Template section of the WhatsApp channel to make the request:
• URL: https://api.landbot.io/v1/customers/:customerID/send_template/
• Replace :customerID with the data you get from your Google Sheets.
• In Method, select POST
• In Headers, add:
- Name: Authorization. Value: Token XXXXXXXXXXX. Get your API token from here.
- Name: Content-Type. Value: application/json

2. Scroll down the module and complete the body.
• Body type: Raw
• Content type: JSON
• Parse response: Yes

3. In the Request content, paste what you have under --data-raw. If your template has params, you can use Google Sheets data to complete them.

Press OK and it's done! You can activate your scenario and perform a final test by adding a valid customer id to the sheet.

Extra step for HSM templates with buttons

1. If your template has buttons and you want to know what your lead chooses from the bot that is linked to your WhatsApp channel, you will need to add an extra module that makes an unassign request. This prevents the chat from being assigned to an agent, which would block the execution of the bot.
• URL: https://api.landbot.io/v1/customers/:customerID/unassign/
• Replace :customerID with the data you get from your Google Sheets.
• In Method, select PUT
• In Headers, add the same values as above.
• Body type must be empty.

You can see the unassign request configuration in our API docs.
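Outside of Make, the same request from step 1 can be scripted directly — a minimal sketch in Python. The endpoint, method, and headers are exactly the ones listed above; the customer id, token, and the payload string are placeholders you must supply (the payload is the --data-raw JSON from your channel's Message Template section, which is not reproduced here):

import requests

CUSTOMER_ID = "12345"        # placeholder: in the Make scenario this comes from your sheet
TOKEN = "XXXXXXXXXXX"        # your Landbot API token

template_payload = "..."     # paste the --data-raw JSON string here

resp = requests.post(
    f"https://api.landbot.io/v1/customers/{CUSTOMER_ID}/send_template/",
    headers={
        "Authorization": f"Token {TOKEN}",
        "Content-Type": "application/json",
    },
    data=template_payload,
)
resp.raise_for_status()  # raises if Landbot rejected the request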
Autocad - failed install with exit code [69009748]

Hello guys, I receive the error code [69009748] when I try to install AutoCAD 2022 on my laptop (run locally for testing). These are the details of the script (for privacy I've replaced some words with xxx):

<#
.SYNOPSIS
This script performs the installation or uninstallation of Autodesk AutoCAD 2022.
# LICENSE #
PowerShell App Deployment Toolkit - Provides a set of functions to perform common application deployment tasks on Windows.
Copyright (C) 2017 - Sean Lillis, Dan Cunningham, Muhammad Mashwani, Aman Motazedian.
This program is free software: you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation, either version 3 of the License, or any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU Lesser General Public License along with this program. If not, see http://www.gnu.org/licenses/.
.DESCRIPTION
The script is provided as a template to perform an install or uninstall of an application(s).
The script either performs an "Install" deployment type or an "Uninstall" deployment type.
The install deployment type is broken down into 3 main sections/phases: Pre-Install, Install, and Post-Install.
The script dot-sources the AppDeployToolkitMain.ps1 script which contains the logic and functions required to install or uninstall an application.
.PARAMETER DeploymentType
The type of deployment to perform. Default is: Install.
.PARAMETER DeployMode
Specifies whether the installation should be run in Interactive, Silent, or NonInteractive mode. Default is: Interactive. Options: Interactive = Shows dialogs, Silent = No dialogs, NonInteractive = Very silent, i.e. no blocking apps. NonInteractive mode is automatically set if it is detected that the process is not user interactive.
.PARAMETER AllowRebootPassThru
Allows the 3010 return code (requires restart) to be passed back to the parent process (e.g. SCCM) if detected from an installation. If 3010 is passed back to SCCM, a reboot prompt will be triggered.
.PARAMETER TerminalServerMode
Changes to "user install mode" and back to "user execute mode" for installing/uninstalling applications for Remote Desktop Session Hosts/Citrix servers.
.PARAMETER DisableLogging
Disables logging to file for the script. Default is: $false.
.EXAMPLE
PowerShell.exe .\Deploy-AutoCAD_2022.ps1 -DeploymentType "Install" -DeployMode "NonInteractive"
.EXAMPLE
PowerShell.exe .\Deploy-AutoCAD_2022.ps1 -DeploymentType "Install" -DeployMode "Silent"
.EXAMPLE
PowerShell.exe .\Deploy-AutoCAD_2022.ps1 -DeploymentType "Install" -DeployMode "Interactive"
.EXAMPLE
PowerShell.exe .\Deploy-AutoCAD_2022.ps1 -DeploymentType "Uninstall" -DeployMode "NonInteractive"
.EXAMPLE
PowerShell.exe .\Deploy-AutoCAD_2022.ps1 -DeploymentType "Uninstall" -DeployMode "Silent"
.EXAMPLE
PowerShell.exe .\Deploy-AutoCAD_2022.ps1 -DeploymentType "Uninstall" -DeployMode "Interactive"
.NOTES
Toolkit Exit Code Ranges:
60000 - 68999: Reserved for built-in exit codes in Deploy-Application.ps1, Deploy-Application.exe, and AppDeployToolkitMain.ps1
69000 - 69999: Recommended for user customized exit codes in Deploy-Application.ps1
70000 - 79999: Recommended for user customized exit codes in AppDeployToolkitExtensions.ps1
.LINK
http://psappdeploytoolkit.com
#>
[CmdletBinding()]
Param (
    [Parameter(Mandatory=$false)]
    [ValidateSet('Install','Uninstall','Repair')]
    [string]$DeploymentType = 'Install',
    [Parameter(Mandatory=$false)]
    [ValidateSet('Interactive','Silent','NonInteractive')]
    [string]$DeployMode = 'Interactive',
    [Parameter(Mandatory=$false)]
    [switch]$AllowRebootPassThru = $false,
    [Parameter(Mandatory=$false)]
    [switch]$TerminalServerMode = $false,
    [Parameter(Mandatory=$false)]
    [switch]$DisableLogging = $false
)

Try {
    ## Set the script execution policy for this process
    Try { Set-ExecutionPolicy -ExecutionPolicy 'ByPass' -Scope 'Process' -Force -ErrorAction 'Stop' } Catch {}

    ## Variables: Script
    [string] $appScriptVersion = '3.8.2.1'
    [string] $appScriptDate = '18/02/2022'
    [string] $appScriptAuthor = 'XXX'

    ## Variables: Application
    [string] $appVendor = 'Autodesk'
    [string] $appName = 'AutoCAD'
    [string] $appArch = ''
    [string] $appVersion = '24.1.51.0'
    [string] $appRevision = '0001'
    [string] $appLang = 'MUL'

    ## Variables: Install Title Only set here to override defaults set by the toolkit
    [string] $installName = ''
    [string] $installTitle = 'Autodesk Autocad 2022'

    ## Variables: Wrapper
    [string] $XXX_appfriendlyName = $appName
    [string] $XXX_ProcToClose = 'acad,adSSO,AutodeskDesktopApp,AdAppMgrSvc,AdskLicensingService,AdskLicensingAgent,FNPLicensingService'

    #endregion VARIABLE DECLARATION

    #region WRAPPERINIT
    ##* Do not modify section below
    ## Variables: Exit Code
    [int32]$mainExitCode = 0

    ## Variables: Script
    [string]$deployAppScriptFriendlyName = 'Deploy Application'
    [version]$deployAppScriptVersion = [version]'3.8.2'
    [string]$deployAppScriptDate = '08/05/2020'
    [hashtable]$deployAppScriptParameters = $psBoundParameters

    ## Variables: Environment
    If (Test-Path -LiteralPath 'variable:HostInvocation') { $InvocationInfo = $HostInvocation } Else { $InvocationInfo = $MyInvocation }
    [string]$scriptDirectory = Split-Path -Path $InvocationInfo.MyCommand.Definition -Parent

    ## Dot source the required App Deploy Toolkit Functions
    Try {
        [string]$moduleAppDeployToolkitMain = "$scriptDirectory\AppDeployToolkit\AppDeployToolkitMain.ps1"
        If (-not (Test-Path -LiteralPath $moduleAppDeployToolkitMain -PathType 'Leaf')) { Throw "Module does not exist at the specified location [$moduleAppDeployToolkitMain]." }
        If ($DisableLogging) { . $moduleAppDeployToolkitMain -DisableLogging } Else { .
$moduleAppDeployToolkitMain } } Catch { If ($mainExitCode -eq 0){ [int32]$mainExitCode = 60008 } Write-Error -Message "Module [$moduleAppDeployToolkitMain] failed to load: `n$($_.Exception.Message)`n `n$($_.InvocationInfo.PositionMessage)" -ErrorAction 'Continue' ## Exit the script, returning the exit code to SCCM If (Test-Path -LiteralPath 'variable:HostInvocation') { $script:ExitCode = $mainExitCode; Exit } Else { Exit $mainExitCode } } ##* Do not modify section above [boolean]$configShowBalloonNotifications = ShowBalloonTips #endregion WRAPPERINIT ##* Do not modify section above ##*=============================================== ##* END VARIABLE DECLARATION ##*=============================================== If ($deploymentType -ine 'Uninstall' -and $deploymentType -ine 'Repair') { ##*=============================================== ##* PRE-INSTALLATION ##*=============================================== [string]$installPhase = 'Pre-Installation' #*====================================PRE-INSTALLATION BEGIN================================================================== ##*=============================================== ##* INSTALLATION ##*=============================================== [string]$installPhase = 'Installation' # user dialogs (deprecated) if (UseDialogs){ ## Only use for longer installations (Installation duration approx. >3 minutes) Show-InstallationProgress -WindowLocation 'BottomRight' ## Install Autodesk AutoCAD 2022 Write-Log -Message 'Installazione di Autodesk AutoCAD 2022 in corso. Attendere prego...' -Severity 2 -Source $deployAppScriptFriendlyName Execute-Process -Path "$dirFiles\Setup.exe" -Parameters '-W -q -I' -WindowStyle Hidden Write-Log -Message 'Installazione di Autodesk AutoCAD 2022 è stata completata.' -Severity 2 -Source $deployAppScriptFriendlyName ## Remove Autodesk Desktop App Desktop Shortcut (If Present) if (Test-Path -Path "$envPublic\Desktop\Autodesk Desktop App.lnk") { Write-Log -Message "Removing Autodesk Desktop App Desktop Shortcut." Remove-Item -Path "$envPublic\Desktop\Autodesk Desktop App.lnk" -Force -Recurse -ErrorAction SilentlyContinue } ## Disable Data Collection and Use (Autodesk Analytics) Write-Log -Message "Disabling Data Collection and Use (Autodesk Analytics)." [scriptblock]$HKCURegistrySettings = { Set-RegistryKey -Key 'HKCU\Software\Autodesk\MC3' -Name 'ADAOptIn' -Value 0 -Type DWord -SID $UserProfile.SID Set-RegistryKey -Key 'HKCU\Software\Autodesk\MC3' -Name 'ADARePrompted' -Value 1 -Type DWord -SID $UserProfile.SID } Invoke-HKCURegistrySettingsForAllUsers -RegistrySettings $HKCURegistrySettings -ErrorAction SilentlyContinue set-branding } ##*=============================================== ##* POST-INSTALLATION ##*=============================================== [string]$installPhase = 'Post-Installation' } ElseIf ($deploymentType -ieq 'Uninstall') { ##*=============================================== ##* PRE-UNINSTALLATION ##*=============================================== [string]$installPhase = 'Pre-Uninstallation' ## Disable Autodesk Licensing Service Set-Service -Name 'AdskLicensingService' -StartupType 'Disabled' -ErrorAction SilentlyContinue ## Disable FlexNet Licensing Service Set-Service -Name 'FlexNet Licensing Service' -StartupType 'Disabled' -ErrorAction SilentlyContinue # check for pending reboot and stop script execution on true with exit code 1641, if not running in task sequence. 
if (CheckForReboot){ Set-Reboot -ForceExitScript -OnlyOnPendingReboot -MandatoryDeviceRestart } ##*=============================================== ##* UNINSTALLATION ##*=============================================== [string]$installPhase = 'Uninstallation' # user dialogs (deprecated) if (UseDialogs){ ## Only use for longer installations (Installation duration approx. >3 minutes) Show-InstallationProgress -WindowLocation 'BottomRight' ## Uninstall Autodesk AutoCAD 2022 $XML = Get-ChildItem -Path "C:\ProgramData\Autodesk\ODIS\metadata\{1E7D4EF7-A28E-3D3E-BA3C-C6FAE4AAB2E0}\" -Include bundleManifest.xml -File -Recurse -ErrorAction SilentlyContinue If($XML.Exists) { Write-Log -Message "Found $($XML.FullName), now attempting to uninstall Autodesk AutoCAD 2022." Show-InstallationProgress "Uninstalling Autodesk AutoCAD 2022. This may take some time. Please wait..." if (Test-Path -Path "$envProgramFiles\Autodesk\AdODIS\V1\Installer.exe") { Execute-Process -Path "$envProgramFiles\Autodesk\AdODIS\V1\Installer.exe" -Parameters "-i uninstall -q -m C:\ProgramData\Autodesk\ODIS\metadata\{1E7D4EF7-A28E-3D3E-BA3C-C6FAE4AAB2E0}\bundleManifest.xml" -WindowStyle Hidden -IgnoreExitCodes "1603" Sleep -Seconds 5 } } ## Uninstall AutoCAD Open in Desktop Execute-MSI -Action Uninstall -Path '{C8DFC969-241E-4707-A93B-08C88821D22B}' ## Uninstall AutoCAD Material Library Execute-MSI -action Uninstall -Path '{6256584F-B04B-41D4-8A59-44E70940C473}' ## Uninstall Autodesk Single Sign On Component Execute-MSI -action Uninstall -Path '{B9F5BDED-021C-4926-8518-4FA7114B7040}' ## Uninstall Autodesk AutoCAD Performance Feedback Tool 1.3.8 Execute-MSI -action Uninstall -Path '{3EDD9D7F-E305-485B-A0E5-7F6D24A87093}' ## Uninstall Autodesk Desktop App Show-InstallationProgress "Disinstallazione di Autodesk DesktopAPP in corso. Attendere prego..." if (Test-Path -Path "$envProgramFilesX86\Autodesk\Autodesk Desktop App\removeAdAppMgr.exe") { Execute-Process -Path "$envProgramFilesX86\Autodesk\Autodesk Desktop App\removeAdAppMgr.exe" -Parameters "--mode unattended" -WindowStyle Hidden Sleep -Seconds 5 } ## Cleanup Autodesk Directories $Users = Get-ChildItem C:\Users foreach ($user in $Users){ $AutodeskDir1 = "$($user.fullname)\AppData\Local\Autodesk" If (Test-Path $AutodeskDir1) { Write-Log -Message "Cleanup $AutodeskDir1 Directory." Remove-Item -Path $AutodeskDir1 -Force -Recurse -ErrorAction SilentlyContinue Sleep -Seconds 5 } $AutodeskDir2 = "$($user.fullname)\AppData\Roaming\Autodesk" If (Test-Path $AutodeskDir2) { Write-Log -Message "Cleanup $AutodeskDir2 Directory." Remove-Item -Path $AutodeskDir2 -Force -Recurse -ErrorAction SilentlyContinue Sleep -Seconds 5 } if (Test-Path -Path "$envAllUsersProfile\Autodesk\") { Write-Log -Message "Cleanup $envAllUsersProfile\Autodesk\ Directory." Remove-Item -Path "$envAllUsersProfile\Autodesk\" -Force -Recurse -ErrorAction SilentlyContinue Sleep -Seconds 5 } $AutodeskDir3 = "$envprogramfiles\Autodesk" If (Test-Path $AutodeskDir3) { Write-Log -Message "Cleanup $AutodeskDir3 Directory." Remove-Item -Path $AutodeskDir3 -Force -Recurse -ErrorAction SilentlyContinue Sleep -Seconds 5 } $AutodeskDir4 = "$envprogramfiles\Common Files\Autodesk Shared" If (Test-Path $AutodeskDir4) { Write-Log -Message "Cleanup $AutodeskDir4 Directory." Remove-Item -Path $AutodeskDir4 -Force -Recurse -ErrorAction SilentlyContinue Sleep -Seconds 5 } $AutodeskDir5 = "$envprogramfilesx86\Autodesk" If (Test-Path $AutodeskDir5) { Write-Log -Message "Cleanup $AutodeskDir5 Directory." 
Remove-Item -Path $AutodeskDir5 -Force -Recurse -ErrorAction SilentlyContinue Sleep -Seconds 5 } $AutodeskDir6 = "C:\ProgramData\Microsoft\Windows\Start Menu\Programs\AutoCAD 2022 - Italiano (Italian)" If (Test-Path $AutodeskDir6) { Write-Log -Message "Cleanup $AutodeskDir6 Directory." Remove-Item -Path $AutodeskDir6 -Force -Recurse -ErrorAction SilentlyContinue Sleep -Seconds 5 } $AutodeskDir7 = "$($user.fullname)\AppData\Roaming\Autodesk Installer" If (Test-Path $AutodeskDir7) { Write-Log -Message "Cleanup $AutodeskDir7 Directory." Remove-Item -Path $AutodeskDir7 -Force -Recurse -ErrorAction SilentlyContinue Sleep -Seconds 5 } } ## Uninstall Autodesk Genuine Service Stop-Process -Name GenuineService -Force -ErrorAction SilentlyContinue Execute-MSI -Action Uninstall -Path '{1C5DB7B1-CE18-438C-B071-3AD6B8ADA5A0}' Stop-Process -Name message_router -Force -ErrorAction SilentlyContinue ## Remove obsolete registry keys } remove-branding ##*=============================================== ##* POST-UNINSTALLATION ##*=============================================== [string]$installPhase = 'Post-Uninstallation' } ElseIf ($deploymentType -ieq 'Repair') { ##*=============================================== ##* PRE-REPAIR ##*=============================================== [string]$installPhase = 'Pre-Repair' ## Show Progress Message (with the default message) Show-InstallationProgress ##*=============================================== ##* REPAIR ##*=============================================== [string]$installPhase = 'Repair' ##*=============================================== ##* POST-REPAIR ##*=============================================== [string]$installPhase = 'Post-Repair' } ##*=============================================== ##* END SCRIPT BODY ##*=============================================== ## Call the Exit-Script function to perform final cleanup operations Exit-Script -ExitCode $mainExitCode } Catch { [int32]$mainExitCode = 60001 [string]$mainErrorMessage = "$(Resolve-Error)" Write-Log -Message $mainErrorMessage -Severity 3 -Source $deployAppScriptFriendlyName Show-DialogBox -Text $mainErrorMessage -Icon 'Stop' Exit-Script -ExitCode $mainExitCode }

Error code [69009748] is not from PSADT. PSADT is just forwarding the error it got from Setup.exe. According to "autocad error 69009748" at DuckDuckGo it's a licensing error. I bet it would fail if you ran the following in a CMD window: Setup.exe -W -q -I There is an AdskLicensingService.log, but the post I found it in did not say where it was located. You might want to try installing from a local drive instead of the network, too.
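Since PSADT only surfaces the installer's own exit code, the quickest way to confirm this is to run the Autodesk bootstrapper by hand and capture the code yourself. A minimal sketch (the path below is hypothetical; point it at the same Setup.exe your package uses):

# Run the Autodesk bootstrapper silently and capture its exit code.
$setup = 'C:\Packages\AutoCAD2022\Files\Setup.exe'

# Start the installer with the same silent switches the wrapper uses
$proc = Start-Process -FilePath $setup -ArgumentList '-W','-q','-I' -Wait -PassThru

# 0 = success, 3010 = reboot required; anything else (e.g. 69009748)
# comes straight from the Autodesk installer, not from PSADT.
Write-Host "Setup.exe exited with code $($proc.ExitCode)"

If the same code comes back here, the AdskLicensingService.log and the ODIS installer logs (often under %TEMP%, though the exact location can vary by release) are a reasonable place to look next.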
Have you ever thought about the parts of a solid figure? Solid figures have their own characteristics, just like flat figures. Candice and Trevor are wrapping gifts at the mall. A customer comes with a figure that has 10 faces. It is a unique type of jewelry box. Do you know what the shape of the base of this figure is? In this article, you will learn how to identify a figure according to its faces, edges, and vertices. At the end of this article, you will discover the shape of the jewelry box's base.

Faces, Edges, and Vertices of Solids

Previously, we worked on identifying solids. Now we need to know how to classify them more specifically. To do this, let's look at the characteristics of solid figures. The number of faces, edges, and vertices that a solid figure has tells us what type of solid figure it is, so we can use this information to classify solids. We classify or identify solids by how many of each they have. Let's start by looking at faces.

A face is a flat side of a solid figure. Faces are shaped like flat figures, such as triangles, rectangles, and squares. Check out the faces shown below.

[Figure: a face is a flat side of a solid figure]

Each solid figure has multiple faces. We can count the number of faces that the figure has. How many faces does the figure shown above have? It has a face at the bottom base and one at the top base, plus four faces around the sides, so it has six faces in total. What shape do the faces have? They are rectangles. We can name this figure a rectangular prism. With prisms, you can use the shape you see to help you name what type of prism it is.

Now let's look at another solid figure. This figure has only one base. It is at the bottom, and the sides are triangles that meet at a single vertex. This is called a pyramid. Notice that the base of this pyramid is a square, so it is called a square pyramid. The base gives the figure its name.

Now that you understand faces, let's look at edges. We can identify a solid figure by counting the edges. An edge is a place where two faces meet. Edges are straight; they cannot be curved. How many edges does this figure have? Count all the edges where two faces meet. This figure has 8 edges.

Some figures do not have edges because they do not have flat sides. Think of cones, spheres, and cylinders: they do not have edges.

The place where two or more edges meet is called a vertex. A vertex is like a corner. We can count the number of vertices to identify solid figures.

This table gives the number of faces, edges, and vertices of some common solid shapes.

Name              Number of Faces   Number of Edges   Number of Vertices
sphere            0                 0                 0
cone              1                 0                 0
cylinder          2                 0                 0
square pyramid    5                 8                 5
prism             at least 5        at least 9        at least 6

Sometimes you just have to count the faces, edges, and vertices to find out the number in each solid figure. When we look at a rectangular prism, one of the first things we see is that the base names the type of prism. This is especially true of prisms. When we look at a solid figure like a prism or a pyramid, we must think of polygons to figure out what type of prism or pyramid the figure is. In previous math classes, you may have simply named solids as prisms or pyramids, but now you need to be more specific.

First, you can see that each side is a polygon. This means that we are working to identify a type of prism. Let's use the base to identify the prism. The base is a five-sided figure.
We know that a five-sided figure is called a pentagon. It is a pentagonal prism.

When you think about the number of faces, vertices, and edges in solid shapes, you may notice that some patterns appear. We can see a pattern with spheres, cones, and cylinders. Can you guess what it is? To find the pattern, we need to think about the number of faces, edges, and vertices that each shape has. All of these figures are curved in some way, so they have no edges or vertices. What about their faces? A sphere has no faces, a cone has one circular face, and a cylinder has two circular faces. Therefore, the number of faces increases by one from one figure to the next. This is a pattern.

What about prisms? Is there a pattern here? There is definitely a pattern in prisms: as the number of sides of the two parallel bases (the bottom and the top) increases, the number of side faces increases by the same amount. A triangular prism therefore has 3 side faces plus the bottom and top bases, that is, 5 faces in total. A hexagonal prism has 6 side faces plus the bottom and top bases, that is, 8 faces in total.

A prism has a base with n sides. How many faces does the prism have? This means that we can insert any number for n. If we put in 3 and make a triangular prism, how many faces will the prism have? As we said, it will have 3 side faces plus a bottom and a top base, or 5 faces. What happens if we insert 6 for n and make a hexagonal prism? The figure will have 6 side faces plus the bottom and top bases, that is, 8 faces in total. If we insert 9 for n, the figure will have 9 side faces, a bottom base, and a top base, that is, 11 faces in total.

Can you see the pattern? In a prism, the number of side faces is determined by the number of sides of the polygon that forms the base. Then we add two, because there is always a bottom base and a top base. In other words, to find the total number of faces we add 2 to the number of sides of the base. We can write a formula to help us understand this: if the base has n sides, then the prism has n + 2 faces.

Here is another example. A base has seven sides. How many faces does the prism have? If the base has 7 sides, we can use the formula to find the number of faces: n + 2 = number of faces, so 7 + 2 = 9. This figure has nine faces.

Now it's your turn to practice. How many faces does each prism have, given the shape of its base?

Example A: a pentagonal base. Solution: 7 faces
Example B: a nonagonal base. Solution: 11 faces
Example C: a hexagonal base. Solution: 8 faces

Here is the original problem once again. Solid figures have their own characteristics, just like flat figures. Candice and Trevor are wrapping gifts at the mall. A customer brings a figure that has 10 faces. It is a unique type of jewelry box. Do you know what the shape of the base of the figure is? To work this out, we go backwards: if the number of faces is n + 2, then the number of sides of the base is the number of faces minus 2. Here, 10 is the number of faces, and 10 - 2 = 8, so the base is an 8-sided figure. An eight-sided figure is an octagon.

Vocabulary
• Flat figure: A two-dimensional figure.
• Solid figure: A three-dimensional figure.
• Face: A flat side of a solid figure. A figure can have more than one face.
• Prism: A three-dimensional figure with two parallel, congruent polygon bases connected by flat side faces.
• Pyramid: A three-dimensional figure with a polygon as its base and triangular faces that all meet at a single vertex.
• Edge: The line where two faces meet.
• Sphere: A three-dimensional figure in which all points are equidistant from the center.
• Cone: A three-dimensional figure with a circular base and a curved side that meets at a vertex.
• Cylinder: A three-dimensional figure with two circular bases.
• Vertex: A point where two or more edges meet.
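To sum up the prism pattern from this lesson in formula form, here is a short LaTeX summary. The face count is the one derived above; the edge and vertex counts follow from the same counting argument and agree with the table's prism row at n = 3:

\begin{align*}
  \text{faces: }    F &= n + 2  && \text{($n$ side faces plus top and bottom bases)}\\
  \text{edges: }    E &= 3n     && \text{($n$ on each base, $n$ vertical)}\\
  \text{vertices: } V &= 2n     && \text{($n$ on each base)}
\end{align*}
% Check with the jewelry box: F = 10 gives n = F - 2 = 8, an octagonal base.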
Improve Performance of SQL Queries

This article explores effective strategies and techniques to optimize SQL queries, enhancing their performance and efficiency. By implementing these SQL query optimization tips, you can significantly improve the execution speed and overall performance of your database system.

The main goal of tuning queries is to decrease their running time by identifying fragments of SQL code that may cause poor query performance. The runtime of a query can be evaluated by monitoring key metrics of its execution plan via the SQL Server Query Optimizer.

The SQL Server Query Optimizer utilizes a cost-based approach to optimize query execution. Each potential execution plan is assigned a cost, representing the computational resources required. The Query Optimizer's goal is to analyze the available plans and select the one with the lowest estimated cost. For complex SELECT statements, which can have numerous potential plans, the Query Optimizer doesn't exhaustively evaluate every combination. Instead, it employs sophisticated algorithms to identify an execution plan with a cost that closely approximates the minimum possible cost. By using this approach, the Query Optimizer efficiently finds an optimal plan for query execution.

Add Indexes

Adding proper indexes (which may have been overlooked at the design stage) to database tables can significantly enhance query performance and efficiency. In SQL Server, the query optimizer generates an execution plan when executing a query. If it identifies the absence of an index that could improve performance, it includes this information in the warning section of the execution plan. This suggestion highlights the specific columns that could benefit from indexing and provides insights into how performance can be enhanced after implementing the recommended indexes. By heeding these suggestions, you can optimize the execution of your SQL queries and achieve improved performance.

Do not Use Multiple OR Conditions in the WHERE Predicate

To improve the performance of queries having multiple conditions, avoid using the OR operator within a single WHERE predicate. SQL Server does not process OR operations efficiently, as it evaluates each component separately, which can result in poor performance. Instead, either split the query into separate parts with distinct search expressions or find alternative approaches to combine the conditions effectively. For example, the query:

SELECT * FROM people WHERE first_name='John' OR last_name='Doe'

should be optimized as follows:

SELECT * FROM people WHERE first_name='John'
UNION
SELECT * FROM people WHERE last_name='Doe'

This trick allows SQL Server to use the related indexes, and the query will be optimized.

Reduce the Number of JOINs

When including multiple tables in a query and performing joins, there is a risk of overloading the query and producing an inefficient execution plan. The SQL query optimizer needs to determine the order of table joins, how to apply filters and aggregations, and other optimization factors when generating the execution plan. To achieve more efficient query plans, reduce the number of JOIN operators in the query. Remove redundant JOINs by breaking down a single query into multiple separate queries and later joining the results, as in the sketch below. This approach helps streamline the query and remove components that may degrade performance.
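As a hedged illustration of that JOIN-reduction advice (the tables orders, customers, and order_items here are hypothetical), one common pattern is to materialize a heavily filtered subset first and join against that, instead of running one large multi-way join:

-- Step 1: materialize only the rows we actually need into a temp table
SELECT o.order_id, o.customer_id, o.order_date
INTO #recent_orders
FROM orders AS o
WHERE o.order_date >= '2024-01-01';

-- Step 2: join the small temp table instead of the full orders table
SELECT c.customer_name, r.order_id, SUM(i.quantity * i.unit_price) AS total
FROM #recent_orders AS r
JOIN customers   AS c ON c.customer_id = r.customer_id
JOIN order_items AS i ON i.order_id    = r.order_id
GROUP BY c.customer_name, r.order_id;

Whether this wins depends on the cardinalities involved; compare the two execution plans before committing to it.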
Do not Use Leading Wildcards

A wildcard acts as a placeholder at the beginning or end of a LIKE pattern in search or filtering conditions. Use wildcards only at the end of a phrase so that SQL Server can extract the data quickly through the corresponding indexes. For example:

SELECT * FROM people WHERE first_name LIKE 'Jo%'

Some tasks may require searching by the last symbols of a phrase, for example phone numbers ending in "321". The straightforward approach is to use the leading wildcard '%321', however it is not optimized. The workaround is to create a computed column that is the REVERSE of the original and search across it using trailing placeholders. For the task of searching phone numbers ending in "321" it can be done as follows:

CREATE TABLE people(
    id INT IDENTITY PRIMARY KEY,
    first_name VARCHAR(100),
    last_name VARCHAR(100),
    phone VARCHAR(50),
    reversed_phone AS REVERSE(phone) PERSISTED
)
GO
CREATE INDEX idx_reversed_phone ON people(reversed_phone)
GO
--searching for phones that end in 321
SELECT * FROM people WHERE reversed_phone LIKE '123%'

Avoid Using SELECT *

Some queries extract more data than is required due to inaccurate design. Using SELECT * to extract all table columns leads to significant overhead on large databases. Validate all such queries to make sure you actually need the data from every column. Otherwise, specify the exact column list to make SQL Server retrieve only the necessary data, which saves system resources. If the same fields are extracted regularly, build a covering index on these columns: an index containing all the fields required by a query can significantly improve its performance, as in the sketch below.
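A hedged sketch of that covering-index idea (table and column names reuse the hypothetical people table from above): the INCLUDE clause lets the index alone satisfy the query, so the base table is never touched:

-- Covering index: seek on last_name, and carry the other two columns
-- in the leaf pages so the query below never reads the base table.
CREATE INDEX idx_people_last_name_covering
    ON people(last_name)
    INCLUDE (first_name, phone);

-- Served entirely from the index (an index seek, no key lookup)
SELECT first_name, last_name, phone
FROM people
WHERE last_name = 'Doe';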
Larson Algebra and Trigonometry, 6th Edition (Algebra 2 Q&A): selected exercise answers

1b. 0, 1, 2, 5 · 2a. 5, 12 · 2b. 0, 5, 12 · 3a. 1 · 3b. 1 · 3c. -13, -6, 1 · 4a. 4 · 4b. 4 · 8. 0.333 · 10. 0.625

19a. The set of all real numbers less than or equal to 5. 19b. See diagram. 19c. Unbounded.
20a. The set of all real numbers greater than or equal to -2. 20b. See diagram. 20c. Unbounded.
21a. The set of all negative real numbers. 21b. See diagram. 21c. Unbounded.
22a. The set of all real numbers greater than 3. 22b. See diagram. 22c. Unbounded.
23a. The set of all real numbers greater than or equal to 4. 23b. See diagram. 23c. Unbounded.
24a. The set of all real numbers less than 2. 24b. See diagram. 24c. Unbounded.
25a. The set of all real numbers between -2 and 2, not including -2 and 2. 25b. See diagram. 25c. Bounded.
26a. The set of all real numbers between 0 and 5, including 0 and 5. 26b. See diagram. 26c. Bounded.
27a. The set of all real numbers between -1 and 0, including -1 and excluding 0. 27b. See diagram. 27c. Bounded.
28a. The set of all real numbers between 0 and 6, including 6 and excluding 0. 28b. See diagram. 28c. Bounded.
37. This interval consists of all real numbers greater than or equal to 0 and less than 8.
38. This interval consists of all real numbers greater than or equal to -5 and less than or equal to 7.
39. This interval consists of all real numbers greater than -6.
40. This interval consists of all real numbers less than or equal to 4.
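For reference, answers 37 through 40 are verbal readings of interval notation; the correspondence they describe, written out in LaTeX, is:

\begin{align*}
  [0, 8)       &= \{\, x \in \mathbb{R} : 0 \le x < 8 \,\}    && \text{(answer 37)}\\
  [-5, 7]      &= \{\, x \in \mathbb{R} : -5 \le x \le 7 \,\} && \text{(answer 38)}\\
  (-6, \infty) &= \{\, x \in \mathbb{R} : x > -6 \,\}         && \text{(answer 39)}\\
  (-\infty, 4] &= \{\, x \in \mathbb{R} : x \le 4 \,\}        && \text{(answer 40)}
\end{align*}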
Creating Text Fields by Using the JTextField Class

In this tutorial, you will learn how to use the JTextField class to create text field widgets. A text field is one of the most important widgets: it allows the user to input a text value in a single-line format. To create a text field widget in Java Swing, you use the JTextField class. Here are the constructors of the JTextField class:

JTextField Constructor | Meaning
public JTextField() | Creates a new text field.
public JTextField(Document doc, String text, int columns) | Creates a new text field with the given document, text, and number of columns.
public JTextField(String text) | Creates a new text field with the given text.
public JTextField(int columns) | Creates a new text field with the given number of columns.
public JTextField(String text, int columns) | Creates a new text field with the given text and number of columns.

Example of creating text fields

In this example, we will create two simple text fields, first name and last name, as in the picture below (a sketch of such a form appears after this section):

[Figure: JTextField Demo]

In order to run the demo application, you'll need the SpringUtilities class. Click the following link to download the SpringUtilities.java file: Java Swing Spring Utilities (8.13 kB), 950 downloads
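The tutorial's own demo source is not shown here, so the following is a minimal sketch of such a form. It assumes the SpringUtilities helper from the Oracle Swing tutorial, whose makeCompactGrid method lays the labels and fields out in a grid:

import javax.swing.*;

public class JTextFieldDemo {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JPanel panel = new JPanel(new SpringLayout());

            // Two labeled single-line text fields, 20 columns wide each
            String[] labels = {"First Name:", "Last Name:"};
            for (String text : labels) {
                JLabel label = new JLabel(text, JLabel.TRAILING);
                JTextField field = new JTextField(20);
                label.setLabelFor(field);
                panel.add(label);
                panel.add(field);
            }

            // Arrange the 2x2 grid: 2 rows, 2 columns, 6px padding
            SpringUtilities.makeCompactGrid(panel, 2, 2, 6, 6, 6, 6);

            JFrame frame = new JFrame("JTextField Demo");
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setContentPane(panel);
            frame.pack();
            frame.setVisible(true);
        });
    }
}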
PostgreSQL data output

This article provides an introduction to PostgreSQL along with a guide on creating a PostgreSQL data output using Upsolver.

What is PostgreSQL?

PostgreSQL, also known as Postgres, is a free and open-source relational database management system emphasizing extensibility and SQL compliance. It supports both SQL (relational) and JSON (non-relational) querying and is used as the primary data store or data warehouse for many web, mobile, geospatial, and analytics applications.

Create a PostgreSQL data output

1. Go to the Outputs page and click New.
2. Select PostgreSQL as your output type.
3. Name your output and select your Data Sources.
4. Select New to create a new table or Existing to output to an existing table. Then click Next. If outputting to an existing table, complete the database options as prompted before clicking Next again. If necessary, create a new PostgreSQL connection. Click Properties to review this output's properties. See: Output properties
5. Click the information icon in the fields tree to view information about a field. The following will be displayed:

Density in Events: How many of the events in this data source include this field, expressed as a percentage (e.g. 20.81%).
Density in Data: The density in the hierarchy (how many of the events in this branch of the data hierarchy include this field), expressed as a percentage.
Distinct Values: How many unique values appear in this field.
Total Values: The total number of values ingested for this field.
First Seen: The first time this field included a value, for example, a year ago.
Last Seen: The last time this field included a value, for example, 2 minutes ago.
Value Distribution: The percentage distribution of the field values. These distribution values can be exported by clicking Export.
Field Content Samples Over Time: A time-series graph of the total number of events that include the selected field.
Selected: The most recent data values for the selected field and columns. You can change the columns that appear by clicking Choose Columns.

6. Click the information icon next to a hierarchy element (such as the overall data) to review the following metrics:

# of Fields: The number of fields in the selected hierarchy.
# of Keys: The number of keys in the selected hierarchy.
# of Arrays: The number of arrays in the selected hierarchy.
Fields Breakdown: A stacked bar chart (by data type) of the number of fields versus the density/distinct values, or a stacked bar chart of the number of fields by data type.
Fields Statistics: A list of the fields in the hierarchy element, including Type, Density, Top Values, Key, Distinct Values, Array, First Seen, and Last Seen.

7. Click the plus icon in the fields tree to add a field from the data source to your output. This will be reflected under the Data Source Field in the Schema tab.
• If required, modify the column name under Schema Column.
• Additionally, click the gear icon to modify other details such as Column Type and Size.
• To remove a field, click the unlink icon to clear the column mapping, then the garbage icon to drop the column.

Toggle from UI to SQL at any point to view the corresponding SQL code for your selected output. You can also edit your output directly in SQL. See: Transform with SQL

8. Add any required calculated fields and review them in the Calculated Fields tab. See: Adding calculated fields
9. Add any required lookups and review them under the Calculated Fields tab.
10. Through the Filters tab, add a filter like WHERE in SQL to the data source.
See: Adding filters

11. Click Make Aggregated to turn the output into an aggregated output. Read the warning before clicking OK and then add the required aggregation. This aggregation field will then be added to the Schema tab. See: Aggregation functions
12. In the Aggregation Calculated Fields area under the Calculated Fields tab, add any required calculated fields on aggregations. See: Functions, Aggregation functions
13. To keep only the latest event per upsert key, click More > Manage Upserts, then select the following:
• Keys: A unique key identifying a row in the table.
• Deletions: The delete key (events with the value true in their deletion key field will be deleted).

Click Preview at any time to view a preview of your current output.

14. Click Run and fill out the following fields:
• Schema
• Table Name
• Intermediate Storage Location: Where Upsolver will store the intermediate bulk files which it will then load into PostgreSQL using the load data infile command

15. Click Next and complete the following:
Compute Cluster: Select the compute cluster to run the calculation on. Alternatively, click the drop-down and create a new compute cluster.
Processing Time Range: The range of data to process. This can start from the data source beginning, now, or a custom date and time. It can never end, end now, or end at a custom date and time.

16. Finally, click Deploy to run the output. It will show as Running in the output panel and is now live in production and consumes compute resources. You have now successfully outputted your table to your PostgreSQL database.
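Once the output shows as Running, a quick way to confirm rows are landing is to query the target table directly in PostgreSQL. A hedged sketch (the schema and table names are whatever you entered in step 14):

-- Spot-check a few of the rows Upsolver has loaded
SELECT *
FROM my_schema.my_output_table
LIMIT 10;

-- And a running row count to watch the load progress
SELECT COUNT(*) FROM my_schema.my_output_table;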
LESSON 8-1 SIMILARITY IN RIGHT TRIANGLES PROBLEM SOLVING

So we know that triangle ABC: we went from the unlabeled angle, to the yellow right angle, to the orange angle. That is going to be similar to triangle... so which is the one that is neither a right angle? So we're looking at the smaller triangle right over here. Draw a pair of vertical angles with the given measure. Upside-down answers are provided right on the page. To log in and use all the features of Khan Academy, please enable JavaScript in your browser. What is the solution of the system? If the relationship is proportional, identify the constant of proportionality. And then in the second statement, BC on our larger triangle corresponds to DC on our smaller triangle. The cost of visiting the hospital for x number of visitors is shown in the table. Right triangle word problems. Two Dimensional Motion and Vectors. What is the total amount of money after 2 years? Similar triangles in right triangles to solve problems. And we know that the length of this side, which we figured out through this problem, is 4. Right triangle word problems. This course expands the study of numbers to include complex numbers and includes the study of exponents and radicals, rational expressions, as well as quadratic, polynomial, exponential, and logarithmic functions. Math 8CP Meet Mrs. What is the actual distance between two cities if the map distance is 4 inches? Chapter 4 Similarity test: Apply relationships in special right triangles to solve problems. If you need to reference any of the lessons or want some additional practice, please select the PDF below of the textbook pages for Chapter 5. And now that we know that they are similar, we can attempt to take ratios of the sides. Solving similar triangles: same side plays different roles (video) | Khan Academy. Order today from Curriculum Express! Find the time it takes for the diver to reach the surface. This triangle, this triangle, and this larger triangle. Improve your skills with free problems in 'Similar triangles and indirect measurement' and thousands of other practice lessons. Sabo, Solving Problems with Similar Triangles: Here we have used a common notation for indicating corresponding angles between the similar triangles. High School: The instructional materials contain a clear road map for teachers to follow when planning instruction. The answers for these pages appear at the back of this booklet. Test, Form 3A: Write the correct answer in the blank at the right of each. And so this is interesting because we're already involving BC. Day 2 of lessons 2. So we have shown that they are similar. So if they share that angle, then they definitely share two angles. A and C is going to correspond to BC. White vertex to the 90 degree angle vertex to the orange vertex. Math, High school geometry, Similarity, Solving similar triangles. Similarity In Right Triangles Practice And Problem Solving A/B. Projected Schedule for Chapter 5: If you are absent, please be sure to check the agenda tab for specifics! Trevor Collins. Accelerated Algebra 2 is an accelerated mathematics course.
Keep or Give Review Game Homework: Algebraic expression – One or more numbers and variables along with one or more arithmetic operations. Strong positive association, fairly linear data, association is moderately strong. High School: Geometry » Similarity, Right Triangles, & Trigonometry. Chapter 1 Homework and Notes; Chapter 5 Vocabulary. Two Dimensional Motion and Vectors. Which solid has the top, the side, and the front views given? Accelerated math 7, chapter 9.1-9.5 review.

Author: admin
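For readers landing here for the lesson itself: the relationships the title refers to are the standard geometric-mean facts for a right triangle split by the altitude to its hypotenuse. Stated as a summary (the labels below are generic, not tied to the excerpts above):

% Right triangle with legs a, b, hypotenuse c, and altitude h to the
% hypotenuse, which splits c into segments p (adjacent to a) and
% q (adjacent to b). The three triangles formed are all similar, so:
\begin{align*}
  h^2 &= p\,q && \text{(the altitude is the geometric mean of the two segments)}\\
  a^2 &= p\,c && \text{(each leg is the geometric mean of its segment and } c\text{)}\\
  b^2 &= q\,c
\end{align*}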
10 sample exams (đề minh họa) for round 1 of the National High School Graduation Exam (TNPT), school year 2016-2017: Word files with detailed answers

Document information. Posted: 25/04/2017, 09:19

This excellent set of 10 sample exams for the 2016-2017 High School Graduation Exam, in Word format with detailed answers, is an extremely useful resource that lets students practice on their own at home; teachers can also use it as reference material for their teaching.

SAMPLE EXAM FOR THE 2017 NATIONAL HIGH SCHOOL (THPT) EXAM
Subject: MATHEMATICS. Time allowed: 90 minutes. Exam No. 001

Question 1: How many extrema does the function y = x³ − 3x² + 3x − 1 have? A. … B. … C. … D. …
Question 2: Given the function y = −(4/3)x³ − 2x² − x − 3. Which of the following statements is correct?
A. The function is decreasing on (−∞; −1/2).
B. The function is decreasing on (−1/2; +∞).
C. The function is decreasing on (−∞; −1/2) ∪ (−1/2; +∞).
D. The function is decreasing on ℝ.
Question 3: Which of the following functions is increasing on ℝ? A. y = tan x B. … C. … D. y = x³ + …
Question 4: Among the following functions, which one is increasing on ℝ? A. … B. y = 4x − 3 sin x + cos x C. … D. y = x³ + x
Question 5: Given the function y = √(1 − x²). Which of the following statements is correct?
A. The function is increasing on [0; 1].
B. The function is increasing on (0; 1).
C. The function is decreasing on (0; 1).
D. The function is decreasing on (−1; 0).
Question 6: Find the minimum value of the function y = (x² − …)/(x + 3) on the interval [0; 2]. A. … B. … C. min = −2 D. min = −10
Question 7: The graph of y = x³ − 3x² + 2x − 1 cuts the graph of y = x² − 3x + 1 at two distinct points A and B. What is the length AB? A. … B. AB = 2√2 C. … D. …
Question 8: Find all real values of m such that the graph of y = x⁴ − 2mx² + 2m + m⁴ has three extreme points forming an equilateral triangle. A. m = 0 B. m = ∛3 C. m = −3 D. …
Question 9: Find all real values of m such that the graph of the function y = … . A. m = 0 B. m < 0 C. m > 0 D. m > 3
Question 10: Given the function y = … with graph (C). Find the points M on (C) such that the distance from M to the vertical asymptote is twice the distance from M to the horizontal asymptote. A. M₁(1; −1), M₂(7; 5) B. M₁(1; 1), M₂(−7; 5) C. M₁(−1; 1), M₂(7; 5) D. M₁(1; 1), M₂(7; −5)
Question 11: A fuel dealer needs to make a cylindrical oil tank from sheet metal with a volume of 16π m³. Find the base radius r of the cylinder so that making the tank uses the least material. A. 0.8 m B. 1.2 m C. 2 m D. 2.4 m
Question 12: For a positive number a, write the given radical expression as a single rational power of a. A. … B. … C. … D. …
Question 13: The function y = (4x² − 1)⁻⁴ has domain: A. ℝ B. (0; +∞) C. ℝ \ {−1/2; 1/2} D. (−1/2; 1/2)
Question 14: The tangent line to the graph of y = x^(π/2) at the point on the graph with abscissa 1 is: A. y = (π/2)x + 1 B. y = (π/2)x − π/2 + 1 C. y = (π/2)x − 1 D. y = (π/2)x + π/2 − 1
Question 15: Given the function y = x⁴ − 2x². Which of the following statements is false?
A. The graph of the function cuts the y-axis.
B. The graph of the function cuts the line y = 2.
C. The function has a minimum value greater than −1.
D. The graph of the function cuts the x-axis at … points.
Question 16: Find the domain D of the function y = log(…). A. D = (−2; 1) B. D = (−2; +∞) C. D = (1; +∞) D. D = (−2; +∞) \ {1}
Question 17: The graph shown in the figure belongs to which function? A. … B. … C. y = x² − 1 D. …
Question 18: Compute the derivative of the function y = … . A. … B. … C. … D. …
Question 19: Let a = log… 5 and b = log… . Express log₁₅ 20 in terms of a and b.
A. log₁₅ 20 = a(1 + a) / (b(a + b)) B. log₁₅ 20 = b(1 + a) / (a(1 + b)) C. log₁₅ 20 = a(1 + b) / (b(1 + a)) D. log₁₅ 20 = b(1 + b) / (a(1 + a))
Question 20: Given real numbers a, b with 1 < a < b. Which of the following statements is correct? A. … B. … C. … D. …

Worked answers (excerpt from the answer key):

Question 7: We have … > 0 for all m, so the equation has distinct roots for every m. Hence d cuts (C) at three distinct points for every m.
Question 8 (Answer D): Setting y′ = 3x² − 3mx = 0 gives x = 0 and x = m; for two extreme points we need m ≠ 0. With A(0; …) and B(m; 0), and the line d having normal vector n = (1; −1), hence direction vector u = (1; 1), requiring AB ⊥ d gives AB · u = 0 ⇔ m − m³/2 = 0, so m = ±√2.
Question 9 (Answer A): Consider the equation x² + 4x − m = 0, with Δ′ = 4 + m < 0 ⇔ m < −4; the equation then has no solutions, so the graph of the function has no vertical asymptote.
Question 10 (Answer A): Let h and r be the height and the base radius of the cylinder. The problem reduces to computing h and r in terms of R as the rectangle ABCD inscribed in the circle (O, R) varies. V = πr²h, and AC² = AB² + BC² ⇔ 4R² = 4r² + h², so V = π(R² − h²/4)h for 0 < h < 2R. The volume is largest where V′ = π(R² − 3h²/4) = 0 ⇔ h = 2R/√3, giving V_max = 4πR³/(3√3); at that point r² = R² − R²/3 = 2R²/3, so r = R√(2/3).
Question 11 (Answer D): Set u = cot x; for x ∈ (π/4; π/2) we have u ∈ (0; 1) and y = (u − 2)/(u − m), with y′ₓ = −(2 − m)(1 + cot²x)/(u − m)². The function is increasing on (π/4; π/2) ⇔ y′ₓ > 0 for every x in that interval, which requires 2 − m < 0 together with m ∉ (0; 1), i.e. m > 2.
Question 12 (Answer A): Condition: x² − 1 > 0. The equation log(x² − 1) = … gives x = ±2, which satisfies the condition.
Question 13 (Answer B): Differentiate directly: y′ = … · ln … .
Question 14 (Answer C): Condition: 3x − 1 > 0 ⇔ x > 1/3. Then log(3x − 1) > … ⇔ 3x − 1 > … ⇔ x > 3; combining with the condition gives x > 3.
Question 15 (Answer A): Domain condition: x² − 4x > 0 ⇔ x(x − 4) > 0 ⇔ x < 0 or x > 4.
Question 16 (Answer A): The graph passes through the point (1; …); both A and D satisfy this, but option D's graph is a parabola.
Question 17 (Answer A): We compute B = 3^(2 log₃ a) − log_a … = a² − … .
Question 18 (Answer C): y′ = ((x − 4)/(x + 4))′ / (((x − 4)/(x + 4)) · ln …) = 8 / ((x − 4)(x + 4) ln …).
Question 19 (Answer A): We have log₉ 50 = log_{3²} 50 = (1/2) log₃ 50, and log₃ 50 = log₃ 150 − 1 = log₃ 15 + log₃ 10 − 1 = a + b − 1, so log₉ 50 = (1/2)(a + b − 1). Alternatively, students can check the options with a handheld calculator.
Question 20 (Answer C): Condition: x > 1/2 (*). The inequality log x + log(2x − 1) + … < … is equivalent to log(2x² − x) < log(4x + …) ⇔ 2x² − 5x − … < 0 ⇔ −… < x < …; combining with (*) gives the answer.
Question 21: Answer B.
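As a worked check of Question 11 of the exam above (the oil-tank problem), the optimization runs as follows. This is a short LaTeX derivation, independent of the garbled answer key:

% Cylinder of volume V = \pi r^2 h = 16\pi, minimizing total surface
% area S = 2\pi r^2 + 2\pi r h (two ends plus the side).
\begin{align*}
  h &= \frac{16}{r^2}, \qquad
  S(r) = 2\pi r^2 + 2\pi r\cdot\frac{16}{r^2} = 2\pi r^2 + \frac{32\pi}{r},\\
  S'(r) &= 4\pi r - \frac{32\pi}{r^2} = 0
     \;\Longleftrightarrow\; r^3 = 8
     \;\Longleftrightarrow\; r = 2\ \text{m, which is option C of that question.}
\end{align*}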
I have been using GDAL from the command line to convert an .asc file to GeoJSON output. I can do this successfully:

gdal_polygonize.py input.asc -f "GeoJSON" output.json

Now I wish to use Python and follow this process for a range of files.

import gdal
import glob

for file in glob.glob("dir/*.asc"):
    new_name = file[:-4] + ".json"
    gdal.Polygonize(file, "-f", "GeoJSON", new_name)

However, for exactly the same file I get the following error:

TypeError: in method 'Polygonize', argument 1 of type 'GDALRasterBandShadow *'

Why does the command line version work and the Python version not?

You are confusing the use of the gdal_polygonize command line utility with the Python function gdal.Polygonize(). As you mentioned, you've managed to use the command line utility successfully; however, the Python function works differently and expects different arguments than those specified in the utility. The first argument should be a GDAL Band object, not a string, so this is why you get your error. To get the Band object you need to open the input file using gdal.Open() and use the GetRasterBand() method to get your intended band. Additionally, you need to create an output layer in which the resulting polygons will be created. The Python GDAL/OGR Cookbook has a good example of how to use this function; a sketch following that pattern appears after this post. The required parameters are explained in a bit more detail there.

Alternative

Following on from your comment, if you would prefer to keep using the command line utility, one solution is to call it from within a Python script using the subprocess module, i.e.

import subprocess
import glob

script = 'PATH_TO_GDAL_POLYGONIZE'
for in_file in glob.glob("dir/*.asc"):
    out_file = in_file[:-4] + ".json"
    subprocess.call(["python", script, in_file, '-f', 'GeoJSON', out_file])

This loops through the files and updates the input/output paths. This way you get the result that you are used to getting from the utility, but with the ability to loop through your files. If you do plan on going this way, the subprocess documentation will be useful.

• Thanks Ali! Very useful. One question - How do I specify all bands? The GetRasterBand function in the example takes an integer (srcband = src_ds.GetRasterBand(1)), I want the conversion for the entire file (as the command line does) – LearningSlowly Oct 19 '16 at 9:34
• No problem. To do this for all bands you will have to loop through them. You can find out how many bands there are using the RasterCount attribute. – Ali Oct 19 '16 at 9:39
• Happy days - the count was 1 ;) Thanks Ali! – LearningSlowly Oct 19 '16 at 10:40
• I've updated my answer to provide an alternative method you could use for looping through your files. Happy Pythoning! – Ali Oct 19 '16 at 11:27
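For completeness, here is a minimal sketch of the gdal.Polygonize() pattern the answer describes, following the GDAL/OGR Cookbook (the file globbing reuses the asker's code; the osgeo-style imports and the "DN" field name follow the cookbook convention rather than the answer's exact code):

import glob
from osgeo import gdal, ogr

for in_file in glob.glob("dir/*.asc"):
    out_file = in_file[:-4] + ".json"

    # 1. Open the raster and grab the band to polygonize
    src_ds = gdal.Open(in_file)
    src_band = src_ds.GetRasterBand(1)

    # 2. Create the GeoJSON datasource and a layer to hold the polygons
    drv = ogr.GetDriverByName("GeoJSON")
    dst_ds = drv.CreateDataSource(out_file)
    dst_layer = dst_ds.CreateLayer("out", srs=None)

    # 3. Add a field to receive the pixel values (field index 0)
    dst_layer.CreateField(ogr.FieldDefn("DN", ogr.OFTInteger))

    # 4. Polygonize: band in, layer out; the second argument is an
    #    optional mask band (None = no mask)
    gdal.Polygonize(src_band, None, dst_layer, 0, [], callback=None)

    # Flush and close the datasets
    dst_ds = None
    src_ds = None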
Injecting dependencies into microservices

Duration: 15 minutes

Learn how to use Contexts and Dependency Injection (CDI) to manage scopes and inject dependencies into microservices.

What you'll learn

You will learn how to use Contexts and Dependency Injection (CDI) to manage scopes and inject dependencies in a simple inventory management application. The application that you will be working with is an inventory service, which stores the information about various JVMs that run on different systems. Whenever a request is made to the inventory service to retrieve the JVM system properties of a particular host, the inventory service communicates with the system service on that host to get these system properties. The system properties are then stored and returned.

You will use scopes to bind objects in this application to their well-defined contexts. CDI provides a variety of scopes for you to work with and, while you will not use all of them in this guide, there is one for almost every scenario that you may encounter. Scopes are defined by using CDI annotations. You will also use dependency injection to inject one bean into another to make use of its functionalities. This enables you to inject the bean in its specified context without having to instantiate it yourself.

The implementation of the application and its services is provided for you in the start/src directory. The system service can be found in the start/src/main/java/io/openliberty/guides/system directory, and the inventory service can be found in the start/src/main/java/io/openliberty/guides/inventory directory. If you want to learn more about RESTful web services and how to build them, see Creating a RESTful web service for details about how to build the system service. The inventory service is built in a similar way.

What is CDI?

Contexts and Dependency Injection (CDI) defines a rich set of complementary services that improve the application structure. The most fundamental services that are provided by CDI are contexts that bind the lifecycle of stateful components to well-defined contexts, and dependency injection that is the ability to inject components into an application in a typesafe way. With CDI, the container does all the daunting work of instantiating dependencies, and controlling exactly when and how these components are instantiated and destroyed.

Getting started

The fastest way to work through this guide is to clone the Git repository and use the projects that are provided inside:

git clone https://github.com/openliberty/guide-cdi-intro.git
cd guide-cdi-intro

The start directory contains the starting project that you will build upon. The finish directory contains the finished project that you will build.

Try what you'll build

The finish directory in the root of this guide contains the finished application. Give it a try before you proceed. To try out the application, first go to the finish directory and run the following Maven goal to build the application and deploy it to Open Liberty:

mvn liberty:run

After you see the following message, your application server is ready.

The defaultServer server is ready to run a smarter planet.

Point your browser to the http://localhost:9080/inventory/systems URL. This is the starting point of the inventory service and it displays the current contents of the inventory. As you might expect, these are empty since nothing is stored in the inventory yet. Next, point your browser to the http://localhost:9080/inventory/systems/localhost URL.
You see a result in JSON format with the system properties of your local JVM. When you visit this URL, these system properties are automatically stored in the inventory. Go back to http://localhost:9080/inventory/systems and you see a new entry for localhost. For simplicity, only the OS name and username are shown here for each host. You can repeat this process for your own hostname or any other machine that is running the system service.

After you are finished checking out the application, stop the Open Liberty server by pressing CTRL+C in the command-line session where you ran the server. Alternatively, you can run the liberty:stop goal from the finish directory in another shell session:

mvn liberty:stop

Handling dependencies in the application

You will use CDI to inject dependencies into the inventory manager application and learn how to manage the life cycles of your objects.

Managing scopes and contexts

Navigate to the start directory to begin. When you run Open Liberty in dev mode, the server listens for file changes and automatically recompiles and deploys your updates whenever you save a new change. Run the following goal to start in dev mode:

mvn liberty:dev

After you see the following message, your application server in dev mode is ready:

Press the Enter key to run tests on demand.

Dev mode holds your command-line session to listen for file changes. Open another command-line session to continue, or open the project in your editor.

Create the InventoryManager class.
src/main/java/io/openliberty/guides/inventory/InventoryManager.java

/*******************************************************************************
 * Copyright (c) 2017, 2019 IBM Corporation and others.
 * All rights reserved. This program and the accompanying materials
 * are made available under the terms of the Eclipse Public License v1.0
 * which accompanies this distribution, and is available at
 * http://www.eclipse.org/legal/epl-v10.html
 *
 * Contributors:
 *     IBM Corporation - Initial implementation
 *******************************************************************************/
package io.openliberty.guides.inventory;

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Properties;
import io.openliberty.guides.inventory.model.InventoryList;
import io.openliberty.guides.inventory.model.SystemData;
import javax.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class InventoryManager {

    private List<SystemData> systems = Collections.synchronizedList(new ArrayList<>());

    public void add(String hostname, Properties systemProps) {
        Properties props = new Properties();
        props.setProperty("os.name", systemProps.getProperty("os.name"));
        props.setProperty("user.name", systemProps.getProperty("user.name"));

        SystemData system = new SystemData(hostname, props);
        if (!systems.contains(system)) {
            systems.add(system);
        }
    }

    public InventoryList list() {
        return new InventoryList(systems);
    }
}

This bean contains two simple functions. The add() function is for adding entries to the inventory. The list() function is for listing all the entries currently stored in the inventory. This bean must be persistent between all of the clients, which means multiple clients need to share the same instance.
To achieve this by using CDI, you can simply add the @ApplicationScoped annotation onto the class. This annotation indicates that this particular bean is to be initialized once per application. By making it application-scoped, the container ensures that the same instance of the bean is used whenever it is injected into the application.

Create the InventoryResource class.
src/main/java/io/openliberty/guides/inventory/InventoryResource.java

/*******************************************************************************
 * Copyright (c) 2017, 2020 IBM Corporation and others.
 * All rights reserved. This program and the accompanying materials
 * are made available under the terms of the Eclipse Public License v1.0
 * which accompanies this distribution, and is available at
 * http://www.eclipse.org/legal/epl-v10.html
 *
 * Contributors:
 *     IBM Corporation - Initial implementation
 *******************************************************************************/
package io.openliberty.guides.inventory;

import java.util.Properties;
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import io.openliberty.guides.inventory.model.InventoryList;
import io.openliberty.guides.inventory.client.SystemClient;

@ApplicationScoped
@Path("/systems")
public class InventoryResource {

    @Inject
    InventoryManager manager;

    @Inject
    SystemClient systemClient;

    @GET
    @Path("/{hostname}")
    @Produces(MediaType.APPLICATION_JSON)
    public Response getPropertiesForHost(@PathParam("hostname") String hostname) {
        // Get properties for host
        Properties props = systemClient.getProperties(hostname);
        if (props == null) {
            return Response.status(Response.Status.NOT_FOUND)
                           .entity("{ \"error\" : \"Unknown hostname " + hostname
                                   + " or the inventory service may not be running "
                                   + "on the host machine \" }")
                           .build();
        }

        // Add to inventory
        manager.add(hostname, props);
        return Response.ok(props).build();
    }

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public InventoryList listContents() {
        return manager.list();
    }
}

The inventory resource is a RESTful service that is served at the inventory/systems endpoint. Annotating a class with the @ApplicationScoped annotation indicates that the bean is initialized once and is shared between all requests while the application runs. If you want this bean to be initialized once for every request, you can annotate the class with the @RequestScoped annotation instead. With the @RequestScoped annotation, the bean is instantiated when the request is received and destroyed when a response is sent back to the client. A request scope is short-lived.
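To make that contrast concrete, here is a small hypothetical bean (it is not part of this guide's source) showing what a request-scoped bean looks like. A fresh instance is created per request, so its state never leaks between clients:

package io.openliberty.guides.inventory;

import javax.enterprise.context.RequestScoped;

// Hypothetical example: one instance is created for each incoming
// request and destroyed once the response is sent, so this timer
// only ever sees the lifetime of a single request.
@RequestScoped
public class RequestAudit {

    private final long startTime = System.nanoTime();

    public long elapsedNanos() {
        return System.nanoTime() - startTime;
    }
}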
You are injecting your InventoryManager and SystemClient beans into the InventoryResource class. This injects the beans in their specified context and makes all of their functionalities available without the need of instantiating them yourself. The injected bean InventoryManager can then be invoked directly through the manager.add(hostname, props) and manager.list() function calls. The injected bean SystemClient can be invoked through the systemClient.getProperties(hostname) function call. Finally, you have a client component SystemClient that can be found in the src/main/java/io/openliberty/guides/inventory/client directory. This class communicates with the system service to retrieve the JVM system properties for a particular host that exposes them. This class also contains detailed Javadocs that you can read for reference. Your inventory application is now completed. InventoryResource.java 1// tag::copyright[] 2/******************************************************************************* 3 * Copyright (c) 2017, 2020 IBM Corporation and others. 4 * All rights reserved. This program and the accompanying materials 5 * are made available under the terms of the Eclipse Public License v1.0 6 * which accompanies this distribution, and is available at 7 * http://www.eclipse.org/legal/epl-v10.html 8 * 9 * Contributors: 10 * IBM Corporation - Initial implementation 11 *******************************************************************************/ 12// end::copyright[] 13package io.openliberty.guides.inventory; 14 15import java.util.Properties; 16import javax.enterprise.context.ApplicationScoped; 17import javax.inject.Inject; 18import javax.ws.rs.GET; 19import javax.ws.rs.Path; 20import javax.ws.rs.PathParam; 21import javax.ws.rs.Produces; 22import javax.ws.rs.core.MediaType; 23import javax.ws.rs.core.Response; 24import io.openliberty.guides.inventory.model.InventoryList; 25import io.openliberty.guides.inventory.client.SystemClient; 26 27// tag::ApplicationScoped[] 28@ApplicationScoped 29// end::ApplicationScoped[] 30// tag::endpoint[] 31@Path("/systems") 32// end::endpoint[] 33// tag::InventoryResource[] 34public class InventoryResource { 35 36 // tag::inject[] 37 @Inject 38 // end::inject[] 39 InventoryManager manager; 40 41 // tag::inject2[] 42 @Inject 43 // end::inject2[] 44 SystemClient systemClient; 45 46 @GET 47 @Path("/{hostname}") 48 @Produces(MediaType.APPLICATION_JSON) 49 public Response getPropertiesForHost(@PathParam("hostname") String hostname) { 50 // Get properties for host 51 // tag::properties[] 52 Properties props = systemClient.getProperties(hostname); 53 // end::properties[] 54 if (props == null) { 55 return Response.status(Response.Status.NOT_FOUND) 56 .entity("{ \"error\" : \"Unknown hostname " + hostname 57 + " or the inventory service may not be running " 58 + "on the host machine \" }") 59 .build(); 60 } 61 62 // Add to inventory 63 // tag::managerAdd[] 64 manager.add(hostname, props); 65 // end::managerAdd[] 66 return Response.ok(props).build(); 67 } 68 69 @GET 70 @Produces(MediaType.APPLICATION_JSON) 71 public InventoryList listContents() { 72 // tag::managerList[] 73 return manager.list(); 74 // end::managerList[] 75 } 76} 77// tag::InventoryResource[] SystemClient.java 1// tag::copyright[] 2/******************************************************************************* 3 * Copyright (c) 2017, 2019 IBM Corporation and others. 4 * All rights reserved. 
 * are made available under the terms of the Eclipse Public License v1.0
 * which accompanies this distribution, and is available at
 * http://www.eclipse.org/legal/epl-v10.html
 *
 * Contributors:
 *     IBM Corporation - Initial implementation
 *******************************************************************************/
// end::copyright[]
package io.openliberty.guides.inventory.client;

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.Invocation.Builder;
import javax.ws.rs.core.HttpHeaders;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.Response.Status;
import java.util.Properties;
import java.net.URI;
import org.eclipse.microprofile.config.inject.ConfigProperty;

@ApplicationScoped
public class SystemClient {

  // Constants for building URI to the system service.
  private final String SYSTEM_PROPERTIES = "/system/properties";
  private final String PROTOCOL = "http";

  @Inject
  @ConfigProperty(name = "system.http.port")
  String SYS_HTTP_PORT;

  // Wrapper function that gets properties
  public Properties getProperties(String hostname) {
    String url = buildUrl(PROTOCOL, hostname, Integer.valueOf(SYS_HTTP_PORT), SYSTEM_PROPERTIES);
    Builder clientBuilder = buildClientBuilder(url);
    return getPropertiesHelper(clientBuilder);
  }

  // tag::doc[]
  /**
   * Builds the URI string to the system service for a particular host.
   * @param protocol
   *          - http or https.
   * @param host
   *          - name of host.
   * @param port
   *          - port number.
   * @param path
   *          - Note that the path needs to start with a slash!!!
   * @return String representation of the URI to the system properties service.
   */
  // end::doc[]
  protected String buildUrl(String protocol, String host, int port, String path) {
    try {
      URI uri = new URI(protocol, null, host, port, path, null, null);
      return uri.toString();
    } catch (Exception e) {
      System.err.println("Exception thrown while building the URL: " + e.getMessage());
      return null;
    }
  }

  // Method that creates the client builder
  protected Builder buildClientBuilder(String urlString) {
    try {
      Client client = ClientBuilder.newClient();
      Builder builder = client.target(urlString).request();
      return builder.header(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON);
    } catch (Exception e) {
      System.err.println("Exception thrown while building the client: " + e.getMessage());
      return null;
    }
  }

  // Helper method that processes the request
  protected Properties getPropertiesHelper(Builder builder) {
    try {
      Response response = builder.get();
      if (response.getStatus() == Status.OK.getStatusCode()) {
        return response.readEntity(Properties.class);
      } else {
        System.err.println("Response Status is not OK.");
      }
    } catch (RuntimeException e) {
      System.err.println("Runtime exception: " + e.getMessage());
    } catch (Exception e) {
      System.err.println("Exception thrown while invoking the request: " + e.getMessage());
    }
    return null;
  }

}

Running the application

The Open Liberty server was started in development mode at the beginning of the guide and all the changes were automatically picked up.
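Once the server is running, one quick way to exercise the endpoint from Java is a small JAX-RS client. This is a sketch, not part of the guide; it assumes the guide's default dev-mode HTTP port 9080:

    import javax.ws.rs.client.Client;
    import javax.ws.rs.client.ClientBuilder;

    public class InventoryProbe {
      public static void main(String[] args) {
        Client client = ClientBuilder.newClient();
        // Fetch localhost's properties, which also registers the host
        // in the inventory, then print the raw JSON response.
        String json = client.target("http://localhost:9080/inventory/systems/localhost")
                            .request()
                            .get(String.class);
        System.out.println(json);
        client.close();
      }
    }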
You can find the inventory and system services at the following URLs:

http://localhost:9080/inventory/systems
http://localhost:9080/system/properties

Testing the inventory application

While you can test your application manually, you should rely on automated tests since they trigger a failure whenever a code change introduces a defect. Since the application is a RESTful web service application, you can use JUnit and the RESTful web service Client API to write tests. In testing the functionality of the application, the scopes and dependencies are being tested.

Create the InventoryEndpointIT class.
src/test/java/it/io/openliberty/guides/inventory/InventoryEndpointIT.java

The @BeforeAll annotation is placed on a method that runs before any of the test cases. In this case, the oneTimeSetup() method retrieves the port number for the Open Liberty server and builds a base URL string that is used throughout the tests.

The @BeforeEach and @AfterEach annotations are placed on methods that run before and after every test case. These methods are generally used to perform any setup and teardown tasks. In this case, the setup() method creates a JAX-RS client, which makes HTTP requests to the inventory service. This client must also be registered with a JSON-P provider (JsrJsonpProvider) to process JSON resources. The teardown() method simply destroys this client instance.

See the following descriptions of the test cases:

• testHostRegistration() verifies that a host is correctly added to the inventory.
• testSystemPropertiesMatch() verifies that the JVM system properties returned by the system service match the ones stored in the inventory service.
• testUnknownHost() verifies that an unknown host or a host that does not expose their JVM system properties is correctly handled as an error.

To force these test cases to run in a particular order, annotate your InventoryEndpointIT test class with the @TestMethodOrder(OrderAnnotation.class) annotation. OrderAnnotation.class runs test methods in numerical order, according to the values specified in the @Order annotation. You can also create a custom MethodOrderer class or use built-in MethodOrderer implementations, such as OrderAnnotation.class, Alphanumeric.class, or Random.class. Label your test cases with the @Test annotation so that they automatically run when your test class runs.

Finally, the src/test/java/it/io/openliberty/guides/system/SystemEndpointIT.java file is included for you to test the basic functionality of the system service. If a test failure occurs, then you might have introduced a bug into the code.

InventoryEndpointIT.java

// tag::copyright[]
/*******************************************************************************
 * Copyright (c) 2017, 2020 IBM Corporation and others.
 * All rights reserved. This program and the accompanying materials
 * are made available under the terms of the Eclipse Public License v1.0
 * which accompanies this distribution, and is available at
 * http://www.eclipse.org/legal/epl-v10.html
 *
 * Contributors:
 *     IBM Corporation - Initial implementation
 *******************************************************************************/
// end::copyright[]
// tag::testClass[]
package it.io.openliberty.guides.inventory;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

import javax.json.JsonArray;
import javax.json.JsonObject;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

import org.apache.cxf.jaxrs.provider.jsrjsonp.JsrJsonpProvider;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.BeforeAll;
// tag::MethodOrderer[]
import org.junit.jupiter.api.MethodOrderer.OrderAnnotation;
// end::MethodOrderer[]
import org.junit.jupiter.api.Order;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.TestMethodOrder;

// tag::TestMethodOrder[]
@TestMethodOrder(OrderAnnotation.class)
// end::TestMethodOrder[]
public class InventoryEndpointIT {

  private static String port;
  private static String baseUrl;

  private Client client;

  private final String SYSTEM_PROPERTIES = "system/properties";
  private final String INVENTORY_SYSTEMS = "inventory/systems";

  // tag::BeforeAll[]
  @BeforeAll
  // end::BeforeAll[]
  // tag::oneTimeSetup[]
  public static void oneTimeSetup() {
    port = System.getProperty("http.port");
    baseUrl = "http://localhost:" + port + "/";
  }
  // end::oneTimeSetup[]

  // tag::BeforeEach[]
  @BeforeEach
  // end::BeforeEach[]
  // tag::setup[]
  public void setup() {
    client = ClientBuilder.newClient();
    // tag::JsrJsonpProvider[]
    client.register(JsrJsonpProvider.class);
    // end::JsrJsonpProvider[]
  }
  // end::setup[]

  // tag::AfterEach[]
  @AfterEach
  // end::AfterEach[]
  // tag::teardown[]
  public void teardown() {
    client.close();
  }
  // end::teardown[]

  // tag::tests[]
  // tag::Test1[]
  @Test
  // end::Test1[]
  // tag::Order1[]
  @Order(1)
  // end::Order1[]
  // tag::testHostRegistration[]
  public void testHostRegistration() {
    this.visitLocalhost();

    Response response = this.getResponse(baseUrl + INVENTORY_SYSTEMS);
    this.assertResponse(baseUrl, response);

    JsonObject obj = response.readEntity(JsonObject.class);

    JsonArray systems = obj.getJsonArray("systems");

    boolean localhostExists = false;
    for (int n = 0; n < systems.size(); n++) {
      localhostExists = systems.getJsonObject(n)
                               .get("hostname").toString()
                               .contains("localhost");
      if (localhostExists) {
        break;
      }
    }
    assertTrue(localhostExists,
               "A host was registered, but it was not localhost");

    response.close();
  }
  // end::testHostRegistration[]

  // tag::Test2[]
  @Test
  // end::Test2[]
  // tag::Order2[]
  @Order(2)
  // end::Order2[]
  // tag::testSystemPropertiesMatch[]
  public void testSystemPropertiesMatch() {
    Response invResponse = this.getResponse(baseUrl + INVENTORY_SYSTEMS);
    Response sysResponse = this.getResponse(baseUrl + SYSTEM_PROPERTIES);

    this.assertResponse(baseUrl, invResponse);
    this.assertResponse(baseUrl, sysResponse);

    JsonObject jsonFromInventory = (JsonObject) invResponse.readEntity(JsonObject.class)
        .getJsonArray("systems")
        .getJsonObject(0)
        .get("properties");

    JsonObject jsonFromSystem = sysResponse.readEntity(JsonObject.class);

    String osNameFromInventory = jsonFromInventory.getString("os.name");
    String osNameFromSystem = jsonFromSystem.getString("os.name");
    this.assertProperty("os.name", "localhost", osNameFromSystem,
                        osNameFromInventory);

    String userNameFromInventory = jsonFromInventory.getString("user.name");
    String userNameFromSystem = jsonFromSystem.getString("user.name");
    this.assertProperty("user.name", "localhost", userNameFromSystem,
                        userNameFromInventory);

    invResponse.close();
    sysResponse.close();
  }
  // end::testSystemPropertiesMatch[]

  // tag::Test3[]
  @Test
  // end::Test3[]
  // tag::Order3[]
  @Order(3)
  // end::Order3[]
  // tag::testUnknownHost[]
  public void testUnknownHost() {
    Response response = this.getResponse(baseUrl + INVENTORY_SYSTEMS);
    this.assertResponse(baseUrl, response);

    Response badResponse = client.target(baseUrl + INVENTORY_SYSTEMS + "/"
        + "badhostname").request(MediaType.APPLICATION_JSON).get();

    assertEquals(404, badResponse.getStatus(),
                 "BadResponse expected status: 404. Response code not as expected.");

    String obj = badResponse.readEntity(String.class);

    boolean isError = obj.contains("error");
    assertTrue(isError,
               "badhostname is not a valid host but it didn't raise an error");

    response.close();
    badResponse.close();
  }
  // end::testUnknownHost[]
  // end::tests[]

  private Response getResponse(String url) {
    return client.target(url).request().get();
  }

  private void assertResponse(String url, Response response) {
    assertEquals(200, response.getStatus(), "Incorrect response code from " + url);
  }

  private void assertProperty(String propertyName, String hostname,
                              String expected, String actual) {
    assertEquals(expected, actual, "JVM system property [" + propertyName + "] "
        + "in the system service does not match the one stored in "
        + "the inventory service for " + hostname);
  }

  private void visitLocalhost() {
    Response response = this.getResponse(baseUrl + SYSTEM_PROPERTIES);
    this.assertResponse(baseUrl, response);
    response.close();

    Response targetResponse = client.target(baseUrl + INVENTORY_SYSTEMS
        + "/localhost").request().get();
    targetResponse.close();
  }
}
// end::testClass[]

SystemEndpointIT.java

// tag::copyright[]
/*******************************************************************************
 * Copyright (c) 2017, 2019 IBM Corporation and others.
 * All rights reserved. This program and the accompanying materials
 * are made available under the terms of the Eclipse Public License v1.0
 * which accompanies this distribution, and is available at
 * http://www.eclipse.org/legal/epl-v10.html
 *
 * Contributors:
 *     IBM Corporation - initial API and implementation
 *******************************************************************************/
// end::copyright[]
package it.io.openliberty.guides.system;

import static org.junit.jupiter.api.Assertions.assertEquals;
import javax.json.JsonObject;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.WebTarget;
import javax.ws.rs.core.Response;

import org.apache.cxf.jaxrs.provider.jsrjsonp.JsrJsonpProvider;
import org.junit.jupiter.api.Test;

public class SystemEndpointIT {

  @Test
  public void testGetProperties() {
    String port = System.getProperty("http.port");
    String url = "http://localhost:" + port + "/";

    Client client = ClientBuilder.newClient();
    client.register(JsrJsonpProvider.class);

    WebTarget target = client.target(url + "system/properties");
    Response response = target.request().get();

    assertEquals(200, response.getStatus(), "Incorrect response code from " + url);

    JsonObject obj = response.readEntity(JsonObject.class);

    assertEquals(System.getProperty("os.name"),
                 obj.getString("os.name"),
                 "The system property for the local and remote JVM should match");

    response.close();
  }
}

Running the tests

Since you started Open Liberty in dev mode, press the enter/return key to run the tests. If the tests pass, you see a similar output to the following example:

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running it.io.openliberty.guides.system.SystemEndpointIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.99 sec - in it.io.openliberty.guides.system.SystemEndpointIT
Running it.io.openliberty.guides.inventory.InventoryEndpointIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.325 sec - in it.io.openliberty.guides.inventory.InventoryEndpointIT

Results :

Tests run: 4, Failures: 0, Errors: 0, Skipped: 0

To see whether the tests detect a failure, change the endpoint for the inventory service in the src/main/java/io/openliberty/guides/inventory/InventoryResource.java file to something else. Then, run the tests again to see that a test failure occurs.
When you are done checking out the service, exit development mode by pressing CTRL+C in the command-line session where you ran the server, or by typing q and then pressing the enter/return key.

Great work! You're done!

You just used CDI services in Open Liberty to build a simple inventory application.

Guide Attribution

Injecting dependencies into microservices by Open Liberty is licensed under CC BY-ND 4.0
__label__pos
0.899225
Index: /trunk/GSASIIddataGUI.py =================================================================== --- /trunk/GSASIIddataGUI.py (revision 4079) +++ /trunk/GSASIIddataGUI.py (revision 4080) @@ -28,4 +28,5 @@ import GSASIIctrlGUI as G2G import numpy as np +import numpy.linalg as nl WACV = wx.ALIGN_CENTER_VERTICAL @@ -178,4 +179,5 @@ pass Obj.SetValue("%.5f"%(UseList[G2frame.hist]['Size'][4][pid])) #reset in case of error + wx.CallAfter(UpdateDData,G2frame,DData,data,G2frame.hist) else: try: @@ -405,22 +407,44 @@ dataSizer.Add(sizeVal,0,WACV|wx.BOTTOM,5) return dataSizer - + def EllSizeDataSizer(): parms = zip(['S11','S22','S33','S12','S13','S23'],UseList[G2frame.hist]['Size'][4], UseList[G2frame.hist]['Size'][5],range(6)) - dataSizer = wx.FlexGridSizer(0,6,5,5) - for Pa,val,ref,Id in parms: + dataSizer = wx.BoxSizer(wx.VERTICAL) + # dataSizer = wx.FlexGridSizer(0,6,5,5) + matrixSizer = wx.FlexGridSizer(0,6,5,5) + Sij = [] + for Pa,val,ref,id in parms: sizeRef = wx.CheckBox(DData,wx.ID_ANY,label=Pa) sizeRef.thisown = False sizeRef.SetValue(ref) - Indx[sizeRef.GetId()] = [G2frame.hist,Id] + Indx[sizeRef.GetId()] = [G2frame.hist,id] sizeRef.Bind(wx.EVT_CHECKBOX, OnSizeRef) - dataSizer.Add(sizeRef,0,WACV) -# azmthOff = G2G.ValidatedTxtCtrl(G2frame.dataDisplay,data,'azmthOff',nDig=(10,2),typeHint=float,OnLeave=OnAzmthOff) + # dataSizer.Add(sizeRef,0,WACV) + matrixSizer.Add(sizeRef,0,WACV) + # azmthOff = G2G.ValidatedTxtCtrl(G2frame.dataDisplay,data,'azmthOff',nDig=(10,2),typeHint=float,OnLeave=OnAzmthOff) sizeVal = wx.TextCtrl(DData,wx.ID_ANY,'%.3f'%(val),style=wx.TE_PROCESS_ENTER) - Indx[sizeVal.GetId()] = [G2frame.hist,Id] + # Create Sij matrix + Sij += [val] + Indx[sizeVal.GetId()] = [G2frame.hist,id] sizeVal.Bind(wx.EVT_TEXT_ENTER,OnSizeVal) sizeVal.Bind(wx.EVT_KILL_FOCUS,OnSizeVal) - dataSizer.Add(sizeVal,0,WACV) + # dataSizer.Add(sizeVal,0,WACV) + matrixSizer.Add(sizeVal,0,WACV) + dataSizer.Add(matrixSizer, 0, WACV) + Esize,Rsize = nl.eigh(G2lat.U6toUij(np.asarray(Sij))) + lengths = Esize + G,g = G2lat.cell2Gmat(data['General']['Cell'][1:7]) #recip & real metric tensors + GA,GB = G2lat.Gmat2AB(G) #Orthogonalization matricies + hkls = [x/(sum(x**2)**0.5) for x in np.dot(Rsize, GA)] + Ids = np.argsort(lengths) + dataSizer.Add(wx.StaticText(DData,label=' Principal ellipsoid components:'),0,WACV) + compSizer = wx.FlexGridSizer(3,3,5,5) + Axes = [' Short Axis:',' Middle Axis:',' Long Axis:'] + for Id in Ids: + compSizer.Add(wx.StaticText(DData,label=Axes[Id]),0,WACV) + compSizer.Add(wx.StaticText(DData,label='(%.3f, %.3f, %.3f) '%(hkls[Id][0], hkls[Id][1], hkls[Id][2])),0,WACV) + compSizer.Add(wx.StaticText(DData,label='Length: %.3f'%lengths[Id]),0,WACV) + dataSizer.Add(compSizer) return dataSizer Index: /trunk/GSASIIpwd.py =================================================================== --- /trunk/GSASIIpwd.py (revision 4079) +++ /trunk/GSASIIpwd.py (revision 4080) @@ -1112,10 +1112,10 @@ def ellipseSize(H,Sij,GB): - 'needs a doc string' + 'Implements r=1/sqrt(sum((1/S)*(q.v)^2) per note from Alexander Brady' HX = np.inner(H.T,GB) lenHX = np.sqrt(np.sum(HX**2)) Esize,Rsize = nl.eigh(G2lat.U6toUij(Sij)) - R = np.inner(HX/lenHX,Rsize)*Esize #want column length for hkl in crystal - lenR = np.sqrt(np.sum(R**2)) + R = np.inner(HX/lenHX,Rsize)**2*Esize #want column length for hkl in crystal + lenR = 1./np.sqrt(np.sum(R)) return lenR
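For readers who want to sanity-check the corrected ellipseSize formula outside of GSAS-II, here is a small illustrative sketch. It is written in Java rather than the project's Python, and it assumes that esize and rsize are the eigenvalues and eigenvector columns of the 3x3 Sij tensor (with eigenvector i paired to eigenvalue i) and that hx is the already-normalized reciprocal-space direction:

    public final class EllipseSizeSketch {

      // Implements r = 1 / sqrt( sum_i esize[i] * (hx . axis_i)^2 ),
      // mirroring the patched lenR = 1./np.sqrt(np.sum(R)) above.
      static double ellipseSize(double[] hx, double[] esize, double[][] rsize) {
        double sum = 0.0;
        for (int i = 0; i < 3; i++) {
          // Projection of hx onto the i-th principal axis (column i of rsize;
          // the row/column convention is an assumption here).
          double proj = hx[0] * rsize[0][i]
                      + hx[1] * rsize[1][i]
                      + hx[2] * rsize[2][i];
          sum += proj * proj * esize[i];
        }
        return 1.0 / Math.sqrt(sum);
      }
    }

Note how this differs from the pre-patch code, which summed the squared column lengths directly instead of weighting the squared projections and inverting the square root.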
__label__pos
0.941024
Guest

Two trains, each having a speed of 30 km/h, are headed at each other on the same straight track. A bird that can fly 60 km/h flies off the front of one train when they are 60 km apart and heads directly for the other train. On reaching the other train, the bird flies directly back to the first train and so forth. What is the total distance the bird travels before the trains collide?

Grade: 12th Pass

5 Answers

aakash (33 Points, 8 years ago):
60 km, over an infinite number of trips.

Pushkala Krishnan (18 Points, 8 years ago):
Please post the method.

PRAJVAL98 (18 Points, 8 years ago):
The relative velocity of one train with respect to the other is 30 - (-30) = 60 km/h. The relative distance is 60 km, therefore the time taken for the collision is 60 km / 60 km/h = 1 hr. The total distance the bird will cover in 1 hr = 60 km/h * 1 h = 60 km.

Anjali (11 Points, 5 years ago):
Each train has a speed of 30 km/hr. Their initial relative distance is 60 km. The bird flies off the front of the first train when both trains are 60 km apart. Speed of the bird = 60 km/hr.

Relative distance = Relative speed * Time. Relative speed is the sum of the speeds of two objects when they are moving towards each other or in opposite directions, and it is the difference of the speeds when they are moving in the same direction.

Here, the time taken for the bird to meet the second train coming in the opposite direction = 60 km / (30 + 60) km/hr = 60/90 hr = 2/3 hour.
Distance travelled by the bird when it meets the second train = 2/3 hr * 60 km/hr = 40 km.
Distance travelled by the second train = 2/3 hr * 30 km/hr = 20 km.
So, by the time the bird reaches the second train, the bird has travelled 40 km and the train 20 km.
Distance travelled by the first train in 2/3 hr = 2/3 hr * 30 km/hr = 20 km.
Now the bird sets off from the second train at 60 km/hr, at a distance of 20 km from the first train, since their relative separation has become 20 km {40 km (distance travelled by the bird on reaching train B) - 20 km (distance travelled by train A)}.
Time taken for the bird to reach back to train A = Relative distance (20 km) / Relative speed (60 km/hr + 30 km/hr) = 20/90 hr = 2/9 hr.
Distance travelled by the bird on this leg = 2/9 hr * 60 km/hr = 40/3 km.

Crisp dunk (26 Points, one year ago):
The answer will be 60 km and it'll take the bird an infinite number of trips to do that.

In its first flight, the bird will meet the train in time t1 = 2/3 hr and travel a distance S1 = 40 km. During this time period the trains would also cover a distance of 20 km each; now the remaining distance between them is 20 km.

In the second flight, the bird will meet the train in time t2 = 2/9 hr and will travel a distance S2 = 40/3 km. During this time period the trains would also cover a distance of 20/3 km each, and now the remaining distance between them would be 20/3 km.

Similarly we will get a pattern of infinite trips: t3 = 2/27 hr and S3 = 40/9 km, t4 = 2/81 hr and S4 = 40/27 km, and so on. This is an infinite GP. The total distance will be (40 + 40/3 + 40/9 + 40/27 + ...) km = 60 km.
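Both solution routes, the relative-speed shortcut and the trip-by-trip geometric series, can be checked with a few lines of code. A quick sketch:

    public class BirdAndTrains {
      public static void main(String[] args) {
        double trainSpeed = 30.0;  // km/h, each train
        double birdSpeed = 60.0;   // km/h
        double gap = 60.0;         // km, initial separation

        // Shortcut: the trains close at 60 km/h, so they collide after
        // 1 h, and the bird flies at 60 km/h for that same hour.
        double time = gap / (trainSpeed + trainSpeed);
        System.out.println("Bird distance = " + (birdSpeed * time) + " km");

        // Trip-by-trip geometric series 40 + 40/3 + 40/9 + ... which
        // converges to the same 60 km.
        double total = 0.0;
        double leg = 40.0;
        for (int i = 0; i < 50; i++) {
          total += leg;
          leg /= 3.0;
        }
        System.out.println("Series sum ~= " + total + " km");
      }
    }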
__label__pos
0.999349
How do you undo an action in Notion?

How do I restore a deleted property in Notion?
You can view the content of pages you've deleted by clicking on them in your trash. They'll appear with a red bar that gives you the option to restore or delete them permanently.

What is the use of the undo and redo actions?
The undo function is used to reverse a mistake, such as deleting the wrong word in a sentence. The redo function restores any actions that were previously undone using an undo.

How do you undo a mistake?
Ctrl+Z (or Command+Z on a Mac) is a common keyboard shortcut for Undo. Usually, programs with the Undo function keep track of not just your most recent change but an entire series of your most recent changes.

What does Ctrl Y do?
To reverse your last Undo, press CTRL+Y. You can reverse more than one action that has been undone. You can use the Redo command only after an Undo command.

How do I remove a Timeline from Notion?
There's a "delete" button in the "…" menu at the top right of your database! Or, use the "…" next to the database's name in the sidebar, then "delete."

How do I delete a workspace in Notion?
Delete a workspace:
1. Click on Settings & Members in your left sidebar. Then click on the Settings tab.
2. Scroll down to the heading Danger zone. Click Delete entire workspace. You'll be asked to type the name of your workspace just to double check that this is what you want to do.

How does undo work?
Undo is an interaction technique which is implemented in many computer programs. It erases the last change done to the document, reverting it to an older state. In some more advanced programs, such as graphic processing, undo will negate the last command done to the file being edited.

How do you undo something?
To undo an action, press Ctrl + Z. To redo an undone action, press Ctrl + Y.

How are undo and redo actions related to each other (Class 9)?
Answer: The undo function is used to reverse a mistake, such as deleting the wrong word in a sentence. The redo function restores any actions that were previously undone using an undo.

Is there an Undo button in Notion?
Notion on Twitter: "You can use Control + Command + Z to undo … "

How do I undo in Notion on Windows?
How to Undo in Notion:
1. Mac: CMD + Z.
2. Windows: CTRL + Z.
3. iOS: Shake the iPhone back and forth and select Undo.

How do I change shortcuts in Notion?
Keyboard Shortcuts for Notion:
1. Create a new page: ⌘ N.
2. Open a new window: ⌘ ⇧ N.
3. Open search: ⌘ P.
4. Go back a page: ⌘ [
5. Go forward a page: ⌘ ]
6. Switch to dark mode: ⌘ ⇧ L.
__label__pos
1
74. Netty series: HashedWheelTimer, an efficient timer implementation

Introduction

A timer is a very common and useful tool in real-world applications. Its principle is to sort the tasks to be executed by their execution time and then run each one at its scheduled moment. JAVA provides several timer utilities such as java.util.Timer and java.util.concurrent.ScheduledThreadPoolExecutor, but these tools still have some shortcomings in execution efficiency, so netty provides HashedWheelTimer, an optimized Timer class. Let's take a look at what makes netty's timer different.

java.util.Timer

Timer was introduced in JAVA 1.3. All tasks are stored in its internal TaskQueue:

    private final TaskQueue queue = new TaskQueue();

The underlying storage of TaskQueue is an array of TimerTask objects that holds the tasks to be executed:

    private TimerTask[] queue = new TimerTask[128];

TaskQueue looks like a plain array, but Timer organizes this queue as a balanced binary heap.

When a TimerTask is added, it is inserted at the end of the queue, and the fixUp method is then called to rebalance:

    void add(TimerTask task) {
        // Grow backing store if necessary
        if (size + 1 == queue.length)
            queue = Arrays.copyOf(queue, 2*queue.length);

        queue[++size] = task;
        fixUp(size);
    }

When the running task is removed from the heap, the fixDown method is called to rebalance:

    void removeMin() {
        queue[1] = queue[size];
        queue[size--] = null;  // Drop extra reference to prevent memory leak
        fixDown(1);
    }

The principle of fixUp is to compare the current node with its parent node; if it is smaller than the parent, the two are swapped, and this process repeats up the heap:

    private void fixUp(int k) {
        while (k > 1) {
            int j = k >> 1;
            if (queue[j].nextExecutionTime <= queue[k].nextExecutionTime)
                break;
            TimerTask tmp = queue[j];
            queue[j] = queue[k];
            queue[k] = tmp;
            k = j;
        }
    }

The principle of fixDown is to compare the current node with its child nodes; if the current node is larger than a child, it is demoted:

    private void fixDown(int k) {
        int j;
        while ((j = k << 1) <= size && j > 0) {
            if (j < size &&
                queue[j].nextExecutionTime > queue[j+1].nextExecutionTime)
                j++; // j indexes smallest kid
            if (queue[k].nextExecutionTime <= queue[j].nextExecutionTime)
                break;
            TimerTask tmp = queue[j];
            queue[j] = queue[k];
            queue[k] = tmp;
            k = j;
        }
    }

The balanced-binary-heap algorithm itself is not covered in detail here; interested readers can look up related articles.

java.util.concurrent.ScheduledThreadPoolExecutor

Although Timer is already quite usable, and thread safe, submitting a task to it requires creating a TimerTask class to wrap the concrete task, which is not very general. So JDK 5.0 introduced the more general ScheduledThreadPoolExecutor, a thread pool that uses multiple threads to execute the concrete tasks. When the number of threads in the pool equals 1, ScheduledThreadPoolExecutor behaves the same as Timer.

ScheduledThreadPoolExecutor stores its tasks in a DelayedWorkQueue. Like DelayQueue and PriorityQueue, DelayedWorkQueue is a heap-based data structure. Because a heap constantly needs siftUp and siftDown rebalancing operations, its time complexity is O(log n). Below is the implementation of DelayedWorkQueue's siftUp and siftDown:

    private void siftUp(int k, RunnableScheduledFuture<?> key) {
        while (k > 0) {
            int parent = (k - 1) >>> 1;
            RunnableScheduledFuture<?> e = queue[parent];
            if (key.compareTo(e) >= 0)
                break;
            queue[k] = e;
            setIndex(e, k);
            k = parent;
        }
        queue[k] = key;
        setIndex(key, k);
    }

    private void siftDown(int k, RunnableScheduledFuture<?> key) {
        int half = size >>> 1;
        while (k < half) {
            int child = (k << 1) + 1;
            RunnableScheduledFuture<?> c = queue[child];
            int right = child + 1;
            if (right < size && c.compareTo(queue[right]) > 0)
                c = queue[child = right];
            if (key.compareTo(c) <= 0)
                break;
            queue[k] = c;
            setIndex(c, k);
            k = child;
        }
        queue[k] = key;
        setIndex(key, k);
    }
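For reference, here is a minimal usage sketch of these two JDK schedulers (illustrative only; the delays are arbitrary):

    import java.util.Timer;
    import java.util.TimerTask;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class JdkTimerDemo {
      public static void main(String[] args) {
        // java.util.Timer: the task must be wrapped in a TimerTask.
        Timer timer = new Timer();
        timer.schedule(new TimerTask() {
          @Override public void run() {
            System.out.println("Timer fired");
            timer.cancel();  // stop the timer thread so the JVM can exit
          }
        }, 1000L);

        // ScheduledThreadPoolExecutor: any Runnable can be scheduled.
        ScheduledExecutorService pool = Executors.newScheduledThreadPool(2);
        pool.schedule(() -> System.out.println("Pool fired"), 1, TimeUnit.SECONDS);
        pool.shutdown();
      }
    }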
HashedWheelTimer

Because Timer and ScheduledThreadPoolExecutor are both heap-based underneath, even though ScheduledThreadPoolExecutor improves on Timer, the efficiency of the two is roughly the same. Is there a more efficient approach? Can O(1), for example, be achieved?

We know that hashing gives efficient O(1) lookup. Imagine we had a clock face with an unlimited number of ticks, and we assigned the tasks to be executed to those ticks in order of their delay; every time the clock advances one tick, the tasks stored at that tick can be executed. This algorithm is called the Simple Timing Wheel algorithm.

But this is a theoretical algorithm, because it is impossible to allocate a tick for every possible delay length; doing so would waste a large amount of memory. So we can make a compromise: first process the delay length with a hash, which shrinks the base of the delay values. In this example we choose 8 as the base: the delay is divided by 8, the remainder becomes the hash slot, and the quotient becomes the node's value. On each polling pass, the node's value is decremented by one. When the node's value reaches 0, the node can be taken out and executed. This algorithm is called HashedWheelTimer.

netty provides an implementation of this algorithm:

    public class HashedWheelTimer implements Timer

HashedWheelTimer uses an array of HashedWheelBucket to store the concrete TimerTasks:

    private final HashedWheelBucket[] wheel;

First, look at the method that creates the wheel:

    private static HashedWheelBucket[] createWheel(int ticksPerWheel) {
        //ticksPerWheel may not be greater than 2^30
        checkInRange(ticksPerWheel, 1, 1073741824, "ticksPerWheel");

        ticksPerWheel = normalizeTicksPerWheel(ticksPerWheel);
        HashedWheelBucket[] wheel = new HashedWheelBucket[ticksPerWheel];
        for (int i = 0; i < wheel.length; i ++) {
            wheel[i] = new HashedWheelBucket();
        }
        return wheel;
    }

We can customize the number of ticks in the wheel, but ticksPerWheel may not be greater than 2^30. The ticksPerWheel value is then normalized up to the next power of two, and a HashedWheelBucket array with ticksPerWheel elements is created.

Note that although the wheel as a whole is a hash structure, each element of the wheel, that is, each HashedWheelBucket, is a linked structure. Each element of a HashedWheelBucket is a HashedWheelTimeout. HashedWheelTimeout has a remainingRounds field that records how much longer this Timeout element will remain in the bucket:

    long remainingRounds;

Summary

netty's HashedWheelTimer provides a more efficient timer. Give it a try.

For more content, see www.flydean.com.
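A short usage sketch of the class discussed above (the tick duration and wheel size shown are arbitrary choices):

    import java.util.concurrent.TimeUnit;
    import io.netty.util.HashedWheelTimer;
    import io.netty.util.Timeout;
    import io.netty.util.Timer;
    import io.netty.util.TimerTask;

    public class WheelTimerDemo {
      public static void main(String[] args) throws InterruptedException {
        // 100 ms per tick, 512 buckets on the wheel.
        Timer timer = new HashedWheelTimer(100, TimeUnit.MILLISECONDS, 512);

        // Schedule a one-shot task one second in the future.
        timer.newTimeout(new TimerTask() {
          @Override
          public void run(Timeout timeout) {
            System.out.println("timeout fired");
          }
        }, 1, TimeUnit.SECONDS);

        Thread.sleep(2000);
        timer.stop();  // release the worker thread
      }
    }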
__label__pos
0.883133
Media and Topologies part 1

Mesh topologies use routers to determine the best path. A line break at any point along the trunk cable will result in total network failure.
In a ring topology network computers are connected by a single loop of cable, the data signals travel around the loop in one direction, passing through each computer. FDDI networking technologies. In bus topologies, all computers are connected to a single cable or "trunk or backbone", by a transceiver either directly or by using a short drop cable. All ends of the cable must be terminated, that is plugged into a device such as a computer or terminator. When computers are connected to a cable that forms a continuous loop this is called a ring topology. Wireless and wired devices can coexist on the same network. Because each device has a point-to-point connection to every other device, mesh topologies are the most expensive and difficult to maintain. Token Ring. In an effort to provide a solution to this problem, some network implementations such as FDDI support the use of a double-ring. Fiber Distributed Data Interface, shares many of the same features as token ring, such as a token passing, and the continuous network loop configuration. If one computer fails the network will continue to function, but if a hub fails all computers connected to it will also be affected. Forwarded from device to device or port to port on a hub in a closed loop.{/INSERTKEYS}{/PARAGRAPH} While the computer is listening for a data signal, that would be the carrier sense part. But FDDI has better fault tolerance because of its use of a dual, counter-rotating ring that enables the ring to reconfigure itself in case of a link failure. If there is a line break, or if you are adding or removing a device anywhere in the ring this will bring down the network. The token is passed from computer to computer until it gets to a computer that has data to send. An Access points also have at least one fixed Ethernet port to allow the wireless network to be bridged to a traditional wired Ethernet network.. Before they can transmit data they must wait for a free token, thus token passing does not allow two or more computers to begin transmitting at the same time. A Mesh topology Provides each device with a point-to-point connection to every other device in the network. Computers on a bus only listen for data being sent they do not move data from one computer to the next, this is called passive topology. Ring topology is an active topology because each computer repeats boosts the signal before passing it on to the next computer. In this topology management of the network is made much easier such as adding and removing devices , because of the central point. Access points act as wireless hubs to link multiple wireless NICs into a single subnet. The number of computers on a bus network will affect network performance, since only one computer at a time can send data, the more computers you have on the network the more computers there will be waiting send data. When all devices attached to the dual ring are functioning properly, data travels on only one ring. A type of media access control. A wireless network consists of wireless NICs and access points. Because most star topologies use twisted-pair cables, the initial installation of star networks is also easier. Mesh networks provide redundancy, in the event of a link failure, meshed networks enable data to be routed through any other site connected to the network. 
Collision detection indicates that the computers are also listening for collisions, if two computers try to send data at the same time and a collision occurs, they must wait a random period of time before transmitting again. Most bus topologies use coax cables. The IEEE Token Ring computers are situated on a continuous network loop. These are most commonly used in WAN's, which connect networks over telecommunication links. However because it is centralized more cable is required. One method of transmitting data around a ring is called token passing. Signal Propagation Method. Computers in a star topology are connected by cables to a hub. Data travels in one direction on the outer strand and in the other direction on the inner strand. Multiple access means, there are multiple computers trying to access or send data on the network at the same time. Star topologies are, or are becoming the topology of choice for networks. Each device in the ring attaches to the adjacent device using a two stranded fiber optic cable. FDDI transmits data on the second ring only in the event of a link failure. Hierarchical or cascading star. A Token Ring controls access to the network by passing a token, from one computer to the next. If no other computer is transmitting, the computer can then send its data. Maximum Connections. {PARAGRAPH}{INSERTKEYS}Media and Topologies part 1. If computers are connected in a row, along a single cable this is called a bus topology, if they branch out from a single junction or hub this is known as a star topology. If the primary ring breaks, or a device fails, the secondary ring can be used as a backup.
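The carrier-sense and collision-detection cycle described above can be summarized in code. This is purely illustrative: the Medium and Frame types below are hypothetical stand-ins, and real Ethernet adapters implement this logic in hardware:

    import java.util.Random;

    public class CsmaCdSketch {
      private static final Random RND = new Random();

      // Send one frame using the listen / transmit / back-off cycle.
      static void send(Frame frame, Medium medium) throws InterruptedException {
        int attempt = 0;
        while (true) {
          while (medium.busy()) {
            // Carrier sense: wait until no other station is transmitting.
            Thread.sleep(1);
          }
          if (medium.transmit(frame)) {
            return;  // no collision detected
          }
          // Collision: wait a random back-off period, then try again.
          attempt++;
          Thread.sleep(RND.nextInt(1 << Math.min(attempt, 10)));
        }
      }

      interface Medium {               // hypothetical shared-cable model
        boolean busy();
        boolean transmit(Frame frame); // true if sent without collision
      }

      static final class Frame { }
    }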
__label__pos
0.659404
Javascript: The Definitive Guide

Chapter 4
4. Expressions and Operators

Contents:
Expressions
Operator Overview
Arithmetic Operators
Comparison Operators
String Operators
Logical Operators
Bitwise Operators
Assignment Operators
Miscellaneous Operators

Expressions and operators are fundamental to most programming languages. This chapter explains how they work in JavaScript. If you are familiar with C, C++, or Java, you'll notice that expressions and operators in JavaScript are very similar, and you'll be able to skim this chapter quickly. If you are not a C, C++, or Java programmer, this chapter will teach you what you need to know about expressions and operators in JavaScript.

4.1 Expressions

An expression is a "phrase" of JavaScript that a JavaScript interpreter can evaluate to produce a value. Simple expressions are constants (e.g., string or numeric literals) or variable names, like these:

    1.7                              // a numeric literal
    "Oh no! We're out of coffee!"    // a string literal
    true                             // a Boolean literal
    null                             // the literal null value
    i                                // the variable i
    sum                              // the variable sum

The value of a constant expression is simply the constant itself. The value of a variable expression is the value that the variable refers to. These expressions are not particularly interesting. More complex (and interesting) expressions can be created by combining simple expressions. For example, we saw that 1.7 is an expression and i is an expression, so the following is also an expression:

    i + 1.7

The value of this expression is determined by adding the values of the two simpler expressions. The plus sign in this example is an operator that is used to combine two expressions into a more complex expression. Another operator is - which is used to combine expressions by subtraction. For example:

    (i + 1.7) - sum

This expression uses the - operator to subtract the value of the sum variable from the value of our previous expression i + 1.7. JavaScript supports a number of other operators, besides + and -, which we'll learn about in the next section.
__label__pos
0.792911
What Is DoD In Jira?

Why do we use Jira?
Jira is an issue-tracking tool that's mainly used by software developers to track, organize, and prioritize bugs, new features, and improvements for certain software releases. Here at K15t Software, we carefully organize the development process for every Scroll add-on: Bug or Documentation tasks.

What is done in agile?
Being done in agile means that the team is aware of what is expected of them to deliver and they have delivered that. Done is a means of transparency. It makes sure that the quality of the work fits the purpose of the product and the organization.

What does DoD stand for in Scrum?
Consistent acceptance criteria. Each Scrum Team has its own Definition of Done or consistent acceptance criteria across all User Stories. A Definition of Done drives the quality of work and is used to assess when a User Story has been completed.

Who accepts user stories in agile?
Anyone can write user stories. It's the product owner's responsibility to make sure a product backlog of agile user stories exists, but that doesn't mean that the product owner is the one who writes them. Over the course of a good agile project, you should expect to have user story examples written by each team member.

What are the features of JIRA?
• The ability to plan agile work from project backlog to sprints.
• Fully customizable Kanban and Scrum boards.
• The ability to estimate time for issues as you prioritize your backlog.
• Robust reporting features, ranging from burndown charts to velocity measurements.
• Customizable workflows to fit your frameworks.

Who leads scrum of scrums?
This may involve two or more teams working together for a time, re-negotiating areas of responsibility, and so forth. To keep track of all of this, it is important that the Scrum of Scrums have a Product Backlog of its own to be maintained by the Chief ScrumMaster.

What does Jira stand for?
This software is used for bug tracking, issue tracking, and project management. The name "JIRA" is actually inherited from the Japanese word "Gojira" which means "Godzilla". The basic use of this tool is to track issues and bugs related to your software and mobile apps. It is also used for project management.

What is the difference between DoD and DoR?
DoR, from a scrum team perspective, is a story ready to be pulled into a sprint to work on without further refinement. DoD, from a scrum team perspective, is a story whose work has been completed and which is ready to deploy into production without further ado, if the PO so decides.

What is Sprint zero in Scrum-based Agile?
A Sprint 0 is the name often given to a short effort to create a vision and a rough product backlog which allows creating an estimation of a product release.

Who defines DoD in agile?
Per the Scrum Guide, the Dev Team defines the DoD ONLY when the DoD is not laid out by the Development Organization. Basically, if the organization set the DoD, then the Scrum Team's DoD would match the DoD put forth by the organization.

Who writes scrum criteria?
Generally, acceptance criteria are initiated by the product owner or stakeholder. They are written prior to any development of the feature. Their role is to provide guidelines for a business or user-centered perspective. However, writing the criteria is not solely the responsibility of the product owner.

Do bugs need acceptance criteria?
A bug or a defect is the result of a missed acceptance criterion or an erroneous implementation of a piece of functionality, usually traced back to a coding mistake.
Furthermore, a bug is a manifestation of an error in the system and is a deviation from the expected behaviour.

Who uses Jira?
According to Atlassian, Jira is used for issue tracking and project management by over 75,000 customers in 122 countries.

What is DoD and DoR in agile?
Definition of Done (DoD): A sprint is a time-boxed development cycle that takes high-priority items off the Sprint Backlog and turns them into a product increment. If developers work off of insufficiently detailed or defined user stories, they are unlikely to produce high quality code. …

What is the difference between DoD and acceptance criteria?
Definition of done is defined up front before development begins, and applies to all user stories within a sprint, whereas acceptance criteria are specific to one particular feature and can be decided on much later, just before or even iteratively during development.
__label__pos
0.999803
Choosing the Right Laptop for Your Needs

When it comes to selecting a laptop, it is important to find the perfect match for your specific requirements. With so many options available in the market, understanding your needs and priorities is crucial. This article aims to guide you through the process of choosing the right laptop for your needs.

Consider Your Usage

Before making a decision, it is essential to consider how you plan to use your laptop. Are you primarily going to use it for work, gaming, multimedia purposes, or a combination of these? Understand the demands of your usage as this will help you narrow down the options and make an informed decision.

Determine Your Budget

Setting a budget is a crucial step in finding the right laptop. Laptops come in a wide price range, with various features and specifications. Decide how much you are willing to invest and explore laptops within that price range. Remember, a higher price does not always equate to better performance.

Operating System: Windows, Mac, or Linux?

One of the first decisions you need to make is choosing the right operating system (OS) for your laptop. Windows, macOS, and Linux are the three main options available. Each has its own benefits and drawbacks. Windows is widely used and offers compatibility, variety, and a user-friendly interface. MacBooks are known for their sleek design and stability, while Linux provides customization and security. Consider your familiarity and preferences when selecting an OS.

Screen Size and Portability

The screen size of your laptop is another important factor to consider. If you plan to use it on the go, a smaller and lighter laptop may be more suitable. However, if your usage involves working with detailed graphics or multitasking, a larger screen would be beneficial. Additionally, consider the weight and overall portability of the laptop, especially if you travel frequently.

Specification Requirements

The specifications of a laptop determine its performance capabilities. Consider the following key specifications:

Processor
The processor is the brain of your laptop. It determines the speed and efficiency of your tasks. Intel and AMD are the leading processor manufacturers, each offering a range of options. Look for a processor that suits your performance needs.

RAM
Random Access Memory (RAM) affects the laptop's ability to handle multiple tasks simultaneously. 8GB or more is generally recommended for smooth multitasking, photo editing, and gaming.

Storage
Decide between a traditional Hard Disk Drive (HDD) or a faster Solid State Drive (SSD) for storage. SSDs provide faster data access and better durability, whereas HDDs offer larger storage capacities at a lower cost.

Graphics
If you plan to use your laptop for gaming or video editing, a dedicated graphics card is essential. Integrated graphics are suitable for general usage and multimedia consumption.

Additional Features

Consider any additional features that may be important to you. These could include touchscreen capability, backlit keyboards, fingerprint scanners, or even the presence of certain ports such as USB-C or HDMI. These features add convenience and functionality to your laptop, but may also increase the overall cost.

Research and Reviews

Before finalizing your decision, conduct thorough research and read reviews on the shortlisted laptops. Evaluate the opinions of experts and fellow users to gain insights into their experiences. This will help you make an informed decision and avoid potential regrets later.
Warranty and Customer Support

Always check the warranty offered by the manufacturer and the availability of customer support. Accidents and technical issues can occur, and having reliable support is essential for timely assistance and peace of mind.

Conclusion

Choosing the right laptop requires careful consideration of your usage, budget, operating system, specifications, and additional features. By understanding your needs and priorities, conducting thorough research, and reading reviews, you can find the perfect laptop that suits your requirements. Remember, a laptop is an investment, so take your time and make an informed decision.
__label__pos
0.842045
inbitcoin SDK for JavaScript

The inbitcoin SDK for JavaScript provides a set of client-side functionality that enables you to accept bitcoin as a payment method. The SDK works on both desktop and mobile web browsers. This quickstart will show you how to set up the SDK.

Basic Setup

Include the integration code in your HTML on each page you want to load the SDK, directly after the opening <body> tag.

    <script>
      (function(c, o, i, n) {
        var js, fjs = c.getElementsByTagName(o)[0];
        if (c.getElementById(i)) return;
        js = c.createElement(o); js.id = i;
        js.src = "https://inbitcoin.it/static/sdk/v.2/sdk.js";
        fjs.parentNode.insertBefore(js, fjs);
        js.onload=function () {INBIT.init(this, n);}
      }(document, 'script', 'inbitcoin-jssdk', {clientid:'your-client-id'}));
    </script>

The code will asynchronously load the SDK. The async load means that it does not block loading other elements of your page.

Making a standard HTML payment button

The inbitcoin SDK automatically detects and handles tags with className = "inbit-btnpay".

    <a href="#" data-price="10" data-currency="EUR" class="inbit-btnpay" title="Pay in bitcoin">
      <img src="https://inbitcoin.it/static/img/btnpay.png" width="96" height="48" />
    </a>

Order-id and onpaid-url

You can add a data-order-id parameter to track orders. After payment, send the user to a "thank you" page with data-onpaid-url.

    <a href="#" class="inbit-btnpay" data-price="10" data-currency="EUR" data-order-id="1" data-onpaid-url="/thankyou.html">Pay in bitcoin</a>
__label__pos
0.848739
10 $\begingroup$ I'm trying to find two compact, nonhomeomorphic subsets of the plane, say $X$ and $Y$, such that $X \times [0,1]$ is homeomorphic to $Y \times [0,1]$. I can not think of how a homeomorphism arises when you product with the interval. $\endgroup$ • $\begingroup$ +1 Interesting question. Out of curiosity: why do you ask for compact sets? Do you know an example for non-compact ones? $\endgroup$ – M. Winter Dec 6 '17 at 9:40 • 2 $\begingroup$ The closest example I have found is here: Let $X$ be the torus with a hole and $Y$ be a disc with two holes. Then $X \times [0,1] \approx Y \times[0,1]$ as they are both solids in $\Bbb R^3$ bounded by a sphere with two handles. However for obvious reasons $X$ is not a subset of the plane... $\endgroup$ – Ali Caglayan Dec 13 '17 at 0:00 5 $\begingroup$ This CW answer is supposed to kick this question from the unanswered queue. I strictly follow the approach mentioned in What to do with questions that are exact duplicates from MathOverflow? There are indeed counterexamples to which Igor Belegradek gave a reference. Here is another counterexample in the plane, perhaps the simplest there is: Let $X$ be an annulus with one arc attached to one of its boundary components and another arc attached to the other boundary component, and $Y$ - an annulus with two disjoint arcs attached to the same one of its boundary components. enter image description here The above answer is written by @WlodekKuperberg MO link: Is it true that $X\times I\sim Y\times I\implies X\sim Y$? $\endgroup$ Your Answer By clicking “Post Your Answer”, you agree to our terms of service, privacy policy and cookie policy Not the answer you're looking for? Browse other questions tagged or ask your own question.
__label__pos
0.962384
Document analyzer Examples of analyzing documents are defined. In an example, a request to analyze a document may be received. A knowledge model corresponding to a guideline associated with the document may be obtained. The knowledge model may include at least one of a hypothetical question and a logical flow to determine an inference to the hypothetical question. The hypothetical question relates to an element of the guideline. Based on the knowledge model, data from the document may be extracted for analysis using an artificial intelligence (AI) component. The Ai component may be configured to extract and analyze data, based on the knowledge model. Based on the analysis, a report indicating whether the document falls within a purview of the guideline may be generated. Skip to: Description  ·  Claims  ·  References Cited  · Patent History  ·  Patent History Description BACKGROUND Generally, documents, such as, for example, transactional documents may form an integral part of an enterprise. The transactional documents may govern various tasks, operations, transactions, etc., between different businesses, between a business and a consumer, and between a business and a government. Such transactional documents are often required to be comprehended with respect to certain predefined policies, guidelines, or standards to perform a variety of tasks, such as, for example, for performing asset management directly from the transactional documents, reconciling invoices against the transactional documents, and the like. Traditionally, the transactional documents may be managed by a system, which may perform functions, such as, for example, electronically storing the transactional documents in a database. However, such systems may fail to perform additional analysis, which may involve comprehending or reviewing the transactional documents, such as determining obligations of participating parties, revenue that would generated by the performance of an obligation, transfer of title, etc. Moreover, often times, the policies such as rule based policies may be comprehensive, and reviewing of the transactional documents corresponding to such policies may be cumbersome, resource intensive, and error prone. Furthermore, since each transactional document may be specific to a domain, skilled labour may be required to perform analysis specific to each domain. The present disclosure provides a technical solution to a problem to efficiently and accurately assist systems in comprehending transactional documents. BRIEF DESCRIPTION OF DRAWINGS FIG. 1 illustrates a network environment implementing a document analysis system, according to an example embodiment of the present disclosure. FIG. 2 illustrates various components of the document analysis system, according to an example embodiment of the present disclosure. FIG. 3a illustrates an example of a knowledge model for analyzing a transactional document, according to an example embodiment of the present disclosure. FIG. 3b illustrates an example of another knowledge model for analyzing the transactional document, according to an example embodiment of the present disclosure. FIG. 4 illustrates an example depicting various transactional relationships between two parties, according to an example embodiment of the present disclosure. FIG. 5 illustrates a hardware platform for implementation of the system, according to an example embodiment of the present disclosure. FIG. 
FIG. 6 illustrates a method for generating knowledge model(s) corresponding to a guideline for analyzing a transactional document, according to an example embodiment of the present subject matter.
FIG. 7 illustrates a method for analyzing the transactional document, according to an example embodiment of the present subject matter.

DETAILED DESCRIPTION

For simplicity and illustrative purposes, the present disclosure is described by referring mainly to examples thereof. The examples of the present disclosure described herein may be used together in different combinations. In the following description, details are set forth in order to provide an understanding of the present disclosure. It will be readily apparent, however, that the present disclosure may be practiced without limitation to all these details. Also, throughout the present disclosure, the terms "a" and "an" are intended to denote at least one of a particular element. As used herein, the term "includes" means includes but not limited to, and the term "including" means including but not limited to. The term "based on" means based at least in part on, the term "based upon" means based at least in part upon, and the term "such as" means such as but not limited to.

The present disclosure describes systems and methods for analyzing documents, such as, for example, transactional documents, with respect to predefined guidelines in an efficient and accurate manner. The predefined guidelines may be standards, policies, or rules defined for a domain, such as, for example, an International Financial Reporting Standard (IFRS), which provides a common global language for business operations so that company accounts are understandable and comparable across the globe. Therefore, organizations ensure that documents, such as, for example, transactional documents, adhere to these guidelines, and the transactional documents are analyzed based on the guidelines. Further, the transactional documents may have to be analyzed and comprehended continuously for a variety of reasons, such as, for example, for performing asset management directly from the transactional documents and for reconciling invoices against contracts. However, oftentimes, the guidelines and the transactional documents may be comprehensive and involve complex language, thereby making analysis of the transactional documents with respect to the guidelines cumbersome, time consuming, and resource (computational and otherwise) intensive. The present disclosure provides for efficient analysis of the guidelines, which in turn provides for efficient and accurate analysis of the transactional documents.

In an example embodiment, a guideline analyzer may obtain a guideline to be analyzed. Upon analysis, element(s) of the guideline may be extracted. An element of a guideline may be understood as a process or a factor governing a core principle of the guideline. For example, for IFRS 15, which is a standard providing guidance on accounting for revenue recognition from contracts with customers, extraction of elements may involve identifying a contract with a customer vs. one with a business partner involving an in-kind arrangement; identifying performance obligations, such as a need to deliver goods or services at a certain time to a certain location, in a transactional document; determining a transaction price; and other such elements, such as the existence of loss language and a minimum guaranteed throughput. Upon extracting the element, the guideline analyzer may determine knowledge model(s) defining the extracted element.
The knowledge models may include, for example, hypothetical questions and logical flows to obtain an inference to the hypothetical questions. In one example, the knowledge models may be generated specific to an organization, a product category, etc. For example, the knowledge models may be defined for an organization dealing with non-gas liquid (NGL) products. Alternatively, the knowledge models may be defined generally, with variables which may be adjusted for different organizations, products, and the like. The knowledge models for the guideline may be stored for further use. The knowledge models may be generated semi- or fully automatically using AI techniques and/or natural language processing techniques.

In an example, to analyze a transactional document, a document analyzer may determine a guideline corresponding to the transactional document. For example, a domain of the transactional document may be identified, and a guideline governing transactional documents in the domain may be determined. For the determined guideline, corresponding knowledge models may be obtained. Based on the knowledge models, the document analyzer may extract and analyze data, referred to as knowledge data, from the transactional document. The knowledge data may include relevant or key aspects of the transactional document on which analysis may generally be performed. The knowledge extraction and analysis may involve at least one of geo-spatial entity extraction, sentence similarity, supervised classification, search indices extraction, unsupervised clustering, topic detection, table extraction, entity and relationship extraction, and dependency parsing.

In addition to the knowledge models, case based data may also be used for document analysis. The case based data often includes a case descriptor, which describes the key aspects of a case in the form of a vector. Other similar cases may be retrieved based on the similarity of the case descriptors of those cases. For example, this approach enables the analysis of another transactional document in the same domain. The analysis may include, for example, ascertaining whether the transactional document falls within the purview of a guideline for a domain to which the transactional document belongs. Further, on receiving a user request to interpret an aspect of the transactional document, the document analyzer may also provide an interpretation of that aspect. The aspect may be, for example, whether a transaction price is defined, or whether a minimum transaction price is defined for a case when an obligation is not met.

Accordingly, the present system may intelligently process guidelines to generate knowledge models that efficiently capture relevant and key aspects. Furthermore, the system may then analyze various transactional documents, based on the knowledge models, to accurately interpret the transactional documents. Because the system may capture all relevant elements (processes and/or features) of a guideline, and the subsequent analysis of a transactional document may be performed based on knowledge models corresponding to the elements, the analysis may be substantially free from errors.
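To make the notion of a knowledge model concrete, the following Python sketch shows one possible in-memory representation: a hypothetical question paired with a logic flow and organization-specific variables. This is a minimal illustration only; the class name, fields, and the example question are assumptions of this sketch, not structures prescribed by the disclosure.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional

@dataclass
class KnowledgeModel:
    """One element of a guideline: a hypothetical question plus the
    logical flow used to infer an answer from extracted knowledge data."""
    guideline: str                                   # e.g. "IFRS 15"
    element: str                                     # e.g. "transaction price"
    question: str                                    # the hypothetical question
    # Each step inspects the knowledge data and returns True/False when it
    # is decisive, or None to fall through to the next step of the flow.
    logic_flow: List[Callable[[Dict], Optional[bool]]] = field(default_factory=list)
    variables: Dict[str, str] = field(default_factory=dict)  # org/product tweaks

    def infer(self, knowledge_data: Dict) -> bool:
        for step in self.logic_flow:
            result = step(knowledge_data)
            if result is not None:
                return result
        return False  # no step was decisive; default to a negative inference

# Hypothetical usage: a one-step flow checking extracted section names.
model = KnowledgeModel(
    guideline="IFRS 15",
    element="transaction price",
    question="Is a transaction price defined?",
    logic_flow=[lambda kd: True if "consideration" in kd.get("sections", []) else None],
    variables={"product": "NGL"},
)
print(model.infer({"sections": ["parties", "consideration"]}))  # True
```

Keeping the flow as a list of small predicates mirrors the flow-chart form of the knowledge models described later with reference to FIGS. 3a and 3b.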
Furthermore, the extraction of relevant features and the use of appropriate knowledge models to automatically analyze documents may reduce the CPU (Central Processing Unit) cycles and memory required, as compared with conventional systems. Additionally, because the knowledge models, once generated, may be used for a variety of transactional documents, time and resources may also be better utilized. Furthermore, the system may be configured to address amendments as and when they come, to provide updated knowledge models, thereby ensuring that the transactional documents are analyzed as per the latest guidelines, which may in turn help ensure the accuracy of the analysis. Thus, systems, such as, for example, transaction management systems, which use techniques consistent with the present disclosure may accurately analyze the transactional documents in a time and resource efficient manner.

FIG. 1 illustrates a network environment 100 implementing a document analysis system 105, according to an example implementation of the present disclosure. In an example embodiment, the document analysis system 105 uses Artificial Intelligence (AI) techniques, such as machine learning, data mining, and knowledge discovery, for the purpose of analyzing transactional documents and corresponding guidelines. In an example embodiment, the network environment 100 may be a public network environment, including thousands of individual computers, laptops, various servers, such as blade servers, and other computing devices. In another example embodiment, the network environment 100 may be a private network environment with a limited number of computing devices, such as individual computers, servers, and laptops. Furthermore, the system 105 may be implemented in a variety of computing systems, such as a laptop, a tablet, and the like.

According to an example embodiment, the system 105 is communicatively coupled with a transactional document database 110 and a guideline database 115 through a network 120. In another example, the transactional document database 110 and the guideline database 115 may be integrated with the system 105. The transactional document database 110 may store transactional documents of an organization and metadata pertaining to the transactional documents. The metadata may include, for example, the date on which a document was signed, details pertaining to negotiations, etc. The guideline database 115 may store data relating to guidelines, standards, policies, and rules defined for various domains. In an example, the system 105 may retrieve guidelines from a variety of sources, including third party sources, such as document repositories and other such information sources, data stores, and/or third party applications. The system 105 may further decompose and curate the existing guidelines into multiple knowledge representations and store the data into the guideline database 115 for future use, as is explained in detail later in the description. Further, the guideline database 115 may be periodically updated. For example, new data may be added into the guideline database 115, existing data in the guideline database 115 may be modified, or non-useful data may be deleted from the guideline database 115. In an example embodiment, the network 120 may be a wireless network, a wired network, or a combination thereof.
The network 120 may also be an individual network or a collection of many such individual networks, interconnected with each other and functioning as a single large network, e.g., the Internet or an Intranet. The network 120 may be implemented as one of the different types of networks, such as an Intranet, a Local Area Network (LAN), a Wide Area Network (WAN), the Internet, and the like. Further, the network 120 may include a variety of network devices, including routers, bridges, servers, computing devices, storage devices, and the like.

According to an example embodiment, the system 105 may include a guideline analyzer 125 and a document analyzer 130. The guideline analyzer 125 may analyze a guideline stored in the guideline database 115 to generate a knowledge model for each element of the guideline. It will be appreciated that the term guideline encompasses a guideline, a standard, a regulation, a policy, a rule, a principle, or the like. The guideline may provide guidance on how a document, such as a transactional document, should be defined to meet industry standards. In other words, the guidelines provide basic rules and principles for determining whether a transactional document falls under the purview of current regulations. The guideline analyzer 125 may implement a variety of AI techniques for analyzing the guideline.

In an example, the guideline analyzer 125 may obtain the guideline, such as, for example, IFRS 9 or IFRS 15, to be analyzed from the guideline database 115. Upon obtaining the guideline, the guideline analyzer 125 may extract an element central to the guideline. An element defines a concept, an aspect, a process, or a factor central to the core principle of the guideline. The guideline analyzer 125 may implement techniques, such as, for example, text mining techniques and ontology construction, to extract the elements. In an example, the guideline analyzer 125 may perform deep parsing (dependency parsing) of multiple transactional documents in a knowledge base or an ontology corresponding to a domain to identify entities and the relationships therebetween, based on the frequencies of the relationships across the multiple transactional documents.

Upon extracting the element, the guideline analyzer 125 may generate hypothetical questions and/or logical flows to obtain inferences to the hypothetical questions on analyzing a transactional document. Each knowledge model may provide a logic which may help in determining whether the transactional document includes text corresponding to the element of the guideline. A hypothetical question may be related to the element of the guideline. The knowledge models for the guideline may be stored in the guideline database 115 for further use.

In an example, the guideline analyzer 125 may perform natural language processing tasks, such as, for example, topic modelling and clustering of sentences, words, or sample queries drawn from a guideline, which helps in the identification of important topics/elements pertaining to the guideline. For such topics/elements, a knowledge model may be defined. For example, rule based approaches may be followed for generating logical flows for certain hypothetical questions. For example, semantic similarity may be computed for a training sample and then used to predict the answer to a question (pertaining to the guideline) by identifying similar sentences, i.e., sentences that may be interpreted the same as the training sample; a minimal sketch of such a similarity search is given below.
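The following Python sketch illustrates one simple way such a semantic similarity search could be realized, using tf-idf vectors and cosine similarity from scikit-learn. It is an assumption-laden illustration, not the disclosed implementation: the training sample, the candidate sentences, and the 0.3 threshold are all invented for the example, and a production system might use richer sentence embeddings.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical training sample: a sentence known to answer the question
# "Does title of the product pass at the delivery point?" affirmatively.
training_sample = "Title to the product shall pass to the buyer at the delivery point."

candidates = [
    "Title to the product shall pass to Processor at the delivery point of the plant.",
    "Either party may terminate this agreement upon thirty days written notice.",
    "Seller retains title to all products until redelivery.",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([training_sample] + candidates)

# Row 0 is the training sample; compare every candidate sentence against it.
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()
for sentence, score in zip(candidates, scores):
    if score > 0.3:  # invented threshold for the sketch
        print(f"{score:.2f}  {sentence}")
```

Rule-based flows and similarity scores of this kind are complementary: the flow decides which question is being asked, while the similarity score decides whether a given sentence answers it.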
Accordingly, a logical flow with questions having answers in the affirmative/negative may be defined. The knowledge model generation is explained in detail with the help of an example with reference to the description of FIG. 2.

In operation, a user may provide a request to review, analyze, or comprehend a transactional document. On receiving the request, the document analyzer 130 may obtain the transactional document to be analyzed, for example, from the transactional document database 110. In one example, the document analyzer 130 may identify a domain of the transactional document to determine a guideline corresponding to the domain. In an example, the document analyzer 130 may identify the words in the document based on word-related statistics, such as frequency counts and other statistics (tf-idf scores, etc.), to map the document to a particular domain or knowledge base. For the determined guideline, corresponding knowledge models may be obtained from the guideline database 115.

The document analyzer 130 may extract knowledge data from the transactional document to analyze the transactional document, based on the knowledge models. The knowledge data may include data defining features of the transactional document and data which may be required for analysis with respect to the knowledge models. The document analyzer 130 may include various components (shown in FIG. 2) to perform knowledge extraction and analysis. For example, the document analyzer 130 may perform search indices construction, entity/relationship extraction, topic detection, sentence similarity search, geospatial entity extraction, dependency parsing to extract the parameters associated with the entities, table extraction, extraction of a domain-specific ontology, etc. The knowledge data may be stored in the transactional document database 110.

The document analyzer 130 may then analyze the extracted knowledge data with respect to the knowledge models indicative of the guideline. Additionally, the document analyzer 130 may also analyze the knowledge data with respect to case based data (not shown in figures), which may be stored in the system 105 or separately in a database. The case based data may include details pertaining to the analysis of other transactional documents in the same domain. The knowledge from the analysis performed for a previous case (transactional document) may be used to intelligently analyze a present case. Based on the analysis, the document analyzer 130 may ascertain whether the transactional document falls under the purview of the guideline regulating the transactional document and/or provide an interpretation of an aspect of the transactional document. The details of the analysis of the transactional document are provided in subsequent paragraphs.

FIG. 2 illustrates various components of the system 105, according to an example implementation of the present disclosure. The functioning of the system 105 has been explained in detail with respect to IFRS 15; however, it will be appreciated that the principles described may be applicable to other guidelines as well. As mentioned earlier, the system 105 may be in communication with various data sources, such as the transactional document database 110, the guideline database 115, and the case based data, to obtain transactional documents, guidelines, knowledge models, and domain specific data for analysis. In an example, the system 105 may include the guideline analyzer 125 and the document analyzer 130. The guideline analyzer 125 may include an element extractor 205 and a knowledge model generator 210.
The document analyzer 130 may include a knowledge analyzer 215 and a variety of AI components, such as, for example, a geospatial entity extractor 225, a sentence similarity analyzer 230, a supervised classifier 235, a search indices extractor 240, an unsupervised cluster generator 245, a topic detector 250, a table extractor 255, an entity and relationship extractor 260, and a dependency parser 265.

In one example, the element extractor 205 may obtain a guideline from the guideline database 115. The element extractor 205 may periodically obtain new guidelines or may obtain a guideline on receiving a user input. The element extractor 205, on obtaining the guideline, may extract an element of the guideline. As mentioned earlier, an element of a guideline may correspond to a process, a feature, or a factor central to the core principle of the guideline. The element extractor 205 may implement AI techniques, such as, for example, text mining techniques, natural language processing techniques, etc., for element extraction. For example, for IFRS 15, the core principle may be to recognize revenue to depict the transfer of goods or services. Accordingly, the text of IFRS 15 may be analyzed to identify the following process as the element:

• 1. Identify a transaction with a customer
• 2. Identify performance obligations in the transaction
• 3. Determine a transaction price
• 4. Allocate the transaction price
• 5. Recognize revenue when (or as) a performance obligation is satisfied

Thus, each transactional document which is required to adhere to IFRS 15 needs to satisfy the process outlined above, i.e., each transactional document is to be in accordance with the element of the guideline. The element extractor 205 may store the element in the guideline database 115 or in the system 105.

Upon extracting the element, the knowledge model generator 210 may generate the knowledge model(s) corresponding to the element. In an example, the knowledge models may include hypothetical questions, and for each hypothetical question, a flow chart/logic to determine an inference to the hypothetical question may be defined. In one example, the knowledge models may be generated specific to an organization, a product category, etc. Alternatively, the knowledge models may be defined generally, with variables which may be adjusted for different organizations, products, and the like.

Referring to the example of IFRS 15 above, hypothetical questions pertaining to the process for an organization dealing in non-gas liquid (NGL) products may be, for example:

• 1. Does title of the product pass to the organization at the delivery point?
• 2. Does title of the product pass back to the contract party anywhere?
• 3. Does the contract party retain any products?
• 4. Does the contract party take any in kind election?
• 5. If yes, take in kind for residue, for NGL, fee deduction for Transfer In Kind (TIK)?
• 6. Is consideration paid to the contract party based on proceeds from sales?
• 7. Is the organization required to market products to a specific party or market?
• 8. For the organization, is there a deduction for transportation, fractionation, and storage?
• 9. Does the contract party have the stated right to bypass the plant?
• 10. Is there risk of loss language in the contract?
• 11. Is there a guaranteed throughput volume?
• 12. If yes, does the contract contain deficiency fee terms?
The first six hypothetical questions may relate to an element which corresponds to a process to adjudicate whether the transactional document falls under IFRS 15, and the remaining hypothetical questions may relate to other elements, which may correspond to aspects, such as, for example, determining how the transaction price was set, how the transaction price is calculated, and whether there is any minimum transaction price to be paid when an obligation is not met. Further, for each of the above-mentioned hypothetical questions, a flow chart may be defined to obtain an inference to the hypothetical question. Examples of flow charts are described with reference to the descriptions of FIG. 3a and FIG. 3b. Thus, knowledge models for all the guidelines pertinent to an organization may be efficiently generated so that they may be used for analyzing transactional documents.

In an example, a user may provide a request to analyze, for instance, to interpret, a transactional document in light of a corresponding guideline. On receiving such a request, the knowledge analyzer 215 may determine a guideline corresponding to the transactional document. The guideline may be determined based on a domain to which the transactional document belongs. For example, a domain may be "revenue from contracts with customers". In said example, IFRS 15 may be identified as the current regulation for contracts in said domain. Accordingly, the knowledge models corresponding to IFRS 15 may be obtained.

Based on the knowledge models, the knowledge analyzer 215 may extract the knowledge data from the transactional document to be analyzed. For example, based on the knowledge models, geo-spatial entities, tables, parties, titles, etc., may be obtained. The knowledge analyzer 215 may implement natural language processing, text mining, machine learning, and other such techniques to extract the knowledge data. Further, the knowledge analyzer 215 may configure one or more of the AI components to aid in knowledge extraction. In an example, based on the user request, an ontology corresponding to a domain or a knowledge model corresponding to the user request may be queried to identify the AI components to be activated. In an example, from the ontology corresponding to the knowledge model/domain, relevant AI components may be selected. For instance, for each domain, such data may be predefined; or there may be a pre-defined mapping which may indicate which AI component is to be activated and for what purpose. Accordingly, the knowledge analyzer 215 may refer to such predefined data/mapping for each domain ontology/knowledge model ontology to identify the AI components to be selected.

On determining the AI components to be configured, the knowledge analyzer 215 may refer to another domain-specific mapping to determine the parameters to be set, and the values of the parameters may be determined. Such a mapping may also be derived from the ontology of the corresponding domain. Based on the determined values, corresponding parameters may be set for the AI component. For example, if the result of querying the ontology indicates that it is to be determined whether a particular relationship exists between two entities, then the knowledge analyzer 215 may select a dependency parser and configure it accordingly. To perform the configuration, the corresponding parameters may be set. Referring to the example of a dependency parser, parameters may be configured to indicate the entities of interest and the relationships between these entities that should be looked at; a minimal sketch of such a task-to-component mapping follows.
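As an illustration, the predefined task-to-component mapping described above could be as simple as a dictionary keyed by analysis task. The registry below is hypothetical: the task names, component names, and parameters are invented for this sketch and are not part of the disclosure.

```python
# Hypothetical registry derived from a domain ontology: which AI component
# handles a given analysis task, and with which parameters.
COMPONENT_REGISTRY = {
    "relationship_between_entities": {
        "component": "dependency_parser",
        "params": {
            "entities": ["supplier", "shipper", "seller"],
            "relations": ["retain", "pass", "redeliver"],
        },
    },
    "find_similar_clauses": {
        "component": "sentence_similarity_analyzer",
        "params": {"threshold": 0.8},
    },
    "locate_assets": {
        "component": "geospatial_entity_extractor",
        "params": {"formats": ["lat_long", "bounding_box", "legal_description"]},
    },
}

def configure_components(tasks):
    """Select and configure the AI components needed for the given tasks;
    stands in for querying a real ontology."""
    configured = []
    for task in tasks:
        entry = COMPONENT_REGISTRY.get(task)
        if entry is None:
            continue  # no component is mapped to this task
        configured.append((entry["component"], dict(entry["params"])))
    return configured

print(configure_components(["relationship_between_entities", "locate_assets"]))
```

Keeping the mapping in data rather than code means a new domain can be supported by editing the registry (or the ontology it is derived from) without touching the analyzer itself.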
Thus, the knowledge analyzer 215 may configure the AI components for extracting entities and relationships, for entity resolution, for searching indices, for dependency parsing, and for knowledge graph extraction, based on the knowledge models corresponding to the guideline. Accordingly, the AI components may be configured to perform, for example, sentence similarity for terms defined in the logic flow of the knowledge model. In an example, the entire corpus of documents can be analyzed using either the supervised classifier 235 or the unsupervised cluster generator 245 so that similar documents or paragraphs can be identified.

In another example, the knowledge analyzer 215 may identify different sections and/or search indices of the transactional document, based on a semantic search. The knowledge analyzer 215 may leverage the topic detector 250 to determine the purpose of a paragraph or a section. The transactional document may include various sections, such as 'parties', 'obligations', 'titles', 'revenue', 'geographical entities', or the like. The sections may form a basis for storing different types of information that is extracted from the transactional document. For the purpose, the knowledge analyzer 215 may implement the search indices extractor 240, which may aid the identification of sections and search indices in the transactional document.

The knowledge analyzer 215 may also parse the transactional document to identify the participating parties, such as the seller, the buyer, the intermediate entity, and the like. For example, the sentence similarity analyzer 230 may perform a similarity based search to compare text of the transactional document with predetermined keywords, such as 'party', 'seller', 'buyer', 'retainer', or the like. Further, the knowledge analyzer 215 may determine the obligations associated with the identified parties. In an example, the knowledge analyzer 215 may implement the entity and relationship extractor 260, which may provide for identification of entities or parties and a relationship therebetween. In some cases, the dependency parser 265 may be triggered to establish the association between a parameter and its value.

The knowledge analyzer 215 may also extract geospatial entities, for instance, to determine the location of parties, products, etc. In an example, the knowledge analyzer 215 may implement the geospatial entity extractor 225, which may extract geospatial information in the transactional document. The geospatial information may be included, for instance, in exhibits or annexures of the transactional documents. The geospatial entity extractor may analyze such sections as well to extract geospatial information. In an example, the geospatial information may appear in the form of, for example, latitude and longitude, when only a centroid is given, or may be specified in the form of a bounding box, when a rectangular region is given. Further, a legal description of land, such as in the form of section, range, township, etc., or survey, abstract, etc., may also appear in a document. The geospatial information may be useful in contract reconciliation or jurisdiction related issues. The knowledge analyzer 215 may also extract tables, using the table extractor 255, in order to extract the descriptions of assets that are often listed in a document in table format.
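Many of the checks above reduce to asking whether a set of terms co-occurs in one sentence of the document, which is also the core of the flow charts discussed next with reference to FIGS. 3a and 3b. The following sketch shows a deliberately naive version of such a check; the sentence splitter and the sample contract text are invented for the example, and a real system would use a proper NLP pipeline.

```python
import re

def split_sentences(text: str):
    # Naive splitter on sentence-ending punctuation; sufficient for a sketch.
    return re.split(r"(?<=[.!?])\s+", text)

def terms_in_same_sentence(text: str, terms) -> bool:
    """True if every term in `terms` appears together in at least one sentence."""
    lowered = [t.lower() for t in terms]
    return any(all(t in s.lower() for t in lowered) for s in split_sentences(text))

# Hypothetical contract text for the example.
contract_text = (
    "Title to the product shall pass to Processor upon redelivery at the tailgate. "
    "Shipper shall be solely responsible for transportation to the delivery point."
)

# Term groups of the kind used in the FIG. 3b style logic flow.
for group in (("title", "pass", "redelivery"),
              ("title", "sell", "tailgate"),
              ("title", "transfer", "redelivery")):
    print(group, "->", terms_in_same_sentence(contract_text, group))
```

Substring matching of this kind will over-match (for example, 'pass' inside 'passage'), which is one reason the disclosure pairs such checks with dependency parsing to confirm who the parties to the relationship actually are.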
Further, using the various AI components, the knowledge analyzer 215 may obtain the counter-parties within the transactional document, obligations, pricing conditions, geo-spatial information, etc. In an example, the obligations may be captured in a knowledge graph. The knowledge graph may be constructed from the obligations by performing information extraction of the entities and relationships from the obligations. The knowledge graph or inferencing rules may also capture pricing conditions and formulas. The knowledge graphs and/or inferencing rules may be stored in the knowledge data. The extracted knowledge, i.e., the knowledge data, may be stored in the transactional document database 110, in the system 105, or in a separate knowledge base (not shown in figures).

The knowledge data may be analyzed with respect to the knowledge models. In an example, the analysis includes ascertaining whether the transactional document falls within the purview of the guideline. As mentioned earlier, the knowledge models may include logic or flow charts for hypothetical questions corresponding to the guideline. The knowledge data may be analyzed to determine an inference to a hypothetical question, based on the logic defined for the hypothetical question in the knowledge model. For example, based on the knowledge model, it may be determined whether the contract party may retain products, or whether title of a portion of the product passes to the organization at the delivery point. For the sake of brevity, the knowledge models are not discussed in detail here and are described with the help of examples with reference to the descriptions of FIGS. 3a and 3b. In addition to the knowledge models, the knowledge analyzer 215 may also use the case based data to analyze the knowledge data and, thus, the transactional document.

Based on the inferences determined for the knowledge models, the knowledge analyzer 215 may provide a report indicating aspects of the transactional document which may or may not be under the purview of the guideline. Accordingly, even a complex and comprehensive transactional document may be easily and efficiently analyzed with accuracy. The report may be provided to the user through a user interface. In another example, the user may provide a query pertaining to the interpretation of an aspect of the transactional document. For example, the query may correspond to one of the knowledge models, such as, for example, whether the title of the product passes to the organization at the delivery point. In said example, the knowledge analyzer 215, upon determining that the transactional document falls within the purview of the guideline, may extract the relevant knowledge model and analyze the transactional document, as described above. The report may indicate an interpretation of the transactional document to answer the query of the user. For example, the user may have queried about a minimum transaction price to be paid if an obligation is not met.

FIGS. 3a and 3b illustrate examples of the knowledge models 300 and 350 generated by the system 105 for IFRS 15, according to an example implementation of the present disclosure. The knowledge model for a guideline may include a hypothetical question and a logic to determine an inference to the hypothetical question. The knowledge model 300 corresponds to a hypothetical question: may the contract party retain any products? The answers to each block in the knowledge model 300 may be determined based on the knowledge data. At block 302, the logic may begin.
At block 304-1, the knowledge analyzer 215 may be directed to go to the section "commitment or delivery points or redelivery points", "seller's take-in-kind option", or "consideration". At block 304-2, the knowledge analyzer 215 may be directed to search for the terms "retain title", "pass title", "solely responsible", "redelivery/equivalent/volume", "redeliver in kind to", or "take in kind". At block 306, it may be determined whether any of the search terms exists. If the search terms do not exist, the knowledge analyzer 215 may exit the knowledge model 300. However, if a search term exists, at block 308, dependency parsing may be performed to determine answers to title related questions, such as "who retains title?", "redeliver to whom?", "who is solely responsible?", and "pass to whom?". In an example, the knowledge analyzer 215 may implement the dependency parser 265 to determine the answers to these questions. At block 310, based on the parsing, it may be determined whether any of a supplier, a shipper, or a seller is identified. Based on the determination made at block 310, an inference, yes or no, may be made. Upon analysis, the knowledge analyzer 215 may exit.

Referring to FIG. 3b, the knowledge model 350 is illustrated, which provides a logic flow to determine an inference to a hypothetical question: does title of the product pass to the organization at the tailgate? At block 352, the knowledge analyzer 215 may search for the terms 'title', 'pass', and 'redelivery', and determine if the terms appear in the same sentence. Based on the determination, an inference may be determined. In an example, the knowledge analyzer 215 may implement the sentence similarity analyzer 230 to determine if the terms appear in the same sentence. If the inference is negative, the knowledge analyzer 215 may proceed to the next block. At block 354, the knowledge analyzer 215 may search for the terms 'title', 'sell', and 'tailgate', and determine if the terms appear in the same sentence. Based on the determination, an inference may be determined. If the inference is negative, the knowledge analyzer 215 may proceed to the next block. At block 356, the knowledge analyzer 215 may search for the terms 'title', 'transfer', and 'redelivery', and determine if the terms appear in the same sentence. Based on the determination, an inference may be determined, and the knowledge analyzer 215 may exit the knowledge model 350.

FIG. 4 illustrates an example depicting various relationships between two parties, according to an example implementation of the present disclosure. FIG. 4 illustrates three types of contract, viz., a gas purchase contract 410, a gas gather contract 420, and a gas process contract 430. In each contract, the role and obligations of an organization may change, thereby making it important to accurately determine the titles and obligations of the parties involved in a contract. Referring to the gas purchase contract 410, a seller 410-1 may sell gas to a buyer 410-2, who in turn may sell it to customers 410-3. In this case, the organization may be the buyer 410-2. In this case, the organization (buyer 410-2) would get raw gas, process the gas, and sell the processed gas to the customer. Referring to the gas gather contract 420, a shipper 420-1 may ship gas to the organization, which may now be a gatherer 420-2. In this case, there is no end customer, and the role of the organization may be that of a gatherer. Referring to the gas process contract 430, a supplier 430-1 may provide gas to a processor 430-2, who may process gas and sell it to customers 430-3.
Thus, for the same organization and the same domain, there may be multiple contracts and, thus, multiple transactional documents. Each such transactional document may have to be comprehended accurately to perform various tasks, such as revenue generation.

FIG. 5 illustrates a hardware platform for implementation of the system 105, according to an example embodiment of the present disclosure. Particularly, computing machines, such as, but not limited to, internal/external server clusters, quantum computers, desktops, laptops, smartphones, tablets, and wearables, may be used to execute the system 105 and may have the structure of the hardware platform 500. The hardware platform 500 may include additional components not shown, and some of the components described may be removed and/or modified. In another example, a computer system with multiple GPUs can sit on external cloud platforms, including Amazon Web Services, or on internal corporate cloud computing clusters, or organizational computing resources, etc.

As shown in FIG. 5, the hardware platform 500 may be a computer system 500 that may be used with the examples described herein. The computer system 500 may represent a computational platform that includes components that may be in a server or another computer system. The computer system 500 may execute, by a processor (e.g., a single or multiple processors) or other hardware processing circuit, the methods, functions, and other processes described herein. These methods, functions, and other processes may be embodied as machine readable instructions stored on a computer readable medium, which may be non-transitory, such as hardware storage devices (e.g., RAM (random access memory), ROM (read only memory), EPROM (erasable, programmable ROM), EEPROM (electrically erasable, programmable ROM), hard drives, and flash memory).

The computer system 500 may include a processor 505 that executes software instructions or code stored on a non-transitory computer readable storage medium 510 to perform methods of the present disclosure. The software code includes, for example, instructions to gather data and documents and to analyze documents. In an example, the guideline analyzer 125 and the document analyzer 130 are software codes or components performing these steps. The instructions from the computer readable storage medium 510 are read and stored in storage 515 or in random access memory (RAM) 520. The storage 515 provides a large space for keeping static data where at least some instructions could be stored for later execution. The stored instructions may be further compiled to generate other representations of the instructions and dynamically stored in the RAM 520. The processor 505 reads instructions from the RAM 520 and performs actions as instructed.

The computer system 500 further includes an output device 525 to provide at least some of the results of the execution as output, including, but not limited to, visual information to users, such as external agents. The output device can include a display on computing devices and virtual reality glasses. For example, the display can be a mobile phone screen or a laptop screen. GUIs and/or text are presented as an output on the display screen. The computer system 500 further includes an input device 530 to provide a user or another device with mechanisms for entering data and/or otherwise interacting with the computer system 500. The input device may include, for example, a keyboard, a keypad, a mouse, or a touchscreen.
In an example, the output of the document analyzer 130 is displayed on the output device 525. Each of these output devices 525 and input devices 530 could be joined by one or more additional peripherals. A network communicator 535 may be provided to connect the computer system 500 to a network and, in turn, to other devices connected to the network, including other clients, servers, data stores, and interfaces, for instance. The network communicator 535 may include, for example, a network adapter, such as a LAN adapter or a wireless adapter. The computer system 500 includes a data source interface 540 to access a data source 545. A data source is an information resource. As an example, a database of exceptions and rules may be a data source. Moreover, knowledge repositories and curated data may be other examples of data sources.

FIG. 6 shows a method 600 for generating knowledge models corresponding to a guideline, and FIG. 7 shows a method 700 for analyzing transactional documents, according to the present disclosure. It should be understood that the method steps are shown here for reference only, and other combinations of the steps may be possible. Further, the methods 600 and 700 may contain some steps in addition to the steps shown in FIG. 6 and FIG. 7, respectively. For the sake of brevity, construction and operational features of the system 105 which are explained in detail in the description of FIG. 1, FIG. 2, FIG. 3, and FIG. 5 are not explained in detail in the description of FIG. 6 and FIG. 7. The method 600 may be performed by a component of the system 105, such as the guideline analyzer 125, and the method 700 may be performed by the document analyzer 130.

At block 605, a guideline may be obtained from a guideline database. In an example, the element extractor 205 of the guideline analyzer 125 may obtain the guideline from the guideline database 115. The guideline may correspond to regulations defined for transactional documents in a domain. For example, IFRS 15 is defined for revenue from contracts with customers, and IFRS 16 for lease compliance.

At block 610, an element of the guideline may be extracted using an AI technique. The element may relate to a process outlined by the guideline and/or a factor governing a core principle of the guideline. In an example, the element extractor 205 may extract the element.

At block 615, a knowledge model defining the element of the guideline may be generated using one of an AI technique and a natural language processing technique. The knowledge model may include at least one of a hypothetical question and a logical flow to determine an inference to the hypothetical question. The hypothetical question relates to the element of the guideline. In an example, the knowledge model generator 210 of the guideline analyzer 125 may generate the knowledge models. In an example, the hypothetical questions may provide for determining whether a transactional document falls within the purview of the guideline, or for interpreting an aspect of the transactional document, such as, for example, how a transaction price is calculated.

Referring to method 700, at block 705, a request to analyze a transactional document may be received. The request may be to analyze the transactional document as a whole or only a portion/aspect of the transactional document. In an example, the user may provide the transactional document, or the document analyzer 130 may obtain the transactional document from the transactional document database 110.
At block 710, the guideline corresponding to the transactional document may be identified, based on a domain of the transactional document. As mentioned earlier, for each domain, there may be a predefined guideline. Thus, in case the transactional document pertains to lease compliance, IFRS 16 may be identified as the guideline.

At block 715, a knowledge model corresponding to the guideline associated with the transactional document may be obtained. In an example, the knowledge model may be obtained from the guideline database 115.

At block 720, one or more AI components of a system analyzing the transactional documents may be configured, based on an ontology corresponding to a domain of the guideline. The ontologies and corresponding data may be predefined for each domain and/or guideline. Based on the ontology, the AI components relevant for processing the document as per the knowledge model(s)/guidelines may be selected. Upon selection, various parameters may be set using the ontology to configure the AI components for processing the document as per the guideline/knowledge model. The AI components may perform AI tasks, such as, for example, extracting geospatial entities, performing sentence similarity, performing supervised classification, extracting search indices, performing unsupervised clustering, detecting topics, extracting tables, extracting entities and the relationships therebetween, and performing dependency parsing.

At block 725, based on the knowledge model, data from the transactional document may be extracted using the configured AI component. At block 730, the extracted data may be analyzed, based on the knowledge model, to determine an inference to a hypothetical question associated with the knowledge model. The inference may be determined using a logic defined by the knowledge model. In an example, the document analyzer 130 may use the configured AI component to analyze the extracted data. At block 735, based on the analysis, a report may be generated indicating whether the transactional document falls within a purview of the guideline. In an example, where the user has requested analysis pertaining to an aspect of the transactional document, the report may indicate an interpretation of the aspect.

What has been described and illustrated herein are examples of the present disclosure. The terms, descriptions, and figures used herein are set forth by way of illustration only and are not meant as limitations. Many variations are possible within the spirit and scope of the subject matter, which is intended to be defined by the following claims and their equivalents, in which all terms are meant in their broadest reasonable sense unless otherwise indicated.

Claims

1.
A system comprising: a processor; and a document analyzer coupled to the processor, the document analyzer to, receive a request to analyze a document; obtain a knowledge model corresponding to a guideline associated with the document, the knowledge model including at least one of a hypothetical question and a logical flow to determine an inference to the hypothetical question, wherein the hypothetical question relates to an element of the guideline; based on an ontology corresponding to a domain of the guideline, select and configure an artificial intelligence (AI) component from a plurality of AI components, the AI component being one of a geospatial entity extractor, a sentence similarity analyzer, a supervised classifier, a search indices extractor, an unsupervised cluster generator, a topic detector, a table extractor, an entity and relationship extractor, and a dependency parser; implement the configured AI component to extract data from the document and analyze data, based on the knowledge model indicative of the guideline; and based on the analysis, generate a report, the report being indicative of at least one of: whether the document falls within a purview of the guideline; and an interpretation of an aspect of the document, as requested by a user. 2. The system as claimed in claim 1, wherein the system further comprises a guideline analyzer coupled to the processor, the guideline analyzer to generate the knowledge model corresponding to the guideline using an AI technique. 3. The system as claimed in claim 2, wherein the guideline analyzer further comprises an element extractor to: obtain the guideline from a guideline database, the guideline corresponding to regulations defined for documents in the domain; and extract the element of the guideline using the AI technique, the element relating to one of a process outlined by the guideline and a factor governing a core principle of the guideline. 4. The system as claimed in claim 2, wherein the guideline analyzer further comprises a knowledge model generator to determine the knowledge model defining the element of the guideline using one of the AI technique and a natural language processing technique, the element relating to one of a process outlined by the guideline and a factor governing a core principle of the guideline. 5. The system as claimed in claim 1, wherein the document analyzer is to identify words in the document based on a word-related statistic to determine the domain corresponding to the document. 6. The system as claimed in claim 1, wherein the document analyzer comprises a knowledge analyzer to identify the guideline corresponding to the document, based on the domain of the document. 7. The system as claimed in claim 1, wherein the document analyzer comprises a knowledge analyzer, the knowledge analyzer to determine a parameter and a corresponding value of the parameter, based on the ontology of the domain of the guideline, to configure the AI component. 8. 
A computer-implemented method, executed by at least one processor, the method comprising: receiving a request to analyze a document; obtaining a knowledge model corresponding to a guideline associated with the document, the knowledge model including at least one of a hypothetical question and a logical flow to determine an inference to the hypothetical question, wherein the hypothetical question relates to an element of the guideline; based on an ontology corresponding to a domain of the guideline, configuring an artificial intelligence (AI) component from a plurality of AI components, the AI component being one of a geospatial entity extractor, a sentence similarity analyzer, a supervised classifier, a search indices extractor, an unsupervised cluster generator, a topic detector, a table extractor, an entity and relationship extractor, and a dependency parser; implementing the configured AI component to extract data from the document and analyze data, based on the knowledge model indicative of the guideline; and based on the analysis, generating a report, the report being indicative of at least one of: whether the document falls within a purview of the guideline; and an interpretation of an aspect of the document, as requested by a user. 9. The computer implemented method of claim 8, wherein the method comprises generating the knowledge model corresponding to the guideline using an AI technique. 10. The computer implemented method of claim 9, wherein generating the knowledge models comprises: obtaining the guideline from a guideline database, the guideline corresponding to regulations defined for documents in the domain; and extracting the element of the guideline using the AI technique, the element relating to one of a process outlined by the guideline and a factor governing a core principle of the guideline. 11. The computer implemented method of claim 9, wherein generating the knowledge models comprises determining the knowledge model defining the element of the guideline using one of the AI technique and a natural language processing technique, the element relating to one of a process outlined by the guideline and a factor governing a core principle of the guideline. 12. The computer implemented method of claim 8, wherein the method further comprises analyzing the extracted data, based on the knowledge model to provide the inference to the hypothetical question corresponding to the knowledge model. 13. The computer implemented method of claim 8, wherein configuring the AI component comprises determining a parameter to be configured and a corresponding value of the parameter, based on the ontology of the domain of the guideline, to configure the AI component. 14. The computer implemented method as claimed in claim 8, wherein the obtaining further comprises: identifying words in the document based on a word-related statistic to determine the domain corresponding to the document; and identifying the guideline corresponding to the document, based on the domain of the document. 15.
A non-transitory computer readable medium including machine readable instructions that are executable by a processor to: receive a request to analyze a document; obtain a knowledge model corresponding to a guideline associated with the document, the knowledge model including at least one of a hypothetical question and a logical flow to determine an inference to the hypothetical question, wherein the hypothetical question relates to an element of the guideline; based on an ontology corresponding to a domain of the guideline, configure an artificial intelligence (AI) component from a plurality of AI components, the AI component being one of a geospatial entity extractor, a sentence similarity analyzer, a supervised classifier, a search indices extractor, an unsupervised cluster generator, a topic detector, a table extractor, an entity and relationship extractor, and a dependency parser; implement the configured AI component to extract data from the document and analyze data, based on the knowledge model indicative of the guideline; and based on the analysis, generate a report, the report being indicative of at least one of: whether the document falls within a purview of the guideline; and an interpretation of an aspect of the document, as requested by a user. 16. The non-transitory computer readable medium as claimed in claim 15, further including instructions executable by the processor to generate the knowledge model corresponding to the guideline using an AI technique. 17. The non-transitory computer readable medium as claimed in claim 15, wherein to generate the knowledge model, the processor is to: obtain the guideline from a guideline database, the guideline corresponding to regulations defined for documents in the domain; extract the element of the guideline using the AI technique, the element relating to one of a process outlined by the guideline and a factor governing a core principle of the guideline; and determine the knowledge model defining the element of the guideline using one of the AI technique and a natural language processing technique. 18. The non-transitory computer readable medium as claimed in claim 15, wherein to generate the report, the processor is to analyze the extracted data, based on the knowledge model to provide the inference to the hypothetical question corresponding to the knowledge model. 19. The non-transitory computer readable medium as claimed in claim 15, wherein to configure the AI component, the processor is to determine a parameter to be configured and a corresponding value of the parameter, based on the ontology of the domain corresponding to the guideline. 20. The non-transitory computer readable medium as claimed in claim 15, wherein the processor is to identify words in the document based on a word-related statistic to determine the domain corresponding to the document; and identify the guideline corresponding to the document, based on the domain of the document.

Referenced Cited

U.S. Patent Documents

20050182657 August 18, 2005 Abraham-Fuchs et al.
20080320550 December 25, 2008 Strassner et al.
20100174754 July 8, 2010 B'Far et al.

Other References

• Luca et al., "Ontology-Based Semantic Online Classification of Documents: Supporting Users in Searching the Web", Jan. 2004, 9 pages.
• Doganata et al., "Authoring and deploying business policies dynamically for compliance monitoring", Jul. 2011, 9 pages.
Patent History

Patent number: 11373101
Type: Grant
Filed: Apr 6, 2018
Date of Patent: Jun 28, 2022
Patent Publication Number: 20190311271
Assignee: ACCENTURE GLOBAL SOLUTIONS LIMITED (Dublin)
Inventors: Chung-Sheng Li (San Jose, CA), Guanglei Xiong (Pleasanton, CA), Swati Tata (Bangalore), Pratip Samanta (Bengaluru), Madhura Shivaram (Bangalore), Golnaz Ghasemiesfeh (Sunnyvale, CA), Giulio Cattozzo (Houston, TX), Lisa Blackwood (Houston, TX), Nagendra Kumar M R (Bangalore), Priyanka Chowdhary (San Francisco, CA)
Primary Examiner: Brian Whipple
Application Number: 15/947,518

Classifications

Current U.S. Class: None
International Classification: G06N 5/02 (20060101); G06N 5/04 (20060101); G06F 40/40 (20200101); G06V 30/413 (20220101); G06V 30/414 (20220101)
Friday August 1, 2014. Posts by gary. Total # Posts: 299

maths: A car and a van started from two cities at the same time and travelled towards each other at steady speeds. The car took 4 hours to cover the distance between the two cities and the van took 6 hours. After what amount of time did they pass each other? 4.

chemistry: If 5.6 grams of potassium hydroxide is added to enough water to make 950 ml of solution, what is the concentration of the potassium?

algebra: Use a calculator to evaluate the ordinary annuity formula A = m[(1 + r/n)^(nt) - 1]/(r/n) for m, r, and t (respectively). Assume monthly payments. (Round your answer to the nearest cent.) $150; 6%; 40 yr; A = $

algebra: Find the future value, using the future value formula and a calculator. (Round your answer to the nearest cent.) $990 at 5.5% compounded quarterly for 3 years. Find the present value, using the present value formula and a calculator. (Round your answer to the nearest cent.) Ach...

algebra: The learning curve describes the rate at which a person learns certain tasks. If a person sets a goal of typing N words per minute (wpm), the length of time t (in days) to achieve this goal is given by a) According to this formula, what is the maximum number of words per minute?...

algebra: Write the equations in logarithmic form. 25 = (1/5)^2. Thanks for the help.

algebra: An artifact was found and tested for its carbon-14 content. If 75% of the original carbon-14 was still present, what is its probable age (to the nearest 100 years)? (Carbon-14 has a half-life of 5,730 years.) Thank you.

ALGEBRA: t = -6.25 ln(1 - N/80). According to the formula, what is the maximum number of words per minute? Round off to the nearest whole number. Solve for N.

ALGEBRA: The atmospheric pressure P in pounds per square inch (psi) is given by P = 14.7e^(-0.21a), where a is the altitude above sea level in miles. If a city has an atmospheric pressure of 13.29 psi, what is its altitude? Recall that one mile = 5,280 feet. Round to the nearest foot.

math trigonometry: At night, a security camera pans over a parking lot. The camera is on a post at point A, which is 53 m from point C and 71 m from point B. The distance from B to C is 68 m. Calculate the angle through which the camera pans.

math trigonometry: At night, a security camera pans over a parking lot. The camera is on a post at point A, which is 53 m from point C and 71 m from point B. The distance from B to C is 68 m. Calculate the angle through which the camera pans.

math: Maya chatted online with all her friends for 2/3 of an hour on Saturday and 1/4 of an hour on Sunday. During that time period she chatted with Paul for 1/8 of an hour. What fraction of the time Maya spent chatting with her friends online was spent chatting with Paul?

PHYSICS: 100 turns of insulated wire are wrapped around a wooden cylindrical core of cross-sectional area 12 cm^2. The two ends of the wire are connected to a resistor. The total circuit resistance is 13 Ω. If an externally applied uniform magnetic field along the core changes from 1.6...

Physics: A long, ideal solenoid has a diameter d = 12 cm and n = 1200 turns/meter, carrying current I = 20 A. If the current is lowered at 4.33 amp/s to zero, what is the magnitude of the induced electric field in V/m at a position 8.2 cm from the solenoid's axis?
If the current is lowered at 4.33amp/s to zero, what is the magnitude of the induced electric field in V/m at a position 2.2cm from the solenoid's axis (so at a point inside the sole... physics 100turns of insulated wire are wraped around a wooden cylindrical core of cross-sectional area 12cm2. The two ends of the wire are connected to a resistor. The total circuit resistance is 13Ω. If an externally applied uniform magnetic field along the core changes from 1.6... physics A long, ideal solenoid has a diameter d=12cm and n=1200turns/meter carrying current I=20A. If the current is lowered at 4.33amp/s to zero, what is the magnitude of the induced electric field in V/m at a position 8.2cm from the solenoid's axis (so at a point outside the sol... physics HELP is the radius of the cable or of the axis to the point p? Math There are boxes without tops. Find the box with the least surface area for a given volume, V. Make a diagram (if you can). Hint: Volume=x^2y Area=x^2+4xy math The base of the lamp of a triangular prism with an equilateral triangle base . The surface of the stand is to be painted.what is the area that will be painted?Give the answer to the nearest whole number. Calculus I If x^2+y^2=25, find dy/dt when x=3 and dx/dt= -8. Calculus I Let A be the area of a circle with radius r that is increasing in size with respect to time. If the rate of change of the area is 8 cm/s, find the rate of change of the radius when the radius is 3 cm. Chemistry 55cm Chemistry If each balloon is filled with carbon dioxide at 20 deg. C and 1 atmosphere, calculate the mass and the number of moles of carbon dioxide in each balloon at maximum inflation. math Bianca and yoko work together to mow the lawn suppose yoko mows 5/12 of the lawn and bianca mows 2/5 of the lawn how much of the lawn still needs to be mowed? math On a sunny day, if a 36-inch yardstick casts a 21-inch shadow,how tall is a building whose shadow is 168 ft ? math thank you, ms sue, the other question can i do the same way like this x/36 = 168/21 21x = 6048 x = 288 math Oregon is about 400 miles from west to east, and 300 miles from north to south. If a map of Oregon is 15 inches tall ( from north to south), about how wide is the map? math Space the length of one day on venus is 35 earthdays write a product and an exponent . poetry The poem is: Canada Day Love Match My mother stalked her future son in law, convinced he was the one for me. Through the crowd at the Forks Market, she kept behind him like a cat ready to pounce. I kept out of his line of sight. Blind to us, he watched other women. Red turban,... poetry (Pioneer by Dorothy Livesay) 8 cannot be D because none of the words rhyme. It also does not ask questions. I will go with A. Does that seem right? poetry (Pioneer by Dorothy Livesay) Would number 8 be C ? I think 4 would be D because I don't think the pioneer will retire. I'm confused between B and D. Thanks for your help :) poetry (Pioneer by Dorothy Livesay) I have corrected the answers 1. A 4. A 5. C 7. B Do they seem right now? poetry (Pioneer by Dorothy Livesay) I have to answer the questions according to this poem, I have answered them but just want someone to look over them to see if they are right. The last one I had trouble with and need help on that one too. The answer that I think it is I have put arrows on them. Help would be a... poetry The poem is: Canada Day Love Match My mother stalked her future son in law, convinced he was the one for me. 
Through the crowd at the Forks Market, she kept behind him like a cat ready to pounce. I kept out of his line of sight. Blind to us, he watched other women. Red turban,... Finance Explain how a firm that has issued a floating-rate bond with a coupon equal to the LIBOR rate can use swaps to convert that bond into a synthetic fixed-rate bond. math The price of a sweater went up 20% since last year . If last year's price was x, what is this year's price in terms of x? math please explain how you got 1.2x math The price of a sweater went up 20% since last year.If last year's price was x what is this year's price in terms of x? math thank you math A. John is now 10 years older than Marcus. Three times John's age 5 years from now will be the same as five times Marcus's age 5 years ago. How old is John now? B. In 1 year Kristen will be four times as old as Danielle. Ten years from then Kristen will only be twice a... math A. A woman was 30 years old when her daughter was born.Her age is now 6 years more than three times her daughter's age. How old will the daughter be in 5 years? B. Lisa is 15 years old and her father is 40. How many years ago was the father six times as old as Lisa? C. Joe... math A. Brad is 12 years older than Sam . If Brad were 8 years older than he is now. he would be twice as old as Sam. How old is Sam now? B. Barrie is now 2 years older than Krista. In 15 years Barrie's age will be 2 years more than twice Krista's age now. How old will Barr... math D. ( x + x+1 + x+2 )/3=84 math the number B (?) four consecutive odd integers (X + X+2 + X+4 + X+6 ) /4 = 16 I'm not sure math A. Four times one odd integer is 14 less than three times the next even integer. Find the integers. B. The average of four consecutive odd integers is 16. Find the largest integer. C. When the sum of three consecutive integers is divided by 9 the result is 7. Find the three in... math A. One board is one-third the length of another. Six times the sum of the length of the short board and -10 is equal to the length of the longer board decreased by 11 inches. Find the length of the longer board. B. The length of a rectangle is 4 feet more than twice the width.... math A. In triangle, the second angle measures twice the first, and the third angle measures 5 more than the second. If the sum of the angles' measures is 180^0, find the measure of each angle. b. The price of a pack of gum today is 63 cent. This is 3 cent more than three times... math 1. A hotel has 120 rooms. If the number of double room is 8 more than three times the number of single rooms, how many single rooms does the hotel have? 2. A mechanic earns $5 more per hour than his helper. On a six-hour job the two men earn a total of $114. how much does each... math I have two more question, I hope you can help me. 1. A hotel has 120 rooms. If the number of double room is 8 more than three times the number of single rooms, how many single rooms does the hotel have? 2. A mechanic earns $5 more per hour than his helper. On a six-hour job th... math thank you for help math 4. The perimeter of a tgriangle is 44 inches. If one side is 5 inches longer than the smallest side and the largest side is 1 inch less than twice the smallest side, how many inches are there in the smallest side? math 1. The sum of the ages of Ed and his father is 59 years. If his father's age is 11 years less than four times Ed's age, how old is Ed? 2. Five less than seven times a certain number is 58.Find the number. 3. 
The sum of 1/2 a certain number and 1/3 of the same number is... physics A boat sails 5.0(45 degrees W of N). It then hanges direction and sails 7.0km(45 degrees S of E) where does the boat end up with reference to its starting point.? Please explain steps answer is 2km(45 degrees S of E) Business In the following statement, a business owner attempts to explain and justify his slow growth in his business. I limit my growth pace and every effort to service my present customers in the manner they deserve. I have some peer pressure to do otherwise by following the advice o... math 1. solve the following inequality to find a range of values for x: -11 < -4-x < -7 2. solve the following inequality to find a range of values for x: -25 < -x < -10 math 1. solve the following inequality to find a range of values for x: -11 < -4-x < -7 2. solve the following inequality to find a range of values for x: -25 < -x < -10 math thank you, Dr. Jane Can i ask two more question 1. solve the following inequality to find a range of values for x: -11 < -4-x < -7 2. solve the following inequality to find a range of values for x: -25 < -x < -10 math thank you, Dr. Jane Can i ask two more question 1. solve the following inequality to find a range of values for x: -11 < -4-x < -7 2. solve the following inequality to find a range of values for x: -25 < -x < -10 math 1. solve the following inequality to find a range of values for x : -18 < x -7 < -6 2. solve the following inequality to find a range of values for x : -96 < -12x < -12 math 1. 19cd , + 6d^2 , + 8 , 19cd 6d^2 math 2. -7 , y + 13 , ab^2 + x -7 13 math So, 1. 19cd , + 6d^2 , + 8 , 2. -7 13 is it correct math my son does'nt understand , me too. math 1. List the terms in the expression below, place a comma between each term. 19cd + 6d^2 + 8 2. List the coefficients in the expression below, place a comma between each coefficient. -7y + 13ab^2 + x Statistics "number wrong" = 10 - "number right" is a linear transformation that says, "Multiply the list by -1, then add 10." The addition does nothing to the SD, and the multiplication by -1 multiplies the SD by |-1|=1. So there's no change to the SD. I... math Find the solution x of the following equation : 12X =32. math I don't understand 3/4 / 5/4 = 3/5 the answer L is 3/5 is it correct Calculus H(x) = (x^4 - 2x +7)(x^-3 + 2x^-4) H'(x)= Calculus On what values of x does the graph of f(x) = 2x^3 - 3x^2 + 12x + 87 have a horizontal tangent? algebra (xy)^1/4(x^2y^2)^1/2 over(x^2y)^3/4 help? math ? 1. when z is divided by 8, the remainder is 5. What is the remainder when 4z is divided by 8 ? 2. If n is an integer, which of the following must be odd ? A. 3n-5 B. 3n + 4 C. 4n + 10 D. 4n - 5 E. 5n + 7 math 1. when z is divided by 8, the remainder is 5. What is the remainder when 4z is divided by 8 ? 2. If n is an integer, which of the following must be odd ? A. 3n-5 B. 3n + 4 C. 4n + 10 D. 4n - 5 E. 5n + 7 math 1. when z is divided by 8, the remainder is 5. What is the remainder when 4z is divided by 8 ? 2. If n is an integer, which of the following must be odd ? A. 3n-5 B. 3n + 4 C. 4n + 10 D. 4n - 5 E. 5n + 7 math 1. If x people working together make a total of y dollars after an hour of work. how much money will z people make if they work 4 hours at the same rate per person ? 2.Each of the n members of an organization may invite up to 3 guests to a conference. What is the maximum numbe... math 1. If x people working together make a total of y dollars after an hour of work. 
how much money will z people make if they work 4 hours at the same rate per person ? 2.Each of the n members of an organization may invite up to 3 guests to a conference. What is the maximum numbe... math A. Mandy buy a sweater that is on sale for 20% less than the original price, and then she uses a coupon worth an additional 15% off of the sale price. What percentage of the original price has she saved ? B. If the price of a stock increases by 40% and then by an additional 25... math thank you math a spinner has four equal parts marked 1-4, another spinner has 3 equal parts colored red, blue and yellow. What are all the possibe outcomes? English3 In “The Turtle” from The Grapes of Wrath, how does the stem of the wild oat seeds use the turtle to carry it to a place where it can grow? math x=6y/5-8y-3y+9y,what is the value of y in terms of x ? math 1. x=3y-2/4,what is the value of y in terms of x ? 2. x=y^3/3y^2+y/2,what is the value of y in terms of x ? math steve, i still have two question I don't understand. 1. x=3y-2/4,what is the value of y in terms of x ? 2. x=y^3/3y^2+y/2,what is the value of y in terms of x ? math steve, i still have two question I don't understand. 1. x=3y-2/4,what is the value of y in terms of x ? 2. x=y^3/3y^2+y/2,what is the value of y in terms of x ? math thank you, steve math thank you, steve math I want to know how the way to do that, then i can teach my children math my children doesn't know how to do that, i don't know how to teach him math I don't know how to do that, steve math If x=5y+2-3(y+2),what is the value of y in terms of x ? math If x=y3-80,what is the value of y in terms of x ? math If x=y+|-4|-|-5|+6,what is the value of y in terms of x? stats if i have a sample size of 200, 16% are poor, what is my mean and standard deviation. my answers do not make any sense math thank you, I post a question by mistake math Each rabbit constume needs one and one half yards of white fur fabric. a yard of blue striped fabric, and a quarter of a yard of pink felt for the ears. If Gail has ten yards of white fur fabric, seven yards of blue striped fabric, and one and three quarter yards of pink felt.... math thank you , Ms. Sue math one more question 6. 68% of 250 it mean 250 x 70% = 180 math thank you , Ms.Sue math 2. 100 x 50% = 50 math 2. 50 x 100% = 50 3. 500 x 20% = 100 4. 100 x 70% = 70 5. 80 x 70% = 60 Pages: 1 | 2 | 3 | Next>> Search Members
Web Development

Best Way To Learn Coding

Coding and programming are no longer the sole realms of computer scientists and people with complicated university degrees behind them. A lot of people teach themselves how to code from the comfort of their living room by using interactive online courses and tutorials. While this type of learning is effective, it is important to identify the best way to learn coding before you start; otherwise, you won't be getting the most out of your time.

There are a lot of different ways to learn coding, depending on your end goals and the language you choose to learn. While some people still attend courses at their local university or another teaching institute, online courses are becoming a lot more popular. Many of these courses are interactive, which means that you can write your code while you are learning, fast-tracking your progress and increasing your chances of becoming a gun programmer.

Some people still use textbooks as the basis for their learning, while others learn by watching video courses or using coding apps.

This article will begin by identifying some common reasons why you should learn to code. It will explore coding for beginners while looking at a few contenders for the best way to learn coding. Finally, some of our top tips for learning coding will be presented to help you along on your coding journey. Enjoy!

Why Should I Learn How To Code?

As noted above, learning to code is becoming a very popular pastime, especially among younger people. Programming and coding are everywhere in the modern world. Pretty much every electronic device or other object containing a computer system, including things like cars and machinery, has to be programmed before it can be used, resulting in huge demand for experienced programmers.

However, a lot of people look at coding for beginners and ask themselves the question 'Why should I learn how to code?'. The reality is, coding is the way of the future. If you can learn coding effectively, you will experience some of the following benefits:

1. Learning how to code will make you more self-sufficient. Even if you never plan on taking up coding as a career, learning the basics of languages like HTML, CSS, and JavaScript could help you in your current job. Instead of having to call on technical support every time you can't get a blog post to look right or can't seem to work out how to add an animation to your website, you will be able to fix the problem yourself.
2. Learning to code will make you much more employable. Even if coding and programming aren't a requirement for your job, knowing them will make you a lot more valuable to your employer (see point 1 above). This can lead to increased job security, pay raises, and other benefits.
3. Coding could lead to a new career path. If you decide to learn how to code, you could find yourself working as a freelance or contract programmer in no time at all. This will allow you to spend more time doing the things you enjoy and less time working, something we all dream of!

As you can see, there are many, many reasons why you should try and teach yourself how to code. Who knows, you might even find that it's your real calling in life! Now that we've covered why you should learn coding, let's move on to a debate about the best way to learn coding.

What Is The Best Way To Learn Coding?

Before we start here, it is important to note one key point which everyone should be aware of:

There is no single 'best way to learn coding'.

That's right, there isn't any one best way to learn coding. Since everyone is different and everyone learns differently, the best way to learn coding for one person will be completely different from the best way for another. With this in mind, we have explored some of the most common ways to learn coding for beginners. We have looked at modern ways to learn, along with more traditional computer science methods.

An Online Course

In the modern world, online courses are probably the most favored way to learn programming basics, especially if you are trying to teach yourself in your spare time. Online courses are flexible, they usually cover a decent amount of material, and they are usually designed for people with little to no experience with coding.

Best way to learn coding - BitDegree platform

Online courses come in a wide range of shapes and sizes. Some of the more popular types include:

Video courses, which usually contain lecture series with worksheets or exercises that allow you to practice coding. Although simple, a lot of these courses run side by side with a code editor, allowing you to write your code as you watch the videos. This can provide huge learning benefits, as it ensures that you remember the maximum amount possible and that you get the most out of your course. You can try it yourself by enrolling in some coding video courses offered by BitDegree. By using BitDegree coupons you can even get these courses for free, so it's worth checking out.

Learning paths, a learning method that is especially effective when there's a lot of information to absorb. It focuses on dividing complex or vast topics into smaller chunks. It's a perfect learning method for those who are determined about their careers and want all of the information in one place. Already have a career you want to pursue? Check the learning paths hand-picked and crafted by us. The roadmaps we created include courses from some of the best instructors in their area and are focused on practicality instead of theory.

Interactive online courses, which are something of a new invention. Interactive courses will lead you through a predefined scenario, guiding you towards an end goal. They are fun, exciting, and especially effective for young learners who may have trouble concentrating on basic video or text tutorials.

Text-based courses, which are usually cheap and effective. If you don't have a lot of time or money to put towards your new coding endeavors, you should consider taking a simple text-based course. Many text-based programming courses run alongside a code editor, allowing you to write your code and see it in action as you learn.

Many people will argue that the best way to learn coding is through structured online courses. While we won't argue with that, we will point out that there are other ways to learn, including:

By Watching Video Tutorials

A lot of people prefer not to follow a structured approach to learning things like how to code and programming basics. Instead, they like to teach themselves by doing things like watching videos online, reading stand-alone articles, and doing a lot of independent research and learning.

Popular video-sharing platforms like YouTube are great places to start if you would like to teach yourself how to code by watching video tutorials. Simply decide which language you want to learn and type it into the search bar. Filter through the results until you find a couple of decent channels that offer engaging, high-quality content, and bookmark them for future reference.

There are two main benefits to taking this approach. First, learning like this allows you to learn as fast or slow as you want. If you are having trouble getting your mind around a concept, you can simply spend more time on it. Likewise, if you are finding things easy, you can skip ahead rapidly, learning how to code in the shortest amount of time possible.

The second benefit to learning like this is the cost. While a lot of online courses and tutorials will cost you money, learning by watching videos will not! This is good for people who don't have a lot of money to spend, who want to learn coding in their spare time, or who plan on learning for fun as much as anything.

Best way to learn coding - Python tutorial

Using Textbooks And Practicing

Although this probably isn't the best way to learn coding, since it is a discipline that will inevitably involve computers and other technologies, a lot of people choose to start their coding journey with textbooks and other offline resources.

Textbooks and other offline resources can offer a lot of good information which is easy to access and simple to understand. However, we believe that they should be used in conjunction with decent online courses, such as those offered on the BitDegree platform.

Using Gamified Apps

There is an increasing focus on teaching children how to code from a very young age. This has led to the development of a large number of apps that are designed to teach coding in a fun, engaging manner. Although a lot of adults may find coding games simple and boring, this is arguably the best way to learn coding for children.

While we probably wouldn't recommend using coding apps exclusively, they can offer a great way to practice writing code. In an ideal world, you should use them alongside other resources like online courses. When used right, coding apps can help fast-track your progress, allowing you to enter the coding world and start developing your own meaningful programs in next to no time.

7 Tips To Help You Learn Coding Faster

Now that we have covered some of the best ways to learn coding, it's time to look at the learning process itself. A lot of people start teaching themselves how to code but give up quickly due to a lack of drive, direction, or motivation. When it comes to something like learning programming, you should be writing your own code within a few weeks if you commit a decent amount of time to it.

Unfortunately, a lot of people get lost, meaning that their progress slows and that it takes a lot of time and effort for them to move forward. With this in mind, we have put together a shortlist of a few of our top tips to help you learn to program faster:

1. Don't Neglect Books

Sure, books and other offline resources may seem a little obsolete in the modern world of computer programming. After all, programming is something that is done with computers, on computers, and for computers. However, it is important to realize that books are still a very good resource, especially while you are still getting your head around your code and the best way to write it.

Once you have chosen a language or two, buy yourself a couple of reference books for those languages. Choose ones which have a full list of the syntax and functions of the language, as well as explanations of the most common functions. Having this to refer to while learning and practicing will help you learn faster and more efficiently.

2. Teach Someone Else

While this may seem like a strange thing to do while you're learning yourself, teaching and mentoring someone else can help you retain information better and learn faster. Spend a few weeks or months learning the basics of your chosen language, and then start searching for someone to mentor. Websites like Hack.pledge are designed for exactly this, and you will be able to find someone who you can help here. When you are just starting, you might even choose to find a mentor here to help you get past difficult concepts or things that you're having trouble with.

3. Play Games

Remember when you were in school, and you used to play maths, spelling, and typing games? Although you probably didn't realize it, these games would have been carefully designed to complement your learning and to help you overcome difficult concepts.

In the same way, playing coding games can help you learn faster. When used right, they will help you revise difficult concepts that you might have learned in the past, reinforcing them so that they stay in your brain. Although they are probably aimed more at children and younger learners, people of all ages will benefit from playing coding games.

4. Explore Someone Else's Code

Since a lot of coding and programming is open source, it's very easy to find a piece of code somewhere to explore yourself. Try and find something which isn't too complex if you're a beginner, and then look at it closely, noting the following:

- Consider the function of each line of code. Are the most efficient methods being used, or are there better ways to do some things?
- Think about ways you could change the code to add more functionality or to make it do different things.
- Are there any mistakes in the code? If so, where?

You should be able to find good source code snippets on a website like GitHub, but remember to re-share your code if you manage to make improvements to it!

5. Take A Free Course

Free online courses are a great way to get started when it comes to learning the basics of coding. Some people would even argue that free online courses are the best way to learn to code, especially for beginners. Unfortunately, most free courses, including those offered by BitDegree, are not comprehensive enough to teach you everything that you need to know. Sure, they are a good starting point, but you will need to take the initiative and move onto a better course once you have completed your free learning.

For example, you might decide that you want to learn HTML, CSS, and other front-end programming techniques. You could start with BitDegree's free Interactive HTML, CSS & Web Development course. However, you will complete this in a couple of hours if you put your mind to it, after which you will need to move onto something like the Comprehensive HTML5 Tutorial.

6. Identify Why You Want To Code

Now, arguably the most important thing to do before you start your coding journey is to identify your reasons for learning how to code. Different people want to learn programming for different reasons, and the courses you take and the direction you go in will depend on your reasons for learning. Consider the following:

- What do you hope to get out of programming?
- Do you want to become a career programmer, or is it simply a hobby for you?
- Are you interested in building games, websites, apps, or something else?

There are plenty of different types of coders and programmers out there, each of which needs a different skill set. If you want to do a certain job with your coding knowledge, make sure that you learn the right languages.

7. Focus On One Language & Be Patient!

In the same way, it is important to focus on one language (in most cases) when you are getting started; otherwise it is too easy to get confused and to mix up the syntax. Choose a simple language like Python, JavaScript, or HTML/CSS to begin with, and wait until you are relatively comfortable with your first language before you move onto a second one.

The exception to this rule would be when you are planning on becoming a front-end web developer. In this case, you would start by learning both HTML and CSS together. Neither of these languages is very useful on its own, so in most cases you will be using them both at the same time anyway.

Getting Started

So you've done a bit of research, have been thinking about it for a while, and have decided that you want to become a coder. But now what? How do you go about getting started on your journey?

Well, the first thing to do is to identify the best way to learn coding for you. For most people, the best way to start learning will be using an interactive coding course like those offered by BitDegree. The following steps should guide you as you look for coding courses, decide on a language, and think about the best resources to use.

1. Start by choosing a language. Think about what you hope to get out of your coding course, what kind of work you hope to do with your new programming knowledge in the future, and how much time you have to commit to coding. Most people choose simple languages like Python, Java, or HTML/CSS when they are starting, but this is by no means a must.
2. Find a course. Once you have chosen your language, it's time to think about what course you're going to take. There are plenty of options out there for real beginners, especially if you're happy to pay for them. For example, if you choose to learn Python on the BitDegree platform, you will have a choice of four different courses: a Python Tutorial, a Python Basics course, a Python Imaging course, and Learn to Make Python Data Structures.
3. Start learning! Now, all that you have to do is start learning your new language. Make sure that you practice regularly, try writing your own programs once you have developed a little knowledge, and take notes about difficult concepts.

Best way to learn coding - Learn Java 101

Conclusion

The best way to learn coding is something that programmers, developers, and computer scientists throughout the world have been arguing about for the last decade or so. While there is no clear 'best way to learn coding' that applies to everyone, interactive online courses are becoming increasingly popular. They allow people to learn from home in their spare time, they offer increasingly efficient learning pathways, and they are fun at the same time!

If you are thinking about learning how to program, you will need to start by choosing a language to learn. Base your choice on the type of programming work you hope to do in the future, and take a look at the wide range of courses offered on the BitDegree platform. Remember, progress does take time, so don't expect to become a master programmer overnight. Stick to it and practice regularly, and you will improve rapidly.

Good luck, and most importantly, remember to have fun on your journey towards becoming the world's next super hacker!
Question (asked by rreeves, tagged C++): QT 5.0 - Built-in Logging?

I was doing some research on Qt 5.0 logging and it appears to have built-in classes for logging. I'm having trouble finding an example. I have located the classes I believe are relevant here:

QMessageLogger
QMessageLogContext

I can see roughly how to create the QMessageLogger object from the documentation, but how can I create a log file and append to it?

Answer (by Huy):

By default, using qDebug(), qWarning(), etc. will allow you to log information out to the console.

    #include <QtDebug>

    qDebug() << "Hello world!";

QMessageLogger is designed to leverage the special C++ source-location macros (__FILE__, __LINE__, and the function macro):

    QMessageLogger(__FILE__, __LINE__, 0).debug() << "Hello world!";

In Qt 5 the message logger is used behind the scenes, since qDebug() is a macro that will eventually instantiate an instance of QMessageLogger. So I'd just go with using the regular qDebug().

The QMessageLogContext contains what I'd consider "meta-data", i.e. the file, line number, etc. that the qDebug() statement was called from. Normally you'd concern yourself with the log context if you're defining your own QtMessageHandler (see qInstallMessageHandler()). The message handler allows for more control of the logging mechanism, like sending logging information to a custom logging server or even to a file.

Check out better examples and explanations here.
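The answer stops short of showing the file case the question actually asks about. Below is a minimal sketch of a handler installed with qInstallMessageHandler() that appends each message to a log file; the file name, the formatting, and the main() scaffolding are illustrative assumptions, not part of the original answer:

    #include <QtGlobal>
    #include <QFile>
    #include <QTextStream>
    #include <QDateTime>

    // Custom handler: appends every qDebug()/qWarning()/etc. message to app.log,
    // together with the file/line metadata carried by QMessageLogContext.
    void fileMessageHandler(QtMsgType type, const QMessageLogContext &context, const QString &msg)
    {
        QFile logFile("app.log");                        // illustrative path
        if (!logFile.open(QIODevice::WriteOnly | QIODevice::Append))
            return;                                      // drop the message if the file can't be opened
        QTextStream out(&logFile);
        out << QDateTime::currentDateTime().toString(Qt::ISODate)
            << " [" << int(type) << "] "
            << (context.file ? context.file : "?") << ':' << context.line
            << ' ' << msg << '\n';
    }

    int main(int argc, char *argv[])
    {
        qInstallMessageHandler(fileMessageHandler);      // replace the default console handler
        qDebug() << "Hello world!";                      // now lands in app.log
        return 0;
    }

With the handler installed, every existing qDebug()/qWarning() call in the program is redirected without further changes, which is the main appeal of this mechanism over wrapping each call site.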
Registering a COM Callback

Instead of polling for changes in the status of a job, you can register to receive notification when the job's status changes. To receive notification, you must implement the IBackgroundCopyCallback2 interface. The interface contains the following methods, which BITS calls depending on your registration: JobTransferred, JobError, JobModification, and FileTransferred.

For an example that implements the IBackgroundCopyCallback2 interface, see Example Code in the IBackgroundCopyCallback interface topic.

The IBackgroundCopyCallback2 interface provides notification for when a file is transferred. Typically, you use this method to validate the file so that the file is available for peers to download; otherwise, the file is not available to peers until you call the IBackgroundCopyJob::Complete method. To validate the file, call the IBackgroundCopyFile3::SetValidationState method.

To register your implementation with BITS, call the IBackgroundCopyJob::SetNotifyInterface method. To specify which methods BITS calls, call the IBackgroundCopyJob::SetNotifyFlags method.

The notification interface becomes invalid when your application terminates; BITS does not persist the notify interface. As a result, your application's initialization process should register existing jobs for which you want to receive notification. If you need to capture state and progress information that occurred since the last time your application was run, poll for state and progress information during application initialization.

Before exiting, your application should clear the callback interface pointer (SetNotifyInterface(NULL)). It is more efficient to clear the callback pointer than to let BITS discover that it is no longer valid.

Note that if more than one application calls the SetNotifyInterface method to set the notification interface for the job, the last application to call the SetNotifyInterface method is the one that will receive notifications; the other applications will not receive notifications.

The following example shows how to register for notifications. The example assumes the IBackgroundCopyJob interface pointer is valid. For details on the CNotifyInterface example class used in the following example, see the IBackgroundCopyCallback interface.

    HRESULT hr;
    IBackgroundCopyJob* pJob;
    CNotifyInterface *pNotify = new CNotifyInterface();
    if (pNotify)
    {
      hr = pJob->SetNotifyInterface(pNotify);
      if (SUCCEEDED(hr))
      {
        hr = pJob->SetNotifyFlags(BG_NOTIFY_JOB_TRANSFERRED | BG_NOTIFY_JOB_ERROR);
      }

      pNotify->Release();
      pNotify = NULL;

      if (FAILED(hr))
      {
        // Handle error - unable to register callbacks.
      }
    }
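The file-validation flow described above can be sketched as follows. This sketch is not from the original article: it assumes your CNotifyInterface class implements IBackgroundCopyCallback2 (and that the SetNotifyFlags call also included BG_NOTIFY_FILE_TRANSFERRED), and ValidateFileContents is a hypothetical application-defined check, not a BITS API:

    // Called by BITS as each file in the job completes.
    HRESULT CNotifyInterface::FileTransferred(IBackgroundCopyJob* pJob, IBackgroundCopyFile* pFile)
    {
        IBackgroundCopyFile3* pFile3 = NULL;

        // IBackgroundCopyFile3 exposes SetValidationState; obtain it from the
        // IBackgroundCopyFile pointer that BITS passes in.
        HRESULT hr = pFile->QueryInterface(__uuidof(IBackgroundCopyFile3), (void**)&pFile3);
        if (SUCCEEDED(hr))
        {
            // ValidateFileContents is a placeholder for your own integrity check
            // (hash comparison, signature check, and so on).
            BOOL isValid = ValidateFileContents(pFile3);
            pFile3->SetValidationState(isValid);
            pFile3->Release();
        }

        return S_OK;
    }

Marking the file valid here is what makes it available to peers immediately, rather than only after the whole job is completed with IBackgroundCopyJob::Complete.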
Example 1

// Open opens a Cayley database, creating it if necessary and return its handle
func Open(dbType, dbPath string) error {
    if store != nil {
        log.Errorf("could not open database at %s : a database is already opened", dbPath)
        return ErrCantOpen
    }

    var err error

    // Try to create database if necessary
    if dbType == "bolt" || dbType == "leveldb" {
        if _, err := os.Stat(dbPath); os.IsNotExist(err) {
            // No, initialize it if possible
            log.Infof("database at %s does not exist yet, creating it", dbPath)
            if err = graph.InitQuadStore(dbType, dbPath, nil); err != nil {
                log.Errorf("could not create database at %s : %s", dbPath, err)
                return ErrCantOpen
            }
        }
    } else if dbType == "sql" {
        graph.InitQuadStore(dbType, dbPath, nil)
    }

    store, err = cayley.NewGraph(dbType, dbPath, nil)
    if err != nil {
        log.Errorf("could not open database at %s : %s", dbPath, err)
        return ErrCantOpen
    }

    return nil
}

Example 2

func Init(cfg *config.Config) error {
    if !graph.IsPersistent(cfg.DatabaseType) {
        return fmt.Errorf("ignoring unproductive database initialization request: %v", ErrNotPersistent)
    }
    return graph.InitQuadStore(cfg.DatabaseType, cfg.DatabasePath, cfg.DatabaseOptions)
}

Example 3

// Open opens a Cayley database, creating it if necessary and return its handle
func Open(config *config.DatabaseConfig) error {
    if store != nil {
        log.Errorf("could not open database at %s : a database is already opened", config.Path)
        return ErrCantOpen
    }

    if config.Type != "memstore" && config.Path == "" {
        log.Errorf("could not open database : no path provided.")
        return ErrCantOpen
    }

    var err error
    options := make(graph.Options)

    switch config.Type {
    case "bolt", "leveldb":
        if _, err := os.Stat(config.Path); os.IsNotExist(err) {
            log.Infof("database at %s does not exist yet, creating it", config.Path)
            err = graph.InitQuadStore(config.Type, config.Path, options)
            if err != nil && err != graph.ErrDatabaseExists {
                log.Errorf("could not create database at %s : %s", config.Path, err)
                return ErrCantOpen
            }
        }
    case "sql":
        // Replaces the PostgreSQL's slow COUNT query with a fast estimator.
        // Ref: https://wiki.postgresql.org/wiki/Count_estimate
        options["use_estimates"] = true

        err := graph.InitQuadStore(config.Type, config.Path, options)
        if err != nil && err != graph.ErrDatabaseExists {
            log.Errorf("could not create database at %s : %s", config.Path, err)
            return ErrCantOpen
        }
    }

    store, err = cayley.NewGraph(config.Type, config.Path, options)
    if err != nil {
        log.Errorf("could not open database at %s : %s", config.Path, err)
        return ErrCantOpen
    }

    return nil
}

Example 4 (File: db.go, Project: oren/user)

func New(path string) Db {
    db := Db{location: path}
    graph.InitQuadStore("bolt", path, nil)
    store, err := cayley.NewGraph("bolt", path, nil)
    // Check the error before dereferencing store.
    if err != nil {
        log.Fatalln(err)
    }
    db.Store = *store
    return db
}

Example 5

func GetStorage() (s *Storage, err error) {
    if storage == nil {
        graph.InitQuadStore("bolt", BoltPath, nil)
        var handle *cayley.Handle
        handle, err = cayley.NewGraph("bolt", BoltPath, nil)
        s = &Storage{handle}
        storage = s
    } else {
        s = storage
    }
    return s, err
}

Example 6

// Open opens a Cayley database, creating it if necessary and return its handle
func Open(dbType, dbPath string) error {
    if store != nil {
        log.Errorf("could not open database at %s : a database is already opened", dbPath)
        return ErrCantOpen
    }

    var err error
    options := make(graph.Options)

    switch dbType {
    case "bolt", "leveldb":
        if _, err := os.Stat(dbPath); os.IsNotExist(err) {
            log.Infof("database at %s does not exist yet, creating it", dbPath)
            err = graph.InitQuadStore(dbType, dbPath, options)
            if err != nil {
                log.Errorf("could not create database at %s : %s", dbPath, err)
                return ErrCantOpen
            }
        }
    case "sql":
        // Replaces the PostgreSQL's slow COUNT query with a fast estimator.
        // Ref: https://wiki.postgresql.org/wiki/Count_estimate
        options["use_estimates"] = true
        graph.InitQuadStore(dbType, dbPath, options)
    }

    store, err = cayley.NewGraph(dbType, dbPath, options)
    if err != nil {
        log.Errorf("could not open database at %s : %s", dbPath, err)
        return ErrCantOpen
    }

    return nil
}

Example 7

func main() {
    path := "/tmp/pc"
    graph.InitQuadStore("bolt", path, nil)

    m := martini.Classic()
    unescapeFuncMap := template.FuncMap{"unescape": unescape}
    m.Use(session.Middleware)
    m.Use(render.Renderer(render.Options{
        Directory:  "templates",                         // Specify what path to load the templates from.
        Layout:     "layout",                            // Specify a layout template. Layouts can call {{ yield }} to render the current template.
        Extensions: []string{".tmpl", ".html"},          // Specify extensions to load for templates.
        Funcs:      []template.FuncMap{unescapeFuncMap}, // Specify helper function maps for templates to access.
        Charset:    "UTF-8",                             // Sets encoding for json and html content-types. Default is "UTF-8".
        IndentJSON: true,                                // Output human readable JSON
    }))

    storage, err := models.GetStorage()
    if err != nil {
        log.Fatalln(err)
    }

    server, err := server.NewServer(storage)
    if err != nil {
        log.Fatalln(err)
    }

    user := models.NewUser("admin")
    user.Iteration()

    staticOptions := martini.StaticOptions{Prefix: "assets"}
    m.Use(martini.Static("assets", staticOptions))

    m.Get("/", routes.IndexHandler)
    m.Get("/login", routes.GetLoginHandler)
    m.Get("/logout", routes.LogoutHandler)
    m.Post("/login", routes.PostLoginHandler)
    m.Get("/view:id", routes.ViewHandler)
    m.Post("/gethtml", routes.GetHtmlHandler)
    m.Get("/socket.io/", func(w http.ResponseWriter, rnd render.Render, r *http.Request, s *session.Session) {
        server.SetSession(s)
        server.ServeHTTP(w, r)
    })

    m.Run()
}

Example 8

func open() {
    path := "./db"

    // Initialize the database
    graph.InitQuadStore("bolt", path, nil)

    // Open and use the database
    store, err := cayley.NewGraph("bolt", path, nil)
    if err != nil {
        log.Fatalln(err)
    }

    p := cayley.StartPath(store, "").Out("type").Is("Person")
    it := p.BuildIterator()
    for cayley.RawNext(it) {
        log.Println(store.NameOf(it.Result()))
    }
}

Example 9

func initDb() (*graph.NodeGraph, error) {
    var handle *cayley.Handle
    var err error

    if !debug {
        dbPath := filepath.Join(env.EnvPath(env.DbPath), "db.dat")
        if !env.Exists(dbPath) {
            if err = cgraph.InitQuadStore("bolt", dbPath, nil); err != nil {
                return nil, err
            }
        }
        if handle, err = cayley.NewGraph("bolt", dbPath, nil); err != nil {
            return nil, err
        }
    } else {
        if handle, err = cayley.NewMemoryGraph(); err != nil {
            return nil, err
        }
    }

    return graph.NewGraph(handle)
}

Example 10

func open() {
    path := "./db"

    // Initialize the database
    graph.InitQuadStore("bolt", path, nil)

    // Open and use the database
    store, err := cayley.NewGraph("bolt", path, nil)
    if err != nil {
        log.Fatalln(err)
    }

    store.AddQuad(cayley.Quad("person:sophie", "type", "Person", ""))
    store.AddQuad(cayley.Quad("person:sophie", "name", "Sophie Grégoire", ""))
    store.AddQuad(cayley.Quad("person:sophie", "born", "1974", ""))
    store.AddQuad(cayley.Quad("person:sophie", "lives in", "country:canada", ""))
    store.AddQuad(cayley.Quad("person:justin", "type", "Person", ""))
    store.AddQuad(cayley.Quad("person:justin", "name", "Justin Trudeau", ""))
    store.AddQuad(cayley.Quad("person:justin", "born", "1972", ""))
    store.AddQuad(cayley.Quad("person:justin", "in love with", "person:sophie", ""))
}

Example 11

func TestInit() (*graph.NodeGraph, string) {
    if dir, err := ioutil.TempDir(os.TempDir(), ".olympus"); err != nil {
        panic(err)
    } else {
        os.Setenv("OLYMPUS_HOME", dir)
        if err = env.InitializeEnvironment(); err != nil {
            panic(err)
        }
        dbPath := filepath.Join(env.EnvPath(env.DbPath), "db.dat")
        if !env.Exists(dbPath) {
            cgraph.InitQuadStore("bolt", dbPath, nil)
            if handle, err := cayley.NewGraph("bolt", dbPath, nil); err != nil {
                panic(err)
            } else if ng, err := graph.NewGraph(handle); err != nil {
                panic(err)
            } else {
                return ng, dir
            }
        } else {
            return nil, ""
        }
    }
}
Text Functions

Google Docs spreadsheets offer the following text functions, among others: LEFT, RIGHT, MID, TRIM, LEN, FIND, SEARCH, SUBSTITUTE, REPT, etc. We will look at the syntax of each function, with examples.

LEFT FUNCTION

The LEFT function returns a string consisting of the specified number of characters from the left end of a given string.

LEFT(text, number)

text: a string value from which characters are extracted.
number: an optional argument specifying the desired length of the returned string; a number value that must be greater than or equal to 1.

Notes: The count includes all spaces, numbers, and special characters.

The formula in cell B1 is =LEFT(A1,5). The formula extracted the first 5 characters of the text string in cell A1; as stated in the notes, it counted the space as well. The formula in cell B3 is =LEFT(A3); here we have omitted the second argument, which means only one character will be extracted.

RIGHT FUNCTION

The RIGHT function returns a string consisting of the specified number of characters from the right end of a given string.

RIGHT(text, number)

text: a string value from which characters are extracted.
number: an optional argument specifying the desired length of the returned string; a number value that must be greater than or equal to 1.

Notes: If number is greater than or equal to the length of text, the string returned is equal to text.

The formula in cell B1 is =RIGHT(A1,6). The formula extracted the last 6 characters of the text string in cell A1; as stated in the notes, it counted the space as well. The formula in cell B3 is =RIGHT(A3); we have omitted the second argument, which means only one character will be extracted.

MID FUNCTION

The MID function returns a string consisting of the given number of characters from a string, starting at the specified position.

MID(text, start, number)

text: a string value from which characters are extracted.
start: the position within the specified string at which the action should begin; a number value that must be greater than or equal to 1 and less than or equal to the number of characters in text.
number: the desired length of the returned string; a number value that must be greater than or equal to 1.

Notes: If number is greater than or equal to the length of text, the string returned is equal to text, beginning at start.

The formula in cell B1 is =MID(A1,3,2). Even though cell A1 holds a numeric value, MID still extracts the data, but the result is in string form; that is why the value in cell B1 is left-justified. To convert that string to a number we can modify the formula as follows: =MID(A1,3,2)+0. Similarly, the formula in cell B2 is =MID(A2,6,4) and the formula in cell B3 is =MID(A3,20,3). MID is used extensively along with SEARCH and FIND to extract text.

TRIM FUNCTION

The TRIM function returns a string based on a given string, after removing extra spaces.

TRIM(text)

text: a string value.

Notes: TRIM removes all spaces before the first character, all spaces after the last character, and all duplicate spaces between characters, leaving only single spaces between words.

This function is very useful whenever we are importing CSV files into Google Docs, for cleaning the leading and trailing spaces in the strings.

LEN FUNCTION

The LEN function returns the number of characters in a string. The LEN function is used very frequently in Google Docs spreadsheets whenever there is an array formula;
its application is somewhat unique to Google spreadsheets.

LEN(text)

text: a string value.

Notes: The count includes all spaces, numbers, and special characters.

Have a look at the formula structure in the image below: the LEN function is used there to restrict the population of the LEFT function to 3 rows.

FIND FUNCTION

The FIND function returns the starting position of one string within another.

FIND(find_text, text, position)

find_text: the string to find; a string value.
text: the string to search within; a string value.
position: an optional argument that specifies the position within the specified string at which the search should begin; a number value that must be greater than or equal to 1 and less than or equal to the number of characters in text.

Notes: The search is case sensitive and spaces are counted. Wildcards are not allowed. To use wildcards or to ignore case in your search, use the SEARCH function.

SEARCH FUNCTION

The SEARCH function returns the starting position of one string within another, ignoring case and allowing wildcards.

SEARCH(find_text, text, position)

find_text: the string to find; a string value.
text: the string to search within; a string value.
position: an optional argument that specifies the position within the specified string at which the search should begin; a number value that must be greater than or equal to 1 and less than or equal to the number of characters in text.

Notes: Wildcards are permitted in find_text. In find_text, use an * (asterisk) to match multiple characters or a ? (question mark) to match any single character. Specifying position permits you to begin the search for find_text within, rather than at the beginning of, text. This is particularly useful if text may contain multiple instances of find_text and you wish to determine the starting position of an instance other than the first. If position is omitted, it is assumed to be 1. To have case considered in your search, use the FIND function.

Difference Between FIND & SEARCH

1. Both functions find the position of a substring in a string, that is, the position of some characters within a different set of characters.
2. FIND is case sensitive and does not allow wildcards such as * (1 or more characters) or ? (a single character).
3. SEARCH is NOT case sensitive and it accepts wildcards: ? is the wildcard for a single character, and * is the wildcard for one or more characters.

In the image above we can clearly see the difference between SEARCH and FIND in cells B3 & B4: the FIND function failed to identify the "a" because it is case sensitive, whereas SEARCH could identify the position of "a" correctly. For all practical purposes, we can use the SEARCH function instead of the FIND function.

SUBSTITUTE FUNCTION

The SUBSTITUTE function returns a string where the specified characters of a given string have been replaced with a new string.

SUBSTITUTE(text, search_text, new_text, occurrence)

text: a string value.
search_text: the string within the given string that is to be replaced; a string value.
new_text: the text used as a replacement for the section of the given string that is replaced; a string value. It does not have to be the same length as the text replaced.
occurrence: an optional value specifying the occurrence that should be replaced;
it is a number value and must be greater than or equal to 1, or omitted. If it is greater than the number of times search_text appears within text, no replacement will occur. If omitted, all occurrences of search_text within text will be replaced by new_text.

Notes: You can replace individual characters, whole words, or strings of characters within words.

In the above example the character "A" is replaced with "X" in cell B1, and in cell B2 the second occurrence of "A" has been replaced with "X". The SUBSTITUTE function is widely used to extract a certain part of a string, along with other functions such as FIND, SEARCH, MID, etc.

Example 1: SUBSTITUTE is widely used to count spaces. Suppose A1 holds "This is an example for counting spaces"; the following returns the number of spaces:

=LEN(A1)-LEN(SUBSTITUTE(A1," ",""))

Example 2: If we are extracting the letters from "1234567890abcd", in other spreadsheet applications we would have to nest SUBSTITUTE, as follows:

=substitute(substitute(substitute(substitute(substitute(substitute(substitute(substitute(substitute(SUBSTITUTE(F1,1,""),2,""),3,""),4,""),5,""),6,""),7,""),8,""),9,""),0,"")

Fortunately, Google Docs spreadsheets support regular expressions, so the formula in cell H1 is simply:

=REGEXREPLACE(F1,"[0-9]","")

Example 3: Our objective is to extract 4.1, which is inside the square brackets. To extract this, we started with the MID function. To identify the start point we used the FIND function, adding 1 to skip the left square bracket. We then removed the right square bracket by using the SUBSTITUTE function, replacing "]" with a blank. Now you can clearly see 4.1 in cell F5, but the problem is that it is left-justified, which means it is text. To convert the text to a number, we just added 0 (zero) to the formula. The final formula is:

=SUBSTITUTE((MID(F3,FIND("[",F3)+1,10)),"]","")+0

REPT FUNCTION

The REPT function returns a string that contains a given string repeated a specified number of times.

REPT(text, number)

text: a string value.
number: the number of times the given string should be repeated; a number value that must be greater than or equal to 0.

The REPT function can also be used to create a histogram, as shown in the image below. The data is in cells A2:A21; the formula in cell C1 is =FREQUENCY(A2:A21,B2:B6), and the formula in cell D1 is =REPT("|",C2), dragged down.
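As a further illustration of how these functions combine (this example is not from the original article), the same LEN/SUBSTITUTE pattern used in Example 1 to count spaces can count words instead, assuming the text sits in cell A1:

=LEN(TRIM(A1))-LEN(SUBSTITUTE(TRIM(A1)," ",""))+1

Here TRIM first collapses any duplicate spaces so that each remaining space separates exactly two words (as described in the TRIM notes above); the LEN difference then counts those spaces, and adding 1 turns the space count into a word count for any non-empty cell.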
Would it be weird to show a program I wrote to my professor?

In summary, the speaker is a second-year Computer Engineering student who has written their own 8085 simulator. They are unsure of what to do with it and are considering showing it to their professor, but are worried about coming across as arrogant or annoying. They receive advice to approach their professor in a humble manner and ask for their opinion on the simulator. They are grateful for the advice and plan to follow it.

#1 (Quem): I'm a second-year Comp E student, and in an intro class we're learning 8085 assembler. The professor wrote his own simulator for the class, but it's kinda buggy and I feel like it's been given minimal updates since he wrote it some 15 years ago. I was looking for a programming project, so I wrote my own 8085 simulator. The problem is now I'm not sure what to do with it. I was thinking I could show it to my professor, but I'm not sure if that would be weird? Any advice would be appreciated :)

#2: Just send him a polite email (last thing you want to do is insult him) and say you have written an 8085 simulator and you wanted his opinion on it because he has written one before for your intro class. Maybe from there you can suggest that he uses it in the future. But just approach it as a 'look what i did' instead of 'im better than you'.

#3: Try to be humble about this if you can. In any case I think you should show your professor. If your work is good this will reflect very very highly on you.

#4: Alright, thanks. I just wasn't sure how to word the email, but I think asking for his opinion on it sounds good. I also wasn't sure if he would see it as annoying or if I would be bothering him, but I guess not. Thanks for the advice!

#5: As a scientist, you are always encouraged to share your work and ideas with your peers and mentors. In this situation, it would not be weird to show your program to your professor. In fact, it could be a valuable learning experience for both you and your professor. Your professor may be interested in seeing your approach to simulating the 8085 assembler and may even have some suggestions or feedback for improvement. Additionally, showing your work to your professor could also showcase your skills and passion for the subject, potentially leading to future opportunities such as research or projects. It is always important to take advantage of opportunities to share and receive feedback on your work, and in this case, showing your program to your professor would be a great way to do so.

1. Is it appropriate to show my professor a program I wrote?
Yes, it is absolutely appropriate to show your professor a program you wrote. In fact, many professors encourage students to share their work and seek feedback.

2. Will my professor think it's weird if I show them my program?
No, your professor will not think it's weird. They are there to help you learn and improve, and seeing your work can give them a better understanding of your progress and abilities.

3. Should I only show my professor a program that I think is perfect?
No, you should not wait until your program is perfect to show your professor. It is important to seek feedback and make improvements throughout the development process.

4. Is it better to show my program in person or through email?
This depends on your professor's preferences and availability.
If possible, it is always better to show your program in person so you can explain your thought process and receive immediate feedback.

5. What if my professor doesn't understand my program?
If your professor doesn't understand your program, it could be an opportunity for you to clarify and explain your code. This can also help you identify areas that may need improvement. Remember, it's okay if your professor doesn't fully understand your program - the important thing is that you are learning and seeking feedback.
Re^2: DBIx::Connector timeout?
by leuchuk (Novice) on Aug 02, 2013 at 13:20 UTC
in reply to DBIx::Connector timeout?

The timeout most probably comes from the database. Every database has a predefined timeout, and on the client side there is usually a parameter for how long to wait for an answer from the server.

There are several ways to approach your problem:

First, check whether your database is really the one giving you this problem. It could, for example, be a timeout at the IP level, or wrong routing, e.g. when a firewall interrupts the traffic. Then check whether your connection parameters are right; a wrong port number can give you this kind of headache.

If you identify the database as the one causing the timeout: raise the connection timeout in your database. There is usually an entry in a configuration file or, depending on the database and OS, an environment variable. Raise the limit of open connections if the problem is too many open (and probably partly idle) connections. If you have lots of users, try pooling.

Leuchuk
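To make the client-side knobs concrete, here is a minimal, hypothetical sketch using DBIx::Connector against MySQL. The timeout options mysql_connect_timeout and mysql_read_timeout come from DBD::mysql (other drivers spell this differently, e.g. PostgreSQL takes connect_timeout in the DSN), and the host, database and credentials below are placeholders:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBIx::Connector;

    # Hypothetical DSN: client-side timeouts are set as DBD::mysql options,
    # so the driver gives up instead of hanging on a dead or slow server.
    my $dsn = 'dbi:mysql:database=mydb;host=db.example.com;port=3306'
            . ';mysql_connect_timeout=5'   # seconds to wait for the connection
            . ';mysql_read_timeout=30';    # seconds to wait for a query result

    my $conn = DBIx::Connector->new($dsn, 'myuser', 'mypass', {
        RaiseError => 1,   # die on errors instead of returning undef
        AutoCommit => 1,
    });

    # run() reuses the cached handle; 'fixup' mode reconnects and retries
    # once if the handle turns out to be dead (e.g. the server closed an
    # idle connection).
    my $rows = $conn->run(fixup => sub {
        my $dbh = shift;
        $dbh->selectall_arrayref('SELECT id, name FROM users');
    });

    printf "got %d rows\n", scalar @$rows;

If generous timeouts like these still produce the error, that points back at the network path or the server-side limits mentioned above rather than at DBIx::Connector itself.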
What is Hashing?

The reliability and integrity of blockchain are rooted in there being no chance of fraudulent data or transactions, such as a double spend, being accepted or recorded. A cornerstone of the technology as a whole, and the key component in maintaining this reliability, is hashing.

Hashing is the process of taking an input of any length and turning it into a cryptographic fixed-length output through a mathematical algorithm (Bitcoin uses SHA-256, for example). Such inputs can include a short piece of information, such as a message, or a huge cache of varying pieces of information, such as a block of transactions or even all of the information contained on the internet.

Securing Data with Hashing

Hashing drastically increases the security of the data. Anyone looking at a hash cannot work out what the input was, or even how long it was. A cryptographic hash function needs several crucial qualities to be considered useful:

Infeasible to produce the same hash value for differing inputs: This is important because otherwise it would be impossible to keep track of the authenticity of inputs. (Since inputs are unlimited and outputs have a fixed length, collisions must exist in principle; what matters is that they are computationally infeasible to find.)

The same message will always produce the same hash value: The importance of this is similar to the prior point.

Quick to produce a hash for any given message: The system would not be efficient or provide value otherwise.

Infeasible to determine the input based on the hash value: This is one of the foremost qualities of hashing for securing data.

Even the slightest change to an input completely alters the hash: This is also a matter of security. If a slight change only made a slight difference, it would be considerably easier to work out what the input was. The better and more complex the hashing algorithm, the larger the impact of changing an input on the output.

Hashing secures data by providing certainty that it hasn't been tampered with before being seen by the intended recipient. So, as an example, if you downloaded a file containing sensitive information, you could run it through a hashing algorithm, calculate the hash of that data and compare it to the one shown by whoever sent you the data. If the hashes don't match, you can be certain that the file was altered before you received it.

Blockchain Hashing

In blockchain, hashes are used to represent the current state of the world, or to be more precise, the state of a blockchain. As such, the input represents everything that has happened on a blockchain, so every single transaction up to that point, combined with the new data that is being added. This means that the output is based on, and therefore shaped by, all previous transactions that have occurred on a blockchain. As mentioned, the slightest change to any part of the input results in a huge change to the output; in this lies the irrefutable security of blockchain technology. Changing any record that has previously happened on a blockchain would change all the hashes, making them false and obsolete. This becomes impossible when the transparent nature of blockchain is taken into account, as these changes would need to be done in plain sight of the whole network.

The first block of a blockchain, known as a genesis block, contains its transactions that, when combined and validated, produce a unique hash.
This hash and all the new transactions that are being processed are then used as input to create a brand new hash that is used in the next block in the chain. This means that each block links back to its previous block through its hash, forming a chain back to the genesis block, hence the name blockchain. In this way, transactions can be added securely as long as the nodes on the network are in consensus on what the hash should be.

An Explanation of Data Structures

Data structures are a specialized way of storing data. The two foremost objects used here are pointers and linked lists. Pointers store addresses as variables and as such point to the locations of other variables. Linked lists are a sequence of blocks connected to one another through pointers: the variable in each pointer is the address of the next node, with the last node having no pointer, and the pointer to the first block, the genesis block, actually lying outside of the chain itself. At its simplest, a blockchain is simply a linked list of recorded transactions pointing back to one another through hash pointers.

Hash pointers are where blockchain sets itself apart in terms of certainty, as they contain not only the address of the previous block but also the hash of that block's data. As described earlier, this is the foundation of the secure nature of blockchain. For example, if a hacker wanted to attack the ninth block in the chain and change its data, the hash of that block would change, so he would also have to alter every block that follows it, since each of their hash pointers would no longer match. In essence this makes it impossible to alter any data that is recorded on a blockchain.

Hashing is one of the core fundamentals and foremost aspects of the immutable and defining potential of blockchain technology. It preserves the authenticity of the data that is recorded and viewed, and as such, the integrity of a blockchain as a whole. It is one of the more technical aspects of the technology, but understanding it is a solid step in understanding how blockchain functions and the immeasurable potential and value that it has.

What is a TXID?

A TXID is a transaction ID, produced by hashing transaction data (such as the amount being sent, the receiving address, the timestamp, etc.) and appearing as a string of numbers and letters that can be used to identify and confirm that a transaction has happened.

What are Merkle Trees?

A Merkle tree, otherwise called a hash tree, is a data structure of hashes used to record data onto a blockchain in a secure and efficient manner. The concept was patented by Ralph Merkle in 1979. The system works by running a block of transactions through an algorithm to generate a hash as a means of verifying the validity of that data based on the original transactions. An entire block of transactions is not run through a hash function at once; rather, each transaction is hashed individually, and the resulting hashes are then paired, concatenated and hashed again, level by level, until a single hash remains for the entire block. When visualized, the structure resembles a tree, albeit in simplified form, as each block will normally contain hundreds, if not thousands, of transactions.

Hashes on the bottom row are known as "leaves", the middle hashes are referred to as "branches", and the hash at the top is the "root". Merkle trees are especially useful as they allow anyone to confirm the validity of an individual transaction without having to download a whole blockchain.
For instance, as long as you have the root hash (12345678), you can easily confirm transaction (8) by checking it against the hashes (7), (56) and (1234). As long as those are all there on the blockchain, transaction (8) is accounted for and therefore true, and meant to be there.

The hash of the Merkle root is normally contained in a block header along with:

• The hash of the previous block
• A timestamp
• The nonce
• The block version number
• The current difficulty target

Merkle trees and hashes are a key component in allowing blockchain technology to function while providing security, integrity and irrefutability, and, alongside consensus protocols, are arguably the most important aspects of blockchain technology.
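The two central ideas above, the avalanche property of a hash function and the folding of transaction hashes into a single Merkle root, are easy to demonstrate in a few lines of code. The sketch below is illustrative only: it uses SHA-256 from Python's standard library and toy byte strings as "transactions", and it sidesteps the exact serialization and double-hashing rules a real chain like Bitcoin applies.

    import hashlib

    def sha256(data: bytes) -> str:
        """Return the hex SHA-256 digest of the input bytes."""
        return hashlib.sha256(data).hexdigest()

    # Determinism and the avalanche effect: the same input always yields the
    # same fixed-length hash, while a one-character change alters it entirely.
    print(sha256(b"hello world"))
    print(sha256(b"hello worle"))

    def merkle_root(transactions: list) -> str:
        """Fold a list of transaction byte strings into a single Merkle root."""
        level = [sha256(tx) for tx in transactions]      # leaf hashes
        while len(level) > 1:
            if len(level) % 2 == 1:                      # odd count: duplicate
                level.append(level[-1])                  # the last hash
            # hash each concatenated pair to form the next level up
            level = [sha256((level[i] + level[i + 1]).encode())
                     for i in range(0, len(level), 2)]
        return level[0]                                  # the root

    txs = [b"tx1", b"tx2", b"tx3", b"tx4"]
    root = merkle_root(txs)
    print("Merkle root:", root)

    # Changing any single transaction changes the root, which is why a block
    # header that commits to the root commits to the exact transaction set.
    assert merkle_root([b"tx1", b"tx2", b"tx3", b"tx5"]) != root

A light client that holds only the root can then verify one transaction from a logarithmic number of sibling hashes, exactly as in the (12345678) example above.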
CRUX Handbook

RELEASE 1.2
2003-09-17

This handbook describes the installation, configuration and administration of CRUX Linux. Please use this handbook only for issues specific to CRUX. For general information about Linux, see the Linux Documentation Project.

Table of Contents

Preface
1. Introduction
1.1. What is CRUX?
1.2. Why use CRUX?
1.3. License
1.3.1. Packages
1.3.2. Build scripts
1.3.3. NO WARRANTY
2. Installing CRUX
2.1. Supported hardware
2.2. Installing from CD-ROM
2.3. Upgrading from CD-ROM
2.4. Other installation methods
2.4.1. Building your own boot kernel
2.4.2. Installing via network
3. The package system
3.1. Introduction
3.2. Using the package system
3.2.1. Installing a package
3.2.2. Upgrading a package
3.2.3. Removing a package
3.2.4. Querying the package database
3.3. Creating a package
3.4. Package creation guidelines
3.4.1. General
3.4.2. Directories
3.4.3. Removing junk files
3.4.4. Pkgfile
4. The ports system
4.1. Introduction
4.1.1. What is a port?
4.1.2. What is the ports system?
4.2. Using the ports system
4.2.1. Synchronizing the local ports structure
4.2.2. Listing local ports
4.2.3. Checking version differences
4.2.4. Building and installing packages
5. Configuration
5.1. Init scripts
5.1.1. Runlevels
5.1.2. Layout
5.1.3. Configuration variables in /etc/rc.conf
5.1.4. Network configuration
5.2. Passwords
5.3. Upgrading the kernel
6. FAQ
6.1. General
6.2. Installation
6.3. Configuration

Preface

Per Liden wrote this handbook. Robert McMeekin converted it to DocBook. Alexander Uskov translated it into Russian. Many others have given feedback and suggestions.

Chapter 1. Introduction

1.1. What is CRUX?

CRUX is a lightweight, i686-optimized Linux distribution targeted at experienced Linux users. The primary focus of the distribution is to keep it simple, which is achieved with a simple tar.gz-based package system, BSD-style initscripts, and a relatively small collection of trimmed packages. The secondary focus is the use of new Linux features, utilities and libraries. CRUX also includes a ports system, designed to make it easy to install and upgrade applications.

1.2. Why CRUX?

There are many different Linux distributions these days, so why use this one? Everything listed below is, of course, a matter of taste. I will try to explain my view; perhaps you share it. First of all, I want a distribution that is simple from beginning to end. Next, I want the packages to be stable releases, not alpha or beta versions and the like. I want it to be very easy to create new packages and update old ones (often, updating a package in CRUX is as simple as running pkgmk -d -u). I want packages optimized for my processor (that is, -march=i686). I do not want my filesystem cluttered with files I never use (e.g. /usr/doc/*, etc.). If I need more information about some program, I can find it in the man pages or on the net. And finally, I want to use new Linux features such as devfs, reiserfs, ext3fs and so on. If you are a fairly experienced Linux user who wants a clean and solid Linux distribution as the base for your installations, if you prefer a text editor to a graphical interface for administration, and if you are not afraid of downloading and compiling programs yourself, this distribution is for you.

1.3. License

1.3.1. Packages

CRUX is a Linux distribution; it contains software written by many people.
Each software package comes with its own license, chosen by its author(s). To find out which license applies to a specific package, look at its source code.

1.3.2. Build scripts

All package build scripts in CRUX (in the base and opt collections) are Copyright Per Liden and licensed under the GNU General Public License.

1.3.3. NO WARRANTY

CRUX is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY. You use this product AT YOUR OWN RISK.

Chapter 2. Installing CRUX

2.1. Supported hardware

Packages on the official CRUX ISO image are compiled with optimization for i686 (Pentium-Pro/Celeron/Pentium-II or better). Do not try to install it on an i586 (Pentium, AMD K6/K6-II/K6-III) or lower processor; it simply will not work. To install CRUX on an i586 system, you need to download the i586 version of the CRUX ISO image.

The kernel used for installation (booted from the CRUX ISO image (El Torito)) is compiled with support for the following disk controllers and USB:

IDE: Generic PCI IDE chipset
SCSI: 7000FASST, ACARD, Adaptec AACRAID, Adaptec AIC7xxx, Adaptec I2O RAID, AdvanSys, AM53/79C974, AMI MegaRAID, BusLogic, Compaq Fibre Channel, NCR5380/53c400, IBM ServeRAID, SYM53C8XX, Tekram DC390(T) and Am53/79C974
USB: USB device filesystem, UHCI (Intel PIIX4, VIA, ...) support, USB Human Interface Device (full HID) support, HID input layer support

To be able to install CRUX, your disk controller must be listed above. If your hardware is not listed, or you have other problems installing CRUX, see Section 2.4.

2.2. Installing from CD-ROM

1. Download the CRUX ISO image (crux-1.2.iso). To make sure the image was downloaded correctly, use the md5sum utility.

$ md5sum crux-1.2.iso

Compare the output with the file crux-1.2.md5sum, found in the same location you downloaded the image from. If the checksums match, the image was downloaded correctly and can be burned to a CD.

2. The ISO image is bootable; insert the freshly burned CD into your drive and reboot the computer. At the boot prompt, press Enter.

3. Log in as root (no password needed).

4. Create (if necessary) and format the partition(s) where CRUX will be installed.

$ fdisk /dev/discs/disc?/disc
$ mkreiserfs /dev/discs/disc?/part?
$ mkswap /dev/discs/disc?/part?

The amount of disk space needed depends on which packages you install. I recommend no more than 1GB for the root partition (CRUX uses about 200MB-500MB depending on configuration). Using ReiserFS is recommended; Ext2fs/Ext3fs and JFS are supported as well. Looking ahead, I recommend separating system data from user data, e.g. using a separate partition for /home (and possibly /var); this will make your life significantly easier when upgrading/reinstalling/removing the system.

[Note] Note: Make sure the BIOS Virus Protection setting is DISABLED; this option can prevent fdisk from saving the partition table correctly.

5. Mount the partition on which the system will be installed.

$ mount /dev/discs/disc?/part? /mnt

If you are installing the system onto multiple partitions, mount them in the appropriate order. For example, with a separate /home or /var you need:

$ mkdir /mnt/var
$ mount /dev/discs/disc?/part? /mnt/var

6. Activate your swap partition(s).

$ swapon /dev/discs/disc?/part?

7.
Type Setup to start the package installation script. The script will ask where the new root partition is mounted and where the packages to be installed are located. ONLY the packages you select will be installed. However, I recommend installing all packages marked base. Once the selected packages have been installed, Setup will display the installation log. Make sure the last line of the log says 0 error(s). If you need to add a package later, mount the CRUX CD-ROM and use pkgadd to install it.

[Note] Note: Packages are not checked for dependencies. This means that if, for example, you select sendmail, you must also select db.

Setup screenshots

8. Now it is time to compile your kernel and do the basic system configuration. Compiling the kernel requires a "chroot" into your new CRUX installation.

$ mount -t devfs devfs /mnt/dev
$ mount -t proc proc /mnt/proc
$ chroot /mnt /bin/bash

9. Set the root password.

$ passwd

10. Edit /etc/fstab to match your filesystem configuration. Available editors are vim and pico.

11. Edit /etc/rc.conf to configure the keyboard layout, services and timezone. /etc/rc.conf is described in Section 5.1.3.

12. Edit /etc/rc.d/net, /etc/hosts and /etc/resolv.conf to configure your network (ip address/gateway/hostname/domain/dns).

13. Go to /usr/src/linux-2.4.21, configure and compile a new kernel.

$ cd /usr/src/linux-2.4.21
$ make menuconfig
$ make dep
$ make clean
$ make bzImage
$ make modules
$ make modules_install
$ cp arch/i386/boot/bzImage /vmlinuz
$ cp System.map /

Remember! The following options must be enabled:

Code maturity level options --->
  [*] Prompt for development and/or incomplete code/drivers
File systems --->
  [*] /dev file system support
  [*]   Automatically mount at boot

14. Edit /etc/lilo.conf for your new kernel and run lilo to make the system bootable.

15. Remove the CRUX CD-ROM from your drive and reboot the system from the hard disk.

2.3. Upgrading from CD-ROM

1. Download the CRUX ISO image (crux-1.2.iso). To make sure the image was downloaded correctly, use the md5sum utility.

$ md5sum crux-1.2.iso

Compare the output with the file crux-1.2.md5sum, found in the same location you downloaded the image from. If the checksums match, the image was downloaded correctly and can be burned to a CD.

2. The ISO image is bootable; insert the freshly burned CD into your drive and reboot the computer. At the boot prompt, press Enter.

3. Log in as root (no password needed).

4. Mount your root partition.

$ mount /dev/discs/disc?/part? /mnt

If you installed the system on multiple partitions, mount them in the appropriate order. For example, with a separate /home or /var you need:

$ mount /dev/discs/disc?/part? /mnt/var

5. Activate your swap partition(s).

$ swapon /dev/discs/disc?/part?

6. Type Setup to start the package installation script. The script will ask where the root partition is mounted and where the packages to be upgraded are located. It is best to upgrade all packages, to avoid problems later: some new libraries are not 100% backwards compatible. When Setup has upgraded the selected packages, the upgrade log will be displayed. Make sure the last line of the log says 0 error(s). If you need additional packages later, you can mount the CRUX CD-ROM and add them using pkgadd.

7. Now a new kernel must be compiled. Compiling the kernel requires a "chroot" into your CRUX installation.
$ mount -t devfs devfs /mnt/dev
$ mount -t proc proc /mnt/proc
$ chroot /mnt /bin/bash

8. Go to /usr/src/linux-2.4.21, configure and compile the new kernel. Remember! The following options must be enabled:

Code maturity level options --->
  [*] Prompt for development and/or incomplete code/drivers
File systems --->
  [*] /dev file system support
  [*]   Automatically mount at boot

9. Edit /etc/lilo.conf to boot your kernel and run lilo to make the system bootable.

10. Remove the CRUX CD-ROM from your drive and reboot the system from the hard disk.

2.4. Other installation methods

2.4.1. Building your own boot kernel

If you cannot install CRUX from CD-ROM because your hardware is not supported by the boot kernel, you can compile your own boot kernel with support for whatever you need. This requires a 1.44MB floppy disk, access to another Linux workstation, and the CRUX ISO burned to a CD. A general understanding of how to configure and compile a Linux kernel is, of course, also needed.

1. Compile a new kernel with support for your hardware. Take the kernel configuration used by the old boot kernel as a starting point (you can get the configuration here) and add support for your hardware. If the kernel becomes too large, you can remove the SCSI and USB drivers (provided you do not need them, of course), but never change the filesystem-related settings.

2. Download and unpack the boot floppy creation kit.

3. Go to the mkbootfloppy directory and execute the mkbootfloppy script (as root). The script takes one argument, the kernel to place on the floppy image. Before you start, make sure nothing is mounted at /mnt, since mkbootfloppy uses that path as a mount point.

$ cd mkbootfloppy
$ ./mkbootfloppy /path/to/linux/kernel/arch/i386/boot/bzImage
1440+0 records in
1440+0 records out
mke2fs 1.27 (8-Mar-2002)
Added CRUX *

4. Write the resulting boot.img to a floppy disk.

$ dd if=boot.img of=/dev/fd0

5. Insert the floppy and the CRUX CD into the computer on which you want to install, and reboot it.

6. Install CRUX.

2.4.2. Installing via network

If you do not have a CD writer, or you cannot boot your machine from the CRUX CD-ROM, or for any other reason cannot install CRUX the normal way (Section 2.2), check out the CRUX Network Setup Guide by Martin Opel or the HOWTO install CRUX via NFS by Jurgen Daubert.

Chapter 3. The package system

3.1. Introduction

The package system (pkgutils) is made with simplicity in mind; all packages are plain tar.gz files (i.e. without any kind of metadata). Packages follow the naming convention <name>#<version>-<release>.pkg.tar.gz, where <name> is the name of the program, <version> is its version, and <release> is the version of the package. The pkg.tar.gz extension is used (rather than just tar.gz) to distinguish packages from ordinary tar.gz files, while remaining a valid tar.gz archive usable by pkgadd. This makes it easy to tell packages apart from other tar.gz files.

pkgadd(8), pkgrm(8), pkginfo(8), and pkgmk(8) are the package management utilities. With these utilities you can install, remove, inspect and create packages, and query the package database. When a package is installed with pkgadd, a record of it is added to the package database (located in /var/lib/pkg/db). The package system does not resolve dependencies and will not warn you if you install a package that requires another package to be present.
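Because a package really is just a tar.gz archive, standard tools are enough to peek inside one without installing it. A small illustrative session (the package file is the bash package used in the examples below; output abridged):

$ tar tzf bash#2.05-1.pkg.tar.gz
bin/
bin/bash
bin/sh
etc/
etc/profile
usr/man/man1/bash.1.gz
usr/man/man1/sh.1.gz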
The following section briefly describes how to use the package utilities. More information about these utilities can be found in their man pages.

3.2. Using the package system

3.2.1. Installing a package

Installing a package is done with the pkgadd utility. This utility requires exactly one argument, the package you want to install. Example:

$ pkgadd bash#2.05-1.pkg.tar.gz

When installing a package, the package manager ensures that no previously installed files are overwritten. If a file name conflict occurs, an error message is printed and pkgadd aborts without installing the package. The error message contains the names of the conflicting files. Example:

$ pkgadd bash#2.05-1.pkg.tar.gz
bin/sh
usr/man/man1/sh.1.gz
pkgadd error: listed file(s) already installed (use -f to ignore and overwrite)

To install a package, overwriting the conflicting files, use the -f (or --force) option. Example:

$ pkgadd -f bash#2.05-1.pkg.tar.gz

The package system assumes that a file is owned by exactly one package. When installing with -f overwrites a file, ownership of that file is transferred to the new package. Directories, however, can be owned by multiple packages.

[Warning] Warning: Reinstalling packages is often not a good idea. A file conflict can indicate that a package is broken or installs files it should not. Use this option with great care; preferably, do not use it at all.

As mentioned earlier, a package itself carries no metadata. Instead, the package manager uses the package file name to determine the package name and version. Thus, when installing a package file named bash#2.05-1.pkg.tar.gz, the package manager interprets it as a package named bash, version 2.05-1. If pkgadd cannot interpret the file name (e.g. the # is missing, or the file name does not end in .pkg.tar.gz), an error message is printed and pkgadd aborts without installing the package.

3.2.2. Upgrading a package

Upgrading a package is done with pkgadd using the -u option. Example:

$ pkgadd -u bash#2.05-1.pkg.tar.gz

This will replace the previously installed bash with the new one. If bash was not previously installed, pkgadd prints an error message. The package system does not care about version numbers, so you can "upgrade" version 2.05-1 to version 2.04-1 (or even to 2.05-1 itself). The package being installed replaces the old one.

Upgrading a package is equivalent to running pkgrm followed by pkgadd, with one (big) exception: when upgrading a package (with pkgadd -u) you have the option of not replacing some of the previously installed files. This is typically useful for preserving configuration and log files.

When executed, pkgadd reads the file /etc/pkgadd.conf. This file contains rules describing how pkgadd should handle upgrades. A rule consists of three parts: an event, a pattern and an action. The event describes the situation in which the rule applies; currently only one type of event is available, UPGRADE. The pattern is a file name pattern expressed as a regular expression. The action, for the UPGRADE event, can be YES or NO. Multiple rules with the same event type are allowed. The first rule has the lowest priority, the last one the highest.
Example:

#
# /etc/pkgadd.conf: pkgadd(8) configuration
#

UPGRADE ^etc/.*$ NO
UPGRADE ^var/log/.*$ NO
UPGRADE ^etc/X11/.*$ YES
UPGRADE ^etc/X11/XF86Config$ NO

# End of file

In this example pkgadd will never upgrade files in /etc/ or /var/log/ (including subdirectories), will upgrade files in /etc/X11/ (including subdirectories), but will not touch /etc/X11/XF86Config. The default rule is to upgrade everything.

[Note] Note: The pattern must never contain a leading "/", since you are referring to files inside the package, not files on disk.

If pkgadd finds a file that should not be upgraded, the file is placed in /var/lib/pkg/rejected/. The user is then free to examine and/or remove the file manually. Files in this directory are never added to the package database. Example (using the /etc/pkgadd.conf above):

$ pkgadd -u bash#2.05-1.pkg.tar.gz
pkgadd: rejecting etc/profile, keeping existing version
$ ls /var/lib/pkg/rejected/
etc/
$ ls /var/lib/pkg/rejected/etc/
profile

3.2.3. Removing a package

Packages are removed with the pkgrm command. This utility takes exactly one argument, the name of the package to remove. Example:

$ pkgrm bash

[Warning] Warning: This command removes all files owned by the package without any confirmation. Think twice before executing it, and make sure the package name is spelled correctly (e.g. glibc, not glib).

3.2.4. Querying the package database

Querying the package database is done with the pkginfo utility. This utility has several options to answer different questions.

-i, --installed: List installed packages and their versions.
-l, --list package|file: List the files owned by the given package or contained in file.
-o, --owner file: List the owner(s) of file.

Example:

$ pkginfo -i
audiofile 0.2.3-1
autoconf 2.52-1
automake 1.5-1
<...>
xmms 1.2.7-1
zip 2.3-1
zlib 1.1.4-1

$ pkginfo -l bash
bin/
bin/bash
bin/sh
etc/
etc/profile
usr/
usr/man/
usr/man/man1/
usr/man/man1/bash.1.gz
usr/man/man1/sh.1.gz

$ pkginfo -l grep#2.5-1.pkg.tar.gz
usr/
usr/bin/
usr/bin/egrep
usr/bin/fgrep
usr/bin/grep
usr/man/
usr/man/man1/
usr/man/man1/egrep.1.gz
usr/man/man1/fgrep.1.gz
usr/man/man1/grep.1.gz

$ pkginfo -o bin/ls
e2fsprogs usr/bin/lsattr
fileutils bin/ls
modutils sbin/lsmod

3.3. Creating packages

Packages are created with the pkgmk command. This command uses a file named Pkgfile, which contains information about the package (such as name, version, etc.) and the commands to be executed in order to build it. To be more precise: a Pkgfile is a bash(1) script defining a number of variables (name, version, release and source) and a function (build). Below is an example of what a Pkgfile for building the grep(1) utility might look like. Some comments have been added.

# The name of the package.
name=grep

# The version of the package.
version=2.4.2

# The package release.
release=1

# The source(s) used to build the package.
source=(ftp://ftp.ibiblio.org/pub/gnu/$name/$name-$version.tar.gz)

# The build() function is called by pkgmk once the
# specified sources have been unpacked.
build() {
    # First, go to the directory with the unpacked sources.
    cd $name-$version

    # Run the configure script with the desired arguments.
    # In this case grep should go into /usr/bin and
    # NLS should be disabled.
    ./configure --prefix=/usr --disable-nls

    # Compile.
    make

    # Install the files, not into /usr but into $PKG/usr, using the
    # DESTDIR variable. The $PKG variable points to a temporary directory
    # which will later be packaged into a tar.gz file.
    # Note: not all Makefiles support the DESTDIR variable; some use
    # prefix, others use ROOT, etc. Inspect the Makefile for this
    # capability. Some Makefiles have no such capability at all;
    # in that case you will need to patch them.
    make DESTDIR=$PKG install

    # Remove unwanted files, in this case the info pages.
    rm -rf $PKG/usr/info
}

In a real Pkgfile you would not include all these comments. Here is the actual Pkgfile for grep(1):

# $Id: package.xml,v 1.1 2003/04/28 23:18:22 per Exp $
# Maintainer: Per Liden <[email protected]>

name=grep
version=2.4.2
release=1
source=(ftp://ftp.ibiblio.org/pub/gnu/$name/$name-$version.tar.gz)

build() {
    cd $name-$version
    ./configure --prefix=/usr --disable-nls
    make
    make DESTDIR=$PKG install
    rm -rf $PKG/usr/info
}

[Note] Note: The build() function in this example shows how to compile grep. The contents of the function differ from program to program; for example, a program might not use autoconf.

Once build() has finished, the $PKG directory is packaged into a file named <name>#<version>-<release>.pkg.tar.gz. Before package creation completes, pkgmk checks the contents of the package against the .footprint file. If this file does not exist, it is created and the test is skipped. The .footprint file contains a list of all files in the package, either from an earlier build or, if it did not exist before, of all files found in $PKG. If the check fails, the package is not built and an error message is printed. You should not create .footprint by hand. Instead, after the package has been updated, update the contents of the .footprint file with pkgmk -uf. If the package builds without errors, you can install it with pkgadd. I strongly recommend looking at how the Pkgfiles of other packages are written. This is a good way to learn how to create your own.

3.4. Package creation guidelines

3.4.1. General

• The package name should always be lowercase (i.e. name=eterm, not name=Eterm). If the package is added to the CRUX ports system, the same name should be used in the directory structure, i.e. /usr/ports/???/eterm.
• Do not combine several separate programs/libraries in one package. Create several packages instead.

3.4.2. Directories

• In general, packages should install files into the following directories. Exceptions are of course allowed if there is a good reason, but whenever possible try to follow the structure below.

/usr/bin/: User commands/applications
/usr/sbin/: System commands/applications (e.g. daemons)
/usr/lib/: Libraries
/usr/include/: Header files
/usr/lib/<prog>/: Plug-ins, add-ons, etc.
/usr/man/: Man pages
/usr/share/<prog>/: Data files
/usr/etc/<prog>/: Configuration files
/etc/: Configuration files for system software (daemons, etc.)

• /usr/X11R6 and /usr/???/X11 are reserved for XFree86. X clients that are not part of XFree86 should be placed in /usr, NOT in /usr/X11R6 or /usr/???/X11.
• /opt is reserved for manually compiled/installed applications. Packages should never place anything there.
• /usr/libexec/ is not used in CRUX; packages should never install anything there. Use /usr/lib/<prog>/ instead.

3.4.3. Removing junk files

• Packages should not contain "junk files". This includes info pages and other documentation besides man pages (e.g. usr/doc/*, README, *.info, *.html, etc.).
• Programs with NLS support should be built with --disable-nls whenever possible.
• Useless or deprecated programs (such as /usr/games/banner and /sbin/mkfs.minix).

3.4.4. Pkgfile

• Do not add new variables to the Pkgfile. Only in very rare cases does this improve the readability or the quality of the package. The only variables guaranteed to work with future versions of pkgmk are name, version, release, and source. Other variables could conflict with internal pkgmk variables.
• Use the $name and $version variables to make the package easy to update. For example, source=(http://xyz.org/$name-$version.tar.gz) is better than source=(http://xyz.org/myprog-1.0.3.tar.gz), because the URL will change automatically whenever $version is changed.
• Remember that source is an array, i.e. always write source=(...) and not source=...

Chapter 4. The ports system

4.1. Introduction

4.1.1. What is a port?

A port is a directory containing the files needed to build a package with pkgmk. This means that the directory contains at least a Pkgfile (describing the build process) and a .footprint file (for verifying that the build came out right). Furthermore, a port directory can contain patches and other files needed to build the package. It is important to understand that the source code of the program does not necessarily have to be present in the port directory. Instead, the Pkgfile contains a URL pointing to where the sources can be downloaded.

The use of the word port in this context is borrowed from the BSD world, where a port refers to a program that has been ported to a system or platform. The word can sometimes be a bit misleading, since most programs require no actual porting to run on CRUX (or on Linux in general).

4.1.2. What is the ports system?

The term Ports System refers to a CVS repository containing ports, together with the client programs that can download ports from that CVS repository. CRUX users can use the ports(8) command to download ports from the CVS repository and place them in /usr/ports/. The ports utility uses CVSup(1) for downloading/synchronizing.

4.2. Using the ports system

4.2.1. Synchronizing the local ports structure

Immediately after installing CRUX, the ports structure (/usr/ports/) is empty. To create the local structure, use the ports command with the -u option. Example:

$ ports -u

The -u (update) option tells ports to contact the CVS repository and download new and updated ports. While running, the program prints output like the following:

Connected to cvsup.fukt.bth.se
Updating collection base/cvs ...
Updating collection opt/cvs ...
Finished successfully

The output shows which files are downloaded, updated and deleted.

4.2.2. Listing local ports

When the local structure is up to date, the /usr/ports/ directory contains two package collections, base and opt. In each of these directories you can look for ports; simply browse the directory structure to see which ports are available.

$ cd /usr/ports/base/
$ ls
autoconf/   filesystem/  man/            sh-utils/
automake/   fileutils/   man-pages/      shadow/
bash/       findutils/   modutils/       sysklogd/
bin86/      flex/        nasm/           sysvinit/
binutils/   gawk/        ncurses/        tar/
bison/      gcc/         net-tools/      tcp_wrappers/
bsdinit/    glibc/       netkit-base/    tcsh/
bzip2/      grep/        patch/          textutils/
cpio/       groff/       perl/           time/
db/         gzip/        pkgutils/       traceroute/
dcron/      kbd/         procps/         util-linux/
devfsd/     less/        psmisc/         vim/
diffutils/  libtool/     readline/       wget/
e2fsprogs/  lilo/        reiserfsprogs/  which/
ed/         m4/          sed/            zlib/
file/       make/        sendmail/

You can also use the ports command with the -l option to list all local ports.
Example:

$ ports -l
base/autoconf
base/automake
base/bash
base/bin86
base/binutils
base/bison
...
opt/xfree86
opt/xmms

If you are looking for a specific package, it is easier to use something like ports -l | grep sendmail to check whether the port exists and where it is located.

4.2.3. Checking version differences

To find out whether the ports structure contains ports that differ from (are possibly newer than) the ones installed on your system, use the -d option. If version differences are found, output like this is printed:

$ ports -d
Collection  Name      Port      Installed
base        glibc     2.2.5-1   2.2.4-2
opt         xfree86   4.2.0-1   4.1.0-2

If no differences are found, the output is:

$ ports -d
No differences found

4.2.4. Building and installing packages

Once you have found a port you want to build and install, simply go to its directory and run pkgmk. Example:

$ cd /usr/ports/base/sendmail
$ pkgmk -d

The -d option tells pkgmk to download any missing sources listed in the Pkgfile (if the sources have already been downloaded, the option is ignored). Once the sources are available, the package is built. If the package builds successfully, you can use pkgadd to install or upgrade it. Example:

$ pkgadd sendmail#8.11.6-2.pkg.tar.gz

To make life a bit easier, pkgmk -d supports two more options, -i (install the built package) and -u (upgrade it). Example:

$ pkgmk -d -i

or

$ pkgmk -d -u

These commands will download, build and then install/upgrade the package. Note: the package will only be installed/upgraded if it builds successfully.

Chapter 5. Configuration

5.1. Init scripts

5.1.1. Runlevels

The following runlevels are used in CRUX (defined in /etc/inittab).

0: Halt
1 (S): Single-user Mode
2: Multi-user Mode
3-5: (not used)
6: Reboot

5.1.2. Layout

The init scripts used in CRUX follow the BSD style and have the following layout.

/etc/rc: System boot script
/etc/rc.single: Single-user mode boot script
/etc/rc.modules: Module initialization script
/etc/rc.multi: Multi-user mode boot script
/etc/rc.local: Local multi-user mode boot script (empty by default)
/etc/rc.shutdown: System shutdown script
/etc/rc.conf: System configuration
/etc/rc.d/: Service start/stop scripts

Edit /etc/rc.modules, /etc/rc.local and /etc/rc.conf to fit your needs.

5.1.3. Configuration variables in /etc/rc.conf

The following configuration variables are found in /etc/rc.conf.

KEYMAP: Defines the keyboard layout used by the system after boot. The contents of this variable are used as an argument to loadkeys(1). Available keymaps are found in /usr/share/kbd/keymaps/. Example: KEYMAP=sv-latin1

TIMEZONE: Defines the timezone used by the system. Available zone description files are found in /usr/share/zoneinfo/. Example: TIMEZONE=Europe/Stockholm

HOSTNAME: Defines the hostname. Example: HOSTNAME=pluto

SERVICES: Defines which services should be started at system startup. The services listed in this array must have start/stop scripts in /etc/rc.d/. When the system enters multi-user mode, the listed scripts are called in the given order with the argument start. When the system is shut down or enters single-user mode, the scripts are called in reverse order with the argument stop. Example: SERVICES=(crond identd sshd sendmail)

5.1.4.
Network configuration

The network configuration is found in the service script /etc/rc.d/net. To enable this service, add net to the SERVICES array in /etc/rc.conf. By default only the lo interface is configured in the script; you can add additional ifconfig(8) and route(8) commands to configure other network interfaces (eth0, eth1, etc). Example:

#!/bin/sh
#
# /etc/rc.d/net: start/stop network
#

if [ "$1" = "start" ]; then
    /sbin/ifconfig lo 127.0.0.1
    /sbin/ifconfig eth0 195.38.1.140 netmask 255.255.255.224
    /sbin/ifconfig eth1 192.168.0.1 netmask 255.255.255.0
    /sbin/route add default gw 195.38.1.129
elif [ "$1" = "stop" ]; then
    /sbin/ifconfig eth1 down
    /sbin/ifconfig eth0 down
    /sbin/ifconfig lo down
else
    echo "usage: $0 start|stop"
fi

# End of file

If you need to configure the system as a DHCP client, use the dhcpcd(8) command (instead of ifconfig(8)). Example:

#!/bin/sh
#
# /etc/rc.d/net: start/stop network
#

if [ "$1" = "start" ]; then
    /sbin/ifconfig lo 127.0.0.1
    /sbin/dhcpcd eth0 [add additional options if needed]
elif [ "$1" = "stop" ]; then
    killall -q /sbin/dhcpcd
    /sbin/ifconfig lo down
else
    echo "usage: $0 start|stop"
fi

# End of file

5.2. Passwords

CRUX uses MD5SUM passwords by default. This can be disabled if you need to use traditional DES passwords. Keep in mind that DES passwords are less secure. To disable MD5SUM encryption, set the MD5_CRYPT_ENAB parameter in /etc/login.defs to no. Furthermore, when compiling programs that use the crypt(3) function to authenticate users, make sure these programs are linked against the libcrypt library (i.e. use -lcrypt when linking), which contains the MD5SUM (and DES compatible) version of crypt(3).

5.3. Upgrading the kernel

The kernel sources are found in /usr/src/linux-2.4.21/ and were installed without using pkgadd. If you decide to upgrade your kernel, you can safely replace the kernel sources with a newer version (or any other version). This will not cause any inconsistencies in the package database (since the kernel was not installed with pkgadd), nor will it affect the header files in /usr/include/linux and /usr/include/asm, since these are not symlinks into the kernel sources but copies.

Chapter 6. Frequently asked questions

6.1. General

1. Why the name "CRUX"? If you look it up in a dictionary you will find the definition "a main, central or critical point or feature"; the word CRUX also has a sound similar to UNIX/Linux... and that is probably why I chose it.

2. When will the next version be released? The standard answer to this question is "when it's done". That said, new versions usually come out every 3-4 months. Between releases, updated packages are available through the ports system.

6.2. Installation

1. Will CRUX work on an AMD K6/K6-II/K6-III? Yes and no. The AMD K6, K6-II and K6-III have an i586 (Pentium) compatible instruction set. The packages on the official CRUX ISO are compiled with -march=i686, which means that CRUX requires a processor with an i686 compatible instruction set (i.e. Intel PPro/Celeron/PII/PIII/P4 or AMD K7/Athlon). However, Jurgen Daubert has built an i586 version of the CRUX ISO image, which can be found here. The i586 version of the CRUX ISO works on AMD K6/K6-II/K6-III.

2. When booting from the CRUX CD-ROM I get a kernel panic saying "VFS: Unable to mount root fs". What's wrong? This can happen if the system has more than one CD-ROM drive. Make sure you boot from the "first" CD-ROM drive, i.e. /dev/cdroms/cdrom0.
If you want to boot from a drive other than the first one, you must enter CRUX root=/dev/cdroms/cdrom1 at the boot prompt. (cdrom1 refers to the second drive, cdrom2 to the third, and so on.)

3. When booting CRUX for the first time I get the error "Unable to open initial console". What's wrong? You forgot to enable devfs support or automatic mounting of devfs at boot. The instructions (Section 2.2) describe how to enable them.

4. When I log into a freshly installed CRUX for the first time, the system asks for a password, but the documentation says "Log in as root (no password needed)". What's wrong? You most likely forgot to edit /mnt/etc/fstab before rebooting, or you entered an incorrect root partition name at the boot prompt.

6.3. Configuration

1. Why are changes made in /dev lost when CRUX is rebooted? CRUX uses devfs, a virtual filesystem that lives in RAM. Changes made in /dev are always lost when the power is turned off. However, you can configure devfsd(8) to restore your settings at boot. Edit /etc/devfsd.conf to fit your needs; see the devfsd(8) man page for more information. Example:

#
# /etc/devfsd.conf: devfsd(8) configuration
#

REGISTER   .*         MKOLDCOMPAT
UNREGISTER .*         RMOLDCOMPAT
LOOKUP     .*         MODLOAD
REGISTER   ^sound/.*  PERMISSIONS root.users 660
REGISTER   ^v4l/.*    PERMISSIONS root.users 660

# End of file

2. How do I start sshd? Edit the files /etc/hosts.deny and/or /etc/hosts.allow to specify which hosts are (not) allowed access. To let anyone connect, add sshd: ALL to /etc/hosts.allow. See the hosts_access(5) man page for more information about the file format. Once that is done, start sshd with /etc/rc.d/sshd start and/or edit /etc/rc.conf and add sshd to the SERVICES array, i.e. SERVICES=(... sshd ...), to start sshd at boot.

3. Mozilla crashes or refuses to start, what's the problem? Mozilla is extremely sensitive to the absence of the fonts.cache-1 file. If Mozilla refuses to start (due to a segmentation fault or with no diagnostics at all), it may be because the font cache file is missing. Run fc-cache (as root) to create/update the cache file. See the fc-cache(1) man page for information about this program.
13.7.1.6 SET PASSWORD Syntax

SET PASSWORD [FOR user] = password_option

password_option: {
    PASSWORD('auth_string')
  | OLD_PASSWORD('auth_string')
  | 'hash_string'
}

The SET PASSWORD statement assigns a password to a MySQL user account, specified as either a cleartext (unencrypted) or encrypted value:

• 'auth_string' represents a cleartext password.
• 'hash_string' represents an encrypted password.

Important: SET PASSWORD may be recorded in server logs or on the client side in a history file such as ~/.mysql_history, which means that cleartext passwords may be read by anyone having read access to that information. For information about password logging in the server logs, see Section 6.1.2.3, "Passwords and Logging". For similar information about client-side logging, see Section 4.5.1.3, "mysql Logging".

SET PASSWORD can be used with or without a FOR clause that explicitly names a user account:

• With a FOR user clause, the statement sets the password for the named account, which must exist:

SET PASSWORD FOR 'jeffrey'@'localhost' = password_option;

• With no FOR user clause, the statement sets the password for the current user:

SET PASSWORD = password_option;

Any client who connects to the server using a nonanonymous account can change the password for that account. To see which account the server authenticated you as, invoke the CURRENT_USER() function:

SELECT CURRENT_USER();

Setting the password for a named account (with a FOR clause) requires the UPDATE privilege for the mysql database. Setting the password for yourself (for a nonanonymous account with no FOR clause) requires no special privileges. When the read_only system variable is enabled, SET PASSWORD requires the SUPER privilege in addition to any other required privileges.

If a FOR user clause is given, the account name uses the format described in Section 6.2.3, "Specifying Account Names". For example:

SET PASSWORD FOR 'bob'@'%.example.org' = PASSWORD('auth_string');

The host name part of the account name, if omitted, defaults to '%'.

The password can be specified in these ways:

• Use the PASSWORD() function

The PASSWORD() argument is the cleartext (unencrypted) password. PASSWORD() hashes the password and returns the encrypted password string for storage in the mysql.user account row. The PASSWORD() function hashes the password using the hashing method determined by the value of the old_passwords system variable. That variable should be set to a value compatible with the hash format required by the account's authentication plugin. For example, if the account uses the mysql_native_password authentication plugin, old_passwords should be 0 for PASSWORD() to produce a hash value in the correct format. For mysql_old_password, old_passwords should be 1. Permitted old_passwords values are described later in this section.

• Use the OLD_PASSWORD() function:

The 'auth_string' function argument is the cleartext (unencrypted) password. OLD_PASSWORD() hashes the password using pre-4.1 hashing and returns the encrypted password string for storage in the mysql.user account row. This hashing method is appropriate only for accounts that use the mysql_old_password authentication plugin.
• Use an already encrypted password string

The password is specified as a string literal. It must represent the already encrypted password value, in the hash format required by the authentication method used for the account.

The following table shows the permitted values of old_passwords, the password hashing method for each value, and which authentication plugins use passwords hashed with each method.

Value     | Password Hashing Method   | Associated Authentication Plugin
0 or OFF  | MySQL 4.1 native hashing  | mysql_native_password
1 or ON   | Pre-4.1 (old) hashing     | mysql_old_password

Caution: If you are connecting to a MySQL 4.1 or later server using a pre-4.1 client program, do not change your password without first reading Section 6.1.2.4, "Password Hashing in MySQL". The default password hashing format changed in MySQL 4.1, and if you change your password, it might be stored using a hashing format that pre-4.1 clients cannot generate, thus preventing you from connecting to the server afterward.

For additional information about setting passwords and authentication plugins, see Section 6.3.5, "Assigning Account Passwords", and Section 6.3.6, "Pluggable Authentication".
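As a concrete illustration, here is a brief, hypothetical session tying the pieces together; the account name is the one used earlier in this section and the password value is a placeholder:

SET old_passwords = 0;  -- make PASSWORD() produce 4.1-format hashes
SET PASSWORD FOR 'jeffrey'@'localhost' = PASSWORD('placeholder_pw');

-- With no FOR clause, the statement applies to the current account:
SET PASSWORD = PASSWORD('placeholder_pw');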
Worked example: linear solution to a differential equation

If a particular solution to a differential equation is linear, y = mx + b, we can set up a system of equations to find m and b. See how this works in this video.

Video transcript

Let's get a little more comfortable with our understanding of what a differential equation is, and here we have precisely a differential equation. We haven't yet explored how to find the solutions of a differential equation, but let's say you see this and someone on the street says: hey, I'll give you a hint, there is a solution to this differential equation that is a linear function. And when we think of a linear function, we think of something of the form mx plus b, right? So that hint we've been given tells us that something of this form, a linear function, is a solution, and what we have to do is determine which values of m and which values of b satisfy the equation, or rather, make this function satisfy the differential equation. And this has to be true for all x, right, which is our independent variable. So what we have to do is substitute and see what conditions both m and b must meet for the equation to be satisfied. So, for example, if we compute the derivative of this function, what would we get? The derivative of our function with respect to x would be the derivative of mx, which is m, plus the derivative of a constant, which is 0, right? So, if we want to find m and b that make this function satisfy the differential equation, on the left-hand side we would have m, and this would be minus 2x plus three times, well, in this case, as that person on the street told us, it's mx plus b, and finally we subtract 5. Very good. Now here comes the interesting part of this problem, because, well, maybe we should expand this to see, and simplifying we'll have m equals minus 2x, and now we expand this expression: this would be 3mx plus 3b minus 5, right? Or, what is the same, we could write this as m equals, well, we can group our x's, right? Then we'd have 3m minus 2, here is this 3m and this minus 2 that multiply x, plus the constant part, right, which is 3b minus 5. So think of it, let's say, this way: on the left-hand side we have a constant, which is the constant m, and this must equal a constant times x plus another constant, but it must be true for all x. Then, if we notice, what we have on the right-hand side is something that changes with x, while on the left-hand side we have something constant. So that tells us that this part that depends on x must, let's say, somehow disappear, right? And one way for this to disappear is precisely if this coefficient here, the 3m minus 2, becomes zero. Or, another way to think about it is that we can write this as 0x plus m must equal something times x plus something else, so we have to match the coefficients, right? 0 would have to be 3m minus 2, and m would have to be 3b minus 5. Those are two ways of thinking about it. And what we get is precisely that 3m minus 2 has to equal zero, or, what is the same, 3m has to equal two, or m has to
be two thirds, right? And finally, with this information we already have, with this knowledge we've acquired, we can now determine what our b is, because we would then have that m has to equal 3b minus 5, in the same way that 0 was 3m minus 2, right? So m has to be 3b minus 5, or, what is the same, we already saw that m was two thirds, so two thirds has to be 3b minus 5. And something we can do to eliminate this 5, let's say this minus 5, is to add exactly five on both sides, but let's do it in terms of thirds, right? We could add fifteen thirds, and fifteen thirds is exactly five, so that cancels, and we'll also have to add fifteen thirds on the left-hand side. This tells us that seventeen thirds, which is what we have on the left-hand side, is 3b, and finally we can conclude that our b is 17 thirds divided by three, which is 17 ninths. So there we have the value of m, which is already on this side, and the value of b. So, in summary, if we follow the hint from this person who told us the solution has this form, then we now know which values of m and b satisfy, or rather make the function satisfy, the differential equation. So in that sense we're done; we have arrived at a solution of the differential equation, which would be mx, which in this case is two thirds of x, plus b, which is 17 ninths. And of course I invite you to verify that this function here, this function of x, really is a solution to the differential equation.
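For reference, here is the computation from the video in compact form. The differential equation itself is only spoken, never written out in the transcript; from the substitutions made it appears to be dy/dx = -2x + 3y - 5, and the check at the end is consistent with that reading.

\[ y = mx + b \quad\Rightarrow\quad \frac{dy}{dx} = m \]
\[ m = -2x + 3(mx + b) - 5 = (3m - 2)\,x + (3b - 5) \]

For this to hold for all x, the coefficients must match:

\[ 3m - 2 = 0 \;\Rightarrow\; m = \tfrac{2}{3}, \qquad m = 3b - 5 \;\Rightarrow\; 3b = \tfrac{2}{3} + \tfrac{15}{3} = \tfrac{17}{3} \;\Rightarrow\; b = \tfrac{17}{9} \]

So the particular solution is \( y = \tfrac{2}{3}x + \tfrac{17}{9} \). Check: the left side is \( \frac{dy}{dx} = \tfrac{2}{3} \), and the right side is \( -2x + 3\left(\tfrac{2}{3}x + \tfrac{17}{9}\right) - 5 = \tfrac{17}{3} - 5 = \tfrac{2}{3} \), as required.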
Warning: This document is for an old version of RDFox. The latest version is 5.4.

10. Command-line Interface

This section describes the command-line interface of RDFox, including the syntax for launching RDFox processes and complete reference documentation for the RDFox shell.

Note: All command syntaxes are described using standard BNF notation: [x] means that x is optional, and <y> means that y is an argument (instead of a plain string).

10.1. Starting the RDFox Process

The RDFox process can be started using the following command:

RDFox [-<option> <value> ...] [-temp-role | [-role <role>] [-password <password>]]
    {daemon [<endpointOption> <optionValue> ...] | {shell | sandbox} [<root> [<command> ...]]}

All variants of this command create a single RDFox Server parameterized by the specified -<option> <value> key-value pairs. See Section 6.1 for the list of parameters supported by RDFox Servers.

In daemon mode, RDFox starts its endpoint, parameterized using the specified key-value pairs, and listens until signalled to exit. For a description of the endpoint and its supported parameters, see the documentation of the endpoint shell command (Section 10.2.2.14). Note that, when specified on the command line, endpoint option names must not include the prefix "endpoint." that is used in the names of the corresponding shell variables. In daemon mode, both persist-ds and persist-roles are defaulted to file.

In shell and sandbox modes, RDFox creates an instance of the RDFox shell (see Section 10.2), sets the dir.root shell variable to <root>, runs all supplied commands, and then returns the command prompt. In shell mode, both persist-ds and persist-roles are defaulted to file, whereas in sandbox mode both are set to off.

A role name and password are required at startup if access control has not been initialized, if an instance of the shell is being created, or if both of these conditions hold. If both conditions hold, the -temp-role option can be set. This initializes access control and logs on to the shell with a temporary role (random name and password) which will be deleted when the shell closes. The -temp-role option is intended to be used to restore a transcribed RDFox instance; see transcribe for more information.

If -temp-role is not specified, but a role name and password are required, RDFox will look for the arguments -role <role> and -password <password>. If one or both of these options is missing, RDFox will next inspect the RDFOX_ROLE and RDFOX_PASSWORD environment variables respectively. If after this one or both of these variables remains unset, the behavior will be as follows:

• shell mode will prompt for the missing information
• sandbox mode will use the value guest to fill in the blanks
• daemon mode will terminate.

10.2. RDFox Shell Reference

The RDFox shell is a command-line interface for controlling the RDFox server within the same process. It can be used interactively or as a script execution environment, and supports a range of variables and commands giving flexible access to RDFox's features. The commands available within the shell fall into three broad categories: commands controlling the behaviour of the shell itself, commands addressing the process's RDFox server, and commands addressing one of the server's data stores. To determine which data store is addressed by commands in the last category, the shell maintains the variable active, which stores the name of the data store to be addressed.
At startup, this variable is initialized to the name default, after which it can be changed using the active command.

Note: The shell does not validate that the new value for the active variable matches the name of an existing data store. Commands that depend on this variable print a warning if, at run time, the server does not contain a data store with the specified name.

Shell variables can hold string, signed integer, or Boolean values. As well as a predefined set of variables that control the behaviour of the shell or individual commands (Section 10.2.3), users can define their own variables. A shell variable called var can be used in commands in the form $(var). Shell variables can be set using the set command (Section 10.2.2.36).

10.2.1. Script execution

When RDFox encounters an unrecognized command name, it checks the directories identified by the dir.scripts and dir.root shell variables (in that order) for a file whose name matches the given command, or the given command with the file extension .rdfox. If a file is found, RDFox will attempt to interpret it as a shell script. Shell scripts may use any of the commands available when the shell is running interactively and may themselves call other scripts. RDFox will treat anything between a # character and the end of the containing line as a comment.

10.2.2. Shell Commands

This section describes the commands that can be used in the shell.

10.2.2.1. active

Syntax: active [<name>]

Description: If <name> is omitted, this command prints the name of the active data store; otherwise, it sets the active data store name to <name>. Note that setting the active data store name does not create the data store with that name: the data store should still be initialized or loaded before it can be used.

10.2.2.2. answer

Syntax: answer (! <query_text> | <filename>*)

Description: This command evaluates one or more SELECT, ASK, or CONSTRUCT queries. The query can be either given explicitly as text after the ! symbol, or one can specify zero or more file names that contain the queries to be evaluated. Each relative <filename> (e.g., one that does not start with / on Unix-based platforms) is interpreted as relative to the content of the dir.queries shell variable.

Example: The following command checks whether the a1:Org class contains any instances:

answer ! ask { ?X rdf:type a1:Org }

10.2.2.3. ask

Syntax: ask <remaining_query_text>

Description: This command queries the current data store (against all IDB facts) with the specified SPARQL query. An ask query tests whether or not a query pattern has a solution.

Example: The last command of the following script tests whether the specified pattern can be matched in the materialization, and prints out the total number of matched tuples.

dstore create seq
import "LUBM.ttl"
import "LUBM.dlog"
mat
prefix a1: <http://lehigh.edu/onto/univ-bench.owl#>
set output out
ask { ?X rdf:type a1:Org }

10.2.2.4. begin

Syntax: begin [interruptible-read | read | write]

Description: This command starts a transaction on the current data store. The transaction is interruptible read-only (if the interruptible-read parameter is specified), read-only (if the read parameter is specified), or read/write (if the write parameter is specified). The read/write mode is the default.

10.2.2.5. clear

Syntax: clear [rules-explicate-facts | facts-keep-rules]

Description: This command clears various parts of the data store.
• With no arguments, it removes all facts, axioms, and rules.
• With rules-explicate-facts, it clears all rules and makes all facts explicit – that is, it adds all facts from the IDB fact domain into the EDB domain. This operation can be used when the facts derived by one set of rules should be fed as input to another set of rules.
• With facts-keep-rules, it clears all facts but keeps all rules currently loaded into the data store. This operation can be useful when the same set of rules needs to be applied to different data.

Example: After the clear rules-explicate-facts command of the following script is issued, all facts will become explicit and all rules will be deleted. Therefore, if a1:Org[<http://www.University389.edu>] is derived during the materialization, then the first explain command will print information about how this fact is derived, whereas the second explain command will simply tell the user that the fact is an explicit fact.

dstore create par-complex-nn
import "LUBM.ttl"
import "LUBM.dlog"
prefix a1: <http://lehigh.edu/onto/univ-bench.owl#>
explain shortest a1:Org[<http://www.University389.edu>]
clear rules-explicate-facts
explain shortest a1:Org[<http://www.University389.edu>]

10.2.2.6. commit

Syntax: commit

Description: This command commits the transaction on the current data store.

10.2.2.7. compact

Syntax: compact

Description: This command compacts all facts in the data store, reclaiming the space used by the deleted facts in the process and in persistent storage. This operation may take a long time to complete; the time taken is roughly proportional to the number of triples in the data store.

10.2.2.8. construct

Syntax: construct <remaining_query_text>

Description: This command queries the current data store (against all IDB facts) with the specified SPARQL CONSTRUCT query. The resulting triples are stored using the Turtle format.

10.2.2.9. daemon

Syntax: daemon

Description: This command switches RDFox into daemon mode by first ensuring that the endpoint is listening and then closing the shell.

10.2.2.10. delete

Syntax: delete <remaining_query_text>

Description: This command can be used to remove EDB facts from the data store based on bindings for a query pattern specified in a where clause.

Example: The following command removes all facts matching the query pattern ?X a1:headOf ?Y from the data store.

delete { ?X a1:headOf ?Y } where { ?X a1:headOf ?Y }

10.2.2.11. dsource

Syntax: dsource list | show <dsname> | add <type> <dsname> <parameters> | sample <dsname> <table> [<size>] | drop <dsname> | attach <IRI> <dsname> <parameters>

Description: This command manages the data sources of the current store. The command is useful when the user wishes to import and manage data of non-RDF formats in RDFox.
• Option list prints the currently available data sources.
• Option show shows information about the data source with name <dsname>.
• Option add adds a new data source of type <type> and with name <dsname>. Information about the data source is specified by the key-value pairs in <parameters>; see the example below.
• Option sample shows a preview of up to <size> rows from table <table> of data source <dsname>.
• Option drop deletes the data source with name <dsname>.
• Option attach attaches a tuple table with name <IRI> from a data source with name <dsname> as specified by <parameters>. This is an abbreviation for tupletable add <IRI> <parameters>, where the key-value pair dataSourceName = <dsname> is added implicitly (i.e., it does not need to be specified on the command line).
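Example: The following command sketches how a CSV file might be added as a delimitedFile data source named myDataSource. The file name is illustrative, and the exact parameter keys accepted by each data source type are documented in Section 6.6.

dsource add delimitedFile myDataSource file "people.csv" header true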
Section 6.6 describes in more detail how data sources are imported and used in RDFox.

10.2.2.12. dstore

Syntax: dstore list | create <name> <type> [<parameterKey> <parameterValue>]* | delete <name>

Description: This command manages the data stores of the server.
• Option list prints the currently available data stores.
• Option create adds a new data store with name <name> and type <type>. See Section 6.2.1 for the list of supported types. The various options of a data store may be specified using a possibly empty list of key-value pairs; see Section 6.2.2 for the list of supported options.
• Option delete deletes the data store with name <name> from the server.

10.2.2.13. echo

Syntax: echo <tok>*

Description: This command prints all tokens given after the command, separating them by a single space. All variables occurring in the tokens are expanded as usual, which can be used to print useful information.

10.2.2.14. endpoint

Syntax: endpoint (start | stop)

Description: This command starts or stops the RDFox endpoint. The endpoint provides REST access to the process's RDFox server and also serves the RDFox Console. Since the endpoint accesses the same server that is accessible through the command line, the results of any commands that affect the state of the server (e.g., dstore create) will be immediately visible on the endpoint. For a description of the RESTful API available when the endpoint is started, see Section 8.

The configuration of the endpoint is determined by the following shell variables.
• endpoint.port determines the port at which the endpoint is started. The port can be specified as a verbatim port number or as a TCP service name. The default is 12110. For legacy reasons, the port can also be specified using endpoint.service-name; moreover, if both options are present, then endpoint.port takes precedence.
• endpoint.num-threads determines the number of threads that the endpoint will use to process RESTful requests. The default value is one less than the number of logical processors of the machine on which RDFox is run.
• endpoint.channel determines the connection type that the endpoint should use.
  • unsecure means the endpoint will use an unsecured HTTP connection. This is the default value.
  • ssl means the endpoint will use SSL/TLS using the platform's native secure communication package. On macOS this is Secure Transport, and on Linux and Windows this is OpenSSL.
  • open-ssl means the endpoint will use SSL/TLS implemented using the OpenSSL package. This option is available on all platforms.
  • secure-transport means the endpoint will use SSL/TLS implemented using the Secure Transport library. This option is available only on macOS 10.8 or later.
• The following parameters determine the configuration of the SSL/TLS connections, such as the server certificate and private key, as well as intermediate certificates.
  • endpoint.credentials specifies the server certificate and private key, and the intermediate certificates, as a verbatim string in PEM format. The string must contain the server's private key, the server's certificate, and zero or more intermediate certificates. For example, this file could look as follows:

-----BEGIN RSA PRIVATE KEY-----
... server key ...
-----END RSA PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
... server certificate ...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
... 1st intermediate certificate ...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
... 2nd intermediate certificate ...
-----END CERTIFICATE-----

  • endpoint.credentials-file specifies the name of the file whose content contains the credentials. The file content must have the same format as the endpoint.credentials parameter.
  • endpoint.credentials-name specifies a comma-separated list of names of items in the system's keystore. The first name must identify a certificate and a private key, which are used as the main identity of the server. The remaining names identify intermediate certificates. This option is available only on macOS, where the keystore is the system's keychain.
  • endpoint.credentials-passphrase provides the passphrase that can be used to unlock the credentials in case they are encrypted. This parameter is optional.
• endpoint.min-secure-protocol determines the minimum protocol version that the server should use. The allowed values are ssl2, ssl3, tls1, tls11, tls12, and tls13. The default value is tls12.
• endpoint.listening-backlog determines the TCP listening backlog for the socket accepting the connection. The default value is 10.
• endpoint.receive-buffer and endpoint.send-buffer determine the sizes in bytes of the receive and send buffers for the sockets servicing the requests. The default values are zero, which means that the system will determine the buffer sizes depending on the properties of the connection. For more information, please refer to the SO_RCVBUF and SO_SNDBUF socket options.
• endpoint.sweep-period and endpoint.sweeps-to-reclaim govern the reclamation of unused objects. During its operation, the endpoint retains certain objects between requests, either for performance reasons (e.g., the endpoint may cache cursors of partially evaluated queries) or to ensure its operation (e.g., the endpoint will maintain objects associated with transactions). In order to prevent these objects from accumulating, every endpoint.sweep-period seconds the endpoint will sweep through all retained objects, and it will delete all objects (including transactions) that have not been used in the last endpoint.sweeps-to-reclaim sweeps. The default values for these parameters are 60 and 5, respectively.
• endpoint.access-control-allow-origin configures the RDFox endpoint to include the Access-Control-Allow-Origin header in responses with the specified origin. If unset (the default), the header is omitted.
• endpoint.protocol determines which network layer protocol the endpoint will use.
  • IPv4 means the endpoint will use Internet Protocol version 4.
  • IPv6 means the endpoint will use Internet Protocol version 6.
  • IPv6-v4 means the endpoint will use Internet Protocol version 6 if possible, or Internet Protocol version 4 if not. This is the default value.

Example: The following commands start the RESTful endpoint on port 4567.

set endpoint.port "4567"
endpoint start

10.2.2.15. exec

Syntax: exec [<repeat_num>] <filename> [<argument>]*

Description: This command executes the contents of the specified script repeatedly for the specified number of times. If <repeat_num> is not specified, then the script is executed once. All <argument> tokens are passed as variables $(1), $(2), and so on. If <filename> is relative (e.g., it does not start with / on Unix-based platforms), it is interpreted as relative to the content of the dir.scripts shell variable.

Example: The following command executes the script stored in file testMat.rdfox.

exec "testMat.rdfox"

Example: The following script accesses the arguments passed to it.
dstore create seq
import "$(1)"
import "$(2)"
mat
import - "$(3)"
mat
quit

Assuming that the script is stored in file testMat.rdfox, it can be invoked as follows.

exec "testMat.rdfox" data.ttl program.dlog delta.ttl

If a script file has the suffix .rdfox and is in the directory that the dir.scripts shell variable points to, then both exec and the suffix .rdfox can be omitted. Together with the support for argument passing, one can thus group arbitrary commands together in a script and use the latter as if it were a new command.

Example: The following command does the same as the exec command in the previous example, provided that the script file testMat.rdfox can be found in the directory specified by the dir.scripts shell variable.

testMat data.ttl program.dlog delta.ttl

10.2.2.16. explain

Syntax: explain [shortest] [<max_depth> [<max_rule_inst>]] <fact>

Description: This command explains how a fact has been derived. The fact is specified using the Datalog syntax – that is, a triple can be written as [s, p, o] or p[s, o], a triple [s, rdf:type, C] can be written as C[s], and so on. A fact can be derived in more than one way, and by default all possible derivations will be printed. If shortest is specified, then just one shortest derivation (in terms of height) is printed; if there are several derivations of the same height, one is chosen arbitrarily. Finally, <max_depth> can be specified to limit the maximal depth of a proof tree, and <max_rule_inst> can be specified to limit the maximal number of rule instances in each node of the proof tree.

Example: The last command of the following script explains how the specified fact was derived (in the shortest way) during the materialization.

dstore create seq
import "LUBM.ttl"
import "LUBM.dlog"
mat
prefix a1: <http://lehigh.edu/onto/univ-bench.owl#>
explain shortest a1:Org[<http://www.University389.edu>]

10.2.2.17. export

Syntax: export <filename> [<format_name> [<parameterKey> <parameterValue>]*]

Description: This command exports the data in the current store to the specified file in the specified format. One can optionally specify a number of key-value pairs that customize the export process. The available key-value pairs are specific to the answer format. At present, only the application/n-triples, text/turtle, application/n-quads, and application/trig formats support parameters; moreover, the only supported parameter is fact-domain, and its value is the fact domain determining which facts get exported. The default format is text/turtle with parameter fact-domain equal to EDB. If <filename> is relative (e.g., it does not start with / on Unix-based platforms), then it is interpreted as relative to the content of the dir.facts shell variable if the selected format can store facts, or as relative to the content of the dir.dlog shell variable if the selected format cannot store facts.

This command can also be used to export the OWL axioms and rules in the current store, by specifying the supported output formats text/owl-functional and application/x.datalog respectively. In these cases the supported parameter is axiom-domain or rule-domain respectively, and its value is the domain that will be output, defaulting to the user axiom domain (Section 6.3) or the user rule domain (Section 6.4).

Example: The following command exports the derived facts from the data store in the application/n-triples format.

export "output.ttl" "application/n-triples" fact-domain IDB

Example: The following command exports the OWL 2 axioms that have been translated from the data in the current store.
export "output.fss" "text/owl-functional" axiom-domain triples Example: The following command exports the rules that have been imported by the user i.e. imported into the default “user” rule domain. export "output.dlog" "application/x.datalog" 10.2.2.18. grant Syntax: grant privileges <actions> <resource-specifier> to <role> | role <super-role> to <role> Description: This command grants privileges and role memberships to roles in a server’s role database. The counterpart to this command is revoke. • Option privileges grants the privileges to perform <actions> on the resource(s) matched by <resource-specifier> to the role <role>, where <actions> is a comma-separated list of the elements read, write, grant and full and <resource-specifier> is a string meeting the requirements of a resource specifier described in Section 9.1.2.1. • Option role grants membership of the role with name <super-role> to the role with name <role>. See Section 9.2.5 for more information about role membership. Section 9 describes RDFox’s access control model in more detail. Example: The following command grants read and write access over the family data store to the role graphuser. grant privileges read,write >datastores|family to graphuser 10.2.2.19. help Syntax: help [<command_name>] Description: When executed without arguments, this command prints the list of all available commands. When executed with one or more commands, help information about each of the specified commands will be printed. 10.2.2.20. import Syntax: import [+|-] (! <text> | <filename>*) Description: This command adds the specified items (i.e., facts and/or axioms/rules) into the current store (if nothing or + is specified), or removes the specified items from the current store (if is specified). The user may choose to specify items in plain text, in which case the text follows the ! symbol; alternatively, the user can group the items to import in one or more files and simply pass the filename(s) as argument(s) here. RDFox supports importing triples and rules in N-Triple, Turtle, TriG, OWL 2 Functional-Style Syntax and Datalog formats. Note that, in addition to OWL 2 Functional-Style syntax documents, OWL 2 axioms may be imported from files in one of the three RDF triple formats if the owl-in-rdf-support option is set to relaxed or strict (See Section 6.2.2.10 ). If <filename> is relative (e.g., it does not start with / on Unix-based platforms), it is first interpreted as relative to the content of the dir.facts shell variable, and if no file is found then it is interpreted as relative to the content of the dir.dlog shell variable. <filename> may be quoted, i.e. surrounded with single-quotes or double-quotes, if required, for example to support filenames containing spaces. Example: The following command adds a rule to the current data store; informally, the rule says that if ?X is a person and ?X likes something, then ?X is a person with hobby. import ! a1:PHobby[?X]:- a1:Person[?X], a1:like[?X,?Y] . Example: The following command adds a fact to the current data store. import ! a1:Person[a1:john] . 10.2.2.21. info Syntax: info [extended | axioms | rulestats [print-rules] [by-body-size] | ruleplans] Description: This command prints various information about a data store. The exact information printed is determined by the command options. • If no argument is specified, only a short breakdown of memory use and the state of the data store is shown. 
• If extended is specified, the summary from the previous item is extended with detailed information about the memory use and the state of various subcomponents of RDFox. This diagnostic information depends on the internal structure of RDFox and is thus not meant to be used by users; moreover, it is likely to change in future, and is mainly intended to aid Oxford Semantic Technologies in providing client support.
• If axioms is specified, then the OWL axioms currently loaded in the data store are printed.
• If rulestats is specified, then statistics (i.e., the numbers of recursive, nonrecursive, and all rules) are printed for each component of the currently loaded Datalog program. The optional argument print-rules determines whether the rules will be printed, and the argument by-body-size determines whether the rules will be grouped by rule body size (i.e., the number of atoms in the rule body) inside each component.
• If ruleplans is specified, then the query plans of the compiled rules are printed. This is mainly used for troubleshooting.

10.2.2.22. insert

Syntax: insert <remaining_query_text>

Description: This command adds EDB facts to the data store based on bindings for a query pattern specified in a where clause.

Example: The following command evaluates ?X a1:headOf ?Y in the data store, and for each value of ?X and ?Y it creates a triple ?Y a1:hasHead ?X.

insert { ?Y a1:hasHead ?X } where { ?X a1:headOf ?Y }

10.2.2.23. load

Syntax: load <filename> [<type> [<parameterKey> <parameterValue>]*]

Description: This command creates a new data store using the content of the specified file. The name of the newly created data store is determined using the active shell variable. If <filename> is relative (e.g., it does not start with / on Unix-based platforms), it is interpreted as relative to the content of the dir.stores shell variable. This command can load binary files in both the standard and raw formats. In the case of the former, one can override the data store parameters by specifying various data store options in the same way as for the dstore create command (but one cannot change the equality mode of the data store).

10.2.2.24. lookup

Syntax: lookup <ResourceID>*

Description: The system assigns each IRI resource a unique ID, and this command returns the corresponding resources for the specified IDs.

10.2.2.25. mat

Syntax: mat

Description: This command explicitly updates the set of materialized facts in the data store. In normal operation, RDFox will invoke this operation internally as needed so that, when queries are issued, query results correctly reflect all additions/deletions of facts/rules to the data store. Hence, this command is useful mostly when one must know exactly when the materialization is to be updated. For example, this can be the case when benchmarking reasoning algorithms, or when debugging the reasoning process. Since materialization is updated automatically when a transaction is committed, this command should be used only inside transactions.

Example: The following script starts a transaction, imports facts from the testData.ttl file, imports rules from the testProgram.dlog file, and then updates the materialization. Next, it deletes facts from the factsToDelete.ttl file, again updates the materialization, and commits the transaction. When mat is first invoked, the system performs reasoning 'from scratch', whereas in the second case it updates the materialization incrementally.
dstore create seq
begin
import "testData.ttl"
import "testProgram.dlog"
mat
import - "factsToDelete.ttl"
mat
commit

10.2.2.26. password

Syntax: password

Description: This command initiates an interactive process to change the password of the logged-in role.

10.2.2.27. prefix

Syntax: prefix <prefixname> <prefixIRI>

Description: This command associates a prefix name with the given IRI. Such prefix names are used to abbreviate IRIs on the command line.

Example: The following commands declare the prefix a1: and then use it in a SELECT query.

prefix a1: <http://www.a1.org/a1#>
SELECT ?X ?Y WHERE { ?X a1:hasName ?Y }

10.2.2.28. quit

Syntax: quit

Description: This command terminates the RDFox instance.

10.2.2.29. recompilerules

Syntax: recompilerules

Description: This command recompiles the rules in the current data store according to the current statistics. This can be used after the stats update command so that rule compilation takes advantage of up-to-date statistics.

10.2.2.30. revoke

Syntax: revoke privileges <actions> <resource-specifier> from <role> | role <super-role> from <role>

Description: This command revokes privileges and role memberships from roles in a server's role database. The counterpart to this command is grant.
• Option privileges revokes the privileges to perform <actions> on the resource(s) matched by <resource-specifier> from the role <role>, where <actions> is a comma-separated list of the elements read, write, grant and full, and <resource-specifier> is a string meeting the requirements of a resource specifier described in Section 9.1.2.1.
• Option role revokes membership of the role with name <super-role> from the role <role>. See Section 9.2.5 for more information about role membership.

Section 9 describes RDFox's access control model in more detail.

Example: The following command revokes write access over the family data store from the role graphuser.

revoke privileges write >datastores|family from graphuser

10.2.2.31. role

Syntax: role [list | show <role> | switch <role> | create <role> [hash <password_hash>] | delete <role>]

Description: This command manages the set of roles defined within the system.
• If no argument is specified, the name of the currently active role is shown.
• Option list lists the roles defined within the system.
• Option show shows the privileges, memberships, and members of role <role>.
• Option switch switches the currently active role to <role>, subject to successful authentication.
• Option create creates a new role with name <role>. If the hash option is used, then the role is created using the given <password_hash>; otherwise, the user is prompted to enter a new password for the role. The password hash of an existing role can be obtained by listing role information using the role show shell command or programmatically as described in Section 8.12.4.
• Option delete deletes the role <role>.

Section 9 describes RDFox's access control model in more detail.

10.2.2.32. rollback

Syntax: rollback

Description: This command rolls back the currently running transaction.

10.2.2.33. root

Syntax: root <directory>

Description: This command sets the dir.root shell variable (which determines the root directory) to the specified string. Many other shell variables are updated as well, as specified at the beginning of this section.

10.2.2.34. save

Syntax: save <filename> [raw]

Description: This command saves the contents of the current data store to a binary file.
The standard format is used for the output unless the raw option is specified, in which case the raw format is used. If <filename> is relative (e.g., it does not start with / on Unix-based platforms), it is interpreted as relative to the content of the dir.stores shell variable.

10.2.2.35. select

Syntax: select <remaining_query_text>

Description: This command queries the current data store (against all IDB facts) with the specified SPARQL query.

Example: The following commands load data and run a query. The output of the query will be written to the file $(dir.output)/results.txt; note that the directory $(dir.output) must exist for query evaluation to succeed.

dstore create seq
import "LUBM.ttl" "LUBM.dlog"
set output "results.txt"
SELECT ?X WHERE { ?X rdf:type <http://lehigh.edu/onto/univ-bench.owl#Org> }

10.2.2.36. set

Syntax: set [<variable> [<value>]]

Description: This command assigns the specified value to the specified variable. If no argument is given at all, then all variable-value pairs are printed; if the variable is given but the value is not, then the current value of the given variable is printed. Issue the set command with no arguments or see Section 10.2.3 for details of the available variables.

10.2.2.37. sleep

Syntax: sleep <milliseconds>

Description: This command makes the system sleep for the specified number of milliseconds.

10.2.2.38. stats

Syntax: stats list | show <name> | add <name> <parameters> | drop <name> | update [<name>]

Description: This command maintains the statistics that RDFox uses internally for tasks such as query planning.
• Option list prints the currently available statistics.
• Option show shows information about the statistics with name <name>.
• Option add adds the statistics with name <name>. Information that governs how the statistics are created is specified by the key-value pairs in <parameters>.
• Option drop deletes the statistics with name <name>.
• Option update updates all statistics if <name> is not specified, or updates the statistics with name <name> otherwise.

Note that, if the auto-update-stats option is set to true, then statistics will be updated automatically whenever the number of facts in the system changes by more than 10%.

10.2.2.39. threads

Syntax: threads [<number_of_threads>]

Description: This command sets the number of threads that the server will use for tasks such as reasoning or importation of data. The initial value of this parameter can be specified using the -num-threads server option at the command line. The default is the number of logical processors on the machine.

10.2.2.40. transcribe

Syntax: transcribe [force] <directory_name> [<datastore_name>*]

Description: This command saves the RDFox server state to a collection of files under a directory named <directory_name>. A file named main_restore.txt will be created under the directory <directory_name> that can be executed in another instance of RDFox to restore all transcribed content. By default, transcribe will save the content of all data stores that have persistence enabled. It is possible to transcribe only certain data stores, regardless of whether persistence is enabled or not, by specifying one or more data store names as <datastore_name> parameters.

transcribe is intended to be used to transfer the entire server state to another RDFox instance. To prevent changes occurring while the transcribe command is running, the normal behaviour is to raise an error if the endpoint is running. This check can be disabled by using the force option.
Example: To transcribe the content of an existing RDFox instance to a directory named save_directory, log into the existing RDFox shell and execute the following.

transcribe save_directory
quit

Then invoke another instance of RDFox, typically a newer version, and restore the settings into a new server directory (in this example, new_server_dir) as follows.

./RDFox -server-directory new_server_dir -temp-role shell save_directory main_restore.txt

10.2.2.41. tstamp

Syntax: tstamp [<variable_name>]

Description: This command saves the current time stamp into the variable with the specified name. If no variable is specified, the system prints the current time stamp.

10.2.2.42. tupletable

Syntax: tupletable list | show <IRI> | add <IRI> <parameters> | drop <IRI>

Description: This command manages the tuple tables of the current store.
• Option list prints the currently available tuple tables.
• Option show shows information about the tuple table with name <IRI>.
• Option add adds a new tuple table with name <IRI>. Information about the tuple table is specified by the key-value pairs in <parameters>.
• Option drop deletes the tuple table with name <IRI>.

Example: The following command adds a tuple table from a delimitedFile data source, in the same way as dsource attach.

tupletable add <myTupletable> dataSourceName myDataSource columns 3 "1" "http://oxfordsemantic.tech/data/entity#{id}" "1.datatype" "iri" "2" "{name}" "3" "{dob}" "3.datatype" "xsd:dateTime" "3.if-empty" "absent"

Example: The following command adds a tuple table from a table in a SQL data source (either PostgreSQL or ODBC).

tupletable add <myTupletable> dataSourceName mySQLdsource table.name salaries columns 2 "1" "http://oxfordsemantic.tech/data/entity#{employee_id}" "1.datatype" "iri" "2" "{salary}" "2.datatype" "xsd:decimal" "2.if-empty" "absent"

Example: The following command adds a tuple table from a query in a SQL data source.

tupletable add <myTupletable> dataSourceName mySQLdsource query "SELECT ssn.social_security_number AS col1, salaries.salary AS col2 FROM ssn JOIN salaries ON ssn.employee_id = salaries.employee_id" columns 2 "1" "http://oxfordsemantic.tech/data/ssn#{col1}" "1.datatype" "iri" "2" "{col2}" "2.datatype" "xsd:decimal" "2.if-empty" "absent"

10.2.2.43. update

Syntax: update (! <query_text> | <filename>*)

Description: This command evaluates one or more update queries. The query can be either given explicitly as text after the ! symbol, or one can specify zero or more file names that contain the queries to be evaluated. If <filename> is relative (e.g., it does not start with / on Unix-based platforms), it is interpreted as relative to the content of the dir.queries shell variable.

Example: The following command evaluates an update query.

update ! delete { ?p :givenName 'Bill' } insert { ?p :givenName 'William' } where { ?p :givenName 'Bill' }

10.2.3. Shell Variables

Shell variables are set by invoking the set command (see Section 10.2.2.36). The variables that are initially set after starting RDFox, their default values, and a summary of each are given in the list below. Similar information for all currently set variables may be obtained in the RDFox shell by running the set command with no arguments.

• active (default: default): Contains the name of the active data store.
• dir.dlog (default: ./): Determines the directory for resolving relative file names of Datalog programs.
• dir.facts (default: ./): Determines the directory for resolving relative file names of RDF files.
• dir.output (default: ./): Determines the directory for resolving relative file names of output files (as specified by the output variable).
• dir.queries (default: ./): Determines the directory for resolving relative file names of query files.
• dir.root (default: ./): Determines the root directory of the current data set.
• dir.scripts (default: ./): Determines the directory for resolving relative file names of script files.
• dir.stores (default: ./): Determines the directory for resolving relative file names of binary store files.
• log-frequency (default: 0): Determines the time in seconds during which various logs are produced (see the note below).
• output (default: null): Determines how command results (including queries) are printed: null (nothing is printed), out (to stdout), or a file name.
• query.answer-format (default: application/x.sparql-results+turtle-abbrev): Determines the name of the format used to serialize query answers (see the note below).
• query.cardinality (default: true): If true, then queries return the correct cardinality.
• query.delete-output-if-answer-empty (default: false): If true, then the output file is deleted when the query answer is empty.
• query.explain (default: false): If true, the query plan is printed after compilation.
• query.fact-domain (default: IDB): Determines the fact domain of the matched tuples: EDB matches the explicitly stated tuples; IDB matches all tuples; IDBrep matches the non-merged tuples; IDBrepNoEDB matches the non-merged tuples that are not EDBs.
• query.monitor (default: off): Determines whether and how query evaluation is monitored: off (no monitoring), stats (gather statistics), and trace (print the query evaluation trace).
• query.planning-algorithms (default: rewriting greedy): Determines the sequence of planning algorithms that will be used when evaluating queries.
• query.print-options (default: false): If true, then query compilation options are printed before queries are evaluated.
• query.print-statistics (default: false): If true, statistics about query evaluation are printed after a query is evaluated.
• query.print-summary (default: true): If true, then a summary of query evaluation (number of returned tuples and query evaluation time) is printed after a query is evaluated.
• reason.monitor (default: off): Determines whether and how reasoning is monitored: off (no monitoring), stats (gather statistics), progress (report progress during reasoning), and trace (print the reasoning trace).
• run (default: true): The shell is running while this variable is true.
• version (e.g., 3.0.0 (96b08d35d74e54c8763c9ef5d6face6800e44397)): Contains the current version of RDFox.

Note:
• Additional variables are available to control the RDFox endpoint; see Section 10.2.2.14.
• It can be useful to set the log-frequency variable to a non-zero value n when large amounts of data are being imported. This will cause progress to be reported every n seconds.
• The query.answer-format variable may be set to any of the values detailed in Section 8.9.2, and to text/turtle in the case of queries over exactly three variables called ?S, ?P, and ?O.
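Example: As an illustration of these variables (the file name and interval here are arbitrary, and text/csv is assumed to be among the answer formats listed in Section 8.9.2), the following commands write query answers to a CSV file and report progress every 10 seconds during long-running operations such as imports.

set output "answers.csv"
set query.answer-format "text/csv"
set log-frequency 10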
Title: Flash AS 3.0 sound control code
Author: Anonymous (佚名)  Source: 一聚教程  Date: 2011-09-15

package {
    import flash.display.Sprite;
    import flash.events.*;
    import flash.media.Sound;
    import flash.media.SoundChannel;
    import flash.net.URLRequest;
    import flash.utils.Timer;
    import flash.text.TextField;
    import flash.text.TextFieldAutoSize;
    import flash.filters.DropShadowFilter;

    public class As3Sound extends Sprite {
        private var url:String = "http://sxl001.xfyun.com/music/lib/myRussia.mp3";
        private var soundFactory:Sound;
        private var channel:SoundChannel;
        private var positionTimer:Timer;
        private var play_btn:Sprite;
        private var stop_btn:Sprite;
        private var d_filters:DropShadowFilter = new DropShadowFilter(5, 45, 0x000000, 80, 8, 8);
        // Records whether the music is currently paused
        private var bSoundStop:Boolean = false;

        public function As3Sound() {
            var sxl_txt:TextField = new TextField();
            sxl_txt.text = "How to control sound playback and stopping in CS4";
            sxl_txt.autoSize = TextFieldAutoSize.LEFT;
            sxl_txt.x = stage.stageWidth / 2 - sxl_txt.width / 2;
            sxl_txt.y = 20;
            addChild(sxl_txt);

            var mp3_request:URLRequest = new URLRequest(url);
            soundFactory = new Sound();
            // Fired after the data has loaded successfully
            soundFactory.addEventListener(Event.COMPLETE, completeHandler);
            // Fired when ID3 data is available for the MP3 sound
            soundFactory.addEventListener(Event.ID3, id3Handler);
            // Fired when loading the music fails
            soundFactory.addEventListener(IOErrorEvent.IO_ERROR, ioErrorHandler);
            // Fired while the music is loading...
            soundFactory.addEventListener(ProgressEvent.PROGRESS, progressHandler);
            soundFactory.load(mp3_request);
            channel = soundFactory.play();
            // Fired when playback completes
            channel.addEventListener(Event.SOUND_COMPLETE, soundCompleteHandler);

            // Use a Timer to monitor playback progress
            positionTimer = new Timer(1000);
            positionTimer.addEventListener(TimerEvent.TIMER, positionTimerHandler);
            positionTimer.start();

            // Create a button for playing the music
            play_btn = new Sprite();
            play_btn.graphics.beginFill(0xFFCC32);
            play_btn.graphics.drawRoundRect(0, 0, 70, 18, 10, 10);
            play_btn.graphics.endFill();
            var play_txt:TextField = new TextField();
            play_txt.text = "Play";
            play_txt.x = 18;
            play_btn.x = 50;
            play_btn.y = 100;
            play_txt.selectable = false;
            play_btn.addChild(play_txt);
            play_btn.filters = [d_filters];
            play_btn.addEventListener(MouseEvent.CLICK, soundPlay);
            addChild(play_btn);

            // Create a button for stopping the music
            stop_btn = new Sprite();
            stop_btn.graphics.beginFill(0xFFCC32);
            stop_btn.graphics.drawRoundRect(0, 0, 70, 18, 10, 10);
            stop_btn.graphics.endFill();
            stop_btn.x = 130;
            stop_btn.y = 100;
            var stop_txt:TextField = new TextField();
            stop_txt.x = 18;
            stop_txt.text = "Pause";
            stop_txt.selectable = false;
            stop_btn.addChild(stop_txt);
            stop_btn.filters = [d_filters];
            stop_btn.addEventListener(MouseEvent.CLICK, soundStop);
            addChild(stop_btn);
        }

        // Monitor playback progress
        private function positionTimerHandler(event:TimerEvent):void {
            var ybf:int = int(channel.position); // milliseconds played so far
            var zcd:int = soundFactory.length;   // total length in milliseconds
            var bfs:int = Math.floor(ybf / zcd * 100);
            //trace("Total length: " + zcd, "Played: " + ybf, "Progress: " + bfs + "%");
        }

        // Fired when music loading completes
        private function completeHandler(event:Event):void {
            //trace("Music loading complete: " + event);
        }

        // Fired when ID3 data is available for the MP3 sound
        private function id3Handler(event:Event):void {
            //trace("The music's ID3 information is as follows:");
            for (var s:String in soundFactory.id3) {
                //trace("\t", s, ":", soundFactory.id3[s]);
            }
            //trace("For an introduction to ID3 information, see the Sound class --> properties --> id3");
        }

        // Fired when loading the music fails
        private function ioErrorHandler(event:Event):void {
            //trace("Error loading music; details: " + event);
            positionTimer.stop();
        }

        // Fired while the music is loading
        private function progressHandler(event:ProgressEvent):void {
            var yjz:int = event.bytesLoaded;
            var zcd:int = event.bytesTotal;
            var bfs:int = Math.floor(yjz / zcd * 100);
            //trace("Total size: " + zcd, "Loaded: " + yjz, "Progress: " + bfs + "%");
        }

        // Fired when playback completes
        private function soundCompleteHandler(event:Event):void {
            //trace("Playback complete: " + event);
            positionTimer.stop();
        }

        // Play button click handler
        private function soundPlay(event:MouseEvent):void {
            if (bSoundStop) {
                bSoundStop = false;
                // Resume playback from the position at which the channel was stopped
                channel = soundFactory.play(channel.position);
            }
        }

        // Pause button click handler
        private function soundStop(event:MouseEvent):void {
            if (!bSoundStop) {
                bSoundStop = true;
                channel.stop();
            }
        }
    }
}
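To try the class out (a minimal sketch, assuming the free Adobe Flex SDK command-line compiler; inside Flash CS4 you would instead set As3Sound as the document class), save the code as As3Sound.as and compile it into a SWF:

mxmlc As3Sound.as

Note that the MP3 URL in the listing comes from the original 2011 article and may no longer be reachable, so point the url variable at any MP3 that is available to you.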
Category: relation

How to filter relationships? I found models from related data. How do I select only those related models that match the search term? Basic query:

$toDolists = ToDoList::select(['*']);
foreach ($tags as $tag) {
    $toDolists->whereHas('item.tag', function ($query) use ($tag) {
        $query->where('tags.name', $tag);
    });
}

I want to select related models (item) that match the ..

I am running into this issue where I can't retrieve the soft-deleted data of the relation tables. My approach was to use the withTrashed method, but it gave me no luck whatsoever. This is the line of code:

return response()->json($user->orders()->withTrashed()->with('order_details', 'order_details.product', 'address')->get());

It just ignores what's inside the with clause. How do I fix this ..

I have these tables: ec_products: id | brand_id (id from ec_brands) | store_id (id from mp_stores); ec_brands: id; mp_stores: id. I am calculating the total products belonging to each brand and store using relations and withCount in Laravel. For example, in the Brand model:

public function products() {
    return $this->hasMany(Product::class, 'brand_id')->where('is_variation', 0);
}

Stores model:

public function products() { ..

When I use the User model to return the data I need, it gave me a list of objects with all the relations. User model:

class User extends Model implements AuthenticatableContract, AuthorizableContract {
    use SoftDeletes, Authenticatable, Authorizable, HasFactory, Notifiable;
    public function getNameAttribute() {
        return $this->last_name.' '.$this->first_name;
    }
    public function service() {
        return $this->belongsTo(Service::class);
    }
    public function group() {
        return ..

I'm trying this code:

$query = Parent::where('state', 1)
    ->with(array('child' => function ($q) use ($end_date) {
        $q->where('start_date', '<=', $end_date);
    }));
$query->whereHas("child", function ($query) use ($filter) {
    if (isset($filter["id"]) && $filter["id"] != "") {
        $query->where("id", '=', $filter["id"]);
    }
});

Then in the Parent model I have this code:

public function child() {
    return $this->hasOne('App\Models\Child', 'code', 'code');
}

I want ..

I have the user image saved in a different table, and I have the following in the User model:

public function Image() {
    return $this->hasOne(UserImages::class, 'user_id', 'id')->latest();
}

The above relation returns the following.

"image": {
    "id": 3,
    "user_id": 1,
    "image": "http://live.test/uploads/user/User-Oss8MewXVzHZCehHoOUgkdYoo3N1K0gYI9jY69ZsnyiHnqHsHv.png",
    "is_primary": 1,
    "created_at": "2021-04-12T08:01:47.000000Z",
    "updated_at": "2021-04-12T08:01:47.000000Z"
},

I want to receive only the image; how can ..

I am trying this code, but the where inside with does not work. Can anyone solve this problem?

Quotation::with(['QuoParts', 'client' => function($cq) {
    $cq->orWhere('first_name', 'LIKE', '%Muhammad%');
}, 'user'])->orWhere(function($q) use ($s) {
    $q->orWhere('sku', 'LIKE', '%'.$s.'%');
    $q->orWhere('issue_date', 'LIKE', '%'.$s.'%');
})->take($length)->skip($r->start)->get();

How do I select data in this relation, counting by qr_code and summing price if not null?

$wajibRetribusis = WajibRetribusi::with(['subDistrict' => function($q) use($month, $year) {
    $q->select('name', 'id');
}, 'payments' => function($q) use($month, $year) {
    $q->whereMonth('payments.last_paid', $month);
    $q->whereYear('payments.last_paid', $year);
    $q->select('id', 'price', 'qr_code', 'category_id', 'last_paid');
}, 'category:id,price'])->select('sub_district_id', 'qr_code', 'category_id')->get();
I have two models. Admin model: I use it to save admins, and I made an admin guard with its own routes in admin.php. User model (the Laravel default): I use it to save normal users, with its own routes in web.php. Comments migration:

$table->id();
$table->string('comment');
$table->boolean('approved')->default(0);
$table->unsignedBigInteger('user_id');
$table->foreign('user_id')->references('id')->on('users')->onDelete('cascade');
$table->unsignedBigInteger('post_id');
$table->foreign('post_id')->references('id')->on('posts')->onDelete('cascade');
..

Hi, I have 3 tables: Resort, Booking, and Expense; these tables are joined with relations. The code is given below.

$resorts = Resort::where('status', 1)->with('bookings')->withSum('bookings', 'amount')
    ->with('expenses')->withSum('expenses', 'amount')->get();

I want to sort this table using the date field. How could I use whereBetween in this query for bookings and expenses?

I have the following query that should return relations but is not. The models involved are Person and Role. The Person relation:

public function roles() {
    return $this->belongsToMany(Role::class, 'person_role', 'user_id', 'role_id')
        ->where('person_role.org_id', $this->defaultOrgID);
}

The Role relation:

public function people() {
    return $this->belongsToMany(Person::class, 'person_role', 'role_id', 'user_id');
}

Troublesome query:

$persons = Person::with('roles')
    //->selectRaw('person.personID, person.lastName, person.firstName, person.login, ..

This is the table relation:

$result = Event::select('events.*', 'events.at as datatime', 'events.end_time as endtime', 'event_types.name as event_type_name')
    ->from('events')
    ->leftJoin('event_types', 'event_types.id', 'events.type_id')
    ->where('events.id', $id)
    ->where('events.user_id', 350)
    ->get()
    ->toArray();
return $result;

I want to change it to an Eloquent relation. For example, I think we can use the with() method instead of the select method of the query.

I am going to make Eloquent models with belongsTo and hasManyThrough relations from some tables. I can make it when there are 3 tables, but I am not sure when there are more than 3 tables. This is the code (one possible mapping onto Eloquent is sketched below).

->from(array('cases', 'c'))
->join(array('case_contact', 'cc'), 'LEFT')->on('cc.case_id', '=', 'c.id')
->join(array('case_statuses', 'cs'), 'LEFT')->on('cs.case_id', '=', 'c.id')
->join(array('statuses', 's'), 'LEFT')->on('s.id', '=', 'cs.status_id')
->join(array('milestones', 'm'), 'LEFT')->on('m.id', '=', 's.milestone_id')
->join(array('companies', 'com'), 'LEFT')->on('com.id', '=', 'c.company_id')
..
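A hedged sketch of one way the join chain above could map onto Eloquent relations; the class names, pivot tables, and foreign keys are inferred from the query and may need adjusting to the real schema (note that Case is a reserved word in PHP, hence the CaseFile name):

// Hypothetical Eloquent mapping of the join chain above.
class CaseFile extends Model {
    protected $table = 'cases';

    public function company() {
        // companies.id = cases.company_id
        return $this->belongsTo(Company::class, 'company_id');
    }

    public function contacts() {
        // cases -> case_contact pivot (contact key name assumed)
        return $this->belongsToMany(Contact::class, 'case_contact', 'case_id', 'contact_id');
    }

    public function statuses() {
        // cases -> case_statuses pivot -> statuses
        return $this->belongsToMany(Status::class, 'case_statuses', 'case_id', 'status_id');
    }
}

class Status extends Model {
    public function milestone() {
        // milestones.id = statuses.milestone_id
        return $this->belongsTo(Milestone::class, 'milestone_id');
    }
}

// Usage: eager-load the whole chain in one query.
$cases = CaseFile::with(['company', 'contacts', 'statuses.milestone'])->get();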
Privacy code scanning: How to sync privacy compliance with software development

Vaibhav Antil, June 12, 2024

By year-end 2024, Gartner predicts that 75% of the world's population will have its personal data covered under modern privacy regulations. Just since the beginning of 2023, three countries (Switzerland, South Korea, and Saudi Arabia) and five U.S. states have put new privacy regulations into effect, including the pivotal California Privacy Rights Act (CPRA). These regulations pose real data privacy and data security concerns for businesses.

Companies have poured considerable resources into their people, processes, and technology to keep up with these laws. Yet many are still grappling with the fundamental question of "where is our data?", resulting in steep fines from regulatory bodies like the US Federal Trade Commission (FTC) and EU regulators for improper data usage and sensitive data breaches.

Most privacy tools today focus on mapping data in storage, but they lack the data flow visibility necessary for even a base level of privacy governance. These tools, often called data discovery tools, can only identify what data has been stored; they cannot determine how personal data has been collected, used, or shared. This full data lifecycle visibility is necessary to prevent a host of potential privacy violations.

The predominant method for building full lifecycle data maps is currently through manual assessments. Privacy teams conduct these manual assessments by sending product and engineering teams questionnaires asking how the company's websites, user-facing applications, and backend systems collect, use, share, and store personal data. In an attempt to speed up this slow process, privacy teams sometimes also interview product and engineering stakeholders.

With or without data discovery tools, companies that process personal data still conduct manual assessments and still struggle to get full data visibility and prevent privacy violations. Manual assessments do not scale. They are slow and subjective, resulting in incomplete, inaccurate, and outdated information.

To truly address these issues, we need a new approach that tackles the fundamental source of the problem: the code itself.

The code is the primary source of privacy risk because it is where developers define the data collection, sharing, usage, and storage logic. By implementing privacy code scanning, companies can bridge the gap between privacy and engineering. This innovative solution provides complete visibility into the data lifecycle, including collection, flows, sharing, and storage. It also enables governance of data usage and allows for continuous privacy compliance within the software development lifecycle.

For any company building software that processes personal data, privacy code scanning is the only solution available to proactively minimize privacy risks and sync privacy compliance with software development.

In this guide to privacy code scanning, we will delve into:
• What privacy code scanning is
• Use cases for privacy code scanning
• How privacy code scanning differs from current approaches
• What impact privacy code scanning can have on your organization

[Figure: Approaches to privacy]

What is privacy code scanning?

Privacy code scanning solutions create full lifecycle data maps and implement programmatic privacy governance. The approach starts with the code.
This is where business logic for data collection, storage, sharing, use, and processing is written by developers.

Privacy code scanning solutions specifically scan the code written by a company's engineering teams. For software-driven companies, their engineering teams' code is what collects personal data and moves it in and out of their websites, user-facing applications, and backend systems.

By scanning the codebase, privacy code scanning solutions can automatically identify and classify all personal data by using a combination of algorithms and AI/machine-learning models. This is a much more efficient process than scanning data in storage, because a company's entire codebase lives in typically one, maybe two, source code management tools, and only the code has to be scanned, not the enormous amount of data itself.

In addition, privacy code scanning can automatically determine the context of personal data processing. Each instance of personal data processing can be linked to the exact code within an application, and engineers can quickly validate how the code is collecting, using, sharing, or storing personal data. When data processing violates privacy policies, issues are linked to the exact code causing the violation, and engineers can quickly resolve the issue.

Privacy code scanning solutions typically run scans by securely integrating with the source code management tools that store a company's entire codebase. This approach is similar to how many application security tools scan code to identify security vulnerabilities.

After an initial scan is run to map all personal data flows and identify all live privacy issues, privacy code scans are triggered each time a change is made to the codebase. By continuously scanning for code changes, data maps, assessments, and reports can be updated automatically, and privacy issues can be identified immediately.

With this level of real-time visibility, privacy code scanning solutions can implement Privacy by Design workflows to automatically flag and even stop privacy violations before they occur. These workflows can be set up to monitor and enforce internal privacy policies and privacy regulations like GDPR, CCPA, CPRA, MHMDA, FTC rules, and HIPAA. Because privacy code scanning can be integrated into standard software development and delivery processes, non-compliant code can be flagged and fixed before it goes live.

Use cases for privacy code scanning

Digital tracking governance: Prevent non-compliant data sharing

In the US, the largest privacy risk right now is non-compliant data sharing with marketing partners. Since 2023, the FTC has fined at least five companies for improperly sharing personal health data with marketing partners like Meta and Google.

In February and March of 2024, enforcement launched for two groundbreaking data privacy regulations: the California Privacy Rights Act (CPRA) and Washington state's My Health My Data Act (MHMDA). Both regulations put more onus on companies to collect, track, and uphold consent before sharing user data. Meanwhile, the EU's General Data Protection Regulation (GDPR) remains the strictest law governing personal data sharing, requiring opt-in consent before data is collected or shared.

These new laws and increased enforcement require a new approach to staying compliant called digital tracking governance. Digital tracking governance is responsibly managing personal data shared with marketing partners by honoring user preferences.
Privacy code scanning enables best-in-class digital tracking governance by:
• Identifying all marketing partners: Build a live inventory of all third parties receiving personal data via pixels, cookies, tag managers, and SDKs from your websites, apps, and backend integrations/APIs
• Tracking data flows: Gain full visibility by continuously monitoring how all data elements are collected and shared from your websites, apps, and internal systems
• Ensuring consent compliance: Continuously audit websites and apps to ensure consent banners limit data sharing according to regulations and user preferences

Automate Records of Processing Activities for GDPR compliance

GDPR requires that all processors and controllers of personal data for people in the EU regularly maintain a live Record of Processing Activities, or RoPA. RoPAs require privacy teams to list each processing activity, identify what categories of data are being used, and describe the purpose of each activity.

By leveraging its full lifecycle data maps, privacy code scanning can automate RoPA reporting to the point that engineers don't need to complete any questionnaires or interviews. Instead of waiting months to hear back from engineers, privacy teams can complete RoPAs in a matter of days.

In addition, RoPAs can be automatically updated each time there's a software update that changes data flows. Because RoPAs typically take several months to complete, they are usually only updated once a year.

When over 42% of engineers release software at least once a month and over 69% release at least once every six months, most RoPAs are out of date before they're even done. In addition, RoPAs built from subjective questionnaires are likely to contain missing or inaccurate information. Privacy code scanning eliminates compliance risks from inaccurate RoPA reporting by automatically generating reports based on real-time data flows.

Implement scalable Privacy by Design

Identify and resolve privacy risks without assessments

In addition to compliance reporting like RoPAs, companies conduct internal manual assessments to identify potential privacy risks for new software products or features, changes to existing products or software infrastructure, or newly acquired businesses.

Without privacy code scanning, privacy teams rely on manual privacy assessments to identify nearly all privacy risks, even for small website or application changes such as adding a marketing partner's SDK to a mobile app.

With privacy code scanning, privacy teams can automatically identify risks for small changes and reserve lengthy privacy assessments for larger, high-risk projects. Workflows can be set up in privacy code scanning solutions that automatically evaluate changes and identify risks according to regulations and internal privacy policies.

For example, a workflow can be set up to automatically identify whether a marketing partner's SDK collects any personal data without the user's consent (a sketch of such a policy follows below). In this case, when the new SDK's code is pushed live in the next app update, a scan could identify whether the SDK is collecting or sharing any personal data in violation of this policy workflow. All the necessary mobile SDK checks could be put into workflows that automatically identify risks without having to conduct a manual privacy assessment.

Manual privacy assessments dramatically slow down privacy and engineering teams, and they should only be initiated for more complex situations such as building a new personal health app.
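To make the idea of such a policy workflow concrete, here is a purely illustrative pseudo-configuration; this is not the syntax of any particular privacy code scanning product, and the destination category and data-element names are hypothetical:

# Illustrative policy: flag marketing SDKs that receive personal data before consent
policy:
  name: block-sdk-sharing-without-consent
  applies_to:
    destinations:
      - category: marketing_sdk      # e.g., an analytics or ads SDK
  rule:
    when:
      data_elements: [email, device_id, precise_location]
      consent_state: not_granted
    then:
      action: fail_build             # block the pull request or release
      notify: [privacy-team, app-owner]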
Manual privacy assessments dramatically slow down privacy and engineering teams, and they should only be initiated for more complex situations such as building a new personal health app. Privacy code scanning can save an enormous amount of time by eliminating assessments for minor changes while enabling faster risk resolution.

Prevent privacy risks before they go live

In addition to identifying live privacy risks, privacy code scanning can be used to prevent privacy risks in the development process before they ever go live. Similar to code scanning solutions for application security, privacy code scanning solutions can integrate with a company's continuous integration/continuous delivery (CI/CD) pipeline to run a scan each time new code is submitted for review, before it is pushed live. This way, privacy and engineering teams can identify and resolve risks before they ever affect users' data.

Although privacy assessments are typically done when new products and features are being designed, privacy risks often still arise because software commonly changes and evolves during the development process. Instead of privacy teams only finding out about a software change after a privacy incident occurs, privacy code scanning can ensure non-compliant software updates don't launch if they deviate from the latest privacy assessments or violate any privacy policies.

Furthermore, integrating privacy code scanning into the development process can even accelerate product launches. If the privacy team is informed of software changes affecting privacy after the design phase, this will typically trigger manual privacy assessments that may take weeks. Only once the assessment is complete will the product team be informed of changes they need to make, delaying the product launch even further. With privacy code scanning, the privacy and product teams are both immediately alerted to privacy risks as the product is developed. This approach shifts privacy left in the process and enables developers to eliminate privacy risks before they cause further delays or issues.
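In practice, "blocking before it goes live" usually means wiring the scan into a CI check that fails when a change introduces new violations. The sketch below shows the general shape of such a gate; the baseline file, finding format, and function names are all hypothetical, not a specific product's interface.

import json
import sys
from pathlib import Path

def load_baseline(path="privacy-baseline.json"):
    """Known, accepted findings from the main branch (hypothetical format:
    a JSON list of finding strings)."""
    p = Path(path)
    return set(json.loads(p.read_text())) if p.exists() else set()

def gate(current_findings):
    """Fail the CI check if this change introduces findings not in the baseline."""
    new = set(current_findings) - load_baseline()
    if new:
        print("New privacy violations introduced by this change:")
        for finding in sorted(new):
            print(f"  - {finding}")
        sys.exit(1)  # non-zero exit fails the CI check, blocking the merge
    print("No new privacy violations.")

# In a real pipeline the findings would come from the scanner; stubbed here.
gate({"email -> AcmeMarketingSDK (no consent) at app/signup.py:42"})

A gate like this is what lets developers fix issues in code review, while the privacy team sees only the deviations that matter.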
Privacy assessment automation: PIAs, DPIAs, etc.

When privacy assessments are needed for more complex, high-risk projects, privacy code scanning can automate the majority of each assessment with more accurate information than a manual assessment.

The most common privacy assessments are Data Protection Impact Assessments (DPIAs) and Privacy Impact Assessments (PIAs), and the bulk of the information they attempt to gather relates to the data maps generated by privacy code scanning. Standard and custom reports can be built within privacy code scanning platforms to automatically pull in the required data map information, such as what personal data is processed, how it is used, and where it is sent. Such reporting can be combined with standard and custom questionnaires to fill in any remaining information.

For companies operating in the EU, GDPR requires a DPIA for "high risk" projects involving personal data. GDPR provides guidelines for how to conduct a DPIA and when one is needed; it is typically up to the company's Data Protection Officer (DPO) to determine exactly how and when DPIAs are conducted. Regulators typically only review DPIAs if a company is being investigated for a GDPR violation.

The other most common privacy assessment is the PIA. PIAs are similar to DPIAs except that they are conducted when a DPIA is not required by GDPR, most often in the US where GDPR does not typically apply. PIAs are less standardized than DPIAs, but they are used for similar high-risk projects and collect similar information, such as how personal data is used and shared.

Privacy code scanning platforms can be set up to automate DPIAs and PIAs that are custom to the needs of each company and even each project. Typically, the most important and time-intensive information to gather lives in the code, which is exactly what privacy code scanning turns into data maps. That is why privacy code scanning is best positioned to enable faster and more accurate privacy assessments.

Automate privacy reports for app store approval

Apple and Google both require app owners to submit privacy reports for apps to be published in their respective app stores: the App Store and Google Play. The reports for both app stores require information that privacy code scanning gathers automatically: what personal data is collected, who it is shared with, and for what purpose.

Apple requires a privacy manifest report each time a new app or app update is submitted to the App Store for approval and requires app owners to maintain accurate Privacy Nutrition Labels. Privacy manifests are designed for Apple to determine privacy compliance when approving an app for the App Store, while Privacy Nutrition Labels are designed to transparently communicate the app's data privacy practices to users.

The Google Play Store requires app owners to complete a data safety form similar to Apple's Privacy Nutrition Labels; the form populates the data safety section that tells users how personal data is processed for each app in the Google Play Store.

To accurately complete these reports for each app, developers have to manually review their app's code or documentation, or wait for third parties to complete questionnaires explaining how they process personal data. Using privacy code scanning, these reports can be generated automatically so that they simply need to be double-checked, saving an enormous amount of time while providing more accurate, up-to-date information.

Block sensitive data sharing with AI applications

With AI application development and adoption at an all-time high, AI governance couldn't be more important to privacy teams. AI applications are built and fine-tuned with data that may include sensitive personal data. Users also input data into certain AI applications, and that input may need to be filtered for privacy or other reasons.

For data that engineering teams send to internal or external AI applications, privacy code scanning can ensure that no sensitive personal data is shared. Policy workflows can be set up to restrict select or all personal data elements from being sent to applications flagged as AI.

Govern data shared across borders

In today's ever-evolving global privacy landscape, many countries now have laws that restrict cross-border transfers of personal data. Most notably, the EU's GDPR and China's Personal Information Protection Law (PIPL) restrict what data can be sent where and under what circumstances.

Privacy code scanning can prevent non-compliant data sharing across borders, both internally and externally. Third parties and internal destinations can be categorized by location, and policy workflows can be set up to limit what personal data is sent where; a sketch of such destination-based rules follows below.
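Both the AI and cross-border use cases reduce to the same mechanism: policy rules evaluated against attributes of each data destination. The sketch below is a hypothetical simplification; the destination attributes, restricted-element lists, and the abbreviated adequacy shortlist are invented for illustration and are not legal guidance.

from dataclasses import dataclass

@dataclass
class Destination:
    name: str     # e.g. "llm-enrichment" (hypothetical internal AI service)
    is_ai: bool   # destination has been flagged as an AI application
    country: str  # where the destination processes data

# Hypothetical policy inputs.
RESTRICTED_FOR_AI = {"ssn", "diagnosis", "medication"}
ALLOWED_EU_EXPORT = {"EU", "UK", "CH"}  # simplified stand-in for an adequacy list

def violations(element: str, origin_region: str, dest: Destination):
    """Evaluate one data flow against the AI and cross-border policies."""
    issues = []
    if dest.is_ai and element in RESTRICTED_FOR_AI:
        issues.append(f"sensitive element '{element}' sent to AI app {dest.name}")
    if origin_region == "EU" and dest.country not in ALLOWED_EU_EXPORT:
        issues.append(f"EU-origin '{element}' exported to {dest.country} ({dest.name})")
    return issues

print(violations("diagnosis", "EU", Destination("llm-enrichment", True, "US")))

Tagging each destination once with attributes like is_ai and country lets every subsequent flow be checked automatically, rather than reviewing each integration by hand.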
Assess and mitigate privacy risk for mergers & acquisitions

When acquiring or merging with another company, privacy code scanning can quickly assess the other company's privacy risk profile and identify how to address compliance issues. Different companies have different privacy policies and practices, and typically the company with higher privacy standards has to spend months assessing the other company's risk by reviewing documentation, conducting interviews, and/or waiting on teams to complete questionnaires.

Privacy code scanning can eliminate the vast majority of those manual assessment activities. By scanning the new company's entire codebase, a full inventory of personal data elements and potential privacy risks can be generated without any manual effort. Privacy code scanning enables a more comprehensive and accurate assessment to be completed in days rather than the months it would normally take.

After an acquisition is completed, it can also take months, if not years, for the new company to adopt the acquiring company's privacy standards. Privacy code scanning can rapidly accelerate this integration process. The acquiring company's privacy standards can be easily converted into privacy code scanning checks that identify exactly what code is violating which policy. Instead of new products and features being dramatically delayed for not meeting the privacy standards, the automated checks can be built into the software development process, enabling developers to build with privacy in mind. As code moves to the code review stage, privacy checks can alert developers to deviations from the privacy standards and how to address them.

Generate transparent privacy reporting for software vendors to expedite vendor assessments

B2B sales can take a long time, especially when enterprise companies evaluate a new software vendor. B2B enterprise sales cycles typically take 6-12 months and drain a lot of resources from the buyer and vendor in the process.

A privacy review of a vendor is one of many things that can slow down a deal, alongside reviews of security, technical feasibility, ethical practices, and more. What if the vendor could provide the buyer with an unbiased, objective report that enables the buyer to skip the privacy review altogether? This could save both sides many hours of reviewing privacy practices and of completing and evaluating RFP questionnaires.

Privacy code scanning solutions can automatically create such a report for software vendors. This way, software vendors could come to each deal with a standard report that preemptively answers the buyer's privacy questions. For example, these reports could show data maps with all personal data the vendor's product collects, uses, stores, and shares. Depending on the buyer's policies, the report could be tailored to include additional automated checks for each privacy regulation and standard the buyer requires. To enable quick validation, each finding in the report could be linked to the instance in the codebase where the data processing originates.

How does privacy code scanning compare to current approaches?

Data discovery tools

Data discovery tools help companies build an inventory of all the data they have in storage; this includes personal data and any other data relevant to the business. Although these solutions are effective at building inventories of what data is stored, they offer no coverage for how data is collected, used, or shared. This is because the logic for how data is generated and moved lives in the code of a website, app, or backend system.

Data discovery tools inventory data by scanning structured and unstructured data across data stores and select third-party applications.
Data discovery tools can scan column names and the actual data, using ML/AI techniques to discover and classify it. Data discovery can feel like playing whack-a-mole: you are always reacting to personal data popping up in data stores with no control over the source of the problem.

Once the lengthy 6-12 month data discovery process is done, privacy teams still struggle to identify which teams use this data, and they still lack the data flows needed to accurately create RoPAs, conduct PIAs, and find privacy issues.

Doing data discovery alone can create a false sense of maturity in a privacy program: you know what data you have in data stores, but you don't understand how that data is being used, how it is being shared, or how it is being collected. These gaps lead to privacy issues such as:

• Excessive data collection
• Sensitive data sharing
• Misuse of personal data
• Non-compliant data processing activities

Manual assessments

After companies build a complete inventory of all data in storage, they still have to ask several teams how the data is collected, used, and shared. Visibility into the full data lifecycle is needed to complete RoPAs, DPIAs, PIAs, etc. and ensure compliance.

To get this visibility, most privacy teams send questionnaires and interview requests to teams that may know how personal data is being processed, including product management, engineering, data analytics, and marketing. This even includes privacy teams who have already completed a 6-12 month implementation of a data discovery tool, because data discovery tools can only identify what data is stored, not how it is used or shared.

If the privacy team asks the engineering team what personal data their websites and applications process, the engineers would attempt to manually do what privacy code scanning does automatically: review their code. Before doing that, engineering leads would struggle to find all the engineers with knowledge of how their software processes personal data. Because some engineers have left the company, and because engineering leads likely don't know or don't have the time to find the right owner for every part of the code in every application, a handful of engineers without full context or privacy expertise will attempt to answer the privacy team's questionnaires for all applications.

After first waiting weeks or even months to look at the questionnaires because they're busy meeting engineering sprint deadlines, each engineer will need to spend hours asking other engineers, reviewing documentation, and reviewing the code itself to complete the questionnaires for each application. Even for the engineers, the code is the best place to find the answers to the privacy questionnaires. The issue is that it's impossible for any one person to manually review a company's entire codebase. On top of that, the codebase is constantly changing, as many engineering organizations now ship software updates at least once a week.

Companies that try to employ a Privacy by Design approach may conduct privacy reviews for new product changes at the design stage. While this is possible for top-down planned features, many features are built bottom-up after the design stage. Even if design reviews are conducted for all new changes, development can still deviate from the original design, causing privacy gaps and issues to emerge.

The bottom line is that manual assessments do not scale and yield imprecise, out-of-date outputs.
As a result, manual assessments open companies up to many unknown privacy risks while dramatically slowing down engineering and privacy teams.

Key advantages of privacy code scanning

• Enables full data lifecycle visibility for software-driven companies: For companies building software that processes personal data, privacy teams can autogenerate data maps showing how all personal data elements are collected, used, shared, and stored.
• Leverages AI/ML models that enable unparalleled accuracy: Static code analysis is supplemented with AI models that continue to increase data mapping accuracy and enable generative outputs like processing activity descriptions.
• Continuous and real-time governance: Proactively detects privacy risks based on out-of-the-box and custom privacy policy workflows, and prevents risks by running privacy checks in the development process.
• Preserves data security: No personal data is ever scanned or accessed; only code is scanned. Customer code is never stored or shared and is never used to train AI models.
• Rapid time to value: Get full visibility and governance in a matter of weeks. Privacy code scanning typically requires just one integration with a company's source code management tool. The integration can be completed in a few weeks, and data mapping and risk identification can be completed in just a few days.

Key capabilities of privacy code scanning solutions

Data visibility

• Inventory of all personal data collected, stored, or shared
• Sensitive data tags for CPRA, GDPR, MHMDA, etc.
• Inventory of all data destinations: third parties and internal systems receiving personal data via pixels, cookies, tag managers, SDKs, customer data platforms (CDPs), APIs, etc.
• Data flows showing every third-party and internal data destination for each data element
• Autogenerated descriptions of all processing activities

Privacy governance

• Risk discovery: Out-of-the-box and custom workflows to generate alerts for potential violations of internal policies and regulations like CCPA, CPRA, GDPR, MHMDA, and HIPAA:
  • Stop non-compliant data sharing with marketing partners
  • Block sensitive data sharing with AI applications
  • Govern data transferred across borders internally and externally
  • Continuously scan websites and apps to audit that consent is collected and acted on appropriately
• Risk prevention: Workflows to block code with privacy risks during the dev release cycle
• Assessment automation: Pre-filled, self-updating RoPAs, DPIAs, PIAs, etc.

Developer enablement

• Privacy risk alerts embedded in dev tools
• Root cause identification: Flag the exact code causing each risk
• Automated dev tickets for quick risk resolution

Impact driven by privacy code scanning

• Provides an accurate picture of privacy risks: It's impossible to prevent unknown risks.
Make critical business decisions based on a comprehensive understanding of all potential risks across live websites and applications, as well as code in the development process.
• Reduces risk at scale: Convert privacy policies into automated workflows that identify and block risks at scale as your tech stack grows and evolves.
• Enables rather than slows down product teams: Provide automated privacy guidance and risk alerts as developers code instead of delaying product launches for assessments that require engineers to stop coding and fill out questionnaires.
• Breaks down the communication gap between privacy and engineering teams: Translate privacy policies into privacy checks that identify exactly what code is violating which privacy policy.
• Eliminates manual processes and saves time for privacy and engineering teams: Fully automate data mapping in days instead of waiting months to complete data discovery and questionnaires, eliminate unnecessary assessments for minor changes, and automate the majority of DPIAs and PIAs with information synced from data maps.
• Allows for more focus on risk mitigation: Instead of spending the majority of resources on data gathering, address risks head-on before a major breach or violation occurs.

Key takeaways

• Privacy regulation and enforcement is increasing rapidly, particularly in the US: Since 2023, the FTC has fined at least five companies for improperly sharing personal health data with marketing partners like Meta and Google. As of March 2024, US companies must be compliant with the California Privacy Rights Act and Washington state's My Health My Data Act. Nearly every US state without a privacy law in effect is currently in the process of implementing one.
• Current approaches to data privacy are inadequate: Data discovery tools can only determine what data is being stored, and they still require manual assessments to determine how data is collected, used, and shared. Manual assessments do not scale and yield imprecise, out-of-date results.
• For companies building software that processes personal data, most privacy risks start in their codebase: The code determines what data is collected and how it flows in and out of a company's websites, user-facing applications, and backend systems.
• Privacy code scanning enables complete and continuous data visibility and privacy governance by scanning the code that runs a company's websites, user-facing applications, and backend systems to monitor how personal data is collected, used, shared, and stored.

Learn more: Privacy code scanning whitepaper

Read our whitepaper on privacy code scanning to learn more about this new approach. Download the privacy code scanning whitepaper.

Frequently asked questions

What is privacy code scanning?

Privacy code scanning enables full data lifecycle visibility and continuous privacy governance by scanning the code that runs a company's websites, user-facing applications, and backend systems to monitor how personal data is collected, used, shared, and stored.

How is privacy code scanning different from data discovery tools?

Data discovery tools scan data stores to build a comprehensive inventory of all data in storage, not just personal data. Data discovery tools can only determine what personal data is stored; they lack coverage for how personal data is collected, used, or shared. Privacy code scanning solutions scan code, not data.
By scanning the code that controls the creation and movement of personal data, privacy code scanning solutions can build full-lifecycle data maps of how personal data is collected, used, shared, and stored. Privacy code scanning also enables continuous privacy governance by automatically identifying privacy risks as the codebase is updated.

How is privacy code scanning different from application security tools that scan code?

They scan code for different purposes and identify different risks. Application security tools scan code to identify security vulnerabilities such as unauthorized access to systems, cyberattacks, API token leaks, and outdated software packages. Privacy code scanning solutions build full-lifecycle data maps of how personal data is collected, used, shared, and stored, and they identify risks of violating internal privacy policies and regulations such as GDPR, CPRA, and HIPAA. Privacy code scanning solutions also automate privacy assessments and flag code with potential privacy risks during the development process.

What types of companies benefit most from privacy code scanning?

Any company building software that processes personal data can benefit from privacy code scanning. That software could be the code that runs their websites, user-facing applications, and/or backend systems. Typically, companies with over 200 software engineers need privacy code scanning to scale their privacy governance program. Privacy code scanning has successfully reduced privacy risk for companies across industries including ecommerce, finance, healthcare, gaming, software, telecommunications, transportation, insurance, ad tech, and data intelligence.

What code do privacy code scanning solutions scan?

Privacy code scanning solutions can scan any code written by a company's engineering team, including the code that runs a company's websites, user-facing applications, and backend systems.

How do privacy code scanning solutions access code?

Privacy code scanning solutions typically need just one integration for implementation: the customer's source code management tool. Source code management tools contain all the code written by your engineering team and have a wide range of capabilities, including deploying software updates via a CI/CD pipeline. Only read-only access to source code management tools is needed, meaning nothing in the source code management tool can be changed, including the code. No customer code should ever be stored or shared by privacy code scanning solutions.

Can privacy code scanning help my organization maintain compliance with GDPR?

Privacy code scanning solutions are designed to support several aspects of GDPR compliance including data mapping, Record of Processing Activities (RoPA) automation, Data Protection Impact Assessment (DPIA) automation, and GDPR privacy risk prevention. Privacy code scanning prevents risks related to personal data collection, usage, third-party sharing, and storage, as well as consent compliance auditing.

Can privacy code scanning help my organization maintain compliance with CPRA?

Privacy code scanning solutions are designed to support several aspects of CPRA compliance including data mapping, preventing non-compliant data sharing, and auditing consent compliance (i.e., "do not sell or share").

How do privacy code scanning solutions communicate privacy risks to privacy and engineering teams?
Privacy code scanning solutions communicate risks in their own platform and in whichever other tools privacy and engineering teams use, including privacy management platforms (e.g., OneTrust), Slack, Teams, ticketing systems (e.g., Jira), and dev tools (e.g., GitHub).

How can privacy code scanning build trust with stakeholders?

Privacy code scanning builds trust and collaboration across teams (privacy, product, engineering, etc.) by translating privacy policies into automated workflows that identify what code is violating which policy. Linking data maps and risks to code enables immediate validation and resolution by engineering teams. Additionally, risks are communicated seamlessly in the tools and language that each team uses.

How can privacy code scanning build trust with customers?

Privacy code scanning builds customer trust by ensuring a company's privacy promises to customers are followed through on. Privacy teams get the visibility and governance needed to monitor and prevent violations of the privacy policies communicated to customers.

Can privacy code scanning replace my current privacy management tool?

Privacy code scanning solutions are designed to supplement, not replace, privacy management tools like OneTrust. Data maps and risks can be seamlessly synced to privacy management tools to increase their efficiency and effectiveness.

Can privacy code scanning govern data used in AI applications?

For data that engineering teams send to internal or external AI applications, privacy code scanning can ensure that no sensitive personal data is shared. Policy workflows can be set up to restrict select or all personal data elements from being sent to applications flagged as AI.

Can privacy code scanning scan third-party applications to monitor personal data flows?

Privacy code scanning solutions cannot scan third-party applications like Salesforce or Workday unless one-off integrations are built for each third-party tool. Privado has built integrations with the most prevalent tag managers and customer data platforms to prevent non-compliant data from being shared with marketing partners from those tools.

Privacy code scanning guide
Posted by Vaibhav Antil in Best Practices on June 12, 2024
Vaibhav is the founder of privado.ai and a CIPM-certified privacy professional.